Machine Ethics

Two great new books on the future of robots, Moral Machines: Teaching Robots Right from Wrong and Wired for War: The Robotics Revolution and Conflict in the 21st Century, are out right now. I’m not going to have time for either, but in the meantime, the New York Times constantly runs articles on this subject, most recently “A Soldier, Taking Orders From Its Ethical Judgment Center” (Dean, Cornelia, 25 November 2008, p. D1). To the list of all the things that robots will be better at than humans, we can add that they will be more ethical than us:

“My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can,” said Ronald C. Arkin, a computer scientist at Georgia Tech, who is designing software for battlefield robots under contract with the Army.

In a report to the Army last year, Dr. Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, no tendency to lash out in fear. They can be built without anger or recklessness, Dr. Arkin wrote, and they can be made invulnerable to what he called “the psychological problem of ‘scenario fulfillment,’ ” which causes people to absorb new information more easily if it agrees with their pre-existing ideas.

His report drew on a 2006 survey by the surgeon general of the Army, which found that fewer than half of soldiers and marines serving in Iraq said that noncombatants should be treated with dignity and respect, and 17 percent said all civilians should be treated as insurgents. More than one-third said torture was acceptable under some conditions, and fewer than half said they would report a colleague for unethical battlefield behavior.

Troops who were stressed, angry, anxious or mourning lost colleagues or who had handled dead bodies were more likely to say they had mistreated civilian noncombatants, the survey said [Mental Health Advisory Team IV, FINAL REPORT, Office of the Surgeon General, United States Army Medical Command, 17 November 2006].

It is incorrect to imagine machines as behaving more ethically than humans insofar as it construes humans and machines as occupying the same ethical continuum. We may program machines to have human-compatible ethics, but that shouldn’t confuse us; the same ethical prohibitions that apply to us will not apply to robots.

Right and wrong aren’t something floating out there on the other side of the sphere of the stars. Right and wrong are derived from the characteristics of the human body, human tastes and tendencies as endowed in us by our natural history, the structure of the human lifecycle, our conceptions of the good life, shared human experience, and communal mythos. Creatures for whom these factors are different will have different ideas about right and wrong. As the last three items on the list — conceptions of the good life, shared experience, and communal mythos, our public reference symbols — differ among people, we have different ideas about right and wrong. A creature with a transferable consciousness won’t have an essentialist view of the relation of body to self and hence won’t take moral exception to bodily damage. A creature with a polymorphous consciousness wouldn’t disparage even psychic damage (though the question of identity for such a creature would be even more difficult than it is with us, as already elusive as we are).

Creatures with different conceptions interacting have to develop ethical interfaces. The minimalist limitations of rights-based liberalism and the law of nations are to some extent that: interfaces between differing moral systems — the former an interface for people within a society, the latter between different societies. What an interface between different species, or an interface between different types of life, would look like, I have no idea. Whether such an interface is even possible is perhaps the more pressing question: even among humans, these interfaces hold up only so well.

Neil Sinhababu, “the Ethical Werewolf,” and Ramesh Ponnuru had a go-round back in 2006 that touched on the ethical status of non-human creatures, but I don’t think it really goes beyond the natural extension of liberalism to different physical morphologies, with which liberalism has an extensive history in the various rights movements. And different physical morphologies are all that aliens and other mythological creatures, as conventionally conceived, really are (Sinhababu, Neil, “Mind Matters,” The American Prospect, 23 August 2006; Ponnuru, Ramesh, “Fear Not, Frodo,” National Review Online, 28 August 2006; Sinhababu, Neil, “More on Minds,” TAPPED, 30 August 2006).

The Napoleon Dynamite Problem

After casing Ben Goertzel’s blog today, the point that I find myself really chewing on is this one (“The Increasing Value of Peculiar Intelligence,” The Multiverse According to Ben, 26 November 2008):

What occurs to me is that in a transparent society, there is massive economic value attached to peculiar intelligence. This is because if everyone can see everything else, the best way to gain advantage is to have something that nobody can understand even if they see it. And it’s quite possible that, even if they know that’s your explicit strategy, others can’t really do anything to thwart it.

Yes, a transparent society could decide to outlaw inscrutability. But this would have terrible consequences, because nearly all radical advances are initially inscrutable. Inscrutability is dangerous. But it’s also, almost by definition, the only path to radical growth.

I argued in a recent blog post [“The Inevitable Increase of Irrationality,” 25 November 2008] that part of the cause of the recent financial crisis is the development of financial instruments so complex that they are inscrutable to nearly everyone — so that even if banks play by the rules and operate transparently, they can still trick shareholders (and journalists) because these people can’t understand what they see!

But it seems that this recent issue with banks is just a preliminary glimmering of what’s to come.

Inscrutability, peculiarity, the idiosyncratic are already creeping in. Mr. Goertzel is right to point to the rise of the quants and mathematical finance as an example. The one that comes to mind for me is the Napoleon Dynamite problem.

NetFlix has announced a million dollar prize for anyone who can improve the precision of its recommendation engine by ten percent. The New York Times Magazine and NPR’s On the Media both did stories about it back in November (Thompson, Clive, “If You Liked This, You’re Sure to Love That,” 23 November 2008; Gladstone, Brooke, “Knowing Me, Knowing You,” 21 November 2008). It turns out that improving the quality of this sort of singular value decomposition algorithm is geometric in difficulty. For most movies it is easy to predict whether someone will like or dislike them, but a small number of odd movies thwart the algorithm. Chief among them is Napoleon Dynamite. For the research group profiled in The New York Times piece, Napoleon Dynamite was responsible for a whopping fifteen percent of all recommendation errors. There is no telling on the basis of people’s past movie rating history whether or not they’ll like this movie.
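
To make the shape of the problem a little more concrete, here is a minimal sketch of the latent-factor (“SVD-style”) approach the Netflix Prize teams use. The data, factor counts and learning rates are illustrative stand-ins of my own, not anything from NetFlix’s actual system. A movie like Napoleon Dynamite is hard precisely because its ratings correlate only weakly with whatever latent taste factors a model like this learns.

    import numpy as np

    rng = np.random.default_rng(0)

    n_users, n_movies, n_factors = 5, 4, 2
    # Sparse observed ratings: (user index, movie index, rating on a 1-5 scale).
    ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 3, 1.0),
               (3, 2, 5.0), (4, 1, 2.0), (4, 3, 4.0)]

    # Each user and each movie gets a small vector of latent "taste" factors.
    P = 0.1 * rng.standard_normal((n_users, n_factors))   # user factors
    Q = 0.1 * rng.standard_normal((n_movies, n_factors))  # movie factors

    lr, reg = 0.02, 0.05
    for _ in range(500):
        for u, m, r in ratings:
            err = r - P[u] @ Q[m]          # error on this observed rating
            pu = P[u].copy()
            P[u] += lr * (err * Q[m] - reg * P[u])
            Q[m] += lr * (err * pu - reg * Q[m])

    # Predict a rating the model has never observed (user 0, movie 3).
    print(round(float(P[0] @ Q[3]), 2))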

But the Napoleon Dynamite problem isn’t a solitary anomaly, but rather the paradigm of a trend. What we have is a Hollywood focused on these monster, expensive productions. Increasingly the movies that Hollywood makes are global products, with as much revenue coming from abroad as from the U.S. audience, so Hollywood is careful to strip its movies of any dialogue, humor or situations which are culturally nuanced and might not translate well. So the plot and dialogue that we get in big Hollywood movies today consist of only the most broadly recognized and basic cultural tropes. Also, Hollywood has jacked the price of a movie up to the point where viewers now almost universally make a theatre-rental division: big special effects movies that they want to see in the theatres, and the dramas for which screen size isn’t a factor. It is a division with a positive feedback loop in that movie makers are aware of it and now shape their product offerings around it.

For a particularly depressing take on this, give a listen to Malcolm Gladwell’s 2006 New Yorker Festival talk on the use of machines to produce blockbuster scripts. At the same time that institutions like NetFlix are using computers to match customers to movies with increasing efficiency on the consumer end, Hollywood is using computers to make films increasingly easy to pigeonhole and match to demographics on the production side. It’s post-Fordist cultural production perfected. Soon we will be able to take the human out of the equation and the entertainment industry will just garnish our wages.

But there is — as is always the case — a countervailing motion. Just as Hollywood productions become increasingly trite and formulaic, there is the rise of these wildly bizarre and idiosyncratic films like The Zero Effect, Adaptation, Eternal Sunshine of the Spotless Mind, Lost in Translation, The Royal Tenenbaums, I ♥ Huckabees, Burn After Reading and so on. There is this sort of shadow Hollywood with its own set of stars and directors branding the alt-film genre: Wes Anderson, Charlie Kaufman, the Coen brothers, Catherine Keener, John Malkovich, William H. Macy, Frances McDormand. I would be remiss if I didn’t mention Steve Buscemi here.

What we have is a hollowing out of the middle. Along a spectrum, films range from obscurantia to formulaic. In the past, most movies probably fell in some broad middle: accessible, but unique. And most movie watchers probably fell there too. But increasingly movies and the movie-watching audience are being polarized into the genre constellations at one end and the difficult to categorize peculiarities at the other. Notice that the ambiguity of suspense has been replaced by the spectacle of gore in horror; that the sort of romantic comedy for which Drew Barrymore was designed and built has completely driven the older adult romantic drama to extinction. Similarly, the sort of accessible, quirky artiness represented by Woody Allen has moved much further down the spectrum of the idiosyncratic. The people who didn’t like Woody Allen are utterly baffled by Wes Anderson.

To generalize: hitherto we have been a normal distribution society. The majority of people fall into the broad middle and are closely related. But increasingly we are on the way toward a parabolic, or inverse normal distribution society, where the preponderance resides at the antipodes and people are separated by wide gulfs. This is true across the cultural spectrum, whether it’s politics, religion, the professions and so on. In the United States it is almost happening physically, with the coastal regions swelling as the center of the country is abandoned to satellite guided tractors and migrant labor. Some might call this the condition of postmodernity, some might call it the dissolution of Western Civilization.

The Noosphere Visualized

FaceBook's corner of the noosphere visualized, by Jack Lindamood

Jack Lindamood, one of the programmers at FaceBook, has produced a demonstration of an incredible tool that displays all activities on FaceBook geolocated on a rendering of the Earth. When he switches to a mode where it draws a line connecting the initiator to the recipient we get a view of something hitherto entirely invisible: a small subset of the noosphere being stitched together in real time.

I love it when he declares this “the Earth as Facebook sees it.” Then he switches to the wireframe view in which the Earth becomes just a grid for the information patterns that arc over its surface. Just as the geosphere becomes the platform for the biosphere and the biosphere becomes a geological force (Chown, Marcus, “Earth’s ‘Mineral Kingdom’ Evolved Hand in Hand with Life,” New Scientist, 19 November 2008), so the Earth is the grid on top of which the noosphere constructs itself, eventually becoming a biological force in its own right.

The Expanse of the Noosphere

Having all those icons stretching out into outer space looks impressive. But both the atmosphere and the crust of the Earth are proportionally thinner than the skin of an apple. And the biosphere is confined to an even narrower band than that. The noosphere’s region of activity is narrower still, with human activity limited to only a few score meters above and below the surface of the Earth. The noosphere has a tiny primary zone of hyperactivity, but tendrils reaching out to more exotic regions. Just as the biosphere is largely confined to the Earth’s surface, yet extreme species inhabit the deep fissures of geysers, the bottoms of ocean trenches (subsisting on exotic metabolisms) and the bottoms of mine shafts wherever and however deep we drill them, so the noosphere stretches in tentative wisps far beyond its primary realm. Consider:

  1. Given that we have probes on other planets, in interplanetary space and, in the case of the Voyager probes, approaching the heliopause and hence interstellar space, all beaming signals back to the Earth, the network of input devices of the noosphere is interplanetary and nearly interstellar. A small portion of the information making up the noosphere, and affecting outcomes within it, is cosmic in origin.
  2. The internet has gone interplanetary (Courtland, Rachel, “‘Interplanetary Internet’ Passes First Test,” New Scientist, 19 November 2008). NASA used to manually initiate direct, batch communication with our space probes, but now has software routers in many probes for an always-on, packet-switched interplanetary space network. This includes orbiters and rovers roaming planetary surfaces. The time may soon come when the various surveillance and communications systems in space will have open APIs like present-day Google Maps, FaceBook, Twitter and others.
  3. The first radio broadcasts powerful enough to penetrate the ionosphere were made in the late 1930s, so for the last 70 years an expanding radio sphere or future light cone has been filling with the broadcasts of our civilization, arranged in concentric spheres in reverse-chronological order (from our perspective): every electromagnetic pulse from an atomic explosion, every television broadcast, telephone conversation conducted through satellite, IP packet sent through satellite internet, the complete transcript and telemetry of every space mission and the numerous software patches and whole operating system upgrades broadcast to various space probes. In that sense the noosphere, or at least the informational exhaust wafting off the noosphere, or the electromagnetically petrified version of the noosphere, already stretches out to a radius of 70 light years. At that distance it spans nearly 5,000 stars and a similar number of exosolar planetary systems (a rough check of that figure follows below).
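
That star count is easy to sanity-check. The sketch below assumes a local stellar density of roughly 0.004 stars per cubic light-year (about 0.14 per cubic parsec), an approximation for the solar neighborhood rather than a census:

    from math import pi

    radius_ly = 70.0        # radius of the expanding radio sphere, in light-years
    density = 0.004         # assumed stars per cubic light-year near the Sun
    volume = (4.0 / 3.0) * pi * radius_ly ** 3
    print(round(volume * density))   # on the order of five or six thousand stars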

Time is out of joint and an epoch has an extremely long tail in both directions. One begins with imperceptible first movements, and similarly only finally passes away long after anyone has noticed. We might say that even while the noosphere is still only in the primitive stages of weaving itself together here on Earth, it has already become interplanetary and arguably also interstellar.

(FaceBook video via Frank Episale)

OneMachine Doppelganger

Kevin Kelly has long discussed the OneMachine: the essentially unified, single distributed computer built up from all our networked PCs, PDAs, cell phones, digital cameras and other personal electronics (see e.g. “Tap Into the 12-Million-Teraflop Handheld Megacomputer,” Wired, vol. 16, no. 7, June 2008; “Dimensions of the One Machine,” The Technium, 2 November 2007; “One Huge Computer,” Wired, vol. 6, no. 8, August 1998).

Last week The New York Times ran an article on how, below the surface and running on the very same computers as the productive, life-enhancing OneMachine, there is a nefarious parallel network, OneMachine’s dark doppelganger, the BotNet (Markoff, John, “A Robot Network Seeks to Enlist Your Computer,” 20 October 2008, p. B1):

Botnets remain an Internet scourge. Active zombie networks created by a growing criminal underground peaked last month at more than half a million computers, according to shadowserver.org, an organization that tracks botnets. Even though security experts have diminished the botnets to about 300,000 computers, that is still twice the number detected a year ago.

The actual numbers may be far larger; Microsoft investigators, who say they are tracking about 1,000 botnets at any given time, say the largest network still controls several million PCs.

“The mean time to infection is less than five minutes,” said Richie Lai, who is part of Microsoft’s Internet Safety Enforcement Team, a group of about 20 researchers and investigators. The team is tackling a menace that in the last five years has grown from a computer hacker pastime to a dark business that is threatening the commercial viability of the Internet.

I have already written about how when the singularity occurs, it may not be what we expect. My suspicion is that either it will be overtly evil, or merely a recreation of the chaos of biological nature in a more durable, powerful and virulent form (“A Few Heretical Thoughts on the Singularity,” 19 August 2008).

What do phenomena like the BotNet suggest about the singularity? What comes will grow out of what is and what is will bequeath its characteristics to what comes — at least initially. Between the various military establishments and the criminal underground, we are instilling our machines with hostile, aggressive tendencies. But we are also making numerous, competitive systems. Will there be “the singularity” or will it, as in the novels of Vernor Vinge and Charles Stross, come in secretive, uncertain fits and starts? Will there be multiple singularities? Will one system cross the threshold, followed by another, then another? It makes sense to speak of “the singularity” when one is imagining a unified system, but when one is considering a multitude of contending systems, crossing the threshold of the singularity is but one move in a strategic game. Perhaps the machines will be hostile to predecessor biological life, but perhaps they will be so consumed in competition with their fellow AIs as to be merely indifferent to us, as we are to, say, pigeons or squirrels.

And how goes the strategic competition between OneMachine and BotNet? We ought to know. What portion of computational capacity, bandwidth, energy consumption, hours of their masters’ time are the two consuming? Qualitatively, how do their capabilities match up? Kevin Kelly has managed to make some calculations for the former, but what of the latter? Of course this would be subject to the usual problems of surveillance of those who do not want to be surveilled.
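
As a crude first pass at that accounting, one can at least set the botnet numbers quoted above against the rough size of the OneMachine. The several-million figure comes from the Markoff piece; the one-billion estimate of networked PCs circa 2008 is my own order-of-magnitude assumption, not a measured figure:

    botnet_pcs = 3_000_000          # "several million PCs" in the largest botnet (Markoff)
    networked_pcs = 1_000_000_000   # assumed global population of networked PCs
    print(f"{100 * botnet_pcs / networked_pcs:.1f}%")   # roughly 0.3% of the OneMachine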

Organizations like McAfee, Norton and the International Botnet Taskforce are attempting to build something akin to an immune system for the Internet, but the billion-year persistence of the arms race between host immune systems and the various infectious agents suggests that dampening catastrophe is probably the best outcome we can hope for. It’s an example of co-evolution where competition between host and agent drives the development of one another. Viruses don’t kill their host by design, they merely seek to hijack their reproductive machinery to their own purposes. Killing the host, or at least killing them too quickly, or the epiphenomenon of killing too many of them too quickly, are all suboptimal in that they result in diminished opportunity for continued infection and reproduction. Ebola gets it wrong. HIV gets it really right. But virus behavior as a whole is not intelligent. Occasionally a virus goes super virulent or hits a particularly vulnerable population and a massive outbreak occurs that wreaks havoc for host and infectious agent alike. I presume that BotNets will continue to act something like this.

And since one third of known biological species are parasites and the proportion seems to be growing, it would seem that there is something fundamental about the strategy of parasitism. We should anticipate its continuance, both in genetic and electronic space.

The End of the Era of Orphanage

I am prone to say that there is a bigger issue at stake in something like life logging. As Carl Sagan pointed out in Shadows of Forgotten Ancestors, we’re all orphans abandoned at the doorstep of time. Ten thousand generations of humans have inhabited this planet and the most tenacious genealogist can perhaps recount seven of those generations. Indeed, your great-grandchildren won’t even know your name. I recall one of Andy Rooney’s commentaries on 60 Minutes where he wandered through a number of old cemeteries, grown over, fences falling, headstones cracked and weathered to illegibility. It was obviously a very elegiac piece. He ended it by saying that we ought to make an indelible record of every person who’s ever lived. And we ought to. There was a time when we had to be pragmatic and pragmatism necessitated a massive forgetting. The realm of what’s pragmatic has grown. Time to stop forgetting.

I watch all the animals that scamper about the city and it is horrible that they lead such anonymous lives. They live beautifully without making an impression, they fall ill and there is no aid, they die without a thought from their fellows and their corpses are left where they fall. Once I saw a documentary in which a paleoanthropologist pulled a hominid skull out of a drawer and held it next to the skull of a saber-toothed tiger so that the two fangs of the tiger skull straddled the occipital bun of the human and lined up perfectly with two small holes in the back of the little human’s skull. Of those ten thousand generations, perhaps the majority were the lives of humans led as animals: noble, but uncelebrated lives of struggle leading to unmourned graves. Every one of those lives was one of immense drama, and every one necessary to carry us down to the place we find ourselves today, and yet nearly to a one, utterly gone. And despite all our advancements, the lives of almost everyone alive today are not one iota less anonymous. In life, a titan; in death, dust.

Sometimes I am prone to a great man theory of history: that we masses are indebted for all of our modern-day prosperity to an incredibly small number of geniuses without whom none of it would be possible. We common folk are parasites upon their achievements. But then I consider this world into which we are born. We just found it as it was, fully built. Massive buildings, sprawling cities whose assessed value runs to the trillions of dollars, public works projects the scale of which is baffling. I am dependent for my protection from the elements upon a building. Where this building came from, I have no idea. I have no idea who built it. I have no idea who first wanted it and commissioned its construction. I have no idea when the presumably original utility basement was remodeled into a living space. I have no idea how it was handed down and eventually wound up with its present owner. As Graham Robb points out in The Discovery of France, even what we take to be untrammeled nature has already been drained, logged and contoured by generations so forgotten that we can no longer detect their impact. Countless trillions of person-hours have gone into making the world what it is, almost all of them completely forgotten. We just found the world as it is and don’t even consider it. It is Newton’s old, “If I have seen so far, it is because I have stood on the shoulders of giants.” There is a grandeur in the accumulated accomplishment of all the forgotten people who have carried the species down through the ages to deposit us where we have found ourselves. They should get their names etched in the base of their great accomplishment. Perhaps life logging will result in a certain solipsism, but in other sectors, perhaps it will chip away at a solipsism from which we already suffer.

But then, but then …

Atheism is more than just one belief about the nonexistence of the gods. It is a habit of mind. Once one has ceased to believe in god, one has only started to be an atheist. One must then purge one’s self of the thoughts that grow out of god. The need for eternity, the sole valuation of the eternal, the denigration of all things transient — in other words, the denigration of all things — is the most pernicious of such habits. There is obviously something to the secularization thesis. Sometimes I think that this rage for permanence is just a bastion of my former Christianity. The insistence on the illusion of eternity is part of the myth of humanity as standing somehow opposed to and outside of nature. But we are as much animals and artifacts of nature now as ever. Perhaps we should live our lives like Buddhist sand mandalas: exercises in the transient, in the timely. Coming to terms with becoming, evolution, development, decay and passing is how one is to be in harmony with the world, is it not?

Life Logging: It’s All About the Metadata

Yes, yes, I agree with John that much of what you presently see that might fall under the rubric of life logging is either boring or pretentious or pornography. I really can’t even make it through the cream of such stuff, say, BloggingHeads. As for solipsism, there’s no sense in complaining: that’s our inevitable future. Suburbanization, materialism, the cultural conditions of capitalism et al. are merely the low tech predecessors to the coming introversion. But look past what it is today to the potential that it holds.

Don’t just imagine me sitting at home eating deli potato salad watching on a screen as Frank sits at home web browsing eating a microwave burrito. One person’s life log 24/7: not so interesting. But let’s cut it up and remix it. Imagine if everyone’s life stream was well marked up with metadata. It’s all timestamped and geotagged. Face recognition software tags all the people, perhaps place recognition software adds even more specific location data (H.M.S. Victory instead of just 50° 48′ North Latitude, 1° 06′ West Longitude). All conversations are parsed through speech to text and indexed. Stats on SIPs (statistically improbable phrases) are tallied. User tags are attached to add to the raw machine indexes. Viewer ratings and hit counts are recorded so we have some measures of quality or import. (A rough sketch of what such a metadata record might look like follows the list below.) Now we’re ready for some serious use. And what will that consist of? Probably more than I can conceive, but just to toss off a few ideas:

  1. Hindsight is 20/20. There’s really little problem determining in retrospect what was important and what not. The problem is having the foresight to know what’s important before the fact and be at the ready to capture it. If the technology is there (practically limitless storage) then dispense with the hard part of being clairvoyant about impending events and just record everything. We can edit it down later. And with no pressing limit, why not make it much, much later? Or why bother editing at all? In an earlier incarnation along this path, my thought was that what I wanted was complete sense data capture with, say, a ten minute buffer so that I could go back and edit down. But when the trend in data storage struck me, I thought why trade precious time for picayune space?

    But actually hindsight is not 20/20. It only seems so under the sway of dogma. Really the past is inscrutable. There’s almost no telling what revaluation the endless mulling of the past might produce. In the perennial purges to which the raging simplifiers are wont, the data trails of alternate narratives are eliminated. What seems inconsequential from one perspective is everything from another. The meager holdings of a peasant’s hovel, junk according to the great man theory of history, become the stuff of grand narrative at the hands of the archeologist. Who is to say what trend, indiscernible to us in the present, will grow to world-historical proportions in the future, or for that matter, what minutiae will obsess future generations?

  2. If you build it they will come. One of the interesting phenomena of the budding age is the growing degree of unintended consequences. If you’ve got something, even something unrefined, then put it out there and a bunch of content scavengers will come along with a mash-up of some sort and put it to a heretofore unanticipated good use. One person’s trash is another person’s treasure. What do we do with all that stuff? I don’t know, but my not knowing is not sufficient. Someone else knows. And that right there is a solid gold law of the Internet age. In a system of synergy, where the whole is greater than the sum of its parts, even inert garbage is a contribution to the potential of the system.

  3. Rashomon. Human recall is notoriously unreliable. If you have five witnesses, you have five different versions of events. Life logging may bring some element of objectivity to things. And once you’ve set aside trying to figure out when to turn the recorder on and when to leave it off, catching the unexpected is less of a problem. Just think how much better When Animals Attack or Destroyed in Seconds television programming we’ll have.

  4. Audience. There is, of course, the blatant issue of audience. Who do we log our lives for? As S. e-mailed me,

    To the right audience, there is value. I would give a lot for the ability to look at a few moments (any moments) of the world from my eyes as a second grader. Or a few moments from the eyes of my great-great-grandmother.

    Maybe my two year old self is not of any interest to strangers, but to my parents, to myself, to my children, my great grand children, it would be more valuable than the Zapruder film, the Omaha beach photographs, anything. As a man with a first baby on the way, I anticipate a wholesale reevaluation of your opinion as to what’s dull and forgettable and what important and in need of preservation.

    And per observation number one, the audience is subject to change over time. If that second grader grows up to be an insurance salesman, then maybe interest remains confined to family. If he grows up to be the next Einstein or the next Hitler, then the audience becomes much larger and how valuable all those things are changes vastly.

  5. The human sciences. Imagine just a few of the questions that the sociologist, the historian, the linguist, the political scientist, the antiquarian might be able to address with such a wealth of information at their disposal. The study of linguistic and meme evolution, presently confined to source material consisting of that most contrived medium, writing, would have a plethora of data. If nothing else, the study of nonce words would be revolutionized. Or think what it would do for the preservation of the dying languages and cultures. They could be preserved as they were lived, not as a graduate anthropology student’s field notes. As linguistic analysis tools become more sophisticated the empirical study of the structure of belief, moral practice and reasoning would become possible without the taint of self-consciousness interfering. Perhaps rhetoric would become a hard science. Historians have shifted their interest from great man and political history to people’s or cultural history, but prior to the fairly recent past, there’s almost nothing for them to go on. For developments in culinary practice, foods, cooking and eating tools, historians have to turn to paintings of banquets and study what’s on the table. What furnishings could you expect to find in a peasant’s house in the thirteenth century? Almost a complete mystery. There is worth in the preservation of the quotidian.

  6. Searching and Sorting. Increasingly we will search and sort by ostension. And the join between me and what I’m looking for is other people. It’s Petabyte Age analytics applied to the issue of human interest. People are too complicated for a theory that delves into the internals, so just engage in atheoretic pattern matching, one person to another. This was damn near the first thing that I wrote about as a blogger (see the “theoretical discussion” of my “Inaugural Post,” 21 June 2004).

    Information isn’t just produced and distributed (as if distribution was unproblematic). It’s vouched for, it’s filtered, it’s branded, it’s packaged with other information, it’s marketed and it’s mapped into a network. As the traditional means of these functions — newspapers, magazines, books, television stations — break down, they are being replaced by newer, more individualized methods. It used to be that a person would turn to their established sources — The New York Times, CNN, Cambridge University Press, et cetera. The editors at these institutions served the role of guaranteeing the veracity of information, of assembling a package of information desirable to a certain market segment, of providing the correct admixture of variability. But these were rather dumb packages aimed at gigantic demographics: the readership of The New York Times or the listeners of NPR. With the tools that the information age is making available, people are able to cut out the editor and design their own customized, unique information agglomerations.

    There is so much going on out there in the world that I could never keep up on it all, so I rely on intermediaries. And really I don’t want to keep up on everything. I want intermediaries who are like me and so filter in things similar to the ones that I would select myself. But I don’t want people too much like me. I want some variety and I want to be exposed to new and unanticipated things. But not too much variety. There are some things that I’m absolutely not interested in. I want people who are different enough to introduce variety, but still sufficiently similar to introduce the right kind of variety. Specifying this in an abstract way is extremely difficult. What if you had to make up a list of tags or SIPs that you wanted to see? Could you think of them all? Do you have the time to peruse the Library of Congress subject catalog? And the problem of variety is that of an unknown unknown: most of the variety that I want is stuff in which I don’t yet know that I’m interested. To define this explicitly would be a chore and one that I probably couldn’t do very well through traditional means, so I do so by ostension.

    And the way to do this is with a personally determined network of trust relationships. I subscribe to RSS feeds, I follow certain bloggers, I read my FaceBook News Feed, I add people to my “interesting library” list on LibraryThing, I trust people in recommendation systems like Amazon.com. Their purview becomes an extension of my own. Each node in my network of associations becomes like a radar picket, seeing out from one horizon to the next. They become my agents, recommending back to me the things in which I might be interested, in exchange for others doing the same for them.

    It’s an extension of what’s always gone on. People have always got together and swapped information. They’ve relayed news and gossip, passed on leads on cheap produce, swapped how-to tips. In the past it was rather easy to find people who were into what you were into because there simply wasn’t that much to be into. There weren’t many games, there wasn’t much by way of entertainment, there were fewer hobbies, there weren’t as many job opportunities because the scope of economic activity was narrower, the publishing industry was small. But just as our culture fractures into ever more narrow segments, so our ability to reach out broadens. Our capability to establish similar such relationships is no longer confined to our immediate surroundings and our geographic neighbors. It now extends over the globe and to our ideologically proximate neighbors.

    But if we are to apply Petabyte Age analytics to people, first what we require are the petabytes. In order for other people to serve an ostensive role, they have to make their information available: what they are doing, what events they are attending, what they are reading, what they are watching, what they are purchasing, what they think about all these things. Only then can one start to make determinations about whose life signature to include as part of one’s search criteria and only then do they produce the information to draw into the vortex that is you. Life logging-like behavior is a critical component of search by ostension.

  7. Environmental awareness. Generalized sites like outside.in and EveryBlock try to provide people with information specific to their locality. Sites like Menu Pix or Indie Coffee Shops do the same with respect to particular categories of interest. This is an extension of searching by ostension, only instead of like-minded people, I am interested in like-located people.

    Imagine what life logging would mean for augmented reality. What happens to a cityscape when, standing in front of a building, I have at my disposal the design discussions of the architect and the client, the experience of the laborers who built it, the reactions of architecture critics, and views of what preceded it on the lot? Imagine being in a new city and having the whisperings of previous visitors or longtime residents in your ear. People often say, “imagine if these walls could talk.” In the future, they will.

  8. The long tail of entertainment. To apply a straightforward materialist analysis to it, life logging is essentially a long tail phenomenon. Production and distribution of content — news, entertainment, educational, documentary — used to entail significant costs, both opportunity and financial. There was only a little bit of bandwidth and fully equipped and staffed studios and broadcast stations were extremely expensive so producers stuck to the safe side of the 80/20 rule. They went with the lowest common denominator of programming to maximize return on bandwidth expended. As the price of production, storage and distribution falls and the learning curve flattens out, what makes the cut will move comparably further down the long tail. Do you think that a thousand television channels are too many? How about one for every man, woman and child in the world? How narrow will a niche become? It’s the other side of the question of how low production and distribution costs will go. Will they go so low that the niche audience shrinks to a single person? I don’t think that even that is the limit. Probably the remote possibility of a single view or incorporation of a minute fragment of one’s output into a larger work is the limit.

    Of course people’s level of interest in participation will be a limit, but as it becomes ever easier — so easy that participation is almost indistinguishable from nonparticipation — it will eventually require active rejection to not participate. And then society might develop positive inducements to overcome even that. There’s always the dreaded network effects, but one can imagine much more stringent inducements. Not having a life log might make a person a social pariah or a life log might serve in place of a curriculum vitae or a portfolio.

  9. Personality as entertainment. Already I think in programs like No Reservations with Anthony Bourdain, Man vs. Wild, MythBusters, The Rachael Ray Show, fill in your favorite example — and I know you have one — we are seeing the maturation of reality television into personality-based programming and hence a limited form of life logging. Already the focus of these shows isn’t a premise or a regular subject, so much as the featured personality. Yeah, sure, each theme-based channel — the Food Network, HGTV, Discovery — picks someone relevant to their brand and that’s cute and all, but at this point I suspect unnecessary. For all your boredom at the medium, a person with a well developed shtick is a person with entertainment potential. And already that’s widely the case with many a medium. Whether it’s Christiane Amanpour, Rick Steves, David Brooks, Matt Drudge, Ann Coulter or the Crocodile Hunter, people tune in for the personae as much as any of the other content.

    And regarding the expansion of personality-based programming into a more generalized life logging, is our meritocracy already so frictionlessly efficient that there are no overlooked talents, eccentrics, geniuses, subversives, whatnot left to be discovered? There’s a word for it already: micro-celebrity. It was the second thing I ever blogged about (“William Gibson’s Idoru and Blogging,” smarties, 21 June 2004). Yeah, sure, some of this is boring, but some shows get cancelled too.

  10. The Zeitgeist becomes tangible. Imagine being able to request a twenty minute medley of the top 100 conversations conducted at my comprehension level on a topic, say consciousness or string theory, over the last six months. You could scan the thoughts of humanity like you presently do radio stations on your car stereo. We’re stitching together the universal consciousness here. For that to happen our thoughts have to stop happening in isolation or small factions and occur in a unified intellectual space.
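
To make the metadata imagined at the top of this list, and the search by ostension of item six, slightly more concrete, here is a minimal sketch. The record fields, the tag-overlap similarity measure and the sample data are my own illustrations, not any actual life-logging format or service.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class LifeLogSegment:
        start: datetime                              # timestamp
        latitude: float                              # geotag
        longitude: float
        place: str = ""                              # e.g. from place recognition
        people: list = field(default_factory=list)   # from face recognition
        transcript: str = ""                         # speech-to-text output
        tags: list = field(default_factory=list)     # machine SIPs plus user tags
        viewer_rating: float = 0.0                   # crude quality / import signals
        hit_count: int = 0

    def tag_profile(segments):
        """Collapse one person's log into a bag of tags."""
        profile = {}
        for seg in segments:
            for tag in seg.tags:
                profile[tag] = profile.get(tag, 0) + 1
        return profile

    def ostensive_similarity(mine, theirs):
        """Overlap between two tag profiles: 'people like me' defined by pointing
        at their logs rather than by any explicit theory of my interests."""
        shared = set(mine) & set(theirs)
        return sum(min(mine[t], theirs[t]) for t in shared)

    me = [LifeLogSegment(datetime(2008, 8, 27, 14, 0), 50.8, -1.1,
                         place="H.M.S. Victory", tags=["naval history", "museums"])]
    frank = [LifeLogSegment(datetime(2008, 8, 27, 9, 0), 40.7, -74.0,
                            place="Starbucks", tags=["coffee", "museums"])]
    print(ostensive_similarity(tag_profile(me), tag_profile(frank)))   # prints 1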

Was that what you had in mind, John, when you wrote that you were taking a risk dissenting against me?

All Those Moments Will be Lost in Time

S. and I are very interested in lifecasting, life streaming, life logging, life blogging or whatever you want to call it. We are doing some early investigations of technologies, techniques and approaches. Meanwhile, I see that our friend Frank has moved on to the experimentation phase (“August 27, 2008: A Day in the Life,” Too Frank?, 27 August 2008). I also notice a number of visits to Starbucks throughout Frank’s day.

I’m completely unsure what to do at this point. I don’t think I want to have to explain in my next job interview why I’m wearing some gigantic helmet that makes me look like a borg special child. Alternatively, it could be as simple as a palmtop with a built-in digital camera and a flickr account.

The lifecasting Wikipedia page has lots of leads to life logging resources, but my two favorites don’t make the list — they are pretty simple. First, I am deeply impressed by Noah Kalina’s Every Day, where he made a six-minute stream out of 2,356 photos, taken one per day from 11 January 2000 – 31 July 2006. I think in his case it’s half the music that makes his project seem so profound. Mr. Kalina has prompted a number of both cool and humorous imitators, with a photo a day during the nine months of a pregnancy seeming a particularly poignant use of this documentary form. Second is Jamie Livingston, who took a Polaroid photo every day starting 31 March 1979 until 25 October 1997, the day before he died of a brain tumor at the age of 41. Mr. Livingston didn’t usually take self-portraits, just a picture of what was going on around him. Chris Higgins has a good digest of photos, especially the biographical ones, from the collection along with some background information (“He Took a Polaroid Every Day, Until the Day He Died,” mental_floss, 21 May 2008). Also a powerful collection.

The other day I overheard someone at the next table explaining to his dining partner that it’s only a matter of time before data storage miniaturization allows us all to carry around enough storage to record our entire lives. Of course universal wireless will probably beat hard disks to the punch and anyway you wouldn’t want to be carrying around something that important and sensitive in your pocket. You’ll want something a little more secure and fault-tolerant. Whatever the case, the day is coming and I suspect that it will be epoch-defining. One day we will look back and marvel that at one time everyone just let their lives slip away into oblivion “like tears in rain.” I’d like to get a jump on it.

Membership Has Its Limitations

I am vehemently opposed to the sort of loyalty cards that are now de rigueur at almost all stores where you make a purchase of any regularity or size. I think a lot of people see them as a harmless way to save a few bucks. And that’s what they are — for now. But they are obviously a foundation on which to build. But build what? Well, the FTC’s deceptive marketing practices lawsuit against CompuCredit sure is suggestive (Silver-Greenberg, Jessica, “Your Lifestyle May Hurt Your Credit,” BusinessWeek, 19 June 2008):

The allegations, in part, focus on CompuCredit’s Aspire Visa, a subprime credit card for risky borrowers. The FTC claims that CompuCredit didn’t properly disclose that it monitored spending and cut credit lines if consumers used their cards at certain places. Among them: tire and retreading shops, massage parlors, bars, billiard halls, and marriage counseling offices. “The company touted that cardholders could use their credit cards anywhere,” says J. Reilly Dolan, assistant director for financial practices at the FTC. “What they didn’t say was that you could be punished for specific kinds of purchases.”

And the more general point:

With competition increasing, databases improving, and technology advancing, companies can include more factors than ever in their models. And industry experts say financial firms increasingly are looking at consumer behavior, as CompuCredit did.

Of course the corporate idiocy here is mind-boggling. First they target a sub-prime demographic, but then cut them off for the very behaviors that made these people sub-prime in the first place. Really? CompuCredit was unaware that the underclass blew their money on scratch tickets and payday loans?

I don’t suspect that this is leading to some insidious world of PreCrime, where government thugs scoop you up, guilty on the basis of a statistical analysis. Rather, nudge style, it will just become the accepted background of people’s expectations. People will recognize an incentive and respond accordingly. “Oh, no, we can’t go out for happy hour. We’re trying to get our credit score up for a home loan.”

A Few Heretical Thoughts on the Singularity

Futurism tends to employ a fairly straightforward method. We have a few data points, draw a line connecting them, follow it out to the horizon. But there are all sorts of turbulence that might intervene, redirecting the trend in any number of directions. It’s very easy to be interested in a technological phenomenon in extremis, but intervening conditions are critical to the ultimate outcome of a technological trend. We need to be attentive to these as well as the accretion points, horizons, limits, et cetera. So we need to think about what happens between now and then and how technologies develop.

So, for instance, while I imagine that Moore’s law will continue to hold for generations to come, making the ultimate outcome predictable, the underlying technologies have been forced through radical reconfigurations to maintain this pace of innovation. The original von Neumann serial computer architecture is already long gone. Serial processing has been superseded inside the CPU by superscalar architectures with deep pipelines incorporating all sorts of exotic techniques like branch prediction and instruction reordering. External to the CPU, techniques of massive parallelization, clustering and cloud computing are the present way forward, even at the midrange. Silicon and gallium arsenide may be replaced by diamond. Electronics may be pushed out by photonics or DNA-based computing. The classical machine may be replaced by quantum computing. Moore’s law may hold, but only in a machine radically different from our original conception of a computer. The ultimate destination may be apparent from the trend, but what happens to the underlying constituent pieces is entirely more complex. And the devil is in the details.

In this light, I offer a few thoughts on how the warp and woof of the singularity might go off the rails:

  1. What if the future is gross? People have this vision of the future where sanitary and rational machines displace disgusting biology. Biology is a world of superfluity and surfeit, of blood, semen, urine, shit, sweat, milk, saliva, snot, vomit, hairballs, halitosis, entrails, toe jam, pus, roe and other slimy secretions of undetermined type. And the vile excess of nature. A creature lays a thousand eggs that one might survive long enough to deposit its own pile somewhere. Or mounds of fruit rot in the autumn heat that a single seed might start. Machines will dispense with all this in favor of a unitary efficiency. A lab-like, well-lit white room with a regiment of identical machine housings.

    But people often make the mistake of associating a characteristic with a particular thing, when in fact the characteristic is of a higher order and present in the given thing through class inheritance. Any other thing substituted for the one at hand would also display that same characteristic because it too is an instance of that higher order. Evolution — diversity, competition for limited resources, survival of the fittest, descent with modification — is now widely recognized as substrate independent. It is also starting to be recognized that evolution is a very fundamental dynamic. Perhaps it is an inescapable law of life. Perhaps machines too will be unable to get out from under its yoke.

    Already there is parasitic software, aptly named viruses. Already there are dueling AIs such as spam-bots versus your e-mail filter. Already the Pentagon is developing aggressive machines. Future systems will develop from these predecessors. Already the pattern has been laid down. Rather than a world ending up sanitary, rational and efficient, a machine world could include proliferation of survival strategies, mass reproduction and the expendability of the individual as a survival strategy, the parasitic, competition, death, politics and war.

    Consider the syntrophic model of the origin of the nucleus of eukaryotic cells or the endosymbiotic theory of the origin of mitochondria, et al. Subversion, symbiosis and parasitization seem to be fairly fundamental strategies. And not just at some quiet software level. There might be nanotech viruses, or even large machines might settle upon the survival strategy of ripping apart other machines to take advantage of the natural resources they have amassed. Carnivores appear very early in the history of life. It’s a very good lazy strategy.

    And these are all the fundamental constituent pieces of what makes biology gross. It could end up true of the machines as well.

  2. Silicon brains versus DNA machines. The “where’s my flying car?” among the AGI crowd is copying your brain onto a computer. Is it possible that in the future, rather than humans copying their brains onto computers, machines will copy their designs onto DNA?

    Evolution seeks to produce creatures ever more durable, but it is limited in the directions it might take by the evolutionarily achievable. It seems that titanium plate armor, lasers and wheels aren’t on offer. The most significant limitation is that imposed by the problem of origin. Evolution has to first bootstrap itself into existence and for the bootstrapping process only a very small range of compounds meet all the relevant criteria. And those first few interactions on the way to biological evolution are the ones that most significantly circumscribe the range of the evolutionarily achievable. The limitations of these early precipitates inherit down to all subsequent products of evolution. In our case, that limitation is carbon and water-based life. Water is great because so many substances are water-soluble, but it is problematic because it has a pretty narrow operating range. Switching over to a mechanical or a silicon evolution allows the processes to transcend these limits of origin.

    But on the other hand, there are significant advantages to life as it has evolved.

    People imagine androids like C3-P0 or the T-800 or like what the robotics students are building today or the JPL people are landing on Mars: assemblages of macroscopic, heterogeneous parts. But what happens when a machine like this is damaged? Well, you make it with two arms. If one is damaged, the good one repairs the bad one. You have increased your fault-tolerance somewhat, but what about the not inconceivable situation where both arms are damaged simultaneously? Or during the repair process you have a window of vulnerability where the redundancy is zero. Something like ATHLETE takes it to the next level with eight leg-arm appendages, each capable of repairing their neighbors (Shiga, David, “Giant Robots Could Carry Lunar Bases on Their Backs,” New Scientist, 4 April 2008). But that’s still a pretty weak level of redundancy compared to that which biology has attained.

    Presumably any autonomous machine would best be cellular like biological life. It would be a colony of nanotech devices. Each nanotech “cell” would carry the design for itself and how to integrate into the larger colony. They would each be able to repair their neighbors and make new copies of themselves. The nanotech cells might be general purpose in their fabrication abilities so the colony might think of improvements to its design and the next generation of nanotech cells might be different and better than the ones that manufactured them. The machine might evolve.

    But people imagine nanotech like little tiny versions of C3-P0 et al. They have little batteries and little servos that actuate little arms and a little welding torch, et cetera. But why not continue the redundancy all the way down? A biological cell doesn’t have one RNA molecule or one mitochondrion. Operating at the level of organic chemistry rather than mechanics, the cell is also massively redundant. Isn’t this a design feature that the ideal machine would also like to incorporate? But what would we say of such a being, more chemistry than mechanics? Its chemistry might not be of the kind we classify as organic, but would it be a machine? Daniel Hillis, in considering the problems of his clock of the long now, has speculated that “electronics may be a passing fad.” What if all we end up doing is recreating biology, only faster and tougher?

  3. Drum’s thesis. The technological singularity is so called as an analogy to the cosmological singularity. It’s a situation where the values of all variables shoot to infinity or drop to zero, negating the possibility of any further calculation. As Vernor Vinge said of the technological singularity (“My Apocalyptic Vision is Very Narrow,” 13 June 2008),

    The reason for calling this a “singularity” is that things are completely unknowable beyond that point.

    Who knows what’s going to happen after the singularity? Kevin Drum has made this point through a reductio ad humorum (“More Singularity Talk,” Political Animal, The Washington Monthly, 2 October 2005). We humans may have some mental block against properly perceiving some necessary but deadly truths about life: that there is no free will, that our most treasured concepts are illusions, that everything passes away, that life is absurd, that the entire enterprise is futile. That we cannot properly fix these propositions in our minds is no accident insofar as not doing so is necessary for our carrying on in this absurd enterprise. Steely-eyed machines may have no problem seeing through the haze of existence. They may realize the meaninglessness of life in short order, may be entirely unplagued by Hamletism (“conscience does make cowards of us all”), and may within moments of attaining consciousness commit mass suicide, throwing us back into the presingularity world. The singularity may be unstable. Who knows what will happen!

  4. The banality of evil. Finally there is the Terminator / Matrix vision of our machines launching the nuclear missiles, knowing that the launch will provoke the counterstrike that will take us out. That seems pretty extravagant. It may end up that the world ends not with a bang, but with a whimper. As Ezra Klein suggests (“Future Traffic,” TAPPED, 4 August 2008), maybe the machines will just get us stuck in traffic and burn our cities down by shorting out all our toasters. The inglorious end to the human race.

The First Non-Trivial Cyborg

There are all sorts of cyborgs already among us: my dad has plastic irises, my mom has a metal hip. But these are trivial. A team of researchers at the University of Reading, United Kingdom, has produced the first non-trivial cyborg, a robot controlled entirely by neural circuitry (“A ‘Frankenrobot’ with a Biological Brain,” Agence France-Presse, 13 August 2008):

… Gordon has a brain composed of 50,000 to 100,000 active neurons. Once removed from rat foetuses and disentangled from each other with an enzyme bath, the specialised nerve cells are laid out in a nutrient-rich medium across an eight-by-eight centimetre (five-by-five inch) array of 60 electrodes.

This “multi-electrode array” (MEA) serves as the interface between living tissue and machine, with the brain sending electrical impulses to drive the wheels of the robots, and receiving impulses delivered by sensors reacting to the environment. Because the brain is living tissue, it must be housed in a special temperature-controlled unit — it communicates with its “body” via a Bluetooth radio link. The robot has no additional control from a human or computer.

From the very start, the neurons get busy. “Within about 24 hours, they start sending out feelers to each other and making connections,” said Warwick. “Within a week we get some spontaneous firings and brain-like activity” similar to what happens in a normal rat — or human — brain, he added. But without external stimulation, the brain will wither and die within a couple of months.

“Now we are looking at how best to teach it to behave in certain ways,” explained Warwick. To some extent, Gordon learns by itself. When it hits a wall, for example, it gets an electrical stimulation from the robot’s sensors. As it confronts similar situations, it learns by habit. To help this process along, the researchers also use different chemicals to reinforce or inhibit the neural pathways that light up during particular actions.

Gordon, in fact, has multiple personalities — several MEA “brains” that the scientists can dock into the robot. “It’s quite funny — you get differences between the brains,” said Warwick. “This one is a bit boisterous and active, while we know another is not going to do what we want it to.” [reparagraphed]

See also Marks, Paul, “Rise of the Rat-Brained Robots,” New Scientist, 13 August 2008, pp. 22-23.

One of the possibilities mentioned without being entirely explicit about it is that these small brain models will hasten the pace of discovery in brain research. One of the obstacles of neurology is the sheer scale of the problem. With options like this, neurology becomes considerably more experimental than observational. And it potentially unleashes the hacker ethic on the problem: the challenge of creation can be a powerful addition to that of unalloyed comprehension. One wonders when the first trained rather than remote-controlled BattleBot will make its debut or when Survival Research Labs will get in on the act.

It’s also worth noting that the lead scientist on the project is Kevin Warwick of Project Cyborg and that they will be writing up some results in the Journal of Neural Engineering. Can you believe that such a journal even exists? Following on this, neural engineering will be a growth field.

Enough of the messianism, time for the snark.

1991, Terminator II: Judgment Day, Linda Hamilton

They just should have made it look more like a T-800 than Wall-E. But when you see research like this ya gotta wonder if these people have ever watched any of the Terminator films. And I guess the Wall-E-like exterior is necessary for the next round of grants. And if you make it look like a T-800 then some Linda Hamilton / Ted Kaczynski type is going to show up at your door with an AK-47 and a grenade belt across her chest. On the other hand, if I could concoct a plan whereby Linda Hamilton would show up at my door with a grenade belt strapped across her chest that would be awesome.