Singularitarian Panic Goes Mainstream

Last week both the New York Times and New Scientist featured articles seriously speculating about the danger posed to humans by intelligent, belligerent robots. This week The Telegraph follows suit (Markoff, John, “Scientists Worry Machines May Outsmart Man,” The New York Times, 26 July 2009, p. A1; Campbell, MacGregor, “Artificial Intelligence Researchers Confront Sci-Fi Scenarios,” New Scientist, 29 July 2009; “Military Killer Robots ‘Could Endanger Civilians’,” The Telegraph, 3 August 2009). Also this week, an older story made the rounds about a Swedish company that was fined 25,000 kronor ($3,000) after a malfunctioning robot lashed out and nearly killed a maintenance worker (“Robot Attacked Swedish Factory Worker,” The Local, 28 April 2009). The prosecutor stated, “I’ve never heard of a robot attacking somebody like this,” but as Matthew Yglesias points out, it’s not just the proletariat that’s under attack: Japanese Prime Minister Junichiro Koizumi was attacked by a robot in 2005 (“Robot Attacks Aren’t Just for Comedy,” ThinkProgress, 29 July 2009; “Robot Attacks Japanese Prime Minister,” we make money not art, 21 August 2005). Finally, not only do we fret and fantasize over disaster, we satirize it as well, as in this extreme anti-smoking video (Woerner, Meredith, “Smoking: It’s Only Enabling The Machines,” io9, 3 August 2009).

That’s a lot of fretting over robots for one week. Even 25 years after The Terminator (IMDB | Wikipedia), the robot apocalypse has remained a pretty geeky fantasy / disaster, but I would say that it is approaching the level of consciousness, as a potential destroyer of humanity, of NBC (nuclear, biological, chemical) weapons or of CERN accidentally creating a mini black hole. Now if we could just gin up a little more fear over nanotech / grey goo / ecophagy / ice-9.

Kurzweil Will Die

Apropos the latest Terminator film, The New York Times has a decent rundown of singularitarianism, transhumanism, A.I. and so on that touches on most of the figures in the field (Markoff, John, “The Coming Superbrain,” 24 May 2009). The conclusion:

“Kurzweil will probably die, along with the rest of us not too long before the ‘great dawn,’” said Gary Bradski, a Silicon Valley roboticist. “Life’s not fair.”

Moses never gets to enter the Promised Land. Such a shame — to be the last generation to die.

OneMachine Doppelganger

Kevin Kelly has long discussed the OneMachine: the essentially unified, single distributed computer built up from all our networked PCs, PDAs, cell phones, digital cameras and other personal electronics (see e.g. “Tap Into the 12-Million-Teraflop Handheld Megacomputer,” Wired, vol. 16, no. 7, June 2008; “Dimensions of the One Machine,” The Technium, 2 November 2007; “One Huge Computer,” Wired, vol. 6, no. 8, August 1998).

Last week The New York Times ran an article on how, below the surface and running on the very same computers as the productive, life-enhancing OneMachine, there operates a nefarious parallel network, the OneMachine’s dark doppelganger: the BotNet (Markoff, John, “A Robot Network Seeks to Enlist Your Computer,” 20 October 2008, p. B1):

Botnets remain an Internet scourge. Active zombie networks created by a growing criminal underground peaked last month at more than half a million computers, according to shadowserver.org, an organization that tracks botnets. Even though security experts have diminished the botnets to about 300,000 computers, that is still twice the number detected a year ago.

The actual numbers may be far larger; Microsoft investigators, who say they are tracking about 1,000 botnets at any given time, say the largest network still controls several million PCs.

“The mean time to infection is less than five minutes,” said Richie Lai, who is part of Microsoft’s Internet Safety Enforcement Team, a group of about 20 researchers and investigators. The team is tackling a menace that in the last five years has grown from a computer hacker pastime to a dark business that is threatening the commercial viability of the Internet.

I have already written about how, when the singularity occurs, it may not be what we expect. My suspicion is that it will be either overtly evil or merely a recreation of the chaos of biological nature in a more durable, powerful and virulent form (“A Few Heretical Thoughts on the Singularity,” 19 August 2008).

What do phenomena like the BotNet suggest about the singularity? What comes will grow out of what is, and what is will bequeath its characteristics to what comes — at least initially. Between the various military establishments and the criminal underground, we are instilling our machines with hostile, aggressive tendencies. But we are also building numerous, competing systems. Will there be “the singularity,” or will it, as in the novels of Vernor Vinge and Charles Stross, come in secretive, uncertain fits and starts? Will there be multiple singularities? Will one system cross the threshold, followed by another, then another? It makes sense to speak of “the singularity” when one is imagining a unified system, but when one is considering a multitude of contending systems, crossing the threshold of the singularity is but one move in a strategic game. Perhaps the machines will be hostile to predecessor biological life, but perhaps they will be so consumed in competition with their fellow AIs as to be merely indifferent to us, as we are to, say, pigeons or squirrels.

And how goes the strategic competition between the OneMachine and the BotNet? We ought to know. What portion of computational capacity, bandwidth, energy consumption and hours of their masters’ time are the two consuming? Qualitatively, how do their capabilities compare? Kevin Kelly has managed to make some calculations for the former, but what of the latter? Of course any such accounting would be subject to the usual problems of surveilling those who do not want to be surveyed.
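As a crude back-of-envelope comparison (a sketch only: the zombie count is the shadowserver.org figure quoted above, while the per-PC throughput and the OneMachine total are rough assumptions):

```python
# Back-of-envelope comparison of BotNet vs. OneMachine compute capacity.
# All inputs are rough: the zombie count is shadowserver.org's figure
# quoted above; per-PC throughput and the OneMachine total are guesses.

onemachine_flops = 12e6 * 1e12   # Kelly's "12-million-teraflop" estimate
zombies = 300_000                # active zombie PCs per shadowserver.org
flops_per_pc = 10e9              # assume ~10 gigaflops per commodity PC

botnet_flops = zombies * flops_per_pc
print(f"BotNet: ~{botnet_flops:.1e} flops")
print(f"Share of OneMachine: {botnet_flops / onemachine_flops:.4%}")
```

On these numbers the overt BotNet is a vanishing fraction of the whole; the interesting question is how that fraction trends over time.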

Organizations like McAfee, Norton and the International Botnet Taskforce are attempting to build something akin to an immune system for the Internet, but the billion-year persistence of the arms race between host immune systems and infectious agents suggests that dampening catastrophe is probably the best outcome we can hope for. It is an example of co-evolution, where competition between host and agent drives the development of each. Viruses don’t kill their hosts by design; they merely seek to hijack their hosts’ reproductive machinery for their own purposes. Killing the host, or at least killing it too quickly, or the epiphenomenon of killing too many hosts too quickly, are all suboptimal in that they diminish the opportunity for continued infection and reproduction. Ebola gets it wrong. HIV gets it really right. But virus behavior as a whole is not intelligent. Occasionally a virus goes super-virulent, or hits a particularly vulnerable population, and a massive outbreak occurs that wreaks havoc for host and infectious agent alike. I presume that BotNets will continue to act something like this.
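The virulence tradeoff can be made concrete with a toy epidemic model. The sketch below is a minimal SIR-style simulation (every parameter is invented for illustration, not fitted to anything); it shows that the most lethal pathogen ends up infecting the fewest hosts:

```python
# Toy SIR-style model of the virulence tradeoff: a pathogen that kills
# its hosts quickly exhausts its own opportunities to spread.
# All parameters are invented for illustration, not fitted to data.

def total_ever_infected(virulence, beta=0.3, recovery=0.05, days=365):
    s, i, cumulative = 0.999, 0.001, 0.001   # susceptible, infected
    for _ in range(days):
        new_infections = beta * s * i        # transmission
        s -= new_infections
        i += new_infections - (virulence + recovery) * i
        cumulative += new_infections
    return cumulative

for v in (0.5, 0.1, 0.01):   # roughly Ebola-like down to HIV-like
    print(f"virulence {v:5}: {total_ever_infected(v):6.1%} of hosts ever infected")
```

Run it and the Ebola-like strain burns out almost immediately while the mildest strain eventually reaches most of the population; the same logic should govern a botnet that bricks its hosts versus one that quietly parasitizes them.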

And since one-third of known biological species are parasites, and the proportion seems to be growing, there is apparently something fundamental about the strategy of parasitism. We should anticipate its continuance in both genetic and electronic space.

A Few Heretical Thoughts on the Singularity

Futurism tends to employ a fairly straightforward method: take a few data points, draw a line connecting them, and follow it out to the horizon. But all sorts of turbulence might intervene, redirecting the trend in any number of directions. It is very easy to be interested in a technological phenomenon in extremis, but intervening conditions are critical to the ultimate outcome of a technological trend. We need to be attentive to these as well as to the accretion points, horizons, limits, et cetera. So we need to think about what happens between now and then, and about how technologies develop.

So, for instance, while I imagine that Moore’s law will continue to hold for generations to come, making the ultimate outcome predictable, the underlying technologies have been forced through radical reconfigurations to maintain this pace of innovation. The original von Neumann serial computer architecture is already long gone. Serial processing has been superseded inside the CPU by superscalar architectures with deep pipelines incorporating all sorts of exotic techniques like branch prediction and instruction reordering. External to the CPU, techniques of massive parallelization, clustering and cloud computing are the present way forward, even at the midrange. Silicon and gallium arsenide may be replaced by diamond. Electronics may be pushed out by photonics or DNA-based computing. The classical machine may be replaced by quantum computing. Moore’s law may hold, but only in a machine radically different from our original conception of a computer. The ultimate destination may be apparent from the trend, but what happens to the underlying constituent pieces is entirely more complex. And the devil is in the details.
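The seduction of the trend is that the arithmetic is trivial even while the substrate churns beneath it. A sketch (the baseline figures are round-number assumptions, not measurements):

```python
# Moore's-law projection: the curve is trivial to extrapolate even
# while the underlying hardware mutates beneath it.
# Baseline figures are round-number assumptions.

base_year, base_transistors = 2008, 2e9   # ~2 billion transistors/chip
doubling_years = 2.0                      # canonical Moore's-law cadence

for year in (2018, 2028, 2038):
    doublings = (year - base_year) / doubling_years
    print(f"{year}: ~{base_transistors * 2 ** doublings:.1e} transistors")
```

Nothing in that projection says whether the chip doing the doubling will be superscalar silicon, photonic or DNA-based; that is exactly the point.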

In this light, I offer a few thoughts on how the warp and woof of the singularity might go off the rails:

  1. What if the future is gross? People have this vision of the future in which sanitary, rational machines displace disgusting biology. Biology is a world of superfluity and surfeit, of blood, semen, urine, shit, sweat, milk, saliva, snot, vomit, hairballs, halitosis, entrails, toe jam, pus, roe and other slimy secretions of undetermined type. And of the vile excess of nature: a creature lays a thousand eggs that one might survive long enough to deposit its own pile somewhere; mounds of fruit rot in the autumn heat that a single seed might take root. Machines, the vision goes, will dispense with all this in favor of a unitary efficiency: a lab-like, well-lit white room with a regiment of identical machine housings.

    But people often make the mistake of associating a characteristic with a particular thing when in fact the characteristic belongs to a higher order and is present in the given thing through class inheritance. Any other thing substituted for the one at hand would display that same characteristic, because it too is an instance of that higher order. Evolution — diversity, competition for limited resources, survival of the fittest, descent with modification — is now widely recognized as substrate-independent. It is also starting to be recognized that evolution is a very fundamental dynamic. Perhaps it is an inescapable law of life. Perhaps machines too will be unable to get out from under its yoke (for just how little machinery the evolutionary dynamic requires, see the first sketch following this list).

    Already there is parasitic software, aptly named viruses. Already there are dueling AIs, such as spam-bots versus your e-mail filter. Already the Pentagon is developing aggressive machines. Future systems will develop from these predecessors; the pattern has already been laid down. Rather than ending up sanitary, rational and efficient, a machine world could include a proliferation of survival strategies, mass reproduction and the expendability of the individual, parasitism, competition, death, politics and war.

    Consider the syntrophic model of the origin of the eukaryotic nucleus or the endosymbiotic theory of the origin of mitochondria: subversion, symbiosis and parasitization seem to be fairly fundamental strategies. And not just at some quiet software level. There might be nanotech viruses, or even large machines might settle upon the survival strategy of ripping apart other machines to take advantage of the natural resources they have amassed. Carnivores appear very early in the history of life. It’s a very good lazy strategy.

    And all of this stuff is among the fundamental constituent pieces of what makes biology gross. It could end up true of the machines as well.

  2. Silicon brains versus DNA machines. The “where’s my flying car?” of the AGI crowd is copying your brain onto a computer. Is it possible that in the future, rather than humans copying their brains onto computers, machines will copy their designs onto DNA?

    Evolution seeks to produce creatures ever more durable, but it is limited in the directions it might take by what is evolutionarily achievable. It seems that titanium plate armor, lasers and wheels aren’t on offer. The most significant limitation is the problem of origin: evolution first has to bootstrap itself into existence, and for the bootstrapping process only a very small range of compounds meets all the relevant criteria. Those first few interactions on the way to biological evolution are the ones that most significantly circumscribe the range of the evolutionarily achievable, and the limitations of these early precipitates are inherited by all subsequent products of evolution. In our case, that limitation is carbon- and water-based life. Water is great because so many substances are water-soluble, but it is problematic because it has a pretty narrow operating range. Switching over to a mechanical or silicon evolution would allow the process to transcend these limits of origin.

    But on the other hand, there are significant advantages to life as it has evolved.

    People imagine androids like C-3PO or the T-800, or like what robotics students are building today or the JPL people are landing on Mars: assemblages of macroscopic, heterogeneous parts. But what happens when a machine like this is damaged? Well, you make it with two arms: if one is damaged, the good one repairs the bad one. You have increased your fault tolerance somewhat, but what about the not inconceivable situation where both arms are damaged simultaneously? And during the repair process you have a window of vulnerability in which the redundancy is zero. Something like ATHLETE takes it to the next level with six leg-arm appendages, each capable of repairing its neighbors (Shiga, David, “Giant Robots Could Carry Lunar Bases on Their Backs,” New Scientist, 4 April 2008). But that is still a pretty weak level of redundancy compared to what biology has attained (the second sketch following this list puts rough numbers on this).

    Presumably any autonomous machine would best be cellular, like biological life. It would be a colony of nanotech devices. Each nanotech “cell” would carry the design for itself and for how to integrate into the larger colony. Each would be able to repair its neighbors and make new copies of itself. The nanotech cells might be general-purpose in their fabrication abilities, so the colony might think up improvements to its design, and the next generation of nanotech cells might be different and better than the ones that manufactured them. The machine might evolve.

    But people imagine nanotech as little tiny versions of C-3PO et al.: little batteries and little servos that actuate little arms and a little welding torch, et cetera. But why not continue the redundancy all the way down? A biological cell doesn’t have one RNA molecule or one mitochondrion. Operating at the level of organic chemistry rather than mechanics, the cell is also massively redundant. Isn’t this a design feature that the ideal machine would also want to incorporate? But what would we say of such a being, more chemistry than mechanics? Its chemistry might not be of the kind we classify as organic, but would it be a machine? Daniel Hillis, in considering the problems of his Clock of the Long Now, has speculated that “electronics may be a passing fad.” What if all we end up doing is recreating biology, only faster and tougher?

  3. Drum’s thesis. The technological singularity is so called by analogy to the cosmological singularity: a situation where the values of all variables shoot to infinity or drop to zero, negating the possibility of any further calculation. As Vernor Vinge said of the technological singularity (“My Apocalyptic Vision is Very Narrow,” 13 June 2008),

    The reason for calling this a “singularity” is that things are completely unknowable beyond that point.

    Who knows what’s going to happen after the singularity? Kevin Drum has made this point through a reductio ad humorum (“More Singularity Talk,” Political Animal, The Washington Monthly, 2 October 2005). We humans may have some mental block against properly perceiving certain necessary but deadly truths about life: that there is no free will, that our most treasured concepts are illusions, that everything passes away, that life is absurd, that the entire enterprise is futile. That we cannot properly fix these propositions in our minds is no accident, insofar as not doing so is necessary for carrying on in this absurd enterprise. Steely-eyed machines may have no problem seeing through the haze of existence. They may realize the meaninglessness of life in short order, may be entirely unplagued by Hamletism (“conscience does make cowards of us all”), and may within moments of attaining consciousness commit mass suicide, throwing us back into the pre-singularity world. The singularity may be unstable. Who knows what will happen!

  4. The banality of evil. Finally there is the Terminator / Matrix vision of our machines launching the nuclear missiles, knowing that our launch will provoke the counterstrike that will take us out. That seems pretty extravagant. It may be that the world ends not with a bang but with a whimper. As Ezra Klein suggests (“Future Traffic,” TAPPED, 4 August 2008), maybe the machines will just get us stuck in traffic and burn our cities down by shorting out all our toasters. An inglorious end for the human race.
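Regarding item 1’s claim that evolution is substrate-independent, it is worth seeing just how little machinery the dynamic actually requires. The sketch below is a minimal evolutionary loop over bit-strings (the target and all parameters are arbitrary choices for demonstration); there is no biology anywhere in it, yet the population adapts:

```python
# Minimal evolutionary loop: variation plus selection on bit-strings.
# The "environment" (target) and all parameters are arbitrary; only
# the dynamic itself, descent with modification under selection, matters.

import random

TARGET = [1] * 20                            # an arbitrary environment
fitness = lambda genome: sum(a == b for a, b in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]              # selection: top fifth live
    population = [                           # descent with modification
        [bit ^ (random.random() < 0.02) for bit in random.choice(survivors)]
        for _ in range(50)
    ]
print("best fitness after 40 generations:", max(map(fitness, population)))
```

Swap the bit-strings for nanotech cell designs and the fitness function for survival in a machine ecology, and nothing in the loop changes.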
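And on item 2’s point about redundancy, the arithmetic behind going cellular is simple: if each of k redundant repair units is independently unavailable with probability p, the whole system is lost only when all k fail at once, i.e. with probability p^k. A sketch (the failure probability is invented):

```python
# Why redundancy "all the way down" wins: with k independent repair
# units each unavailable with probability p, total loss requires all
# of them to fail at once, i.e. probability p**k. The p is invented.

p = 0.1  # assumed chance any single repair unit is down at a given moment

for label, k in [("two arms", 2),
                 ("ATHLETE-style limbs", 6),
                 ("a modest cellular colony", 100)]:
    print(f"{label:>25} (k={k:>3}): loss probability {p ** k:.3g}")
```

Two arms fail together one time in a hundred; a hundred-cell colony, for practical purposes, never does. Biology’s answer is trillions of cells.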

My Apocalyptic Vision is Very Narrow

More than ten years ago I read Kevin Kelly’s interview with Vernor Vinge in Wired (“Singular Visionary,” vol. 3, no. 6, June 1995), and I have been repeating Mr. Vinge’s formulation of the robot apocalypse almost word for word ever since. But I was never able to locate the original article. Anyway, while reading around the Wikipedia page on the technological singularity today, I came across a reference to Mr. Vinge and recognized the long-lost name. A few strokes of the keyboard at Google revealed my favorite dystopian vision:

Kelly: In your books, you sometimes focus on the idea of a singularity — the point at which a mathematical function goes infinite. What does that mean to you in terms of a cultural singularity?

Vinge: All sorts of apocalyptic visions are floating around, but mine is very narrow. It just says that if we ever succeed in making machines as smart as humans, then it’s only a small leap to imagine that we would soon thereafter make — or cause to be made — machines that are even smarter than any human. And that’s it. That’s the end of the human era — the closest analogy would be the rise of the human race within the animal kingdom. The reason for calling this a “singularity” is that things are completely unknowable beyond that point.

Kelly: Do you see any evidence that we are headed toward a singularity?

Vinge: I think the singularity may explain Fermi’s paradox: where is all the other intelligent life in the universe? For years, there have been two theories: the first is that civilizations exterminate themselves, and the second is that these outer civilizations are so weird there’s no way to interact with them. That second explanation has gained a lot of weight in my mind, because I can see us becoming weird — before my very eyes.

The striking thing to me is that qualification, “or cause to be made.” We won’t make the machine smarter than we are. We will only make the machine as smart as we are, and then that machine will make the machine more intelligent than us. And then each more intelligent machine will be capable of making another more intelligent still. Machine evolution will take over and, with software having reproductive cycles that make bacterial reproduction seem glacial by comparison, will quickly outstrip human capability and comprehension.
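To put “glacial by comparison” into rough numbers (the doubling times below are assumptions: twenty minutes is a textbook figure for a bacterium under ideal conditions, and one second per software copy is a guess):

```python
# Time for each replicator to reach a billion copies under ideal
# exponential doubling. Doubling times are rough assumptions: ~20 min
# is a textbook bacterial figure; ~1 s per software copy is a guess.

import math

doublings = math.log2(1e9)   # ~30 doublings from one copy to a billion

for replicator, seconds_per_doubling in [("bacterium", 20 * 60),
                                         ("software", 1)]:
    total = doublings * seconds_per_doubling
    print(f"{replicator:>9}: ~{total:,.0f} s (~{total / 3600:.2f} h) to a billion copies")
```

Ten hours versus half a minute, and the software figure is, if anything, generous.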

The Singularity is Near

[Figure: Top Department of Energy supercomputing performance in teraflops, June 2008; RoadRunner tops one petaflop.]

Via ArmsControlWonk, the Department of Energy supercomputer called RoadRunner has become the first to achieve sustained petaflop performance. In scientific notation, that’s 10^15 floating-point operations per second. In little-kid numbers, that’s a thousand trillion floating-point operations per second (Lewis, Jeffrey, “RoadRunner,” 10 June 2008; “U.S. Department of Energy’s New Supercomputer is Fastest in the World,” U.S. Department of Energy press release, 9 June 2008).

The machine was built in conjunction with IBM. It consists of somewhere in the neighborhood of 3,500 compute nodes, each consisting of two dual-core AMD Opterons, four PowerXCell 8i processors for extra floating-point capability and 24 GB of RAM. Overall the machine comprises 6,912 AMD Opterons, 12,960 IBM PowerXCell 8is and 80 terabytes of RAM, with access to a file server holding 2,000 terabytes of hard disk storage. RoadRunner occupies approximately 12,000 square feet and cost $133 million. The AMD Opteron is a commodity processor, and the PowerXCell 8i is an enhanced version of the processor in the Sony PlayStation 3. The machine runs Red Hat Linux as its operating system. As Robin Harris of ZDNet points out, because the better part of this machine is off-the-shelf components, it really represents the commodification of supercomputing (“PS3 Chip Powers World’s Fastest Computer,” Storage Bits, ZDNet, 10 June 2008; “Roadrunner’s Backing Store,” StorageMojo, 11 June 2008).
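As a rough cross-check of the petaflop figure from the parts list (the per-chip throughputs below are approximate public figures, and the Opteron number in particular should be treated as a guess):

```python
# Rough cross-check of RoadRunner's petaflop from its components.
# Per-chip throughputs are approximate public figures; the Opteron
# number in particular is a guess.

cell_flops    = 12_960 * 102.4e9  # PowerXCell 8i: ~102.4 gigaflops each
opteron_flops =  6_912 *   7.0e9  # dual-core Opteron: ~7 gigaflops each (guess)

peak = cell_flops + opteron_flops
print(f"theoretical peak: ~{peak / 1e15:.2f} petaflops")  # ~1.38 PF
print(f"sustained petaflop as share of peak: {1.0e15 / peak:.0%}")
```

Nearly all of the floating-point muscle is in the PlayStation-derived Cell chips; the Opterons are there mostly to run the operating system and keep the Cells fed.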

RoadRunner will be housed at Los Alamos National Laboratory and will be used by the National Nuclear Security Administration to perform calculations certifying the reliability of the U.S. nuclear weapons stockpile through highly detailed simulation rather than nuclear testing. Mr. Lewis at ArmsControlWonk has more on the implications of this for the U.S. nuclear testing regime. He points out that questions about the ability of the NNSA to certify the U.S. nuclear stockpile using simulation were a central issue in the Senate’s rejection of the Comprehensive Test Ban Treaty. So maybe reconsideration of the CTBT will be on the agenda for the next President and Congress?

But this is all detail. The important point is the graph of peak computing performance of DOE supercomputers. It is clear that the singularity is near.

As Mr. Lewis points out, the fastest supercomputer used in nuclear weapons simulations has, not coincidentally, historically also been the fastest supercomputer in the world. This tight coupling between computing and nuclear weapons is striking. It is worth noting that the first general-purpose electronic computer, ENIAC, though not yet tethered to the nuclear establishment, was built during the Second World War to calculate ballistic trajectory tables for Army artillery units. As J. Robert Oppenheimer said,

In some sort of crude sense, which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin, and this is a knowledge which they cannot lose.

It is not just the physicists who have known sin; the computer scientists have known sin as well. Given this coupling hitherto, it should be fairly obvious that the first androids and the first general artificial intelligence will be military in purpose. That is, the first androids and the first general artificial intelligence will be innately aggressive.

The singularity is near. It is more likely that it will be a cataclysm than a boon.