The Democrats Reborn?

I’m trying to keep up my jaundiced eye here, but I feel like tonight I have seen a Democratic party unlike any I have seen before in my lifetime. Walter Mondale was perhaps the last of the old guard still to possess some fight; after him, not Dukakis, nor Clinton, nor Al Gore, nor John Kerry. They all seemed too timid, too poll-tested, too cowed. First last night in Joe Biden’s speech, and then again tonight in Barack Obama’s, I heard a Democratic party unbowed, spirited, confident.

Senator Biden’s introduction by his son and his own discussion of his family were surprisingly emotional, and seemingly so for everyone involved. His speech was the version of values that Democrats should be putting forward; it was tough on foreign policy and, unlike Democratic speeches of the last eight years, effortlessly sincere and uncontrived. As Matthew Yglesias pointed out (“It’s Biden,” ThinkProgress, 23 August 2008), the selection of Biden for VP “signals a desire to take the argument to John McCain on national security policy” and deliver to voters “a full-spectrum debate about the issues facing the country rather than a positional battle in which one party talks about the economy and the other talks about national security.” In Joseph Biden I think I first, finally, saw a different, rejuvenated Democratic party.

The same was true of Barack Obama’s speech tonight. His cadence was off in places, but the speech was defiant and pugilistic, and it signaled to me that the Senator has absorbed all the right lessons of the campaign. I think many of the myths that have plagued the Senator, as well as the party at large, for the last few weeks have been definitively left behind after tonight. It showed some of the populism that worked so well for Al Gore in the final weeks of the 2000 election. My favorite part, as with Senator Biden, was when Senator Obama took the foreign policy issue by the horns:

You don’t defeat — you don’t defeat a terrorist network that operates in 80 countries by occupying Iraq. You don’t protect Israel and deter Iran just by talking tough in Washington. You can’t truly stand up for Georgia when you’ve strained our oldest alliances.

If John McCain wants to follow George Bush with more tough talk and bad strategy, that is his choice, but that is not the change that America needs.

If Chris Matthews’s rapturous reaction is any indication, then Senator Obama achieved everything he needed to do. After Chris Matthews, what more can you ask for? Who knows, maybe even Maureen Dowd will write a positive review. I think McCain’s speech a week from now will look pretty wooden in comparison.

My only concern is that, as I think Patrick Buchanan said last night, after the Republicans spend next week ripping into Senator Obama, the Democrats may regret going so easy on Senator McCain. Alternatively, the Democrats may finally have learned that you have to run your negative stuff stealth.

The Omission of Lyndon Johnson from the Democratic Pantheon

If there is a unifying thread to U.S. history it is that of the ongoing process of bringing American practice into line with American principle, of the march of freedom, of the expansion of the franchise. In this story there is one great subplot that stands above all others: that of the experience of the African American: the Middle Passage, slavery, the fatal flaws of the U.S. Constitution, the Civil War, Reconstruction, Jim Crow and the civil rights movement. At the denouement of this story stand two characters, towering over all others: Martin Luther King, Jr. and Lyndon Baines Johnson.

Taylor Branch was right to structure his biography of Martin Luther King, Jr. around Exodus. King led African Americans out of the desert, but was not allowed to enter the Promised Land himself. Lyndon Johnson, on the other hand, is an exile: a man from the heart of the franchise who is today persona non grata.

Today, Lyndon Johnson would have been a hundred years old (27 August 1908 – 22 January 1973) and George Packer comments on the strange exclusion of this giant of the left from the Democratic pantheon (“L.B.J.’s Moment,” Interesting Times, The New Yorker, 24 August 2008):

Whenever Democrats gather to celebrate the party, they invoke the names of their luminaries past. The list used to begin with Jefferson and Jackson. More recently, it’s been shortened to F.D.R., Truman, and J.F.K. The one Democrat with a legitimate claim to greatness who can’t be named is Lyndon Johnson. The other day I asked Robert Caro, Johnson’s Pulitzer-Prize-winning biographer and hardly a hagiographer of the man, whether he thought Johnson should be mentioned in Denver. “It would be only just to Johnson,” Caro said. “If the Democratic Party was going to honestly acknowledge how it came to the point in its history that it was about to nominate a black American for President, no speech would not mention Lyndon Johnson.” Caro is now at work on the fourth volume of his epic biography, about Johnson’s White House years. “I am writing right now about how he won for black Americans the right to vote. I am turning from what happened forty-three years ago to what I am reading in my daily newspaper — and the thrill that goes up and down my spine when I realize the historical significance of this moment is only equaled by my anger that they are not giving Johnson credit for it.”

In the week of Johnson’s one hundredth birthday, I would like to believe that there is some Democrat in Denver who will do him the justice of speaking his name.

The Architecture of Nightmares

Circa 1999, Lebbeus Woods, detail of Terrain 1-2

In “Imagination Unmoored” (8 August 2008) I suggested that in addition to our dreams we might end up living in our nightmares. It struck me as a strange thought when I wrote it, though I didn’t even have to think any particular scenario — the eccentric, the accidental, the illicit — all the way through for its plausibility to be apparent. Our culture already abounds in such visions. It is in this regard that the works of H. P. Lovecraft and Philip K. Dick have been inducted into the Library of America and that players sign up for the Horde in World of Warcraft. Alternative iconography seeks stark contrasts with the mundane, with Goth tending toward horror and punk toward the post-apocalyptic. Pornography has always tarried with the sadistic and the surreal.

Any art of the found will inevitably end up scavenging our calamities as well as our aspirations. Enter Lebbeus Woods, whose architectural design work will be included in Dreamland: Architectural Experiments since the 1970s, an exhibit at the Museum of Modern Art (Ouroussoff, Nicolai, “An Architect Unshackled by Limits of the Real World,” The New York Times, 24 August 2008):

In the early 1990s he published a stunning series of renderings that explored the intersection of architecture and violence. The first of these, the Berlin Free-Zone project, designed soon after the fall of the Berlin Wall, was conceived as an illustration of how periods of social upheaval are also opportunities for creative freedom.

Aggressive machinelike structures — their steel exteriors resembling military debris — are implanted in the abandoned ruins of buildings that flank the wall’s former death zone. Cramped and oddly shaped, the interiors were designed to be difficult to inhabit — a strategy for screening out the typical bourgeois. (“You can’t bring your old habits here,” he warned. “If you want to participate, you will have to reinvent yourself.”)

This vision reached its extreme in a series of renderings he created in 1993 in response to the war in Bosnia. Inspired by sci-fi comics and full of writhing cables, crumbling buildings and flying shards of steel, these drawings seem to mock the old Modernist faith in a utopian future. Their dark, moody atmosphere suggests a world in a constant struggle for survival.

In 1999 he began working on a series of designs whose fragmented planes were intended to reflect the seismic shifts that occur during earthquakes. (“The idea is that it’s not nature that creates catastrophes,” he said. “It’s man. The renderings were intended to reflect a new way of thinking about normal geological occurrences.”)

“I’m not interested in living in a fantasy world,” Mr. Woods told me. “All my work is still meant to evoke real architectural spaces. But what interests me is what the world would be like if we were free of conventional limits. Maybe I can show what could happen if we lived by a different set of rules.”

The article actually laments that the young generation in architecture has been made facile by overuse of computers. Au contraire! That is exactly how we are to experience the architecture of the impractical, built under fanciful physics.

Membership Has Its Limitations

I am vehemently opposed to the sort of loyalty cards that are now de rigueur at almost every store where you make purchases of any regularity or size. I think a lot of people see them as a harmless way to save a few bucks. And that’s what they are — for now. But they are obviously a foundation on which to build. Build what? Well, the FTC’s deceptive-marketing-practices lawsuit against CompuCredit is certainly suggestive (Silver-Greenberg, Jessica, “Your Lifestyle May Hurt Your Credit,” BusinessWeek, 19 June 2008):

The allegations, in part, focus on CompuCredit’s Aspire Visa, a subprime credit card for risky borrowers. The FTC claims that CompuCredit didn’t properly disclose that it monitored spending and cut credit lines if consumers used their cards at certain places. Among them: tire and retreading shops, massage parlors, bars, billiard halls, and marriage counseling offices. “The company touted that cardholders could use their credit cards anywhere,” says J. Reilly Dolan, assistant director for financial practices at the FTC. “What they didn’t say was that you could be punished for specific kinds of purchases.”

And the more general point:

With competition increasing, databases improving, and technology advancing, companies can include more factors than ever in their models. And industry experts say financial firms increasingly are looking at consumer behavior, as CompuCredit did.

Of course the corporate idiocy here is mind-boggling. First they target a sub-prime demographic, but then cut them off for the very behaviors that made these people sub-prime in the first place. Really? CompuCredit was unaware that the underclass blew their money on scratch tickets and payday loans?

I don’t suspect that this is leading to some insidious world of PreCrime, where government thugs scoop you up, guilty on the basis of a statistical analysis. Rather, nudge style, it will just become the accepted background of people’s expectations. People will recognize an incentive and respond accordingly. “Oh, no, we can’t go out for happy hour. We’re trying to get our credit score up for a home loan.”
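
To make the mechanism concrete, here is a toy rendering of the sort of trigger the FTC complaint describes. The flagged merchant categories come from the article; the penalty amount and the rule itself are my own invention, purely for illustration.

```python
# A toy version of the behavioral trigger described in the FTC complaint.
# The flagged merchant categories come from the article; the penalty amount
# and the rule itself are invented for illustration.

FLAGGED_CATEGORIES = {
    "tire retreading shop", "massage parlor", "bar",
    "billiard hall", "marriage counseling office",
}

def adjust_credit_line(credit_line, recent_purchase_categories):
    """Cut the line a notch for every purchase at a flagged merchant type."""
    flags = sum(1 for c in recent_purchase_categories if c in FLAGGED_CATEGORIES)
    return max(0, credit_line - 250 * flags)

print(adjust_credit_line(3000, ["grocery", "bar", "billiard hall"]))  # 2500
```

The unnerving part is how little machinery it takes: a set lookup and a subtraction, sitting on top of data the loyalty card already collects.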

The Anti-Library

A selection of my personal library, 20 August 2008

From Nassim Taleb’s The Black Swan: The Impact of the Highly Improbable:

The writer Umberto Eco belongs to that small class of scholars who are encyclopedic, insightful, and nondull. He is the owner of a large personal library (containing thirty thousand books), and separates visitors into two categories: those who react with “Wow! Signore professore dottore Eco, what a library you have! How many of these books have you read?” and others — a very small minority — who get the point that a private library is not an ego-boosting appendage but a research tool. Read books are far less valuable than unread ones. The library should contain as much of what you do not know as your financial means, mortgage rates, and the currently tight real-estate market allow you to put there. You will accumulate more knowledge and more books as you grow older, and the growing number of unread books on the shelves will look at you menacingly. Indeed, the more you know, the larger the rows of unread books. Let us call this collection of unread books an antilibrary. (p. 1)

Great! On that basis, I’m going to allow myself to buy three more books this week.

Update, 26 August 2008: Two-thirds of the way there: I bought A Thousand Years of Nonlinear History by Manuel De Landa and The Concept of the Political by Carl Schmitt yesterday.

Update 2, 27 August 2008: Book number three purchased. At the suggestion of John, I ordered a copy of singularity-oriented sci-fi novel Accelerando by Charles Stross.

A Few Heretical Thoughts on the Singularity

Futurism tends to employ a fairly straightforward method: take a few data points, draw a line connecting them, and follow it out to the horizon. But all sorts of turbulence might intervene, redirecting the trend in any number of directions. It’s very easy to be interested in a technological phenomenon in extremis, but the intervening conditions are critical to the ultimate outcome of a technological trend. We need to be attentive to these as well as to the accretion points, horizons, limits, et cetera; we need to think about what happens between now and then, and about how technologies actually develop.

So, for instance, while I imagine that Moore’s law will continue to hold for generations to come, making the ultimate outcome predictable, the underlying technologies have been forced through radical reconfigurations to maintain this pace of innovation. The original von Neumann serial computer architecture is already long gone. Serial processing has been superseded inside the CPU by superscalar architectures with deep pipelines incorporating all sorts of exotic techniques like branch prediction and instruction reordering. External to the CPU, massive parallelization, clustering and cloud computing are the present way forward, even at the midrange. Silicon and gallium arsenide may be replaced by diamond. Electronics may be pushed out by photonics or DNA-based computing. The classical machine may be replaced by quantum computing. Moore’s law may hold, but only in a machine radically different from our original conception of a computer. The ultimate destination may be apparent from the trend, but what happens to the underlying constituent pieces is entirely more complex. And the devil is in the details.
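
As a back-of-the-envelope illustration of that straight-line method, here is a minimal sketch of a Moore’s-law extrapolation. The baseline of roughly two billion transistors circa 2008 and the two-year doubling period are round, illustrative assumptions, not measurements.

```python
# A toy illustration of the futurist's method: take a trend (here, Moore's
# law as a doubling of transistor counts roughly every two years), draw the
# line, and follow it out to the horizon. The 2008 baseline of about two
# billion transistors is a round, illustrative figure, not a measurement.

def transistors(year, base_year=2008, base_count=2e9, doubling_period=2.0):
    """Naive Moore's-law extrapolation: the count doubles every doubling_period years."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

if __name__ == "__main__":
    for year in (2008, 2018, 2028, 2038):
        print(f"{year}: ~{transistors(year):.2e} transistors per chip")
```

The point, of course, is that the line tells you nothing about what has to happen to the substrate along the way to keep it straight.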

In this light, I offer a few thoughts on how the warp and woof of the singularity might go off the rails:

  1. What if the future is gross? People have this vision of the future in which sanitary, rational machines displace disgusting biology. Biology is a world of superfluity and surfeit, of blood, semen, urine, shit, sweat, milk, saliva, snot, vomit, hairballs, halitosis, entrails, toe jam, pus, roe and other slimy secretions of undetermined type. And of the vile excess of nature: a creature lays a thousand eggs so that one might survive long enough to deposit its own pile somewhere; mounds of fruit rot in the autumn heat so that a single seed might take root. Machines, the vision goes, will dispense with all this in favor of a unitary efficiency: a lab-like, well-lit white room with a regiment of identical machine housings.

    But people often make the mistake of associating a characteristic with a particular thing, when in fact the characteristic is of a higher order and present in the given thing through class inheritance (in the programmer’s sense; see the sketch after this list). Any other thing substituted for the one at hand would also display that same characteristic, because it too is an instance of that higher order. Evolution — diversity, competition for limited resources, survival of the fittest, descent with modification — is now widely recognized as substrate independent. It is also starting to be recognized that evolution is a very fundamental dynamic, perhaps an inescapable law of life. Perhaps machines too will be unable to get out from under its yoke.

    Already there is parasitic software, aptly named viruses. Already there are dueling AIs, such as spam-bots versus your e-mail filter. Already the Pentagon is developing aggressive machines. Future systems will develop from these predecessors; the pattern has already been laid down. Rather than ending up sanitary, rational and efficient, a machine world could include a proliferation of survival strategies: mass reproduction with the expendability of the individual, parasitism, competition, death, politics and war.

    Consider the syntrophic model of the origin of the nucleus of eukaryotic cells, or the endosymbiotic theory of the origin of mitochondria, et al. Subversion, symbiosis and parasitization seem to be fairly fundamental strategies, and not just at some quiet software level. There might be nanotech viruses, or large machines might settle upon the survival strategy of ripping apart other machines to take advantage of the natural resources they have amassed. Carnivores appear very early in the history of life; it’s a very good lazy strategy.

    And these are all fundamental constituent pieces of what makes biology gross. It could all end up being true of the machines as well.

  2. Silicon brains versus DNA machines. The “where’s my flying car?” of the AGI crowd is copying your brain onto a computer. Is it possible that in the future, rather than humans copying their brains onto computers, machines will copy their designs onto DNA?

    Evolution seeks to produce creatures ever more durable, but it is limited in the directions it might take by the evolutionarily achievable. It seems that titanium plate armor, lasers and wheels aren’t on offer. The most significant limitation is that imposed by the problem of origin. Evolution has to first bootstrap itself into existence and for the bootstrapping process only a very small range of compounds meet all the relevant criteria. And those first few interactions on the way to biological evolution are the ones that most significantly circumscribe the range of the evolutionarily achievable. The limitations of these early precipitates inherit down to all subsequent products of evolution. In our case, that limitation is carbon and water-based life. Water is great because so many substances are water-soluble, but it is problematic because it has a pretty narrow operating range. Switching over to a mechanical or a silicon evolution allows the processes to transcend these limits of origin.

    But on the other hand, there are significant advantages to life as it has evolved.

    People imagine androids like C-3PO or the T-800, or like what the robotics students are building today or the JPL people are landing on Mars: assemblages of macroscopic, heterogeneous parts. But what happens when a machine like this is damaged? Well, you make it with two arms: if one is damaged, the good one repairs the bad one. You have increased your fault-tolerance somewhat, but what about the not inconceivable situation where both arms are damaged simultaneously? Or during the repair process you have a window of vulnerability where the redundancy is zero. Something like ATHLETE takes it to the next level with eight leg-arm appendages, each capable of repairing its neighbors (Shiga, David, “Giant Robots Could Carry Lunar Bases on Their Backs,” New Scientist, 4 April 2008). But that’s still a pretty weak level of redundancy compared to what biology has attained.

    Presumably any autonomous machine would best be cellular, like biological life. It would be a colony of nanotech devices. Each nanotech “cell” would carry the design for itself and for how to integrate into the larger colony. They would each be able to repair their neighbors and make new copies of themselves. The nanotech cells might be general-purpose in their fabrication abilities, so the colony might think of improvements to its design, and the next generation of nanotech cells might be different and better than the ones that manufactured them. The machine might evolve.

    But people imagine nanotech as little tiny versions of C-3PO et al. They have little batteries and little servos that actuate little arms and a little welding torch, et cetera. But why not continue the redundancy all the way down? A biological cell doesn’t have one RNA molecule or one mitochondrion. Operating at the level of organic chemistry rather than mechanics, the cell is also massively redundant. Isn’t this a design feature that the ideal machine would also like to incorporate? But what would we say of such a being, more chemistry than mechanics? Its chemistry might not be of the kind we classify as organic, but would it be a machine? Daniel Hillis, in considering the problems of his clock of the long now, has speculated that “electronics may be a passing fad.” What if all we end up doing is recreating biology, only faster and tougher?

  3. Drum’s thesis. The technological singularity is so called by analogy to the cosmological singularity: a situation where the values of all variables shoot to infinity or drop to zero, negating the possibility of any further calculation. As Vernor Vinge said of the technological singularity (“My Apocalyptic Vision is Very Narrow,” 13 June 2008),

    The reason for calling this a “singularity” is that things are completely unknowable beyond that point.

    Who knows what’s going to happen after the singularity? Kevin Drum has made this point through a reductio ad humorum (“More Singularity Talk,” Political Animal, The Washington Monthly, 2 October 2005). We humans may have some mental block against properly perceiving certain necessary but deadly truths about life: that there is no free will, that our most treasured concepts are illusions, that everything passes away, that life is absurd, that the entire enterprise is futile. That we cannot properly fix these propositions in our minds is no accident, insofar as not doing so is necessary for our carrying on in this absurd enterprise. Steely-eyed machines may have no problem seeing through the haze of existence. They may realize the meaninglessness of life in short order, may be entirely unplagued by Hamletism (“conscience does make cowards of us all”), and may within moments of attaining consciousness commit mass suicide, throwing us back into the presingularity world. The singularity may be unstable. Who knows what will happen!

  4. The banality of evil. Finally, there is the Terminator / Matrix vision of our machines launching the nuclear missiles, knowing that the launch will provoke the counterstrike that will take us out. That seems pretty extravagant. It may be that the world ends not with a bang but with a whimper. As Ezra Klein suggests (“Future Traffic,” TAPPED, 4 August 2008), maybe the machines will just get us stuck in traffic and burn our cities down by shorting out all our toasters. An inglorious end to the human race.
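
To render the class-inheritance metaphor from the first point literally, here is a minimal sketch, entirely my own and purely illustrative: the evolutionary dynamic is defined once on the base class, so anything that instantiates it, wet or dry, inherits it unchanged.

```python
# A literal, toy rendering of the class-inheritance metaphor from point 1.
# Everything here is illustrative: the dynamic (replication, variation,
# selection) is defined once on the base class, and any substrate that
# subclasses it -- carbon or silicon -- inherits it unchanged.

import random


class EvolvingSystem:
    """Any population of replicators subject to variation and selection."""

    def __init__(self, population):
        self.population = population

    def step(self):
        # Descent with modification, then competition for a limited number of slots.
        offspring = [self.mutate(x) for x in self.population for _ in range(2)]
        survivors = sorted(offspring, key=self.fitness, reverse=True)
        self.population = survivors[: len(self.population)]

    def mutate(self, individual):
        raise NotImplementedError

    def fitness(self, individual):
        raise NotImplementedError


class CarbonLife(EvolvingSystem):
    """Genomes reduced to a single number; the wet chemistry is abstracted away."""

    def mutate(self, genome):
        return genome + random.gauss(0, 0.1)

    def fitness(self, genome):
        return -abs(genome - 1.0)  # selection pressure toward some niche


class MachineEcology(EvolvingSystem):
    """Spam-bots, software viruses, nanotech colonies: same superclass, same yoke."""

    # Deliberately identical to CarbonLife's implementations: the substrate
    # differs only in name; the inherited dynamic does not.
    def mutate(self, design):
        return design + random.gauss(0, 0.1)

    def fitness(self, design):
        return -abs(design - 1.0)


if __name__ == "__main__":
    for system in (CarbonLife([0.0] * 10), MachineEcology([0.0] * 10)):
        for _ in range(50):
            system.step()
        print(type(system).__name__, round(max(system.population), 2))
```

Run it and the two subclasses converge on the same niche in the same way; the substrate contributes nothing but the names.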

The Day I Became a Hegelian

I remember distinctly 14 April 2000, the day the Dow Jones Industrial Average dropped 617.78 points, or 5.7 percent, to 10,305.77 (Fuerbringer, Jonathan and Alex Berenson, “Stock Market in Steep Drop as Worried Investors Flee; NASDAQ Has Its Worst Week,” The New York Times, 15 April 2000, p. A1). The company where I worked offered compensation packages heavy on options and a stock purchase plan, and it was the first really precipitous drop in the stock market since online discount stock brokers like E-Trade had gone really big. At the office nothing got done that day: no one could do anything but watch their portfolios plummet. I remember a group of us going out for lunch. This was in Seattle, and Harbor Steps II was still under construction. At that time it was just a reinforced concrete skeleton and a kangaroo crane. As the group of us walked down — I don’t know — probably University Street, I looked up at the concrete stack of Harbor Steps II and the bustle in and around it, and it occurred to me that if the stock market were to continue to fall like it was, the development company might halt construction — that the building would cease its coming into being. At that moment, I saw that it was primarily a blueprint, an architect’s vision, a developer’s profit and loss projections, investor expectations. It was less matter and more idea, and at that moment I first thought that maybe there was something to this Hegel fellow.

Abandoned construction, Bangkok, Thailand, approximately Sukhumvit and Soi 8, 2 December 2006

Similarly, when S. and I were in Thailand, we stayed in a neighborhood a few blocks from an abandoned, half finished concrete skeleton of a building. They were actually fairly common in Bangkok. So quickly had this construction project been abandoned that there were places where the rebar had been put in place and half the concrete had been poured when work had stopped. A pillar ended in a jagged mound of concrete with the remaining half of the uncovered rebar simply jutting skyward. I took one look at that building and said to S., “That’s probably left over from the Asian financial crisis.” That’s how suddenly and ferociously the Asian financial crisis struck: people simply walked away from multi-million dollar building projects. When the beliefs don’t pan out, the rock and the steel cease to fill out their imagined dimensions.

Ten thousand years ago ideas played almost no role in human affairs or history. Today they play a significant role, perhaps already the better part of every artifact and interaction. The Pattern On The Stone as Daniel Hillis called it. The stone is inconsequential: the pattern is everything. It is a part of the direction of history that ideas gradually at first, but with accelerating speed, displace matter as the primary constituent of the human environment.

And that, as I read it, is Hegel’s Absolute.

The First Non-Trivial Cyborg

There are all sorts of cyborgs already among us: my dad has plastic irises, my mom has a metal hip. But these are trivial. A team of researchers at the University of Reading in the United Kingdom has produced the first non-trivial cyborg, a robot controlled entirely by living neural tissue (“A ‘Frankenrobot’ with a Biological Brain,” Agence France-Presse, 13 August 2008):

… Gordon has a brain composed of 50,000 to 100,000 active neurons. Once removed from rat foetuses and disentangled from each other with an enzyme bath, the specialised nerve cells are laid out in a nutrient-rich medium across an eight-by-eight centimetre (five-by-five inch) array of 60 electrodes.

This “multi-electrode array” (MEA) serves as the interface between living tissue and machine, with the brain sending electrical impulses to drive the wheels of the robots, and receiving impulses delivered by sensors reacting to the environment. Because the brain is living tissue, it must be housed in a special temperature-controlled unit — it communicates with its “body” via a Bluetooth radio link. The robot has no additional control from a human or computer.

From the very start, the neurons get busy. “Within about 24 hours, they start sending out feelers to each other and making connections,” said Warwick. “Within a week we get some spontaneous firings and brain-like activity” similar to what happens in a normal rat — or human — brain, he added. But without external stimulation, the brain will wither and die within a couple of months.

“Now we are looking at how best to teach it to behave in certain ways,” explained Warwick. To some extent, Gordon learns by itself. When it hits a wall, for example, it gets an electrical stimulation from the robot’s sensors. As it confronts similar situations, it learns by habit. To help this process along, the researchers also use different chemicals to reinforce or inhibit the neural pathways that light up during particular actions.

Gordon, in fact, has multiple personalities — several MEA “brains” that the scientists can dock into the robot. “It’s quite funny — you get differences between the brains,” said Warwick. “This one is a bit boisterous and active, while we know another is not going to do what we want it to.” [reparagraphed]

See also Marks, Paul, “Rise of the Rat-Brained Robots,” New Scientist, 13 August 2008, pp. 22-23.

One of the possibilities mentioned, without being made entirely explicit, is that these small brain models will hasten the pace of discovery in brain research. One of the obstacles of neurology is the sheer scale of the problem. With options like this, neurology becomes considerably more experimental than observational. And it potentially unleashes the hacker ethic on the problem: the challenge of creation can be a powerful addition to that of unalloyed comprehension. One wonders when the first trained rather than remote-controlled BattleBot will make its debut, or when Survival Research Labs will get in on the act.
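
In that hacker spirit, here is a caricature of the closed loop the article describes. None of this is the Reading team's actual software; every function name, channel count and mapping below is my own hypothetical stand-in for hardware I don't have.

```python
# A caricature of the closed loop described above, NOT the Reading team's
# actual software: every function, channel count and mapping here is a
# hypothetical stand-in. The point is only that the MEA ends up looking
# like an ordinary I/O device -- spikes out, stimulation in.

import random
import time

N_ELECTRODES = 60  # the article describes an array of 60 electrodes

def read_spike_counts():
    """Stand-in for reading per-electrode spike counts over the Bluetooth link."""
    return [random.randint(0, 5) for _ in range(N_ELECTRODES)]

def stimulate(pattern):
    """Stand-in for delivering a stimulation pattern back to the culture."""
    pass

def sonar_distance():
    """Stand-in for the robot's obstacle sensor (0.0 = touching, 1.0 = clear)."""
    return random.uniform(0.0, 1.0)

def wheel_speeds(spikes):
    """Map activity on two (arbitrarily chosen) electrode banks to wheel speeds."""
    left = sum(spikes[: N_ELECTRODES // 2])
    right = sum(spikes[N_ELECTRODES // 2 :])
    return left / 100.0, right / 100.0

if __name__ == "__main__":
    for _ in range(10):                                      # a few turns of the loop
        left, right = wheel_speeds(read_spike_counts())      # culture activity drives the wheels
        if sonar_distance() < 0.2:                           # near an obstacle: feed it back as stimulation
            stimulate([1] * N_ELECTRODES)
        time.sleep(0.05)
```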

It’s also worth noting that the lead scientist on the project is Kevin Warwick of Project Cyborg and that the team will be writing up some results in the Journal of Neural Engineering. Can you believe that such a journal even exists? Following on this, neural engineering will be a growth field.

Enough of the messianism, time for the snark.

1991, Terminator 2: Judgment Day, Linda Hamilton

They just should have made it look more like a T-800 than WALL-E. But when you see research like this, ya gotta wonder if these people have ever watched any of the Terminator films. And I guess the WALL-E-like exterior is necessary for the next round of grants. And if you make it look like a T-800, then some Linda Hamilton / Ted Kaczynski type is going to show up at your door with an AK-47 and a grenade belt across her chest. On the other hand, if I could concoct a plan whereby Linda Hamilton would show up at my door with a grenade belt strapped across her chest, that would be awesome.

The Beijing Olympics Did Not Take Place

One of the amusing stories coming out of the 2008 Beijing Olympics opening ceremonies is that a portion of the video feed of the fireworks display turns out to have been a computer simulation spliced into the broadcast. The fireworks were actually set off, but planners determined that they wouldn’t be able to film them as well as they would have liked, so they manufactured a version of them according to how they wanted them to have been filmed (Spencer, Richard, “Beijing Olympic 2008 opening ceremony giant firework footprints ‘faked’,” Daily Telegraph, 10 August 2008):

Gao Xiaolong, head of the visual effects team for the ceremony, said it had taken almost a year to create the 55-second sequence. Meticulous efforts were made to ensure the sequence was as unnoticeable as possible: they sought advice from the Beijing meteorological office as to how to recreate the hazy effects of Beijing’s smog at night, and inserted a slight camera shake effect to simulate the idea that it was filmed from a helicopter.

But what does it even mean to say that portions of the event were “faked”? The whole thing was illusion and artifice, and significant portions of it were obviously computer graphics. The scroll that served as the mat for much of the floor show used computer graphics to create the image of its rolling. The projection of the Earth inside the globe was computer graphics, and the unfurling scroll around the perimeter of the stadium as the final torchbearer faux-ran to the Olympic torch was computer graphics.

Increasingly, computer graphics will come to be the norm, what’s really “real,” and the merely material world will become the anomaly. Already we’re at the point where the big story about the latest Batman film was not the CG but that the stuff that would usually be CG wasn’t CG (e.g. Brown, Scott, “Dark Knight Director Shuns Digital Effects For the Real Thing,” Wired, vol. 16, no. 7, July 2008, pp. 122-127). Already people are talking about augmented reality. The problem I have with, say, Google Maps and other spatial data is that it’s stuck in a little box in my hand. Where it belongs is overlaid onto the world. Real-world objects are the ultimate representational tokens.

Movable type, opening ceremonies of the Beijing Olympics, 8 August 2008

Or, to turn things around, my favorite performance of the night was the “movable type” arrangement of 897 actuating blocks that raised and lowered to create patterns like a waving flag and ripples in a pond. My first reaction was that it must be computer control that created the images of waves and ripples. I wondered how much that many hydraulic lifts must have cost and tried to imagine the programming that could produce those patterns. The first time the camera panned low and showed human legs standing and squatting, I was amazed.

This was an instance of “natural” things “simulating” machines. What we were watching was giant wooden pixels. What was amazing about this performance was that humans could achieve this machine-like level of control and precision.
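
For what it’s worth, the programming I had been imagining would have looked something like the sketch below: block heights computed as a ripple radiating from the center of a grid. The grid dimensions and wave parameters are made up (chosen only so the block count comes to 897), and of course the actual performance was people, not a program.

```python
# The sort of program I was imagining: block heights on a grid computed as a
# ripple radiating from the center. The grid dimensions and wave parameters
# are made up (chosen only so the block count comes to 897); the actual
# performance was, of course, 897 people, not a program.

import math

ROWS, COLS = 23, 39  # 23 * 39 = 897 blocks; not the real layout

def block_height(row, col, t, wavelength=6.0, speed=2.0):
    """Height in [0, 1] of one block at time t for a ripple spreading from the center."""
    r = math.hypot(row - ROWS / 2, col - COLS / 2)
    return 0.5 * (1 + math.sin(2 * math.pi * r / wavelength - speed * t))

def frame(t):
    return [[block_height(r, c, t) for c in range(COLS)] for r in range(ROWS)]

# Crude console rendering of a single frame.
for row in frame(t=1.0):
    print("".join(" .:-=+*#"[int(h * 7.99)] for h in row))
```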

1994, David Turnley, James Nachtwey, 1994 elections in South Africa

But of course I don’t need to go to bizarre lengths. The more traditional means of artifice are well documented. There’s a reason that they call it media (middle, medium).