The Supernovae in Your Coffee Cup

I loved the film π. I consider it a hugely flawed film, but what I loved about it was the way it worked in subtle allusions to the underlying concepts motivating the film. The main character walks through a park and the camera points skyward to show the denuded winter branches of the trees, an example of fractal symmetry. One of the images shown a number of times throughout the film was that of a cup of coffee. Whenever someone ended up in a diner, we got a tight-in shot of them dumping the cream into their coffee and the blooms of turbulent fluid redounding from the depths. It’s a perfect example of turbulence, a phenomenon that utterly defies computation. Since π I’ve never looked at a cup of coffee the same. Every time I pour cream into my coffee it’s a little ritual where for just a second I consider the boundless complexity of the world, as close as the cup in my hand.

I was amused to see a recent article in New Scientist invoke the image of the cup of coffee in reference to the problem of turbulent fluids in supernovae (Clark, Stuart, “How to Make Yourself a Star,” vol. 200, no. 2679, 25 October 2008, pp. 38-41):

As the dense inner material is flung through the less dense outer layers of a star, it creates turbulence and mixes everything up. Traditional computer simulations do not model turbulence well.

“Our theoretical understanding of turbulence is incomplete,” says astrophysicist Alexei Khokhlov of the University of Chicago. In other words, you cannot write down a set of equations describing the state of a turbulent system at any given time and then use them to predict what it will look like next. Instead, you have to employ a brute-force approach, using sheer computer muscle.

To see the scale of this problem, take your morning cup of coffee and stir in some milk. You are using turbulence to mix the two fluids. To determine how they mix, physicists mentally split the cup into boxes and assign numbers to represent the properties inside each box, such as the temperature and density of the fluid. A computer can then calculate how each box interacts with its neighbors during one brief instant of time and then re-evaluate those numbers. Once it has done this for every box, it starts again for the next slice of time and so on.

To do this massive computation perfectly, each box should be tiny and contain just one fluid particle, but before you can get anywhere near this sort of precision, the numbers become mind-bogglingly large. Scientists talk of degrees of freedom as a measure of both the numbers of particles in a system and the number of ways each particle can interact with those around it. A single cup of coffee possesses a staggering 10⁴⁰ degrees of freedom — far more than you can model on today’s computers. “Maybe in 10 years we will be able to fully model a cup of coffee,” says Khokhlov.

Until then the computation will always be approximate, and thus prone to errors, because small-scale physical interactions are not being taken into account. … If it is going to take 10 years to fully model a cup of coffee, how long until we can model an entire star?

“Never,” Khokhlov says. “Not until someone comes up with a clever theory that does not depend on what is happening on the small scale.” The only hope is to continue to investigate turbulence to learn how to better approximate its behavior.
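The box method the article describes can be sketched in a few lines. This is only a toy, assuming a crude diffusion rule in place of real fluid dynamics; the grid size and mixing rate are invented, and a real simulation would track velocity and pressure, not just one scalar per box.

```python
# Toy illustration of the grid ("box") method: split the cup into cells,
# assign each cell a value (here, "hotness"), then repeatedly update each
# cell from its neighbors for one slice of time. A simple diffusion rule
# stands in for actual turbulence, which this scheme cannot capture.

def step(grid, rate=0.1):
    """Advance one time slice: each cell moves toward the average
    of its neighbors."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            neighbors = []
            if i > 0: neighbors.append(grid[i - 1][j])
            if i < n - 1: neighbors.append(grid[i + 1][j])
            if j > 0: neighbors.append(grid[i][j - 1])
            if j < n - 1: neighbors.append(grid[i][j + 1])
            avg = sum(neighbors) / len(neighbors)
            new[i][j] = grid[i][j] + rate * (avg - grid[i][j])
    return new

# Hot coffee (1.0) with a blob of cold milk (0.0) in one corner.
n = 8
grid = [[1.0] * n for _ in range(n)]
grid[0][0] = 0.0

for _ in range(50):
    grid = step(grid)
```

After fifty slices of time the cold spot has warmed and the cup has begun to homogenize. The point of the 10⁴⁰ figure is that the cost of this loop scales with the number of cells times the number of time slices, which is why shrinking the boxes toward single particles is hopeless.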

OneMachine Doppelganger

Kevin Kelly has long discussed the OneMachine: the essentially unified, single distributed computer built up from all our networked PCs, PDAs, cell phones, digital cameras and other personal electronics (see e.g. “Tap Into the 12-Million-Teraflop Handheld Megacomputer,” Wired, vol. 16, no. 7, June 2008; “Dimensions of the One Machine,” The Technium, 2 November 2007; “One Huge Computer,” Wired, vol. 6, no. 8, August 1998).

Last week The New York Times ran an article on how, below the surface and running on the very same computers as the productive, life-enhancing OneMachine, there is a nefarious parallel network, OneMachine’s dark doppelganger, the BotNet (Markoff, John, “A Robot Network Seeks to Enlist Your Computer,” 20 October 2008, p. B1):

Botnets remain an Internet scourge. Active zombie networks created by a growing criminal underground peaked last month at more than half a million computers, according to shadowserver.org, an organization that tracks botnets. Even though security experts have diminished the botnets to about 300,000 computers, that is still twice the number detected a year ago.

The actual numbers may be far larger; Microsoft investigators, who say they are tracking about 1,000 botnets at any given time, say the largest network still controls several million PCs.

“The mean time to infection is less than five minutes,” said Richie Lai, who is part of Microsoft’s Internet Safety Enforcement Team, a group of about 20 researchers and investigators. The team is tackling a menace that in the last five years has grown from a computer hacker pastime to a dark business that is threatening the commercial viability of the Internet.

I have already written about how when the singularity occurs, it may not be what we expect. My suspicion is that either it will be overtly evil, or merely a recreation of the chaos of biological nature in a more durable, powerful and virulent form (“A Few Heretical Thoughts on the Singularity,” 19 August 2008).

What do phenomena like the BotNet suggest about the singularity? What comes will grow out of what is, and what is will bequeath its characteristics to what comes — at least initially. Between the various military establishments and the criminal underground, we are instilling our machines with hostile, aggressive tendencies. But we are also making numerous, competitive systems. Will there be “the singularity” or will it, like in the novels of Vernor Vinge and Charles Stross, come in secretive, uncertain fits and starts? Will there be multiple singularities? Will one system cross the threshold, followed by another, then another? It makes sense to speak of “the singularity” when one is imagining a unified system, but when one is considering a multitude of contending systems, crossing the threshold of the singularity is but one move in a strategic game. Perhaps the machines will be hostile to predecessor biological life, but perhaps they will be so consumed in competition with their fellow AIs as to be merely indifferent to us, as we are to, say, pigeons or squirrels.

And how goes the strategic competition between OneMachine and BotNet? We ought to know. What portion of computational capacity, bandwidth, energy consumption, and hours of their masters’ time are the two consuming? Qualitatively, how do their capabilities match up? Kevin Kelly has managed to make some calculations for the former, but what of the latter? Of course this would be subject to the usual problems of surveilling those who do not want to be surveilled.

Organizations like McAfee, Norton and the International Botnet Taskforce are attempting to build something akin to an immune system for the Internet, but the billion-year persistence of the arms race between host immune systems and the various infectious agents suggests that dampening catastrophe is probably the best outcome we can hope for. It’s an example of co-evolution, where competition between host and agent drives the development of one another. Viruses don’t kill their host by design; they merely seek to hijack its reproductive machinery for their own purposes. Killing the host, or at least killing it too quickly, or the epiphenomenon of killing too many hosts too quickly, are all suboptimal in that they result in diminished opportunity for continued infection and reproduction. Ebola gets it wrong. HIV gets it really right. But virus behavior as a whole is not intelligent. Occasionally a virus goes super virulent, or hits a particularly vulnerable population, and a massive outbreak occurs that wreaks havoc for host and infectious agent alike. I presume that BotNets will continue to act something like this.

And since one third of known biological species are parasites and the proportion seems to be growing, it would seem that there is something fundamental about the strategy of parasitism. We should anticipate its continuance, both in genetic and electronic space.

The First Non-Trivial Cyborg

There are all sorts of cyborgs already among us: my dad has plastic irises, my mom has a metal hip. But these are trivial. A team of researchers at the University of Reading, United Kingdom, has produced the first non-trivial cyborg, a robot controlled entirely by neural circuitry (“A ‘Frankenrobot’ with a Biological Brain,” Agence France-Presse, 13 August 2008):

… Gordon has a brain composed of 50,000 to 100,000 active neurons. Once removed from rat foetuses and disentangled from each other with an enzyme bath, the specialised nerve cells are laid out in a nutrient-rich medium across an eight-by-eight centimetre (three-by-three inch) array of 60 electrodes.

This “multi-electrode array” (MEA) serves as the interface between living tissue and machine, with the brain sending electrical impulses to drive the wheels of the robots, and receiving impulses delivered by sensors reacting to the environment. Because the brain is living tissue, it must be housed in a special temperature-controlled unit — it communicates with its “body” via a Bluetooth radio link. The robot has no additional control from a human or computer.

From the very start, the neurons get busy. “Within about 24 hours, they start sending out feelers to each other and making connections,” said Warwick. “Within a week we get some spontaneous firings and brain-like activity” similar to what happens in a normal rat — or human — brain, he added. But without external stimulation, the brain will wither and die within a couple of months.

“Now we are looking at how best to teach it to behave in certain ways,” explained Warwick. To some extent, Gordon learns by itself. When it hits a wall, for example, it gets an electrical stimulation from the robot’s sensors. As it confronts similar situations, it learns by habit. To help this process along, the researchers also use different chemicals to reinforce or inhibit the neural pathways that light up during particular actions.

Gordon, in fact, has multiple personalities — several MEA “brains” that the scientists can dock into the robot. “It’s quite funny — you get differences between the brains,” said Warwick. “This one is a bit boisterous and active, while we know another is not going to do what we want it to.” [reparagraphed]

See also Marks, Paul, “Rise of the Rat-Brained Robots,” New Scientist, 13 August 2008, pp. 22-23.

One possibility mentioned, without being made entirely explicit, is that these small brain models will hasten the pace of discovery in brain research. One of the obstacles of neurology is the sheer scale of the problem. With options like this, neurology becomes considerably more experimental than observational. And it potentially unleashes the hacker ethic on the problem: the challenge of creation can be a powerful addition to that of unalloyed comprehension. One wonders when the first trained rather than remote-controlled BattleBot will make its debut, or when Survival Research Labs will get in on the act.

It’s also worth noting that the lead scientist on the project is Kevin Warwick of Project Cyborg and that they will be writing up some results in the Journal of Neural Engineering. Can you believe that such a journal even exists? Following on this, neural engineering will be a growth field.

Enough of the messianism, time for the snark.

1991, Terminator II: Judgment Day, Linda Hamilton

They just should have made it look more like a T-800 than Wall-E. But when you see research like this ya gotta wonder if these people have ever watched any of the Terminator films. And I guess the Wall-E-like exterior is necessary for the next round of grants. And if you make it look like a T-800 then some Linda Hamilton / Ted Kaczynski type is going to show up at your door with an AK-47 and a grenade belt across her chest. On the other hand, if I could concoct a plan whereby Linda Hamilton would show up at my door with a grenade belt strapped across her chest that would be awesome.

The Legitimacy of Argumentum ad Hominem

Will Wilkinson and Crispin Sartwell consider the satisfactions of environmental soothsaying:

As I’ve said, the insane jackup of rhetoric with regard to global warming, “the greatest crisis the species has ever faced,” the death of the planet, etc, is the secular humanist liberal apocalypse. It’s a sheer competition for who’s most dire, most obsessed, and who’s more unanimous than whom. It’s the flood, complete with the reasons: our moral culpability. I predict this: when Obama is elected, liberals will feel better about themselves and the probable verdict of cosmic judgment, and they’ll tone down the eschatology, the ranting cant.

(“Ranting Cant,” The Fly Bottle, 2 August 2008; untitled, Eye of the Storm, 2 August 2008; respectively)

For my part, I imagine there is something to this argument. Every faction has its share of less than completely rational members. But if it is the contention of Messrs. Wilkinson and Sartwell that the behavior of some of the advocates of anthropogenic climate change bears one iota of relevance on the soundness of the theory itself, then this is a picture-perfect instance of the fallacy of argumentum ad hominem. The emotional satisfaction that someone takes in holding a particular position would seem irrelevant to the ultimate adjudication of said position.

Some time ago, when I originally made the formal cognition post (1 January 2008), K.S. said that he didn’t see the point. What was my advocacy of formal cognition meant to achieve? I couldn’t quite answer him at the time, but Mr. Wilkinson’s post really clarifies the matter for me. I’m an advocate of formal cognition against rhetoric generally, but most especially against some of its more pernicious tactics: Freudianism broadly construed as an interpretive style, sociobiology in its normative aspect (an epistemological relative of Freudianism), and the secularization thesis.

For every purportedly empirical statement out there, there is built up a detritus of extraneous belief. There is the psychological baggage of the proposition: the advocacy or denial of an opinion is motivated. Cui bono? Or advocacy or denial becomes a part of one’s identity. People build an emotional complex around certain beliefs. Certain propositions become tropes mapping us into desired cultural categories. A proposition becomes the cornerstone of an elaborate worldview into which its constructor invests vast amounts of intellectual energy. Such people tend to become conservative about these propositions all out of proportion to the weight that the casual observer might assign to such beliefs.

It’s really easy to succumb to the desire to set aside the matter per se and argue the detritus. It’s certainly more emotionally satisfying. The purpose of a catalogue of validated logic and methodologies is to determine the soundness of a proposition and cast out the irrelevant considerations in a systematic way.

So, for example, the scientific veracity of anthropogenic climate change is within range of rational risk assessment. The systems concepts of a tipping point and self-reinforcing, accelerating change are legitimate, and the potential implications of these concepts applied here are alarming. The perennial libertarian Alfred E. Neuman “What, me worry?” worldview has its own shortcomings, namely that disasters are plausible and occasionally systemic.

On the other hand, there is no proposition beyond the proposing hominid. I’m not so sure that the distinction between rhetoric and formal decidability is tenable, especially once one admits the scientific method into the corpus of formal cognition. Given that induction is logically and experientially unsound, the scientific method becomes merely a highly stylized rhetoric, a rhetoric whose admissible tactics are more narrowly circumscribed. It is most certainly a rhetoric that is more reliable than others, but it nonetheless exists with other rhetorics along a continuum of variably reliable tactics, rather than being cordoned off in a privileged category all its own.

If nothing else, the absolute prohibition against argumentum ad hominem seems incompatible with Thomas Kuhn’s The Structure of Scientific Revolutions. Is it even possible for the behavior, psychology, constellation of attendant beliefs and rhetorical strategies of the advocates for a proposition to be irrelevant to the acceptance or rejection of the proposition? I think that once one dispenses with the notion of truth or falsity of a proposition in any strong sense in favor of mere acceptance or rejection (the sociology of knowledge), then these previously extraneous factors become relevant. They are real channels by which information and belief are transmitted throughout society. They are part of the practice of acceptance and rejection as it actually happens. Argumentum ad hominem seeks to make explicit and disrupt these channels. It reduces their efficacy through ridicule.

(This is not to deny the truth or falsity of our beliefs in some ultimate sense. The truth is out there — it just doesn’t intervene in our deliberations in any radical way. Prima facie, incomplete beliefs about the world can be made workable.)

Edsger W. Dijkstra’s Blog

One of the blog parlor games is to come up with unlikely blog-like precursors. Doogie Howser, M.D. is a favorite. I’m going to propose computer scientist Edsger W. Dijkstra.

I’m a big fan of “Real Programmers Don’t Use PASCAL,” which contains a few direct derogatory references to Mr. Dijkstra and is generally opposed to his views on computer science. I guess I like it more for its computer geek-heroic tone. Since I am mostly a proponent of system and method, I am on the side of Mr. Dijkstra. But anyway, from the Wikipedia page on Mr. Dijkstra:

Dijkstra was known for his essays on programming; … He was also known for his habit of carefully composing manuscripts with his fountain pen. The manuscripts are called EWDs, since Dijkstra numbered them with EWD as prefix. Dijkstra would distribute photocopies of a new EWD among his colleagues; as many recipients photocopied and forwarded their copy, the EWDs spread throughout the international computer science community. The topics are mainly computer science and mathematics, but also include trip reports, letters, and speeches. More than 1300 EWDs have since been scanned, with a growing number also transcribed to facilitate search, and are available online at the Dijkstra archive of the University of Texas.

Mr. Dijkstra’s “blog,” EWD, is hosted here.

Ouroboros to Mise en Abyme

A few unsystematic thoughts on Ouroboros and mise en abyme:

  • I almost mentioned Douglas Hofstadter’s book, I Am a Strange Loop, in last week’s post (“The Mythical Economy,” 23 July 2008). He could have gone with Ouroboros on the cover too, but instead he went with mise en abyme.

    Or maybe he couldn’t have gone with Ouroboros. While Ouroboros is, on a superficial level, obviously a strange loop and a symbol that could be seen as self-referential, a peek at the index of Hofstadter’s book at the entry for “video feedback” — a technological mise en abyme — shows that he has a thicker analogy in mind:

    video feedback, 65-71; as candidate for strange loop, 103, 187, 203, 361; epiphenomena in, 68, 70-71; fear of meltdown, 56, 57; fractalic gestalts of, 204; lack of “I” in, 203; lack of perception in, 75-77, 187, 203; lack of symbols in, 203; lack of thinking in, 203; locking-in of patterns in, 70; parameters of, 65-67, 69, 75; reverberation in, 67-68; two systems entwined in, 210-211, 253-254; vanilla loop in, 208

  • While I’m amused at the notion of an Ouroboros economy, I can’t really think of any real correlate to the slightly humorous image. Unless maybe something like a naturalistic notion of the human economy, wherein the human economy is nature parasitic upon itself. The destruction of the biological world as giving birth to the artifactual or the cybernetic world. Ouroboros reborn for the Twenty-first Century!

  • The thing that’s really causing me to bring up mise en abyme is some thoughts on how people think about the future. People are faced with the need to decide and nearly all decisions that people make are, when not completely about the future, at least future-oriented. People’s thoughts about the future are divided into two closely related activities, carried out in tandem: planning and prediction. Prediction occasionally becomes an activity of its own, but for the most part prediction is an activity carried out in service of the more pragmatic planning.

    Planning is a branching strategic game. It works like this. I have a goal whose attainment is not simple: it involves a number of steps and it could be thwarted at any one of them. I start with my known situation and have a vague idea what the path to my goal would be and I make a series of hypothetical decisions. I test the soundness of a hypothetical decision by predicting the outcome of such an action. That is, I imagine a potential future.

In the first round, the one inaugurated in the present, I know what my options are because they are present. In the second round and in all subsequent rounds, I must employ prediction to imagine what options I will have to choose from, because from then on I am dealing in an imagined future. I repeat this algorithm (make a hypothetical decision, predict, test, simulate new options) until I reach the last round, whose decision results in the attainment of the goal.

When I make predictions about the future, I rarely make a single prediction, since prediction, especially the sort of intuitionistic variant that people employ for the purpose of most of their planning, is not very reliable. So I predict a range of possible futures. And in each possible future I face a range of possible decisions that I can take. Predicting and planning branch. Most of these I abandon in favor of the most fruitful-seeming paths. But if a path dead-ends, I back up until I find the probable fateful decision that sent me down the path to the dead end. I recollect the other options at that possible future and imagine my way down another branch. I also generally flag a number of contingency plans. I went with this predicted future, but as things actually unfold, if it turns out that I predicted wrong, I have a plan ready for that other branch too.

    When I have what I imagine to be a satisfactory path from present to goal, I lock in each decision hypothetically made into “what I’ve decided upon.”

This is a pretty systematic model and not necessarily exactly how most people make plans. People rarely sit down and carry it out algorithmically from beginning to end. More frequently people engage in this activity in fits and starts, not taking the problem from start to finish, but working on pieces that strike them at various occasions throughout their day. They absentmindedly do it while at their work computer, or do it extremely quickly while laying a joint plan with a partner over the telephone. Or maybe they try to be thorough about it and make a list on a notepad so they can see what’s done and what’s still in need of attention. Whatever the case, I think that ultimately this is what people are doing.

The important point for mise en abyme is that near-future decisions can only be locked in once more distant future decisions have been validated. Each step is dependent on the one after it having been made first. One starts the planning and predicting from the present and works one’s way forward, but one decides, as it were, backward, from the future to the present. Predictions and plans regarding the immediate future include as a part within them predictions and plans regarding the intermediate future, which in turn contain predictions and plans about the distant future, and so on. My thoughts about the future are mise en abyme insofar as they contain within them further thoughts about more distant futures.

What one is doing in this process of planning for the future is conducting a depth-first search of potential futures. And depth-first search is canonically thought of as recursive.

  • Mise en abyme seems to have a lot more analogistic or systemizing potential. Scale symmetry (e.g. fractals) along with all the related phenomena that can be grouped under that pattern seem coterminous with mise en abyme. Hegel’s logical schema seems like a highly abstract instance of mise en abyme, where each intellectual system is subsumed into a higher order intellectual system.

  • Perhaps there is a historical development of the conceptual depth and sophistication of the idea of self-referentiality. Ouroboros is simple cyclicality, though extrapolated into a notion of infinity or eternity. Homunculus is a limited instance of scale symmetry. Modern formal recursion is the culmination.
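The planning procedure described in the bullets above, hypothesize a decision, predict the future it leads to, backtrack from dead ends, then lock in the path back-to-front, is exactly a recursive depth-first search. Here is a minimal sketch; the "futures," the actions, and the predictor function are all invented for illustration.

```python
# Planning as recursive depth-first search over imagined futures.
# predict(state) yields (action, imagined_next_state) pairs: the
# hypothetical decisions and the predicted futures they lead to.

def plan(state, goal, predict, max_depth=10):
    """Return a list of actions reaching goal from state, or None."""
    if state == goal:
        return []
    if max_depth == 0:
        return None                       # give up on this branch
    for action, nxt in predict(state):
        tail = plan(nxt, goal, predict, max_depth - 1)
        if tail is not None:
            return [action] + tail        # lock in decisions back-to-front
    return None                           # dead end: back up, try another branch

# Toy example: getting from home to a meeting across town.
futures = {
    "home":    [("drive", "stuck in traffic"), ("take train", "station")],
    "stuck in traffic": [],               # dead end -- forces backtracking
    "station": [("ride to downtown", "downtown")],
    "downtown": [("walk", "meeting")],
}
route = plan("home", "meeting", lambda s: futures.get(s, []))
# The "drive" branch dead-ends, so the search backs up and finds
# take train -> ride to downtown -> walk.
```

Note the structure mirrors the mise en abyme point: the plan for the first step contains within it, as the recursive call, the whole plan for the rest of the journey, and no step is committed until everything after it has been validated.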

The Thinking Cap

Scientific American has an article on how the mind-machine interface is about to go commercial with a wearable EEG game controller that reads your mind (Sergo, Peter, “Head Games: Video Controller Taps into Brain Waves,” 14 April 2008). How’d they do it? Exactly the way the people at Wired would imagine. Rather than developing a series of hard-won determinate correlations between identified brain waves and intentions they just brute forced it. They recorded a gigantic quantity of sample data and processed it using a cloud computer to find the patterns:

Emotiv solved this brain-computer interface problem with the help of a multidisciplinary team that included neuroscientists, who understood the brain at a systems level (rather than individual cells), and computer engineers with a knack for machine learning and pattern recognition. Over the last four years, the company has conducted thousands of EEG recordings on hundreds of volunteers — not all gamers — as they experienced virtual scenarios that elicited various emotions, facial expressions and cognitive demands. The aim was to find a revealing brain activity that many people shared — a needle in a haystack of frenzied signals. Now, the EPOC allows users to fine-tune settings that allow it to pick up on even the subtlest of smirks.

When building these algorithms commenced two years ago, it had taken up to 72 hours for a bank of powerful computers to run through a mere 10 seconds of individual brain data and extract important features. Sorting through a seemingly endless stream of recordings eventually led Emotiv to find consistent signal patterns that revealed specific mental experiences. “Through a large enough sample size,” Le says, “we were able to get some consistency around the population to attain a high degree of confidence that it accurately measures an emotional state.”

And in dispensing with theoretical purity and just going with base correlation, the engineers at Emotiv didn’t even have to concern themselves with the signal-to-noise ratio of the data:

Buch also suspects that the facial expressions that the EPOC detects are based more on the electrical activity of facial and scalp muscles than the brain per se. Although the electrical activity of muscles, he explained, is normally considered as artifact noise that needs to be filtered out to attain clean EEG signals that are of interest, they are still informative about how facial muscles move, such as during a wink. Tan agrees, saying that in their classification strategy some of the EPOC’s detections are based on muscle movements.

It’s all just correlation, and if the noise helps identify the correlation, then it’s just as good as signal. In the petabyte age there is no distinction between the phenomenon under consideration and the phenomena not under consideration. Any possible interference will be defeated by the size of the data set.
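The brute-force approach can be caricatured in a few lines: no theory of what any brainwave means, just matching new recordings against averaged labeled samples, muscle artifacts and all. Everything here, the labels, the toy three-sample "recordings", the nearest-centroid rule, is invented for illustration; Emotiv's actual classifiers are surely far more sophisticated.

```python
# Crude pattern-matching over labeled signal windows: average the
# training samples for each label, then classify a new window by its
# closest average. Whatever correlates, counts -- noise included.

def centroid(samples):
    """Average a list of equal-length signal windows position-by-position."""
    return [sum(vals) / len(vals) for vals in zip(*samples)]

def classify(signal, centroids):
    """Label a signal by its nearest centroid (least squared error)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(signal, centroids[label]))

# Pretend training recordings: "smirk" vs. "neutral" windows.
training = {
    "smirk":   [[0.9, 0.8, 0.7], [1.1, 0.9, 0.8]],
    "neutral": [[0.1, 0.0, 0.1], [0.0, 0.2, 0.1]],
}
centroids = {label: centroid(samples) for label, samples in training.items()}

guess = classify([1.0, 0.85, 0.75], centroids)  # matches the smirk pattern
```

The point of the sketch is the epistemology, not the engineering: nothing in it knows whether the signal comes from neurons or from a twitching scalp muscle, and for the purpose of detecting a smirk it doesn’t matter.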

Now if they would just make a model that looks like this:

Robotech, Rick Hunter in the thinking cap

And maybe control an F-14 that transforms into a 50 foot tall robot instead of stupid games.

The Jules Verne of the Future Will be a Computer Scientist

Wired Magazine’s cover story this month on The End of Science / The Dawning of the Petabyte Age (Anderson, Chris, vol. 16, no. 7, July 2008, pp. 107-121) has a very mundane answer to John’s enthusiasm: just scoop up tons of seawater, sequence every piece of DNA that you find, and compare it to a database of known DNA. The system will be able to flag each strand as existing species / new species.

We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.

The best practical example of this is the shotgun gene sequencing by J. Craig Venter. Enabled by high-speed sequencers and supercomputers that statistically analyze the data they produce, Venter went from sequencing individual organisms to sequencing entire ecosystems. In 2003, he started sequencing much of the ocean, retracing the voyage of Captain Cook. And in 2005 he started sequencing the air. In the process, he discovered thousands of previously unknown species of bacteria and other life-forms.

Unfortunately this doesn’t do much to tell us about what the creature is like.

If the words “discover a new species” call to mind Darwin and drawings of finches, you may be stuck in the old way of doing science. Venter can tell you almost nothing about the species he found. He doesn’t know what they look like, how they live, or much of anything else about their morphology. He doesn’t even have their entire genome. All he has is a statistical blip — a unique sequence that, being unlike any other sequence in the database, must represent a new species.

This sequence may correlate with other sequences that resemble those of species we do know more about. In that case, Venter can make some guesses about the animals — that they convert sunlight into energy in a particular way, or that they descended from a common ancestor. But besides that, he has no better model of this species than Google has of your MySpace page. It’s just data.

But who knows, soon enough we’ll have software that will take a DNA sequence as input and produce a virtual model of a creature complete with visualization and tables of physiological data (bone density, blood chemistry, synapse count, etc.). We’ll never even have to find an instance of the creature.
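The flagging scheme described above reduces to a lookup: a read that matches nothing in the database is, by definition, a "new species." The sketch below is deliberately naive; the sequences and names are invented, and real pipelines like Venter's match reads statistically against huge reference databases rather than by exact string comparison.

```python
# Sequence-and-flag, reduced to its logical skeleton: compare each read
# from a sample against a database of known sequences and flag it as a
# known species or a statistical blip representing a new one.

known = {
    "ATCGGCTA": "Prochlorococcus (known)",
    "GGCTATAG": "Pelagibacter (known)",
}

def flag(read, database):
    """Label a read as a known species or as unlike anything on record."""
    if read in database:
        return database[read]
    return "new species? unlike any sequence in the database"

# A pretend scoop of seawater: two known reads and one blip.
sample = ["ATCGGCTA", "TTTTACGC", "GGCTATAG"]
labels = [flag(read, known) for read in sample]
```

Notice how little the positive case tells you: the output for the unknown read is just a label asserting novelty. Everything about the organism's morphology, metabolism, or ecology remains, as the article says, just data.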

Update, 25 June 2008: I think I’ve got my references a little crossed here. I titled the post The Jules Verne of the Future Will be a Computer Scientist for symmetry with John’s post, but Jules Verne is the author of the exploration stories, not the explorer himself, whereas the hypothetical computer scientist to which I am referring would be one of Jules Verne’s characters. The proper title should have been The Captain Nemo of the Future Will be a Computer Scientist.

Climate Change Comes to Flyover Country

Flooding in downtown Cedar Rapids, Iowa, 13 June 2008

We’re not even getting the worst of it, just the feeble remnants, but still, the waves of storms that have been blowing through D.C. have been terrifying, violent, disruptive events. They have been apocalyptic, with the sky darkening and the air becoming restless as the storm approaches. Last week the street lights all came on at three o’clock in the afternoon, so dark had it become. We all gathered at the windows watching the oncoming storm in amazement. We looked across to the neighboring building where the occupants of nearly all eleven floors gathered at their windows too. I’ve lived in D.C. for five years now and I have never seen storms so violent as these.

A few days ago S. said that it turns out that New Orleans wasn’t the U.S. city that got wiped off the map, but rather merely the first U.S. city to be wiped off the map. And in the perverse logic of the greenhouse effect, where the weather becomes not generally hotter or colder or wetter or drier, but variably more extreme in every direction, I’ll bet that by August the news of flooding will have been replaced by stories of drought and wildfire.

It was pretty easy for middle-American public figures to be smug about climate change when it was just sinful, elitist coastal cities that were going to be destroyed by rising sea levels. But it turns out that the gods make no distinctions among us mortals between the righteous and the wicked. Climate change will come to the heartland just as much as it will to the decadent coastal cities.

Just as in Iraq, people tend to contrast the costs of a change in direction with the costs of doing nothing as if doing nothing were free, as if there were no costs to the status quo. The right digs out all these numbers about the drag on the economy of various plans to prevent climate change, but presents them as if they were absolute, rather than comparative, costs. Do nothing and we continue along the same unencumbered glide-path to prosperity; change direction and it’s the road to serfdom. In fact, there has even been a spate of articles of late on how climate change is going to be an economic boon — at least for some. But one no longer needs a month of simulation time on a supercomputer — a window and a cable subscription will suffice — to see that there are costs to doing nothing.

Markets: Plan A or Plan B?

In an aside to an article on genetic determinism, The National Review comments on markets and the limits of information science (Manzi, Jim, “Undetermined,” vol. LX, no. 10, 2 June 2008, pp. 26-32):

In the middle of the 20th century, Friedrich Hayek and the libertarians he inspired faced those who asserted that the economy could be successfully planned. The libertarian position was not that such planning could be proved impossible in theory, but that we lacked sufficient information and processing power to accomplish it. The world of economic interaction was so complex that it overwhelms our ability to render it predictable; hence the need for markets to set prices.

I don’t for a moment believe that the Libertarian Party will disband once we cross some floating point operation threshold on supercomputers. There is the practical and there is the principled reason for subscribing to the libertarian position and I have read some of its proponents specifically state that even if the command economy could deliver superior performance, they would still be libertarians because of the component of human freedom.