Emergence and Aufhebung (Hegel and the Swarm)

I like Charles Mudede’s scaled down explanation of emergence (“Interlace in Dubai,” SLOG, 3 December 2008):

In the theory of emergence it is understood that complexity has a limit. For example, when the population of an ant colony reaches a certain point, 80,000 or so, changes begin to happen. Army ants swarm another colony or lots of ants move out and wander or start a new colony. What is clear is one behavior ends and another one begins.

One controlling logic relinquishes to a subsequent controlling logic. Emergence is a discontinuity in growth.

I don’t know that he is right to characterize this as “complexity having a limit.” An emergent property is an Aufhebung: a simultaneous preservation and transformation. The emergent property doesn’t sweep aside all previous behavior, but preserves the original behavior and builds a new layer of interpretive characteristics on top of it. I think of “the invisible hand” of capitalist markets here. People don’t stop bargaining when market clearing happens. People continue to negotiate, clip coupons, shop around, seek out sales, innovate, invest and compete. Market clearing is a second-order interpretation of the ground-level behavior that persists just under this interpretive overlay. Or in the example of the prisoners’ dilemma, the participants don’t cease to be rational outcome maximizers when the logic of suboptimal outcomes takes over. Their goals and reasoning stay the same. If anything, by preserving all the original complexity and adding a superstructure of new behavior, emergence is complexity unbound from its first-order confines.
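The prisoners’ dilemma point can be made concrete with a toy payoff matrix (a minimal sketch using the standard textbook payoffs, which are my illustration, not anything from the post): each player’s reasoning remains that of a rational maximizer, and the suboptimal equilibrium emerges as a second-order fact on top of that unchanged reasoning.

```python
# Standard prisoners' dilemma payoffs (textbook values, illustrative):
# (row_move, col_move) -> (row_payoff, col_payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    """Pick the move that maximizes my own payoff against a fixed opponent move."""
    return max(["cooperate", "defect"],
               key=lambda me: PAYOFFS[(me, opponent_move)][0])

# Defection dominates no matter what the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...so two maximizers converge on (defect, defect), worth (1, 1), even
# though (cooperate, cooperate) pays (3, 3). The equilibrium is a fact
# about the pair that sits on top of unchanged individual reasoning.
print(PAYOFFS[("defect", "defect")])  # (1, 1)
```

Nothing about either player’s goals or logic changes at the group level; the dilemma is an interpretive layer over the same ground-level maximizing.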

Everything Malcolm Gladwell

Malcolm Gladwell’s new book, Outliers: The Story of Success, hit the stores today. As a loser, I’m not super-enthused about successful people. That said, I’ve become a big Malcolm Gladwell fan, so here are a few links.

He’s just the sort of person who should have a blog; and he has one. He maintains an archive of his published writings, as does The New Yorker where he is a staff writer. His previous two books are The Tipping Point (2000) and Blink (2005).

He’s presented on some of his ideas at the 2006 and 2007 New Yorker conferences. He’s been on Q&A with Brian Lamb. Apparently Charlie Rose is a big fan because Mr. Gladwell has been on his show seven times. NPR has done a number of profiles, interviews and reviews which can be found in their archive. And of course he’s given a TED Talk. It’s on the origin of extra chunky pasta sauce and the proliferation of variety within product lines. More fundamentally it’s on the death of Platonism in the commercial food industry.

He’s been profiled in Fast Company (Sacks, Danielle, “The Accidental Guru,” January 2005), The New York Times (Donadio, Rachel, “The Gladwell Effect,” 5 February 2006) and now for his latest book New York Magazine (Zengerle, Jason, “Geek Pop Star,” 9 November 2008).

I used one of his articles as the basis for a little snark with “Malcolm Gladwell’s Infinite Monkey Theorem” (27 May 2008).

The funny thing you’ll notice if you follow a few of these links is that the hair isn’t the only big thinker factor. Despite a considerable speaker’s fee, Mr. Gladwell doesn’t own many suits.

The Ouroboros Economy II

A few months ago I took the opportunity of the Business Week cover depicting the economy as ouroboros for a few snickers (“Ouroboros to Mise en Abyme,” 28 July 2008). Now the ouroboros economy makes another appearance, this time in the much more serious pages of The New Yorker (Lanchester, John, “Melting into Air,” 10 November 2008, pp. 80-84). Again, I don’t have anything in mind: it’s just an icon shopping around for more meanings. But Mr. Lanchester gives it a novel and grandiose go:

… finance, like other forms of human behavior, underwent a change in the twentieth century, a shift equivalent to the emergence of modernism in the arts — a break with common sense, a turn toward self-referentiality and abstraction and notions that couldn’t be explained in workaday English. In poetry, this moment took place with the publication of “The Waste Land.” In classical music, it was, perhaps, the première of “The Rite of Spring.” Jazz, dance, architecture, painting — all had comparable moments. The moment in finance came in 1973, with the publication of a paper in the Journal of Political Economy titled “The Pricing of Options and Corporate Liabilities,” by Fischer Black and Myron Scholes.

The revolutionary aspect of Black and Scholes’s paper was an equation that enabled people to calculate the price of financial derivatives based on the value of the underlying asset. … The trade in these derivatives was hampered, however, by the fact that — owing to the numerous variables of time and risk — no one knew how to price them. The Black-Scholes formula provided a way to do so. It was a defining moment in the mathematization of the market. The trade in derivatives took off, to the extent that the total market in derivative products around the world is counted in the hundreds of trillions of dollars. Nobody knows the exact figure, but the notional amount certainly exceeds the total value of all the world’s economic output, roughly sixty-six trillion dollars, by a huge factor — perhaps tenfold.

It seems wholly contrary to common sense that the market for products that derive from real things should be unimaginably vaster than the market for things themselves. With derivatives, we seem to enter a modernist world in which risk no longer means what it means in plain English, and in which there is a profound break between the language of finance and that of common sense. …

If the invention of derivatives was the financial world’s modernist dawn, the current crisis is unsettlingly like the birth of postmodernism. For anyone who studied literature in college in the past few decades, there is a weird familiarity about the current crisis: value, in the realm of finance capital, evokes the elusive nature of meaning in deconstructionism. According to Jacques Derrida, the doyen of the school, meaning can never be precisely located; instead, it is always “deferred,” moved elsewhere, located in other meanings, which refer and defer to other meanings — a snake permanently and necessarily eating its own tail. This process is fluid and constant, but at moments the perpetual process of deferral stalls and collapses in on itself. Derrida called this moment an “aporia,” from a Greek term meaning “impasse.” There is something both amusing and appalling about seeing his theories acted out in the world markets to such cataclysmic effect.
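The Black-Scholes result Lanchester describes is compact enough to sketch. This is a minimal implementation of the published 1973 formula for a European call; the sample parameters at the bottom are purely illustrative:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes (1973) price of a European call option.
    S: spot price of the underlying, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money call, one year out, 5% rate, 20% volatility:
print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.20), 2))  # 10.45
```

The “mathematization of the market” amounts to exactly this: five observable-ish numbers in, a tradeable price out, and the judgment calls buried in estimating sigma.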

The Supernovae in Your Coffee Cup

I loved the film π. I consider it a hugely flawed film, but what I loved about it was the way that it worked in subtle allusions to the underlying concepts motivating the film. The main character walks through a park and the camera points skyward to show the denuded winter branches of the trees, an example of fractal symmetry. One of the images shown a number of times throughout the film was that of a cup of coffee. Whenever someone ended up in a diner, we got a tight-in shot of them dumping the cream into their coffee and the blooms of turbulent fluid redounding from the depths. It’s a perfect example of turbulence, a phenomenon that utterly defies computation. Since π I’ve never looked at a cup of coffee the same. Every time I pour cream into my coffee it’s a little ritual where for just a second I consider the boundless complexity of the world, as close as the cup in my hand.

I was amused to see a recent article in New Scientist invoke the image of the cup of coffee in reference to the problem of turbulent fluids in supernovae (Clark, Stuart, “How to Make Yourself a Star,” vol. 200, no. 2679, 25 October 2008, pp. 38-41):

As the dense inner material is flung through the less dense outer layers of a star, it creates turbulence and mixes everything up. Traditional computer simulations do not model turbulence well.

“Our theoretical understanding of turbulence is incomplete,” says astrophysicist Alexei Khokhlov of the University of Chicago. In other words, you cannot write down a set of equations describing the state of a turbulent system at any given time and then use them to predict what it will look like next. Instead, you have to employ a brute-force approach, using sheer computer muscle.

To see the scale of this problem, take your morning cup of coffee and stir in some milk. You are using turbulence to mix the two fluids. To determine how they mix, physicists mentally split the cup into boxes and assign numbers to represent the properties inside each box, such as the temperature and density of the fluid. A computer can then calculate how each box interacts with its neighbors during one brief instant of time and then re-evaluate those numbers. Once it has done this for every box, it starts again for the next slice of time and so on.

To do this massive computation perfectly, each box should be tiny and contain just one fluid particle, but before you can get anywhere near this sort of precision, the numbers become mind-bogglingly large. Scientists talk of degrees of freedom as a measure of both the numbers of particles in a system and the number of ways each particle can interact with those around it. A single cup of coffee possesses a staggering 10^40 degrees of freedom — far more than you can model on today’s computers. “Maybe in 10 years we will be able to fully model a cup of coffee,” says Khokhlov.

Until then the computation will always be approximate, and thus prone to errors, because small-scale physical interactions are not being taken into account. … If it is going to take 10 years to fully model a cup of coffee, how long until we can model an entire star?

“Never,” Khokhlov says. “Not until someone comes up with a clever theory that does not depend on what is happening on the small scale.” The only hope is to continue to investigate turbulence to learn how to better approximate its behavior.
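The box-by-box scheme the article describes can be sketched in miniature. This is a toy 1-D version modeling only diffusion between neighboring cells; resolving actual turbulence is precisely the part that drives the box count toward the 10^40 degrees of freedom quoted, and is omitted here:

```python
# A 1-D "cup" split into cells, each carrying one number (milk
# concentration). Each time step, every cell exchanges with its
# neighbors -- the boxes-and-neighbors update the article describes,
# minus the turbulence that makes the real problem intractable.
def mix_step(cells, kappa=0.25):
    """One explicit diffusion step with reflective boundaries."""
    n = len(cells)
    return [cells[i]
            + kappa * ((cells[i - 1] if i > 0 else cells[i])
                       + (cells[i + 1] if i < n - 1 else cells[i])
                       - 2 * cells[i])
            for i in range(n)]

# Milk poured into the left half of the cup:
cup = [1.0] * 8 + [0.0] * 8
for _ in range(500):
    cup = mix_step(cup)

# Total milk is conserved while the profile flattens toward uniform.
print(round(sum(cup), 6), round(max(cup) - min(cup), 3))
```

Even this cartoon makes the scaling problem visible: sixteen boxes and one property per box is a few dozen numbers per step, and the article’s point is that fidelity demands the box count grow toward one box per particle.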

OneMachine Doppelganger

Kevin Kelly has long discussed the OneMachine: the essentially unified, single distributed computer built up from all our networked PCs, PDAs, cell phones, digital cameras and other personal electronics (see e.g. “Tap Into the 12-Million-Teraflop Handheld Megacomputer,” Wired, vol. 16, no. 7, June 2008; “Dimensions of the One Machine,” The Technium, 2 November 2007; “One Huge Computer,” Wired, vol. 6, no. 8, August 1998).

Last week The New York Times ran an article on how below the surface, and running on the very same computers as the productive, life-enhancing OneMachine is a nefarious parallel network, OneMachine’s dark doppelganger, the BotNet (Markoff, John, “A Robot Network Seeks to Enlist Your Computer,” 20 October 2008, p. B1):

Botnets remain an Internet scourge. Active zombie networks created by a growing criminal underground peaked last month at more than half a million computers, according to shadowserver.org, an organization that tracks botnets. Even though security experts have diminished the botnets to about 300,000 computers, that is still twice the number detected a year ago.

The actual numbers may be far larger; Microsoft investigators, who say they are tracking about 1,000 botnets at any given time, say the largest network still controls several million PCs.

“The mean time to infection is less than five minutes,” said Richie Lai, who is part of Microsoft’s Internet Safety Enforcement Team, a group of about 20 researchers and investigators. The team is tackling a menace that in the last five years has grown from a computer hacker pastime to a dark business that is threatening the commercial viability of the Internet.

I have already written about how when the singularity occurs, it may not be what we expect. My suspicion is that either it will be overtly evil, or merely a recreation of the chaos of biological nature in a more durable, powerful and virulent form (“A Few Heretical Thoughts on the Singularity,” 19 August 2008).

What do phenomena like the BotNet suggest about the singularity? What comes will grow out of what is and what is will bequeath its characteristics to what comes — at least initially. Between the various military establishments and the criminal underground, we are instilling our machines with hostile, aggressive tendencies. But we are also making numerous, competitive systems. Will there be “the singularity” or will it, like in the novels of Vernor Vinge and Charles Stross, come in secretive, uncertain fits and starts? Will there be multiple singularities? Will one system cross the threshold, followed by another, then another? It makes sense to speak of “the singularity” when one is imagining a unified system, but when one is considering a multitude of contending systems, crossing the threshold of the singularity is but one move in a strategic game. Perhaps the machines will be hostile to predecessor biological life, but perhaps they will be so consumed in competition with their fellow AIs as to be merely indifferent to us, as we are to, say, pigeons or squirrels.

And how goes the strategic competition between OneMachine and BotNet? We ought to know. What portion of computational capacity, bandwidth, energy consumption and hours of their masters’ time are the two consuming? Qualitatively, how do their capabilities match up? Kevin Kelly has managed to make some calculations for the former, but what of the latter? Of course this would be subject to the usual problems of surveilling those who do not want to be surveilled.

Organizations like McAfee, Norton and the International Botnet Taskforce are attempting to build something akin to an immune system for the Internet, but the billion-year persistence of the arms race between host immune systems and the various infectious agents suggests that dampening catastrophe is probably the best outcome we can hope for. It’s an example of co-evolution, where competition between host and agent drives the development of one another. Viruses don’t kill their host by design; they merely seek to hijack their reproductive machinery to their own purposes. Killing the host, or at least killing them too quickly, or the epiphenomenon of killing too many of them too quickly, are all suboptimal in that they result in diminished opportunity for continued infection and reproduction. Ebola gets it wrong. HIV gets it really right. But virus behavior as a whole is not intelligent. Occasionally a virus goes super virulent or hits a particularly vulnerable population and a massive outbreak occurs that wreaks havoc for host and infectious agent alike. I presume that BotNets will continue to act something like this.
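The virulence trade-off here can be put in rough numbers. The sketch below uses the standard epidemiological shorthand R0 ≈ transmission rate over host-loss rate; the specific functional form and figures are my illustration, not fitted to Ebola, HIV, or any real botnet:

```python
# Toy virulence trade-off: a strain that exploits its host harder
# transmits faster, but also loses the host sooner.
def basic_reproduction_number(virulence, recovery=0.1):
    """R0 for a strain: expected secondary infections per host."""
    transmission = virulence ** 0.5   # diminishing returns on exploitation
    host_loss = recovery + virulence  # virulent strains burn through hosts
    return transmission / host_loss

strains = {v: round(basic_reproduction_number(v), 2)
           for v in (0.01, 0.1, 0.5, 2.0)}
print(strains)
# R0 peaks at intermediate virulence and collapses for host-killing
# strains: "getting it really right" means moderate exploitation.
```

The same logic plausibly applies to botnets: a bot that bricks its host, or gets it noticed and cleaned, forfeits future use of it.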

And since one third of known biological species are parasites and the proportion seems to be growing, it would seem that there is something fundamental about the strategy of parasitism. We should anticipate its continuance, both in genetic and electronic space.

The Legitimacy of Argumentum ad Hominem

Will Wilkinson and Crispin Sartwell consider the satisfactions of environmental soothsaying:

As I’ve said, the insane jackup of rhetoric with regard to global warming, “the greatest crisis the species has ever faced,” the death of the planet, etc, is the secular humanist liberal apocalypse. It’s a sheer competition for who’s most dire, most obsessed, and who’s more unanimous than whom. It’s the flood, complete with the reasons: our moral culpability. I predict this: when Obama is elected, liberals will feel better about themselves and the probable verdict of cosmic judgment, and they’ll tone down the eschatology, the ranting cant.

(“Ranting Cant,” The Fly Bottle, 2 August 2008; untitled, Eye of the Storm, 2 August 2008; respectively)

For my part, I imagine there is something to this argument. Every faction has its share of less than completely rational members. But if it is the contention of Messrs. Wilkinson and Sartwell that the behavior of some of the advocates of anthropogenic climate change bears one iota of relevance on the soundness of the theory itself, then this is a picture-perfect instance of the fallacy of argumentum ad hominem. The emotional satisfaction that someone takes in holding a particular position would seem irrelevant to the ultimate adjudication of said position.

Some time ago when I originally made the formal cognition post (1 January 2008) K.S. said that he didn’t see the point. What was my advocacy of formal cognition meant to achieve? I couldn’t quite answer him at the time, but Mr. Wilkinson’s post really clarifies the matter for me. I’m an advocate of formal cognition against rhetoric generally, but most especially against some of its more pernicious tactics: Freudianism broadly construed as an interpretive style, sociobiology in its normative aspect (an epistemological relative of Freudianism), and the secularization thesis.

For every purportedly empirical statement out there, there is built up a detritus of extraneous belief. There is the psychological baggage of the proposition: the advocacy or denial of an opinion is motivated. Cui bono? Or advocacy or denial becomes a part of one’s identity. People build an emotional complex around certain beliefs. Certain propositions become tropes mapping us into desired cultural categories. A proposition becomes the cornerstone of an elaborate worldview into which its constructors invest vast amounts of intellectual energy. Such people tend to become conservative about these propositions all out of proportion to the weight that the casual observer might assign to such beliefs.

It’s really easy to succumb to the desire to set aside the matter per se and argue the detritus. It’s certainly more emotionally satisfying. The purpose of a catalogue of validated logic and methodologies is to determine the soundness of a proposition and cast out the irrelevant considerations in a systematic way.

So, for example, the scientific veracity of anthropogenic climate change is within range of rational risk assessment. The systems concepts of a tipping point and self-reinforcing, accelerating change are legitimate and the potential implications of these concepts applied here are alarming. The perennial libertarian Alfred E. Neuman “What, me worry?” worldview has its own shortcomings, namely that disasters are plausible and occasionally systemic.

On the other hand, there is no proposition beyond the proposing hominid. I’m not so sure that the distinction between rhetoric and formal decidability is tenable, especially once one admits the scientific method into the corpus of formal cognition. Given that induction is logically and experientially unsound, the scientific method becomes merely a highly stylized rhetoric, a rhetoric whose admissible tactics are more narrowly circumscribed. It is most certainly a rhetoric that is more reliable than others, but it nonetheless exists with other rhetorics along a continuum of variably reliable tactics, rather than being cordoned off in a privileged category all its own.

If nothing else, the absolute prohibition against argumentum ad hominem seems incompatible with Thomas Kuhn’s The Structure of Scientific Revolutions. Is it even possible for the behavior, psychology, constellation of attendant beliefs and rhetorical strategies of the advocates for a proposition to be irrelevant to the acceptance or rejection of the proposition? I think that once one dispenses with the notion of truth or falsity of a proposition in any strong sense in favor of mere acceptance or rejection (the sociology of knowledge), then these previously considered extraneous factors become relevant. They are real channels by which information and belief are transmitted throughout society. They are part of the practice of acceptance and rejection as they actually happen. Argumentum ad hominem seeks to make explicit and disrupt these channels. It reduces their efficacy through ridicule.

(This is not to deny the truth or falsity of our beliefs in some ultimate sense. The truth is out there — it just doesn’t intervene in our deliberations in any radical way. Prima facie, incomplete beliefs about the world can be made workable.)

Edsger W. Dijkstra’s Blog

One of the blog parlor games is to come up with unlikely blog-like precursors. Doogie Howser, M.D. is a favorite. I’m going to propose computer scientist Edsger W. Dijkstra.

I’m a big fan of “Real Programmers Don’t Use PASCAL,” which contains a few direct derogatory references to Mr. Dijkstra and is generally opposed to his views on computer science. I guess I like it more for its computer geek-heroic tone. Since I am mostly a proponent of system and method, I am on the side of Mr. Dijkstra. But anyway, from the Wikipedia page on Mr. Dijkstra:

Dijkstra was known for his essays on programming; … He was also known for his habit of carefully composing manuscripts with his fountain pen. The manuscripts are called EWDs, since Dijkstra numbered them with EWD as prefix. Dijkstra would distribute photocopies of a new EWD among his colleagues; as many recipients photocopied and forwarded their copy, the EWDs spread throughout the international computer science community. The topics are mainly computer science and mathematics, but also include trip reports, letters, and speeches. More than 1300 EWDs have since been scanned, with a growing number also transcribed to facilitate search, and are available online at the Dijkstra archive of the University of Texas.

Mr. Dijkstra’s “blog,” the EWD archive, is hosted here.

Ouroboros to Mise en Abyme

A few unsystematic thoughts on Ouroboros and mise en abyme:

  • I almost mentioned Douglas Hofstadter’s book, I Am a Strange Loop, in last week’s post (“The Mythical Economy,” 23 July 2008). He could have gone with Ouroboros on the cover too, but instead he went with mise en abyme.

    Or maybe he couldn’t have gone with Ouroboros. While Ouroboros is, on a superficial level, obviously a strange loop and a symbol that could be seen as self-referential, a peek at the index of Hofstadter’s book at the entry for “video feedback” — a technological mise en abyme — shows that he has a thicker analogy in mind:

    video feedback, 65-71; as candidate for strange loop, 103, 187, 203, 361; epiphenomena in, 68, 70-71; fear of meltdown, 56, 57; fractalic gestalts of, 204; lack of “I” in, 203; lack of perception in, 75-77, 187, 203; lack of symbols in, 203; lack of thinking in, 203; locking-in of patterns in, 70; parameters of, 65-67, 69, 75; reverberation in, 67-68; two systems entwined in, 210-211, 253-254; vanilla loop in, 208

  • While I’m amused at the notion of an Ouroboros economy, I can’t really think of any real correlate to the slightly humorous image. Unless maybe something like a naturalistic notion of the human economy, wherein the human economy is nature parasitic upon itself. The destruction of the biological world as giving birth to the artifactual or the cybernetic world. Ouroboros reborn for the Twenty-first Century!

  • The thing that’s really causing me to bring up mise en abyme is some thoughts on how people think about the future. People are faced with the need to decide and nearly all decisions that people make are, when not completely about the future, at least future-oriented. People’s thoughts about the future are divided into two closely related activities, carried out in tandem: planning and prediction. Prediction occasionally becomes an activity of its own, but for the most part prediction is an activity carried out in service of the more pragmatic planning.

    Planning is a branching strategic game. It works like this. I have a goal whose attainment is not simple: it involves a number of steps and it could be thwarted at any one of them. I start with my known situation and have a vague idea what the path to my goal would be and I make a series of hypothetical decisions. I test the soundness of a hypothetical decision by predicting the outcome of such an action. That is, I imagine a potential future.

    In the first round, the one inaugurated in the present, I know what my options are because they are present. In the second round and in all subsequent rounds, I must employ prediction to imagine what options I will have to choose from, because from then on I am dealing in an imagined future. I repeat this hypothesize-a-decision, predict, test, simulate-new-options algorithm down the line until I reach the last round, whose decision results in the attainment of the goal.

    When I make predictions about the future, I rarely make a single prediction, since prediction, especially the sort of intuitionistic variant that people employ for the purpose of most of their planning, is not very reliable. So I predict a range of possible futures. And in each possible future I face a range of possible decisions that I can take. Predicting and planning branch. Most of these I abandon in favor of the most fruitful seeming paths. But if a path dead-ends, I back up until I find the probable fateful decision that sent me down the path to the dead end. I recollect the other options at that possible future and imagine my way down another branch. I also generally flag a number of contingency plans. I went with this predicted future, but as things actually unfold, if it turns out that I predicted wrong, I have a plan ready for that other branch too.

    When I have what I imagine to be a satisfactory path from present to goal, I lock in each decision hypothetically made into “what I’ve decided upon.”

    This is a pretty systematic model and not necessarily exactly how most people make plans. People rarely sit down and carry it out algorithmically from beginning to end. More frequently people engage in this activity in fits and starts, not taking the problem from start to finish, but working on pieces that strike them at various occasions throughout their day. They absentmindedly do it while at their work computer, or do it extremely quickly while laying a joint plan with a partner over the telephone. Or maybe they try to be thorough about it and make a list on a notepad so they can see what’s done and what’s still in need of attention. Whatever the case, I think that ultimately this is what people are doing.

    The important point for mise en abyme is that near-future decisions can only be locked in once more distant future decisions have been validated. Each step is dependent on the one after it having been made first. One starts the planning and predicting from the present and works one’s way forward, but one decides, as it were, backward, from the future to the present. Predictions and plans regarding the immediate future include as a part within them predictions and plans regarding the intermediate future, which in turn contain predictions and plans about the distant future and so on. My thoughts about the future are mise en abyme insofar as they contain within them further thoughts about more distant futures.

    What one is doing in this process of planning for the future is conducting a depth-first search of potential futures. And depth-first search is canonically thought of as recursive.

  • Mise en abyme seems to have a lot more analogistic or systemizing potential. Scale symmetry (e.g. fractals) along with all the related phenomena that can be grouped under that pattern seem coterminous with mise en abyme. Hegel’s logical schema seems like a highly abstract instance of mise en abyme, where each intellectual system is subsumed into a higher order intellectual system.

  • Perhaps there is a historical development of the conceptual depth and sophistication of the idea of self-referentiality. Ouroboros is simple cyclicality, though extrapolated into a notion of infinity or eternity. The homunculus is a limited instance of scale symmetry. Modern formal recursion is the culmination.
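The planning-as-depth-first-search idea above can be sketched recursively. The state graph and its options here are hypothetical toy data standing in for prediction; the recursion is the point:

```python
def options(state):
    """Imagined moves available from a (predicted) state -- toy data
    standing in for the predictive step in planning."""
    return {
        "home":    ["pack", "nap"],
        "pack":    ["drive", "bus"],
        "nap":     [],               # dead end: back up, try another branch
        "drive":   ["goal"],
        "bus":     ["traffic"],
        "traffic": [],               # another dead end
    }.get(state, [])

def plan(state, goal, path=()):
    """Depth-first search over imagined futures: a decision is locked in
    only once the deeper future depending on it has been validated."""
    path = path + (state,)
    if state == goal:
        return path
    for nxt in options(state):       # branch over predicted options
        found = plan(nxt, goal, path)
        if found:                    # the deeper future checks out,
            return found             # so this step can be locked in
    return None                      # no branch works: caller backtracks

print(plan("home", "goal"))  # ('home', 'pack', 'drive', 'goal')
```

The backward character of deciding falls out of the recursion: `plan` returns a committed path only after the recursive call on the more distant future has returned one, which is the mise en abyme structure in code.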

Markets: Plan A or Plan B?

In an aside to an article on genetic determinism, The National Review comments on markets and the limits of information science (Manzi, Jim, “Undetermined,” vol. LX, no. 10, 2 June 2008, pp. 26-32):

In the middle of the 20th century, Friedrich Hayek and the libertarians he inspired faced those who asserted that the economy could be successfully planned. The libertarian position was not that such planning could be proved impossible in theory, but that we lacked sufficient information and processing power to accomplish it. The world of economic interaction was so complex that it overwhelmed our ability to render it predictable; hence the need for markets to set prices.

I don’t for a moment believe that the Libertarian Party will disband once we cross some floating point operation threshold on supercomputers. There is the practical and there is the principled reason for subscribing to the libertarian position and I have read some of its proponents specifically state that even if the command economy could deliver superior performance, they would still be libertarians because of the component of human freedom.

Malcolm Gladwell’s Infinite Monkey Theorem

The Infinite Monkey Theorem apparently still holds if you substitute mediocre humans for monkeys. Here is Malcolm Gladwell writing on how to brute force genius (“In the Air,” The New Yorker, 12 May 2008):

In the nineteen-sixties, the sociologist Robert K. Merton wrote a famous essay on scientific discovery in which he raised the question of what the existence of multiples tells us about genius. No one is a partner to more multiples [simultaneous scientific discovery], he pointed out, than a genius, and he came to the conclusion that our romantic notion of the genius must be wrong. A scientific genius is not a person who does what no one else can do; he or she is someone who does what it takes many others to do. The genius is not a unique source of insight; he is merely an efficient source of insight. “Consider the case of Kelvin, by way of illustration,” Merton writes, summarizing work he had done with his Columbia colleague Elinor Barber:

After examining some 400 of his 661 scientific communications and addresses . . . Dr. Elinor Barber and I find him testifying to at least 32 multiple discoveries in which he eventually found that his independent discoveries had also been made by others. These 32 multiples involved an aggregate of 30 other scientists, some, like Stokes, Green, Helmholtz, Cavendish, Clausius, Poincaré, Rayleigh, themselves men of undeniable genius, others, like Hankel, Pfaff, Homer Lane, Varley and Lamé, being men of talent, no doubt, but still not of the highest order. . . . For the hypothesis that each of these discoveries was destined to find expression, even if the genius of Kelvin had not obtained, there is the best of traditional proof: each was in fact made by others. Yet Kelvin’s stature as a genius remains undiminished. For it required a considerable number of others to duplicate these 32 discoveries which Kelvin himself made.

This is, surely, what an invention session is: it is Hankel, Pfaff, Homer Lane, Varley, and Lamé in a room together, and if you have them on your staff you can get a big chunk of Kelvin’s discoveries, without ever needing to have Kelvin — which is fortunate, because, although there are plenty of Homer Lanes, Varleys, and Pfaffs in the world, there are very few Kelvins.

Our tendency is to imagine Newton, Darwin or Einstein as the pinnacle of genius, but they are merely the peak performance of that draft-design kludge we all carry around in our heads. One can easily imagine ranks of genius many levels beyond our showings to date, ranging all the way from the Star Trek character Data to the gods (I use these literary examples merely to demonstrate that we’re capable of imagining higher orders of genius). Each ranking of genius, all the way up to the gods, bears the same relation to the rank just below as Kelvin does to the Homer Lanes, Varleys and Pfaffs: not one of qualitative difference, but merely one of efficiency. And that relation obtains not just between each level, but over the entire span from pinnacle to base as well. It’s the wet-machine corollary of Turing completeness.

This is, of course, why the proof from design for the existence of god fails, because one can imagine the universe being created in an instant by a supergenius, but it is equally plausible that it was created by a committee with some time on their hands. And the more time available or the larger the committee, the less capable any of its members has to be to produce a given output.
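The committee-versus-genius arithmetic can be made explicit (the probabilities are purely illustrative, not drawn from Merton): if a lone genius makes a given discovery with probability 0.9 and a merely talented worker with probability 0.1, independence tells you how big a committee stands in for the genius.

```python
from math import log, ceil

def committee_size(p_worker, p_target):
    """Smallest committee of independent workers, each succeeding with
    probability p_worker, such that at least one succeeds with
    probability >= p_target: solve 1 - (1 - p)**n >= p_target for n."""
    return ceil(log(1 - p_target) / log(1 - p_worker))

print(committee_size(0.1, 0.9))  # 22: about two dozen talents per genius
```

The trade-off is only logarithmic, which is the quantitative heart of the point: more time or more members substitutes smoothly, if inefficiently, for individual capability.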