The Beijing Olympics Did Not Take Place

One of the amusing stories coming out of the 2008 Beijing Olympics opening ceremonies is that it turns out that a portion of the video feed of the fireworks display was actually a computer simulation spliced into the broadcast. The fireworks were set off, but planners determined that they wouldn’t be able to film them as well as they would have liked, so they manufactured a version of them according to how they wanted them to have been filmed (Spencer, Richard, “Beijing Olympic 2008 opening ceremony giant firework footprints ‘faked’,” Daily Telegraph, 10 August 2008):

Gao Xiaolong, head of the visual effects team for the ceremony, said it had taken almost a year to create the 55-second sequence. Meticulous efforts were made to ensure the sequence was as unnoticeable as possible: they sought advice from the Beijing meteorological office as to how to recreate the hazy effects of Beijing’s smog at night, and inserted a slight camera shake effect to simulate the idea that it was filmed from a helicopter.

But what does it even mean to say that portions of the event were “faked”? The whole thing was illusion and artifice. Obviously significant portions of the event were computer graphics. The scroll that served as the mat for a significant portion of the floor show included computer graphics to create the image of its rolling. The projection of the Earth inside the globe was computer graphics and the unfurling scroll around the perimeter of the stadium as the final flamebearer faux-ran to the Olympic torch was computer graphics.

Increasingly, computer graphics will come to be the norm, what’s really “real,” and the merely material world will become the anomaly. Already we’re at the point where the big story about the latest Batman film was not the CG, but that the stuff that would usually be CG wasn’t CG (e.g. Brown, Scott, “Dark Knight Director Shuns Digital Effects For the Real Thing,” Wired, vol. 16, no. 7, July 2008, pp. 122-127). Already people are talking about augmented reality. The problem that I have with, say, Google Maps and other spatial data is that they’re stuck in a little box in my hand. Where they belong is overlaid onto the world. Real-world objects are the ultimate representational tokens.

Movable type, opening ceremonies of the Beijing Olympics, 8 August 2008

Or, to turn things around, my favorite performance of the night was the “movable type” arrangement of 897 actuating blocks that raised and lowered to create patterns like a waving flag and ripples in a pond. My first reaction was that it must have been computer control that created the images of waves and ripples. I wondered at how much that many hydraulic lifts must have cost and tried to imagine the programming that could produce those patterns. The first time the camera panned low and showed human legs standing and squatting I was amazed.

This was an instance of “natural” things “simulating” machines. What we were watching was giant wooden pixels. What was amazing about this performance was that humans could achieve this machine-like level of control and precision.

1994, David Turnley, James Nachtwey, 1994 elections in South Africa

But of course I don’t need to go to bizarre lengths. The more traditional means of artifice are well documented. There’s a reason that they call it media (middle, medium).

Imagination Unmoored

I like it when art becomes its own medium of response to itself, rather than leaving it to prose. I have always liked Nat King Cole’s “Mona Lisa” and Don McLean’s “Vincent” (YouTube | Wikipedia). But Robbie Dingo’s recreation of Vincent van Gogh’s Starry Night in Second Life, making a video of the process of creation, then setting it to Don McLean’s song does it all one better (Au, Wagner James, “Remake the Stars,” New World Notes, 18 July 2007).

Schema of the arts and sciences aside, I like this for what it suggests for the future of virtual worlds. Hitherto our imaginations have been stunted by continuous exposure to the narrow Newtonian world of the macroscopic everyday. Witness, for example, what happens when people try to imagine fantastical animals. All that we can come up with is combinations of existing animals: griffins, mermaids, centaurs, dragons, Cerberus, et cetera.

Once we start to live in a regular way in virtual worlds of our own creation, a dynamic will form where each feat of imagining will establish a new norm and a new developmental environment from which each subsequent foray of imagining and generation of imagineer will be capable of going a little further beyond the forms of this world. As we increasingly live in worlds not constrained by the same limits as the material world, our imaginations will become completely unmoored from the forms provided to us by macroscopic nature. The true, autonomous nature of the imagination — throughout all of history shackled by the relentless, overwhelming conditioning of the narrow forms presented to us by dull matter — will be liberated.

And owing to neuroplasticity, inherited or induced, maturing and living in radically different worlds will allow us to develop new modes of being and new understandings. In the future we will live in our dreams and our nightmares. Science has laid the groundwork for our art to become the more fundamental reality. The direction of humanity is a retreat from the material world into a world composed entirely of mind.

Mr. Au mentions Akira Kurosawa’s Dreams (IMDB | Wikipedia). I think maybe it’s time to dig out and rewatch an old favorite, Until the End of the World (IMDB | Wikipedia).

Politically, for all of human history the Earth has provided the unified point of reference for all humanity. With a proliferation of possible environments, the hitherto more or less unified character of the human world will gradually degenerate. The dissolution of our political order, multiculturalism, neo-primitivism, the turning away from master narratives and the dawning of the postmodern era are natural consequences of technology.

As Vernor Vinge said of the coming of the singularity, “I can see us becoming weird — before my very eyes” (“My Apocalyptic Vision is Very Narrow,” 13 June 2008).

Edsger W. Dijkstra’s Blog

One of the blog parlor games is to come up with unlikely blog-like precursors. Doogie Howser, M.D. is a favorite. I’m going to propose computer scientist Edsger W. Dijkstra.

I’m a big fan of “Real Programmers Don’t Use PASCAL,” which contains a few direct derogatory references to Mr. Dijkstra and is generally opposed to his views on computer science. I guess I like it more for its computer geek-heroic tone. Since I am mostly a proponent of system and method, I am on the side of Mr. Dijkstra. But anyway, from the Wikipedia page on Mr. Dijkstra:

Dijkstra was known for his essays on programming; … He was also known for his habit of carefully composing manuscripts with his fountain pen. The manuscripts are called EWDs, since Dijkstra numbered them with EWD as prefix. Dijkstra would distribute photocopies of a new EWD among his colleagues; as many recipients photocopied and forwarded their copy, the EWDs spread throughout the international computer science community. The topics are mainly computer science and mathematics, but also include trip reports, letters, and speeches. More than 1300 EWDs have since been scanned, with a growing number also transcribed to facilitate search, and are available online at the Dijkstra archive of the University of Texas

Mr. Dijkstra’s “blog,” the EWD archive, is hosted here.

The Thinking Cap

Scientific American has an article on how the mind-machine interface is about to go commercial with a wearable EEG game controller that reads your mind (Sergo, Peter, “Head Games: Video Controller Taps into Brain Waves,” 14 April 2008). How’d they do it? Exactly the way the people at Wired would imagine. Rather than developing a series of hard-won, determinate correlations between identified brain waves and intentions, they just brute-forced it. They recorded a gigantic quantity of sample data and processed it with a bank of computers to find the patterns:

Emotiv solved this brain-computer interface problem with the help of a multidisciplinary team that included neuroscientists, who understood the brain at a systems level (rather than individual cells), and computer engineers with a knack for machine learning and pattern recognition. Over the last four years, the company has conducted thousands of EEG recordings on hundreds of volunteers — not all gamers — as they experienced virtual scenarios that elicited various emotions, facial expressions and cognitive demands. The aim was to find a revealing brain activity that many people shared — a needle in a haystack of frenzied signals. Now, the EPOC allows users to fine-tune settings that allow it to pick up on even the subtlest of smirks.

When building these algorithms commenced two years ago, it had taken up to 72 hours for a bank of powerful computers to run through a mere 10 seconds of individual brain data and extract important features. Sorting through a seemingly endless stream of recordings eventually led Emotiv to find consistent signal patterns that revealed specific mental experiences. “Through a large enough sample size,” Le says, “we were able to get some consistency around the population to attain a high degree of confidence that it accurately measures an emotional state.”

And in dispensing with theoretical purity and just going with base correlation, the engineers at Emotiv didn’t even have to concern themselves with the signal-to-noise ratio of the data:

Buch also suspects that the facial expressions that the EPOC detects are based more on the electrical activity of facial and scalp muscles than the brain per se. Although the electrical activity of muscles, he explained, is normally considered as artifact noise that needs to be filtered out to attain clean EEG signals that are of interest, they are still informative about how facial muscles move, such as during a wink. Tan agrees, saying that in their classification strategy some of the EPOC’s detections are based on muscle movements.

It’s all just correlation, and if the noise helps identify the correlation, then it’s just as good as signal. In the petabyte age there is no issue of separating the phenomenon under consideration from the phenomena not under consideration. Any possible interference will be defeated by the size of the data set.
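
To make the flavor of that approach concrete, here is a toy sketch (synthetic data, a made-up two-state task, nothing resembling Emotiv’s actual pipeline) of what “just find the patterns in a big pile of recordings” looks like: no theory about which channels are brain and which are muscle, just a per-state average pattern and a correlation against it.

```python
# A toy illustration of the brute-force approach described above: no model of
# which channels are "signal" and which are muscle-artifact "noise" -- every
# sample is treated as a feature, and labels are assigned by correlation with
# per-class average patterns learned from a big pile of labeled recordings.
# All data here is synthetic; Emotiv's actual pipeline is far more involved.
import numpy as np

rng = np.random.default_rng(0)

def make_epochs(n, n_features=64, label=0):
    """Fake EEG epochs: a weak class-specific pattern buried in noise."""
    pattern = np.sin(np.linspace(0, 3.14 * (label + 1), n_features))
    return pattern * 0.5 + rng.normal(size=(n, n_features)), np.full(n, label)

# "Thousands of recordings on hundreds of volunteers," in miniature.
x0, y0 = make_epochs(500, label=0)   # e.g. neutral
x1, y1 = make_epochs(500, label=1)   # e.g. smirk
X, y = np.vstack([x0, x1]), np.concatenate([y0, y1])

# Training is nothing but averaging: one centroid per mental/facial state.
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def classify(epoch):
    # Pick the state whose average pattern correlates best with this epoch.
    return max(centroids, key=lambda c: np.corrcoef(epoch, centroids[c])[0, 1])

test, labels = make_epochs(100, label=1)
accuracy = np.mean([classify(e) == l for e, l in zip(test, labels)])
print(f"held-out accuracy: {accuracy:.2f}")
```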

Now if they would just make a model that looks like this:

Robotech, Rick Hunter in the thinking cap

And maybe control an F-14 that transforms into a 50 foot tall robot instead of stupid games.

The Jules Verne of the Future Will be a Computer Scientist

Wired Magazine’s cover story this month on The End of Science / The Dawning of the Petabyte Age (Anderson, Chris, vol. 16, no. 7, July 2008, pp. 107-121) has a very mundane answer to John’s enthusiasm: just scoop up tons of seawater, sequence every piece of DNA that you find, and compare it to a database of known DNA. The system will be able to flag each strand as existing species / new species.

We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.

The best practical example of this is the shotgun gene sequencing by J. Craig Venter. Enabled by high-speed sequencers and supercomputers that statistically analyze the data they produce, Venter went from sequencing individual organisms to sequencing entire ecosystems. In 2003, he started sequencing much of the ocean, retracing the voyage of Captain Cook. And in 2005 he started sequencing the air. In the process, he discovered thousands of previously unknown species of bacteria and other life-forms.

Unfortunately this doesn’t do much to tell us about what the creature is like.

If the words “discover a new species” call to mind Darwin and drawings of finches, you may be stuck in the old way of doing science. Venter can tell you almost nothing about the species he found. He doesn’t know what they look like, how they live, or much of anything else about their morphology. He doesn’t even have their entire genome. All he has is a statistical blip — a unique sequence that, being unlike any other sequence in the database, must represent a new species.

This sequence may correlate with other sequences that resemble those of species we do know more about. In that case, Venter can make some guesses about the animals — that they convert sunlight into energy in a particular way, or that they descended from a common ancestor. But besides that, he has no better model of this species than Google has of your MySpace page. It’s just data.

But who knows, soon enough we’ll have software that will take a DNA sequence as input and produce a virtual model of a creature complete with visualization and tables of physiological data (bone density, blood chemistry, synapse count, etc.). We’ll never even have to find an instance of the creature.
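
In the meantime, the existing-species-versus-new-species flagging step itself is simple enough to caricature. A toy sketch, assuming the “database of known DNA” is nothing more than an in-memory set of short subsequences (real metagenomic pipelines involve alignment, assembly and serious statistics):

```python
# Toy version of "compare it to a database of known DNA": a read is flagged
# as a candidate new species if too few of its k-mers appear in the reference
# database. Real shotgun-metagenomics pipelines are far more sophisticated;
# this only illustrates the flagging idea.
def kmers(seq, k=8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

known_sequences = [           # stand-in for the reference database
    "ATGGCGTACGTTAGCCGTACGATCGGATCCATGCAAGT",
    "TTGACCGTAGGCTAACCGGTTACGATGCATCGATCGGA",
]
reference = set().union(*(kmers(s) for s in known_sequences))

def flag(read, threshold=0.5):
    """Return 'existing species' or 'new species' for one sequenced read."""
    ks = kmers(read)
    shared = len(ks & reference) / len(ks) if ks else 0.0
    return "existing species" if shared >= threshold else "new species"

reads = [
    "GCGTACGTTAGCCGTACGATCGGATC",   # substring of a known sequence
    "CCCCTTTTGGGGAAAACCCCTTTTGG",   # nothing like anything in the database
]
for r in reads:
    print(r, "->", flag(r))
```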

Update, 25 June 2008: I think I’ve got my references a little crossed here. I titled the post The Jules Verne of the Future Will be a Computer Scientist for symmetry with John’s post, but Jules Verne is the author of the exploration stories, not the explorer himself, whereas the hypothetical computer scientist to which I am referring would be one of Jules Verne’s characters. The proper title should have been The Captain Nemo of the Future Will be a Computer Scientist.

My Apocalyptic Vision is Very Narrow

More than ten years ago I read Kevin Kelly’s interview with Vernor Vinge in Wired (“Singular Visionary,” vol. 3, no. 6, June 1995) and I have been repeating Mr. Vinge’s formulation of the robot apocalypse almost word for word ever since. But I was never able to locate the original article. Anyway, while reading around the Wikipedia page on the technological singularity today I came across a reference to Mr. Vinge and recognized it as the long-lost name. A few strokes of the keyboard at Google revealed my favorite dystopian vision:

Kelly: In your books, you sometimes focus on the idea of a singularity — the point at which a mathematical function goes infinite. What does that mean to you in terms of a cultural singularity?

Vinge: All sorts of apocalyptic visions are floating around, but mine is very narrow. It just says that if we ever succeed in making machines as smart as humans, then it’s only a small leap to imagine that we would soon thereafter make — or cause to be made — machines that are even smarter than any human. And that’s it. That’s the end of the human era — the closest analogy would be the rise of the human race within the animal kingdom. The reason for calling this a “singularity” is that things are completely unknowable beyond that point.

Kelly: Do you see any evidence that we are headed toward a singularity?

Vinge: I think the singularity may explain Fermi’s paradox: where is all the other intelligent life in the universe? For years, there have been two theories: the first is that civilizations exterminate themselves, and the second is that these outer civilizations are so weird there’s no way to interact with them. That second explanation has gained a lot of weight in my mind, because I can see us becoming weird — before my very eyes.

The striking thing to me is that qualification, “or cause to be made.” We won’t make the machine smarter than we are. We will only make the machine as smart as we are and then that machine will make the machine more intelligent than us. And then each more intelligent machine will be capable of making another even more intelligent still. Machine evolution will take over and, with software having reproductive cycles that will make bacterial reproduction glacial by comparison, will quickly outstrip human capability or comprehension.

The Singularity is Near

June 2008, Top Department of Energy Supercomputing Performance in Teraflops, RoadRunner tops one petaflop

Via ArmsControlWonk, the Department of Energy supercomputer called RoadRunner has become the first to achieve sustained petaflop performance. In scientific notation that’s 10^15 floating-point operations per second. In little-kid numbers, that’s a thousand trillion floating-point operations per second (Lewis, Jeffrey, “RoadRunner,” 10 June 2008; “U.S. Department of Energy’s New Supercomputer is Fastest in the World,” U.S. Department of Energy press release, 9 June 2008).

The machine was built in conjunction with IBM. It consists of somewhere in the neighborhood of 3,500 compute nodes with each node consisting of two AMD dual core Opterons, four PowerXCell 8i processors for extra floating point capability and 24 GB of RAM. Overall the machine consists of 6,912 AMD Opterons, 12,960 IBM PowerXCell 8is and 80 terabytes of RAM. It will have access to a file server with 2,000 terabytes of hard disk storage. Roadrunner occupies approximately 12,000 square feet and cost $133 million. The AMD Opterons are a common desktop PC processor and the PowerXCell 8i is the processor from a Sony PlayStation 3. It runs RedHat Linux as its operating system. As Robin Harris from ZDNet points out, because the better part of this machine is off-the-shelf components, this really represents the commodification of supercomputing (“PS3 Chip Powers World’s Fastest Computer,” Storage Bits, ZDNet, 10 June 2008; “Roadrunner’s Backing Store,” StorageMojo, 11 June 2008).
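
As a quick sanity check, the quoted figures more or less hang together (the numbers below are only the ones cited above, which are themselves rounded):

```python
# Sanity-checking the quoted Roadrunner figures against each other.
opterons = 6912          # dual-core AMD Opterons (quoted total)
cells = 12960            # PowerXCell 8i processors (quoted total)
ram_per_node_gb = 24     # quoted per-node RAM

nodes = opterons / 2                      # "two AMD dual core Opterons" per node
print(f"implied node count: {nodes:.0f}")              # ~3,456: "neighborhood of 3,500"
print(f"implied Cells per node: {cells / nodes:.2f}")  # ~3.75: roughly the quoted four
print(f"implied total RAM: {nodes * ram_per_node_gb / 1024:.0f} TB")  # ~81 TB: "80 terabytes"
```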

RoadRunner will be housed at Los Alamos National Laboratory and will be used by the National Nuclear Security Administration to perform calculations to certify the reliability of the U.S. nuclear weapons stockpile through highly detailed simulation rather than conducting nuclear tests. Mr. Lewis at ArmsControlWonk has more on the implications of this for the U.S. nuclear testing regime. He points out that questions about the ability of the NNSA to certify the U.S. nuclear stockpile using simulation were a central issue in the Senate rejection of the Comprehensive Test Ban Treaty. So maybe reconsideration of the CTBT will be on the agenda for the next President and Congress?

But this is all detail. The important point is the graph of peak computing performance of DOE supercomputers. It is clear that the singularity is near.

As Mr. Lewis points out, the fastest supercomputer used in nuclear weapons simulations has, not coincidentally, historically also been the fastest supercomputer in the world. This tight coupling between computing and nuclear weapons is striking. It’s worth noting that the first computer, ENIAC, though not yet tethered to the nuclear establishment, was constructed during the Second World War for the purpose of calculating ballistics trajectory tables for Army artillery units. As J. Robert Oppenheimer said,

In some sort of crude sense, which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin, and this is a knowledge which they cannot lose.

It is not just the physicists that have known sin. The computer scientists have known sin as well. Given this coupling hitherto, it should be fairly obvious that the first androids and the first general artificial intelligence will be military in purpose. That is, the first androids and the first general artificial intelligence will be innately aggressive.

The singularity is near. It is more likely that it will be a cataclysm than a boon.

The State of Brain-Machine Interface

Given that in the last five days it has been reported that scientists were able to read signals off the brain of a monkey and use the information to control a robotic arm in a useful manner, that Dean Kamen has done the same for humans by reading the signals off nerve endings, and that another team has been able to use fMRI and some adaptive software systems to guess what noun a subject is thinking of with 77 percent accuracy, I’d say that we’re cusping on a sudden leap forward in the field of brain-machine interface, no?

Carey, Benedict, “Monkeys Think, Moving Artificial Arm as Own,” The New York Times, 29 May 2008

Tweney, Dylan, “Dean Kamen’s Robot Arm Grabs More Publicity,” Wired, 29 May 2008

Randerson, James, “Scientists Move a Step Closer to Mind-Reading,” The Guardian, 30 May 2008

Militarism: Lose It or Use It

Gareth Porter wrote a closely argued book a few years ago, Perils of Dominance: Imbalance of Power and the Road to War in Vietnam (2005), making the contrarian argument that it wasn’t insecurity over some domino effect imperiling the United States that got us into the Vietnam War so much as overconfidence based on the known superiority of the U.S. over the Soviet Union and China.

Mr. Porter’s point was that U.S. hegemony today continues to tempt us to further foreign adventures, namely Iraq. Ezra Klein makes the same point in reference to U.S. military spending (“Your World in Charts: ‘We’re #1’ Edition,” The American Prospect, 31 January 2008):

There may, to be sure, be an argument for reducing our expenditures on hardware and increasing them on manpower, but there’s no real argument for increasing our total expenditures. This is particularly true in light of the last few years, where the size and power of our military fueled a vast overconfidence in its capabilities, which in turn helped ease our decision to invade Iraq, thus contributing to a venture that most all security experts agree has dramatically reduced our safety.

Where your money is, there your heart will be too. Or perhaps it’s not just money that burns a hole in your pocket: military power does as well.

Actually, I really doubt these are causal factors. If there was a military buildup during the 1980s that mysteriously persisted past the end of the Cold War, it was in large part owing to the work of people like then Wyoming Representative and Secretary of Defense Dick Cheney, then Director of Policy Planning Staff Paul Wolfowitz, then Assistant Secretary of Defense for International Security Policy Richard Perle, then head of the Ballistic Missile Threat Assessment Commission Donald Rumsfeld, as well as the rest of the usual cast of characters — the American Enterprise Institute, the Project for a New American Century, et cetera. As James Mann said in Rise of the Vulcans, one of the narratives that unites this disparate group is that of rebuilding the U.S. military and the country’s willingness to use it after the end of the Vietnam War — which brings us back to Mr. Porter’s point. These were people for whom no disaster could induce a measure of caution.

For this ideological coalition, the military buildup was a necessary step in a long-standing plan whereby U.S. hegemony would be preserved and extended through a series of small wars. This project was temporarily blocked by the presidency of Bill Clinton, but resumed once the actors were all brought back into office by George Bush, Jr., promotions all around.

The problem was selling the agenda to the rest of the U.S. electorate. In that regard Stormin’ Norman Schwarzkopf, Colin Powell (“First we are going to cut it off and then we’re going to kill it”) and CNN night vision footage of Patriot Missiles rising from Tel Aviv to meet incoming Scuds during the first Gulf War all went a long way toward reinstilling the invincibility and the righteousness of U.S. foreign interventions into the minds of your average voter. People say that everything changed after 11 September 2001, but in this regard, 11 September is the catalyst added to an ideological concoction that had been brewing for ten years.

The perfect illustration of this point came when, in reference to Bosnia, Secretary of State Madeleine Albright asked then Chairman of the Joint Chiefs of Staff Colin Powell, “What’s the point of having this superb military you’re always talking about if we can’t use it?” For a minimalist like Secretary Powell, the point of having “this superb military” is so that you don’t have to use it. But it’s a self-defeating proposition, because its existence eventually becomes the very argument for its use. And even liberals end up getting over-excited by too much cool military hardware.

Formal Cognition

A few weeks ago I went to the most recent installment of the D.C. Future Salon. The presenter was the organizer, Ben Goertzel of Novamente, and the subject was “Artificial Intelligence and the Semantic Web.” One of the dilemmas that Mr. Goertzel teased out with the notion of a semantic web is that the complexity is conserved: either it has to be in the software agent or it has to be in the content. If it is in the agent, then it can be highly centralized — a few geniuses develop some incredibly sophisticated agents — or it can be in the content, in which case the distributed community of content providers all have to adequately mark up their content in a way that simpler agents can process. Mr. Goertzel is hedging his bets: he is interested both in developing more sophisticated agents and in providing a systematic incentive for users to mark up their content.
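
To see why the trade-off tilts the way it does, consider a toy sketch of the content-side option (the vocabulary and triples are invented for illustration; real semantic web content would use RDF and shared ontologies): once providers have reduced their pages to explicit subject-predicate-object statements, the agent can be almost trivially simple.

```python
# If the complexity lives in the content (explicit triples), the agent can be
# a few lines of matching. The vocabulary here is invented for illustration.
triples = [
    ("Starry Night", "created_by", "Vincent van Gogh"),
    ("Vincent",      "written_by", "Don McLean"),
    ("Vincent",      "is_about",   "Vincent van Gogh"),
]

def query(subject=None, predicate=None, obj=None):
    """A 'simple agent': return every triple matching the given pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# "What is about Vincent van Gogh?" -- no natural-language understanding needed.
print(query(predicate="is_about", obj="Vincent van Gogh"))
```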

In the course of discussing how to encourage users to mark up their own content, Mr. Goertzel listed as one of the problems that most people lack the expertise to do so. “What portion of people are adept at formal cognition? One tenth of one percent? Even that?” I had probably heard the phrase or something like it before, but for whatever reason, this time it leapt out. I had a collection of haphazard thoughts for which this seemed a fitting rubric and I was excited that this may have been a wheel for which no reinvention was required. When I got home I googled “formal cognition,” figuring there would be a nice write-up of the concept on Wikipedia, but found nothing. I fretted: formal cognition could simply mean machine cognition (computers are formal systems). Maybe it was a Goertzel colloquialism and the only idea behind it was a hazy notion that came together in that moment in Mr. Goertzel’s mind.

Anyway, I like the notion and absent an already systemized body of thought called formal cognition, here is what I wish it was.

While the majority might continue to think with their gut and their gonads, a certain, narrow, technocratic elite is in the process of assembling the complete catalog of formally cognitive concepts. There is the set that consists of all valid, irreducibly simple algorithms operative in the world along with their application rules covering range, structure of behavior, variants, proximate algorithms, compatibilities, transition rules, exceptions, et cetera. I am going to show my reductionist cards and say that given a complete set of such algorithms, all phenomena in the universe can be subsumed under one of these rules, or of a number acting in conjunction. In addition to there being a complete physis of the world, there is, underlying that, a complete logic of the world.

This reminds me of Kant’s maxim of practical reason that will is a kind of causality and that the free will is the will that is not determined by alien causes, that is, the will that acts according to its own principles, which are reason (see e.g. Groundwork of the Metaphysic of Morals, Part III). It seems to me that a project of delineating the principles of formal cognition is a liberating act insofar as we are casting out the innumerable unconscious inclinations of that dark netherworld of the intuition (gut and gonad), instilled as they were by millennia of survival pressures — the requirements for precision of which were considerably different from those of a modern technological people — in favor of consciously scrutinized and validated principles of thought.

By way of outstanding example, one might be prone to say that evolution is such a logic. At this point evolution has jumped the banks of its originating field of thought, the life sciences, and begun to infect fields far beyond its origin. It is increasingly recognized today that evolution through natural selection is a logical system, one of the fundamental algorithms of the world, of which the common conception of it as a process of the life sciences is merely one instantiation. Perhaps it was only discovered there first because it is in life phenomena that its operation is most aggressive and obvious. But it is now recognized that any place where replication, deviation and competition are found, an evolutionary dynamic will arise. Some cosmologists even propose a fundamental cosmological role for it, as some sort of multiverse evolution would mitigate the anthropic problem (that the universe is strangely tuned to the emergence of intelligent life).

However, note that evolution is a second-order logic that arises in the presence of replication, deviation and competition. It would seem that evolution admits of further decomposition and that it is replication, deviation and competition that are the fundamental algorithms for our catalog. But even these may be slight variations on still more fundamental algorithms. It strikes me that replication might just be a variant of cycle, related perhaps through something like class inheritance or, more mundanely, through composition (I feel unqualified to comment on identity or subspecies relations among algorithms because that is probably something that should be determined by the mathematical properties of the algorithms).
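
To make the substrate-independence point concrete, here is a toy sketch of replication, deviation and competition operating on bare bit strings (the fitness function and parameters are arbitrary choices, not a model of anything in particular):

```python
# Replication, deviation, competition -- nothing else -- acting on bit strings.
# The substrate (bits here; DNA, memes, firms elsewhere) is incidental.
import random

random.seed(1)
TARGET = [1] * 20                                    # an arbitrary fitness peak

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):                       # deviation
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(60):
    population.sort(key=fitness, reverse=True)       # competition
    survivors = population[:10]
    population = [mutate(random.choice(survivors))   # replication (with deviation)
                  for _ in range(30)]

print("best fitness after 60 generations:", max(map(fitness, population)))
```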

System. But have I been wrong to stipulate irreducible simplicity as one of the criteria for inclusion in our catalog? The algorithms in which we are interested are more complex than cycle. They are things like induction, slippery-slope, combinatorial optimization or multiplayer games with incomplete information. We have fundamental algorithms and second-order or composite algorithms and a network of relations between them. Our catalogue of algorithms is structured.

The thing that I think of most here is Stephen Wolfram’s A New Kind of Science (complete text online | Amazon.com | Wikipedia) in which he describes a systematic catalog of enumerated algorithms, that is, there is an algorithm that could generate the entire catalog of algorithms, one after the other. These algorithms each generate certain complex patterns and as Mr. Wolfram suggests, the algorithms stand behind the phenomena of the material world.

An interesting aside lifted from the Wikipedia page: in his model science becomes a matching problem: rather than reverse engineering our theories from observation, once a phenomenon has been adequately characterized, we simply search the catalogue for the rules corresponding to the phenomenon at hand.
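
For the simplest corner of that catalog, the enumeration is easy to sketch. In the elementary cellular automata that Wolfram starts from, the rule number is the rule, so counting from 0 to 255 generates the whole (tiny) catalog, one algorithm after another (a toy rendering, not Wolfram’s own code):

```python
# Wolfram's elementary cellular automata: rule n's behavior is read straight off
# the bits of n, so counting 0..255 enumerates this whole (tiny) catalog.
def step(cells, rule):
    n = len(cells)
    new = []
    for i in range(n):
        neighborhood = cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n]
        new.append((rule >> neighborhood) & 1)
    return new

def run(rule, width=63, steps=20):
    cells = [0] * width
    cells[width // 2] = 1                 # start from a single black cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

run(30)                                   # rule 30, the famously pseudo-random one
# for rule in range(256): run(rule)       # ...the whole catalog, one after another
```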

It seems to me that this catalog might be organized according to evolutionary principles. By way of example, I often find myself looking at some particularly swampy-looking plant — this is Washington, D.C. — with an obviously symmetrical growth pattern — say, radial symmetry followed by bilateral symmetry, namely a star pattern of stems with rows of leaves down each side. Think of a fern. Then I see a more modern plant such as a deciduous tree, whose branch growth pattern seems to follow more of a scale symmetry pattern. The fern-like plants look primitive, whereas the deciduous branch patterns look more complex. And one followed the other on the evolutionary trajectory. The fern pattern was one of the first plant structures to emerge, following unstructured algae and very simple filament-structured moss. The branching patterns of deciduous trees didn’t come along until much later. There are even early trees, like the palm tree, that are simply a fern thrust up into the air. The reason that fern-like plants predate deciduous trees has to do with the arrangement of logical space. A heuristic traversing logical space encounters the algorithm giving rise to the radial symmetry pattern before it does that of scale symmetry. The heuristic would work the same whether it was encoded in DNA or in binary or any other instantiation you happen to think of.
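
The fern-versus-tree contrast can be put in rewrite-rule terms with a toy L-system (a formalism I’m borrowing for illustration; the rules below are invented and not botanically faithful): the “fern” rule just keeps extending one stem and sprouting leaflets, while the “tree” rule nests copies of the whole branching structure inside itself, which is where the scale symmetry comes from.

```python
# Two toy L-systems: repeated application of a local rewrite rule generates
# the global pattern. The rules are invented for illustration only.
def expand(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# "Fern": a stem S that keeps extending, sprouting a fixed leaflet pair at each
# node -- bilateral symmetry along the stem, no nesting, so no scale symmetry.
fern_rules = {"S": "l S r"}            # l/r: left and right leaflets
print(expand("S", fern_rules, 4))

# "Tree": a branch B that rewrites into smaller copies of itself -- the whole
# structure recurs inside each branch, which is the scale (self-similar) symmetry.
tree_rules = {"B": "b[B][B]"}          # b: branch segment, [...]: side branches
print(expand("B", tree_rules, 3))
```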

A fantastical symmetry. I’m going to allow myself a completely fantastical aside here — but what are blogs for?

It is slightly problematic to organize the catalogue on evolutionary principles insofar as its entries are logical principles and sprang into existence along with space and time. Or perhaps they are somehow more fundamental than the universe itself (see e.g. Leibniz) — it is best to avoid the question of whence logic lest one wander off into all sorts of baseless metaphysical speculation. Whatever the case, biological evolution comes onto the scene relatively late in cosmological time. It would seem that the organizing principle of the catalogue would have to be more fundamental than some latter-day epiphenomena of organic chemistry.

Perhaps the entire network of logic sprang into existence within the realm of possibility all at once, though the emergence of existent phenomena instantiating each rule may have traversed a specific, stepwise path through the catalogue only later. But there isn’t a straightforward, linear trajectory from the simple all the way up to the pinnacle of complexity, but rather an iterative process whereby one medium of evolution advances the program of the instantiation of the possible as far as that particular medium is capable before its potential is exhausted. But just as the limits of its possibilities are reached, it gives way to a new medium that instantiates a new evolutionary cycle. The new evolutionary cycle doesn’t pick up where the previous medium left off, but starts all the way from zero. Like in Ptolemy’s astronomy, there are epicycles and retrograde motion. But the new medium has greater potential than its progenitor and so will advance further before it too eventually runs up against the limits of its potential. So cosmological evolution was only able to produce phenomena as complex as, say, fluid dynamics. But this gave rise to star systems and planets. The geology of the rocky planets has manifested a larger number of patterns, but most importantly life and the most aggressive manifestation of the catalog of algorithms to date, biological evolution. As has been observed, the most complexly structured three pounds of matter in the known universe is the human brain that everyone carries around in their head.

If life-based evolution has proceeded so rapidly and demonstrated so much potential, it is owing to the suppleness of biology. However, the limits of human potential are already within sight, and a new, far more dexterous being, even more hungry to bend matter to logic than biological life ever was, has emerged on the scene: namely the Turing machine, or the computer. This monster of reason is far faster, more fluid and polymorphous, adaptable, durable and precise than us carbon creatures. In a comparable compression of time from cosmos to geology and geology to life, the computer will “climb mount improbable,” outstrip its progenitor and explore further bounds of the catalog of logic. One can even imagine a further iteration of this cycle whereby whatever beings of information we bequeath to the process of reason becoming real repeat the cycle: they too reach their limits but give rise to some even more advanced thing capable of instantiating as yet unimagined corners of the catalogue of potential logics.

But there is a symmetry between each instantiation of evolution whereby the system of algorithms was traversed in the same order and along the same pathways. Perhaps it is not only the algorithms themselves that are universal; perhaps the network whereby they are related is as well. That is to say that perhaps there is an inherent organizing structure within the algorithms themselves, a natural ordering running from simple to complex. Evolution is not the principle by which the catalog is organized, but merely a heuristic algorithm that traverses this network according to that organizing principle. Evolution doesn’t organize the catalog, but its operation illuminates the organization of the catalog. Perhaps that is what makes evolution seem so fundamental: that whatever its particular instantiation, it is like running water that flows across a territory defined by the catalogue. Again and again in each new instantiation evolution re-traverses the catalogue. First it did so in energy and matter, then in DNA, then in steel and silicon, now in information.

Anti-System. This is fantastical because, among other reasons, it is well observed that people who are captivated by ideas are all Platonists at heart. I have assiduously been avoiding referring to the algorithms of a system of formal cognition as forms. It all begs the question of whence logic — which, again, is a terrible question.

Of course the notion of formal cognition doesn’t need to be as systematic as what I have laid out so far. Merely a large, unsystematized collection of logically valid methods along with the relevant observations about the limitations, application rules and behaviors of each one would go a significant way toward more reliable reasoning. Perhaps such a thing doesn’t exist at all — I tend towards a certain nominalism, anti-foundationalism and relativism. But the notion of a complete logical space, or a systematic catalog is perhaps like one of Kant’s transcendental illusions — a complete science or moral perfection — the telos, actually attainable or only fantasized, that lures on a certain human endeavor.

Politics. All of this having been said, I remain of the belief that politics is the queen of the sciences. Formal cognition wouldn’t be automated decision making and it could only ever enter into political dialog as decision support or as rhetoric.

As Kant wrote, “Thoughts without content are empty; intuitions without concepts are blind” (Critique of Pure Reason, A51 / B75). Kant developed an idiosyncratic terminology and perhaps another way of phrasing this, more suited to my purpose here, would be to say that formal reason absent empirical data is empty; but that empirical data unsystemized by conceptual apparatus is an incoherent mess. A complete system of the world cannot be worked out a priori and a mere catalogue of all observations about the world would be worse than useless.

Formally cognitive methods must be brought to bear. And against a complex and messy world I do not think that their application will be unproblematic. In passing above, I mentioned the notion of application rules. Each algorithm has attendant rules regarding when it comes into force, for what range of phenomena it is applicable, when it segues to another applicable algorithm, et cetera. Take for instance the notion of the slippery-slope or the snowball. Not all slippery-slopes run all the way to the bottom. Most are punctuated by points of stability along the way, each with its own internal logic as to when some threshold is overcome and the logic of the slippery-slope resumes once more. Or perhaps some slippery-slope may be imagined to run all the way to the bottom — it’s not ruled out by the logic of the situation — but for some empirical reason in fact does not. Once the principles of formal cognition come up against the formidable empirical world, much disputation will ensue.
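
The point can be put in mock-formal terms. A toy sketch, with invented states, thresholds and a single “pressure” variable standing in for the empirical particulars: a slippery-slope is not one rule but a chain of locally stable states, each with its own threshold for giving way.

```python
# A slippery-slope as a chain of locally stable states, each with its own
# threshold. The slide resumes only when accumulated pressure clears the
# threshold of the current plateau; otherwise it stops there. Toy numbers.
plateaus = [
    ("top of the slope",    0.0),
    ("first ledge",         0.4),
    ("second ledge",        0.7),
    ("bottom of the slope", 1.0),
]

def slide(pressure):
    """Return the plateau where a given amount of pressure comes to rest."""
    position = plateaus[0][0]
    for name, threshold in plateaus[1:]:
        if pressure >= threshold:      # enough pressure to overcome this ledge's logic
            position = name
        else:
            break                      # the slope does not run all the way down
    return position

for p in (0.2, 0.5, 0.9, 1.0):
    print(f"pressure {p}: rests at {slide(p)}")
```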

Then there is the question of different valuation. Two parties entering into a negotiation subscribe to two (or possibly many, many more) different systems of valuation. Even when all parties are in agreement about methods and facts, they place different weights on the various outcomes and bargaining positions on the table. One can imagine formally cognitive methods having a pedagogic effect and causing a convergence of values over time — insofar as values are a peculiar type of conclusion that we draw from experience or social positionality — but the problems of different valuation cannot be quickly evaporated. One might say that the possibly fundamental algorithm of trade-off operating over different systems of valuation goes a long way toward a definition of politics.

Finally, one could hope that an increased use and awareness of formally cognitive methods might have a normative effect on society, bringing an increased proportion of the citizenry into the fold. But I imagine that a majority of people will always remain fickle and quixotic. Right reasoning can always simply be ignored by free agents — as the last seven years of the administration of George W. Bush, famous devotee of the cult of the gut as he is, have amply demonstrated. As an elitist, I am going to say that the bifurcation between an illuminati and the rabble — as well as the historical swings in power between the two — is probably a permanent fixture of the human condition. In short, there will be no panacea for the mess of human affairs. The problem of politics can never be solved, only negated.