Ouroboros to Mise en Abyme

A few unsystematic thoughts on Ouroboros and mise en abyme:

  • I almost mentioned Douglas Hofstadter’s book, I Am a Strange Loop, in last week’s post (“The Mythical Economy,” 23 July 2008). He could have gone with Ouroboros on the cover too, but instead he went with mise en abyme.

    Or maybe he couldn’t have gone with Ouroboros. While Ouroboros is, on a superficial level, obviously a strange loop and a symbol that could be seen as self-referential, a peek at the index of Hofstadter’s book at the entry for “video feedback” — a technological mise en abyme — shows that he has a thicker analogy in mind:

    video feedback, 65-71; as candidate for strange loop, 103, 187, 203, 361; epiphenomena in, 68, 70-71; fear of meltdown, 56, 57; fractalic gestalts of, 204; lack of “I” in, 203; lack of perception in, 75-77, 187, 203; lack of symbols in, 203; lack of thinking in, 203; locking-in of patterns in, 70; parameters of, 65-67, 69, 75; reverberation in, 67-68; two systems entwined in, 210-211, 253-254; vanilla loop in, 208

  • While I’m amused at the notion of an Ouroboros economy, I can’t really think of any real correlate to the slightly humorous image, unless maybe something like a naturalistic notion of the human economy, wherein the human economy is nature parasitic upon itself: the destruction of the biological world giving birth to the artifactual or the cybernetic world. Ouroboros reborn for the Twenty-first Century!

  • The thing that’s really causing me to bring up mise en abyme is some thoughts on how people think about the future. People are faced with the need to decide and nearly all decisions that people make are, when not completely about the future, at least future-oriented. People’s thoughts about the future are divided into two closely related activities, carried out in tandem: planning and prediction. Prediction occasionally becomes an activity of its own, but for the most part prediction is an activity carried out in service of the more pragmatic planning.

    Planning is a branching strategic game. It works like this. I have a goal whose attainment is not simple: it involves a number of steps and it could be thwarted at any one of them. I start with my known situation, have a vague idea of what the path to my goal would be, and make a series of hypothetical decisions. I test the soundness of a hypothetical decision by predicting the outcome of such an action. That is, I imagine a potential future.

    In the first round, the one inaugurated in the present, I know what my options are because they are present. In the second round and in all subsequent rounds, I must employ prediction to imagine what options I will have to choose from, because from then on I am dealing in an imagined future. I repeat this hypothesize-predict-test-imagine-new-options algorithm round after round until I reach the last round, whose decision results in the attainment of the goal.

    When I make predictions about the future, I rarely make a single prediction, since prediction, especially the sort of intuitionistic variant that people employ for most of their planning, is not very reliable. So I predict a range of possible futures. And in each possible future I face a range of possible decisions that I can take. Predicting and planning branch. Most of these branches I abandon in favor of the most fruitful-seeming paths. But if a path dead-ends, I back up until I find the probable fateful decision that sent me down the path to the dead end. I recollect the other options at that possible future and imagine my way down another branch. I also generally flag a number of contingency plans: I went with this predicted future, but as things actually unfold, if it turns out that I predicted wrong, I have a plan ready for that other branch too.

    When I have what I imagine to be a satisfactory path from present to goal, I lock in each decision hypothetically made into “what I’ve decided upon.”

    This is a pretty systematic model and not necessarily exactly how most people make plans. People rarely sit down and carry it out algorithmically from beginning to end. More frequently people engage in this activity in fits and starts, not taking the problem from start to finish, but working on pieces that strike them at various occasions throughout their day. They absentmindedly do it while at their work computer, or do it extremely quickly while laying a joint plan with a partner over the telephone. Or maybe they try to be thorough about it and make a list on a notepad so they can see what’s done and what’s still in need of attention. Whatever the case, I think that ultimately this is what people are doing.

    The important point for mise en abyme is that near-future decisions can only be locked in once more distant future decisions have been validated. Each step is dependent on the one after it having been made first. One starts the planning and predicting from the present and works one’s way forward, but one decides, as it were, backward, from the future to the present. Predictions and plans regarding the immediate future include as a part within them predictions and plans regarding the near future, which in turn contain predictions and plans about the distant future, and so on. My thoughts about the future are mise en abyme insofar as they contain within them further thoughts about more distant futures.

    What one is doing in this process of planning for the future is conducting a depth-first search of potential futures. And depth-first search is canonically thought of as recursive. (A minimal code sketch of just this follows the list below.)

  • Mise en abyme seems to have a lot more analogistic or systemizing potential. Scale symmetry (e.g. fractals) along with all the related phenomena that can be grouped under that pattern seem coterminous with mise en abyme. Hegel’s logical schema seems like a highly abstract instance of mise en abyme, where each intellectual system is subsumed into a higher order intellectual system.

  • Perhaps there is a historical development of the conceptual depth and sophistication of the idea of self-referentiality. Ouroboros is simple cyclicality, though extrapolated into a notion of infinity or eternity. The homunculus is a limited instance of scale symmetry. Modern formal recursion is the culmination.
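
To make the depth-first-search point concrete, here is a toy recursive planner (my own sketch, with hypothetical states and options, nothing rigorous): each state offers predicted options, dead ends are abandoned, and the path is locked in backward, from the goal to the present.

    # A toy recursive planner: depth-first search over imagined futures.
    def plan(state, predict_options, is_goal, depth=0, max_depth=5):
        """Return a list of decisions leading from state to a goal, or None."""
        if is_goal(state):
            return []
        if depth == max_depth:
            return None  # dead end: back up and try another branch
        for decision, imagined_future in predict_options(state):
            rest = plan(imagined_future, predict_options, is_goal,
                        depth + 1, max_depth)
            if rest is not None:
                return [decision] + rest  # lock in, future-to-present
        return None

    # Toy usage: states are numbers, the goal is 10, the options are +3 or +4.
    options = lambda n: [("+3", n + 3), ("+4", n + 4)]
    print(plan(0, options, lambda n: n == 10))  # ['+3', '+3', '+4']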

My Apocalyptic Vision is Very Narrow

More than ten years ago I read Kevin Kelly’s interview with Vernor Vinge in Wired (“Singular Visionary,” vol. 3, no. 6, June 1995) and I have been repeating Mr. Vinge’s formulation of the robot apocalypse almost word for word ever since. But I was never able to locate the original article. Anyway, while reading around the Wikipedia page on the technological singularity today I came across a reference to Mr. Vinge and recognized it as the long-lost name. A few strokes of the keyboard at Google revealed my favorite dystopian vision:

Kelly: In your books, you sometimes focus on the idea of a singularity — the point at which a mathematical function goes infinite. What does that mean to you in terms of a cultural singularity?

Vinge: All sorts of apocalyptic visions are floating around, but mine is very narrow. It just says that if we ever succeed in making machines as smart as humans, then it’s only a small leap to imagine that we would soon thereafter make — or cause to be made — machines that are even smarter than any human. And that’s it. That’s the end of the human era — the closest analogy would be the rise of the human race within the animal kingdom. The reason for calling this a “singularity” is that things are completely unknowable beyond that point.

Kelly: Do you see any evidence that we are headed toward a singularity?

Vinge: I think the singularity may explain Fermi’s paradox: where is all the other intelligent life in the universe? For years, there have been two theories: the first is that civilizations exterminate themselves, and the second is that these outer civilizations are so weird there’s no way to interact with them. That second explanation has gained a lot of weight in my mind, because I can see us becoming weird — before my very eyes.

The striking thing to me is that qualification, “or cause to be made.” We won’t make the machine smarter than we are. We will only make the machine as smart as we are and then that machine will make the machine more intelligent than us. And then each more intelligent machine will be capable of making another even more intelligent still. Machine evolution will take over and, with software having reproductive cycles that will make bacterial reproduction glacial by comparison, will quickly outstrip human capability or comprehension.

The Singularity is Near

[Graph: Top Department of Energy supercomputing performance in teraflops, June 2008. RoadRunner tops one petaflop.]

Via ArmsControlWonk, the Department of Energy supercomputer called RoadRunner has become the first to achieve sustained petaflop performance. In scientific notation that’s 10¹⁵ floating-point operations per second. In little kid numbers that’s a thousand trillion floating-point operations per second (Lewis, Jeffrey, “RoadRunner,” 10 June 2008; “U.S. Department of Energy’s New Supercomputer is Fastest in the World,” U.S. Department of Energy press release, 9 June 2008).

The machine was built in conjunction with IBM. It consists of somewhere in the neighborhood of 3,500 compute nodes, each with two dual-core AMD Opterons, four PowerXCell 8i processors for extra floating-point capability and 24 GB of RAM. Overall the machine comprises 6,912 AMD Opterons, 12,960 IBM PowerXCell 8is and 80 terabytes of RAM. It will have access to a file server with 2,000 terabytes of hard disk storage. RoadRunner occupies approximately 12,000 square feet and cost $133 million. The AMD Opteron is a common commodity server processor and the PowerXCell 8i is an enhanced version of the processor from the Sony PlayStation 3. The machine runs Red Hat Linux as its operating system. As Robin Harris of ZDNet points out, because the better part of this machine is off-the-shelf components, this really represents the commodification of supercomputing (“PS3 Chip Powers World’s Fastest Computer,” Storage Bits, ZDNet, 10 June 2008; “Roadrunner’s Backing Store,” StorageMojo, 11 June 2008).

RoadRunner will be housed at Los Alamos National Laboratory and will be used by the National Nuclear Security Administration to perform calculations to certify the reliability of the U.S. nuclear weapons stockpile through highly detailed simulation rather than conducting nuclear tests. Mr. Lewis at ArmsControlWonk has more on the implications of this for the U.S. nuclear testing regime. He points out that questions about the ability of the NNSA to certify the U.S. nuclear stockpile using simulation were a central issue in the Senate rejection of the Comprehensive Test Ban Treaty. So maybe reconsideration of the CTBT will be on the agenda for the next President and Congress?

But this is all detail. The important point is the graph of peak computing performance of DOE supercomputers. It is clear that the singularity is near.

As Mr. Lewis points out, the fastest supercomputer used in nuclear weapons simulations has, not coincidentally, historically also been the fastest supercomputer in the world. This tight coupling between computing and nuclear weapons is striking. It’s worth noting that the first computer, ENIAC, though not yet tethered to the nuclear establishment, was constructed during the Second World War for the purpose of calculating ballistic trajectory tables for Army artillery units. As J. Robert Oppenheimer said,

In some sort of crude sense, which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin, and this is a knowledge which they cannot lose.

It is not just the physicists that have known sin. The computer scientists have known sin as well. From this coupling hitherto, it should be fairly obvious that the first androids and the first general artificial intelligence will be military in purpose. That is, the first androids and the first general artificial intelligence will be innately aggressive.

The singularity is near. It is more likely that it will be a cataclysm than a boon.

The State of Brain-Machine Interface

Given that in the last five days it has been reported that scientists were able to read signals off the brain of a monkey and use the information to control a robotic arm in a useful manner, that Dean Kamen has done the same for humans by reading the signals off nerve endings, and that another team has been able to use fMRI and some adaptive software systems to guess what noun a subject is thinking of with 77 percent accuracy, I’d say that we’re cusping on a sudden leap forward in the field of brain-machine interface, no?

Carey, Benedict, “Monkeys Think, Moving Artificial Arm as Own,” The New York Times, 29 May 2008

Tweney, Dylan, “Dean Kamen’s Robot Arm Grabs More Publicity,” Wired, 29 May 2008

Randerson, James, “Scientists Move a Step Closer to Mind-Reading,” The Guardian, 30 May 2008

Formal Cognition

A few weeks ago I went to the most recent installment of the D.C. Future Salon. The presenter was the organizer, Ben Goertzel of Novamente, and the subject was “Artificial Intelligence and the Semantic Web.” One of the dilemmas that Mr. Goertzel chased out with the notion of a semantic web is that the complexity is conserved: it has to reside either in the software agent or in the content. If it is in the agent, it can be highly centralized: a few geniuses develop some incredibly sophisticated agents. If it is in the content, the distributed community of content providers all have to adequately mark up their content in a way that simpler agents can process. Mr. Goertzel is hedging his bets: he is interested both in developing more sophisticated agents and in providing a systematic incentive for users to mark up their content.
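
A crude way to see the trade-off (my own toy illustration, not Mr. Goertzel’s, with all the names and strings invented): marked-up content makes the agent trivial, while raw prose demands a much smarter agent.

    # The same information, once as marked-up content, once as raw prose.
    marked_up = {"event": "D.C. Future Salon",
                 "speaker": "Ben Goertzel",
                 "topic": "Artificial Intelligence and the Semantic Web"}
    raw_prose = ("A few weeks ago the D.C. Future Salon heard Ben Goertzel "
                 "speak on artificial intelligence and the semantic web.")

    def simple_agent(content):
        # trivial, because the content provider did the work up front
        return content.get("speaker")

    def sophisticated_agent(text):
        # here is where the few geniuses would come in: entity extraction,
        # parsing, disambiguation; the complexity has to live somewhere
        raise NotImplementedError

    print(simple_agent(marked_up))  # Ben Goertzel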

In the course of discussing how to encourage users to mark up their own content, Mr. Goertzel listed as one of the problems that most people lack the expertise to do so. “What portion of people are adept at formal cognition? One tenth of one percent? Even that?” I had probably heard the phrase, or something like it, before, but for whatever reason this time it leapt out. I had a collection of haphazard thoughts for which this seemed a fitting rubric and I was excited that this might be a wheel for which no reinvention was required. When I got home I googled “formal cognition,” figuring there would be a nice write-up of the concept on Wikipedia, but nothing. I fretted: formal cognition could simply mean machine cognition (computers are formal systems). Maybe it was a Goertzel colloquialism and the only idea behind it was a hazy notion that came together in that moment in Mr. Goertzel’s mind.

Anyway, I like the notion and absent an already systemized body of thought called formal cognition, here is what I wish it was.

While the majority might continue to think with their gut and their gonads, a certain, narrow, technocratic elite is in the process of assembling the complete catalog of formally cognitive concepts. There is the set that consists of all valid, irreducibly simple algorithms operative in the world along with their application rules covering range, structure of behavior, variants, proximate algorithms, compatibilities, transition rules, exceptions, et cetera. I am going to show my reductionist cards and say that given a complete set of such algorithms, all phenomena in the universe can be subsumed under one of these rules, or of a number acting in conjunction. In addition to there being a complete physis of the world, there is, underlying that, a complete logic of the world.

This reminds me of Kant’s maxim of practical reason that will is a kind of causality and that the free will is the will that is not determined by alien causes, that is, the will that acts according to its own principle, which is reason (see, e.g., Groundwork of the Metaphysic of Morals, Part III). It seems to me that a project of delineating the principles of formal cognition is a liberating act insofar as we are casting out the innumerable unconscious inclinations of that dark netherworld of the intuition (gut and gonad), instilled as they were by millennia of survival pressures — the requirements for precision of which were considerably different from those of a modern technological people — in favor of consciously scrutinized and validated principles of thought.

By way of outstanding example, one might be prone to say that evolution is such a logic. At this point evolution has jumped the banks of its originating field of thought, the life sciences, and begun to infect fields far beyond its origin. It is increasingly recognized today that evolution through natural selection is a logical system, one of the fundamental algorithms of the world, of which the common conception of it as a process of the life sciences is merely one instantiation. Perhaps it was only discovered there first because it is in life phenomena that its operation is most aggressive and obvious. But it is now recognized that any place where replication, deviation and competition are found, an evolutionary dynamic will arise. Some cosmologists even propose a fundamental cosmological role for it, since some sort of multiverse evolution would mitigate the anthropic problem (that the universe is strangely tuned for the emergence of intelligent life).

However, note that evolution is a second-order logic that arises in the presence of replication, deviation and competition. It would seem that evolution admits of further decomposition and that it is replication, deviation and competition that are the fundamental algorithms for our catalog. But even these may be slight variations on still more fundamental algorithms. It strikes me that replication might just be a variant of cycle, related perhaps through something like class inheritance or, more mundanely, through composition (I feel unqualified to comment on identity or kinship among algorithms because that is probably something that should be determined by their mathematical properties).
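
The decomposition can be made concrete with a toy sketch (my own, deliberately minimal, with an arbitrary made-up target): wire replication, deviation and competition together and an evolutionary dynamic falls out.

    import random

    TARGET = [1] * 20
    fitness = lambda genome: sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):  # deviation
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)  # competition
        parents = population[:10]
        # replication (with deviation) of the winners
        population = [mutate(random.choice(parents)) for _ in range(30)]
    print(fitness(max(population, key=fitness)))  # approaches 20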

System. But have I been wrong to stipulate irreducible simplicity as one of the criteria for inclusion in our catalog? The algorithms in which we are interested are more complex than cycle. They are things like induction, slippery-slope, combinatorial optimization or multiplayer games with incomplete information. We have fundamental algorithms and second order or composite algorithms and a network of relations between them. Our catalogue of algorithms is structured.

The thing that I think of most here is Stephen Wolfram’s A New Kind of Science (complete text online | Amazon.com | Wikipedia) in which he describes a systematic catalog of enumerated algorithms, that is, there is an algorithm that could generate the entire catalog of algorithms, one after the other. These algorithms each generate certain complex patterns and as Mr. Wolfram suggests, the algorithms stand behind the phenomena of the material world.
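
The flavor of that enumeration is easy to reproduce. The 256 elementary cellular automata, Wolfram’s workhorse example, form a catalog you can generate just by counting, each rule number’s binary digits serving as that rule’s lookup table (a quick sketch of mine, not Wolfram’s own code):

    def step(cells, rule):
        """One step of the elementary cellular automaton numbered `rule`."""
        n = len(cells)
        return tuple(
            (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)
        )

    def show(rule, width=31, steps=12):
        cells = tuple(int(i == width // 2) for i in range(width))
        print("rule", rule)
        for _ in range(steps):
            print("".join("#" if c else "." for c in cells))
            cells = step(cells, rule)

    for rule in (30, 90, 110):  # three famous entries in the catalog
        show(rule)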

An interesting aside lifted from the Wikipedia page: in his model science becomes a matching problem: rather than reverse engineering our theories from observation, once a phenomenon has been adequately characterized, we simply search the catalogue for the rules corresponding to the phenomenon at hand.

It seems to me that this catalog might be organized according to evolutionary principles. By way of example, I often find myself looking at some particularly swampy-looking plant — this is Washington, D.C. — with an obviously symmetrical growth pattern — say, radial symmetry followed by bilateral symmetry, namely a star pattern of stems with rows of leaves down each side. Think of a fern. Then I see a more modern plant such as a deciduous tree, whose branch growth pattern seems to follow more of a scale symmetry pattern. The fern-like plants look primitive, whereas the deciduous branch patterns look more complex. And one followed the other on the evolutionary trajectory. The fern pattern was one of the first plant structures to emerge, following unstructured algae and the very simple filament structures of the mosses. The branching patterns of deciduous trees didn’t come along until much later. There are even early trees, like the palm, that are simply a fern thrust up into the air. The reason that fern-like plants predate deciduous trees has to do with the arrangement of logical space. A heuristic traversing logical space encounters the algorithm giving rise to the radial symmetry pattern before it does that of scale symmetry. The heuristic would work the same whether it was encoded in DNA or in binary or in any other instantiation you happen to think of.
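
The botanical point has a well-known formal counterpart in L-systems. Here is the standard “fractal plant” rewriting rule from the L-systems literature (borrowed for illustration, not anything of mine): a single local rule, reapplied at every scale, generates exactly the self-similar branching that deciduous trees suggest.

    # F is a stem segment, + and - turn, [ and ] push and pop a branch,
    # X is a growth tip. The rule is applied to every symbol at once.
    rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}

    def grow(axiom, generations):
        s = axiom
        for _ in range(generations):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    for gen in range(4):
        s = grow("X", gen)
        print(gen, len(s), s[:40])  # the same motif recurs at every scale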

A fantastical symmetry. I’m going to allow myself a completely fantastical aside here — but what are blogs for?

It is slightly problematic to organize the catalogue on evolutionary principles insofar as its contents are logical principles and sprang into existence along with space and time. Or perhaps they are somehow more fundamental than the universe itself (see, e.g., Leibniz) — it is best to avoid the question of whence logic lest one wander off into all sorts of baseless metaphysical speculation. Whatever the case, biological evolution comes onto the scene relatively late in cosmological time. It would seem that the organizing principle of the catalogue would have to be more fundamental than some latter-day epiphenomenon of organic chemistry.

Perhaps the entire network of logic sprang into existence within the realm of possibility all at once, though the emergence of existent phenomena instantiating each rule may have traversed a specific, stepwise path through the catalogue only later. But there isn’t a straightforward, linear trajectory from the simple all the way up to the pinnacle of complexity; rather, there is an iterative process whereby one medium of evolution advances the program of the instantiation of the possible as far as that particular medium is capable before its potential is exhausted. But just as the limits of its possibilities are reached, it gives way to a new medium that instantiates a new evolutionary cycle. The new evolutionary cycle doesn’t pick up where the previous medium left off, but starts all the way from zero. As in Ptolemy’s astronomy, there are epicycles and retrograde motion. But the new medium has greater potential than its progenitor and so will advance further before it too eventually runs up against the limits of its potential. So cosmological evolution was only able to produce phenomena as complex as, say, fluid dynamics. But this gave rise to star systems and planets. The geology of the rocky planets has manifested a larger number of patterns, but most importantly life and the most aggressive manifestation of the catalog of algorithms to date, biological evolution. As has been observed, the most complexly structured three pounds of matter in the known universe is the human brain that everyone carries around in their head.

If life-based evolution has proceeded so rapidly and demonstrated so much potential, it is owing to the suppleness of biology. However, the limits of human potential are already within sight, and a new, far more dexterous being, even more hungry to bend matter to logic than biological life ever was, has emerged on the scene: namely the Turing machine, or the computer. This monster of reason is far faster, more fluid and polymorphous, adaptable, durable and precise than us carbon creatures. In a comparable compression of time from cosmos to geology and geology to life, the computer will “climb Mount Improbable,” outstrip its progenitor and explore further bounds of the catalog of logic. One can even imagine a further iteration of this cycle whereby whatever beings of information we bequeath to the process of reason becoming real repeat the cycle: they too reach their limits but give rise to some even more advanced thing capable of instantiating as yet unimagined corners of the catalogue of potential logics.

But there is a symmetry between the instantiations of evolution whereby the system of algorithms is traversed in the same order and along the same pathways. Perhaps it is not just the algorithms themselves that are universal; perhaps the network whereby they are related is as well. That is to say, perhaps there is an inherent organizing structure within the algorithms themselves, a natural ordering running from simple to complex. Evolution is not the principle by which the catalog is organized, but merely a heuristic algorithm that traverses the network according to that organizing principle. Evolution doesn’t organize the catalog; its operation illuminates the organization of the catalog. Perhaps that is what makes evolution seem so fundamental: whatever its particular instantiation, it is like running water that flows across a territory defined by the catalogue. Again and again in each new instantiation evolution re-traverses the catalogue. First it did so in energy and matter, then in DNA, then in steel and silicon, now in information.

Anti-System. This is fantastical because, among other reasons, it is well observed that people who are captivated by ideas are all Platonists at heart. I have assiduously been avoiding referring to the algorithms of a system of formal cognition as forms. It all raises the question of whence logic — which, again, is a terrible question.

Of course the notion of formal cognition doesn’t need to be as systematic as what I have laid out so far. Merely a large, unsystematized collection of logically valid methods along with the relevant observations about the limitations, application rules and behaviors of each one would go a significant way toward more reliable reasoning. Perhaps such a thing doesn’t exist at all — I tend towards a certain nominalism, anti-foundationalism and relativism. But the notion of a complete logical space, or a systematic catalog is perhaps like one of Kant’s transcendental illusions — a complete science or moral perfection — the telos, actually attainable or only fantasized, that lures on a certain human endeavor.

Politics. All of this having been said, I remain of the belief that politics is the queen of the sciences. Formal cognition wouldn’t be automated decision making and it could only ever enter into political dialog as decision support or as rhetoric.

As Kant wrote, “Thoughts without content are empty; intuitions without concepts are blind” (Critique of Pure Reason, A51 / B75). Kant developed an idiosyncratic terminology and perhaps another way of phrasing this, more suited to my purpose here, would be to say that formal reason absent empirical data is empty; but that empirical data unsystemized by conceptual apparatus is an incoherent mess. A complete system of the world cannot be worked out a priori and a mere catalogue of all observations about the world would be worse than useless.

Formally cognitive methods must be brought to bear. And against a complex and messy world I do not think that their application will be unproblematic. In passing above, I mentioned the notion of application rules. Each algorithm has attendant rules regarding when it comes into force, for what range of phenomena it is applicable, when it segues to another applicable algorithm, et cetera. Take for instance the notion of the slippery-slope or the snowball. Not all slippery-slopes run all the way to the bottom. Most are punctuated by points of stability along the way, each with its own internal logic as to when some threshold is overcome and the logic of the slippery-slope resumes once more. Or perhaps some slippery-slope may be imagined to run all the way to the bottom — it’s not ruled out by the logic of the situation — but for some empirical reason in fact does not. Once the principles of formal cognition come up against the formidable empirical world, much disputation will ensue.
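
A toy model of that application rule, with every number invented: the state slides to the next point of stability only when a shock overcomes the current plateau’s own threshold, so the descent is punctuated rather than a single slide to the bottom.

    plateaus = [(10, 3), (6, 5), (2, 8), (0, None)]  # (resting level, threshold)

    def respond(level, shock):
        for i, (rest, threshold) in enumerate(plateaus):
            if level == rest and threshold is not None and shock > threshold:
                return plateaus[i + 1][0]
        return level

    level = 10
    for shock in (2, 4, 4, 6, 9):
        level = respond(level, shock)
        print(shock, "->", level)  # a punctuated descent, not one long slide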

Then there is the question of differing valuation. Two parties entering into a negotiation subscribe to two (or possibly many, many more) different systems of valuation. Even when all parties are in agreement about methods and facts, they place different weights on the various outcomes and bargaining positions on the table. One can imagine formally cognitive methods having a pedagogic effect and causing a convergence of values over time — insofar as values are a peculiar type of conclusion that we draw from experience or social positionality — but the problem of differing valuation will not quickly evaporate. One might say that the possibly fundamental algorithm of trade-off, operating over different systems of valuation, goes a long way toward a definition of politics.
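
Again a toy sketch, with everything invented: two parties agree on the facts and the method, weight the attributes of the outcomes differently, and so arrive at different rankings of the very same options.

    outcomes = {"tariff cut": (3, 1), "subsidy": (1, 3), "status quo": (2, 2)}

    def preferred(weights):
        value = lambda item: sum(w * x for w, x in zip(weights, item[1]))
        return max(outcomes.items(), key=value)[0]

    print(preferred((1.0, 0.2)))  # one party's pick: 'tariff cut'
    print(preferred((0.2, 1.0)))  # the other's: 'subsidy'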

Finally, one could hope that an increased use and awareness of formally cognitive methods might have a normative effect on society, bringing an increased proportion of the citizenry into the fold. But I imagine that a majority of people will always remain fickle and quixotic. Right reasoning can always simply be ignored by free agents — as the last seven years of the administration of George W. Bush, famous devotee of the cult of the gut as he is, have amply demonstrated. As an elitist, I am going to say that the bifurcation between an illuminati and the rabble — as well as the historical swings in power between the two — is probably a permanent fixture of the human condition. In short, there will be no panacea for the mess of human affairs. The problem of politics can never be solved, only negated.

William Gibson’s Idoru and Blogging

I want to add one more thought about blogging before I get started. In my Inaugural Post I asked, “Why join this societal wave of exhibitionism?” and mentioned the relation of technology to surveillance, voyeurism, privacy and exhibitionism. Every time I think about these issues, a character from William Gibson’s 1996 novel Idoru comes to mind.

Before I delve into the main point, I want to say that I think William Gibson is a genius. His first novel, Neuromancer (1984), the hit that launched the cyberpunk genre, made famous the term cyberspace, which Gibson had coined in a short story two years earlier. In case you passed over that parenthetical date too quickly, let me point out that he came up with the idea of cyberspace by 1984: before there was either the World Wide Web or virtual reality.

Yes, I am aware that Tron came out in 1982, but Tron is about a man who is sucked into a little, tiny world inside of a computer where the programs are personified (e.g. the villain, “Master Control”) and forced to fight high-tech gladiatorial games in sexy spandex body suits. This of course will never happen and is merely a technological variant of Fantastic Voyage, The Wizard of Oz or The Lion, the Witch and the Wardrobe. Yes, there are some silly parts of Neuromancer: the space Rastafarians are hardly the heady stuff of Arthur C. Clarke or Isaac Asimov. However, a total-immersion interface to a simulated world spread over a network of computers is freaking visionary. Unlike Tron, which set people’s understanding of computers back a decade, Neuromancer is the future.

What is most relevant to blogging is his vision of celebrity and media that make up the ideological backdrop of Idoru. The novel is set in the not-too-distant future where mass media has continued to throw its net wider and wider, where, as Andy Warhol said in what must be the most accurate prediction ever made, “everyone will be famous for fifteen minutes.” Murderers are famous, the parents of their victims are famous, college students fake kidnappings to get on television, unaccomplished debutantes are famous for nothing other than ostentation, people become famous when sex tapes “accidentally” find their way onto the Internet, people elbow their way onto television for opportunities to boast about things that previously one wouldn’t even want one’s neighbors to know. Actually, I am talking about the present, but imagine this trend married to the myriad of widely affordable media production and distribution technologies chased out twenty years into the future. With thousands of television channels to fill up and with everyone’s vanity site on the Internet and with no gatekeepers, fame will devolve to the masses. Gibson has one of his characters describe it thus:

“Nobody’s really famous anymore, Laney. Have you noticed that?…I mean really famous. There’s not much fame left, not in the old sense. Not enough to go around…We learned to print money off this stuff,” she said. “Coin of our realm. Now we’ve printed too much; even the audience knows. It shows in the ratings…Except,” she said… “when we decide to destroy one.” (6-7)

Gibson spends the opening chapters of the book describing how derelict protagonist Colin Laney lost his previous job as a “researcher” at a tabloid news show called Slitscan. In this future, like our present, an increasing proportion of people’s transactions are being passively recorded in corporate databases. And also as in our present, some companies exist solely to purchase information, correlate disparate pieces in useful ways and sell it to those who might put it to some (usually pernicious) use. In this novel, Slitscan had a questionable relationship with one such data agglomeration corporation, DatAmerica, and Laney’s job was to trawl through the data trails left by celebrities looking for the “nodal points” — the confluences of data — that indicated something gossip-worthy for the show to report.

Laney was not, he was careful to point out, a voyeur. He had a peculiar knack with data collection architectures, and a medically documented concentration deficit that he could toggle, under certain conditions, into a state of pathological hyperfocus…he was an intuitive fisher of patterns of information: of the sort of signature a particular individual inadvertently created in the net as he or she went about the mundane yet endlessly multiplex business of life in a digital society. (30-31)

Laney was fired when, while researching the mistress of a celebrity, it became clear to him from her data trail that she intended to commit suicide and he tried unsuccessfully to intervene. Here Laney checks back with his mark after returning from a vacation:

The nodal point was different now, though he had no language to describe the change. He sifted the countless fragments that had clustered around Alison Shires in his absence, feeling for the source of his earlier conviction. He called up the music that she’d accessed while he’d been in Mexico, playing each song in the order of her selection. He found her choices had grown more life-affirming; she’d moved to a new provider, Upful Groupvine, whose relentlessly positive product was the musical equivalent of the Good News Channel.

Cross-indexing her charges against the records of her credit-provider and its credit retailers, he produced a list of everything she’d purchased in the past week. Six-pack, blades, Tokkai carton opener. Did she own a Tokkai carton opener? But then he remembered Kathy’s advice, that this was the part of research most prone to produce serious transference, the point at which the researcher’s intimacy with the subject could lead to loss of perspective. “It’s often easiest for us to identify at the retail level, Laney. We are a shopping species. Find yourself buying a different brand of frozen peas because the subject does, watch out.” (66-67)

Before excerpting a passage where Gibson describes the future of gossip journalism, let me remind you that this is Gibson’s view from 1996, when MTV’s The Real World was only in its fourth season, the O.J. Simpson trial was just over, Monica Lewinsky’s blue dress was stain-free and Survivor was still four years off:

Slitscan was descended from “reality” programming and the network tabloids of the late twentieth century, but it resembled them no more than some large, swift, bipedal carnivore resembled its sluggish, shallow-dwelling ancestors. Slitscan was the mature form, supporting fully global franchises. Slitscan’s revenues had paid for entire satellites and built the building he worked in in Burbank.

Slitscan was a show so popular that it had evolved into something akin to the old idea of a network. It was flanked and buffered by spinoffs and peripherals, each designed to shunt the viewer back to the crucial core, the familiar and reliably bloody altar that one of Laney’s Mexican co-workers called Smoking Mirror.

It was impossible to work at Slitscan without a sense of participating in history, or else what Kathy Torrance would argue had replaced history. Slitscan itself, Laney suspected, might be one of those larger nodal points he sometimes found himself trying to imagine, an informational peculiarity opening into some unthinkably deeper structure.

In his quest for lesser nodal points, the sort that Kathy sent him into DatAmerica to locate, Laney had already affected the course of municipal elections, the market in patent gene futures, abortion laws in the state of New Jersey, and the spin on an ecstatic pro-euthanasia movement (or suicide cult, depending) called Cease Upon the Midnight, not to mention the lives and careers of several dozen celebrities of various kinds.

Not always for the worst, either, in terms of what the show’s subjects might have wished for themselves. Kathy’s segment on the Dukes of Nuke ‘Em, exposing the band’s exclusive predilection for Iraqi fetal tissue, had sent their subsequent release instant platinum (and had resulted in show-trials and public hangings in Baghdad, but he supposed life was hard there to begin with). (50-52)

Of course, something like Slitscan — or the Jerry Springer Show, Cops, E! True Hollywood Story, Average Joe or The Fifth Wheel in our time — could not exist were it not for the sadistic voyeurism of the masses. I select this passage as much to satisfy my own snickering elitism as to illustrate the lust for other people’s misery that characterizes our current and future television viewing audience:

…Slitscan’s audience…is best visualized as a vicious, lazy, profoundly ignorant, perpetually hungry organism craving the warm god-flesh of the anointed. Personally I like to imagine something the size of a baby hippo, the color of a week-old boiled potato, that lives by itself, in the dark, in a double-wide on the outskirts of Topeka. It’s covered with eyes and it sweats constantly. The sweat runs into those eyes and makes them sting. It has no mouth, Laney, no genitals, and can only express its mute extremes of murderous rage and infantile desire by changing the channels on a universal remote. Or by voting in presidential elections. (35-36)

Of course one can already see aspects of this world coming into being. Corporations are harvesting, agglomerating and correlating information at a frightening and increasing rate — but that is for another post. What I am thinking about here is the voyeuristic and micro-celebrity aspects of our quickening information age. I have a friend who reads several people’s blogs on an occasional basis, some of whom he has never even met. Of one that he hasn’t met, he maintains that this blogger is teetering on the brink of an infidelity with a coworker against his current girlfriend — an infidelity of whose imminence the blogger himself is not yet aware! My friend keeps returning to this blog awaiting the climactic post as if it were a soap opera.

There you have it: micro-celebrity, sadistic voyeurism, a readable data trail from which one might extrapolate future behavior with a minimal amount of theory. Admittedly, my friend is following an intentional data trail rather than a passive one, but the small difference between this situation and that of Gibson’s Laney anticipating the suicide is striking.

I don’t absolve myself of any of this. I loved the show Trauma: Real Life in the ER, which is about as sadistic a voyeurism as you’ll find. I did say in the “Inaugural Post” that I consider this a “deeply improprietious endeavor.” I am, however, aware of the context in which I embark upon this effort.

Inaugural Post

If I am going to join the blogosphere, I guess that an inaugural post is in order, in which I offer certain explanations, make certain confessions, enumerate goals and make a polite gesture toward the specter of propriety (and I do think that this is a deeply improprietious endeavor). Why blogging? What am I trying to accomplish? Why should anyone care what I think? Why join this societal wave of exhibitionism?

Before I get on to the explaining, allow me a theoretical digression.

In the future, content filtering will be done less by authorities such as editors and more by personally determined trust networks. I frequently fantasize about the future universal digital library. The problem (or advantage) of such a scheme is that although content providers may continue to produce on a regular, periodic schedule, content would be disaggregated from its brand name. For instance, right now I read from many different “containers”: The New York Times, The New Republic et cetera. I know what containers to draw from because I know the network of associations built around the brand names. I know which publications are reputable and I know the editorial positions associated with each publication. The brand name under which articles are currently agglomerated serves as a network of information and authority, a guarantee of quality, an indicator of various literary characteristics (for a discussion of the information content that brand names provide, see The Case for Brands and Special Report: Brands in The Economist, 6 September 2001 [subscription required]. The cover of the issue provocatively mocked the cover of Naomi Klein’s leftist hit No Logo).

In the future I might get my news via an aggregator function that pulls all the articles that I want based on subject or keyword searches, and since they all came from one “container” — the universal digital library — I might read my news without noticing which producer deposited them in the library. But I don’t want to waste time on anti-establishment diatribes, right wing conspiracies or low quality writing and investigating. I won’t want The Bremerton Sun’s take on the latest wave of attacks in Iraq; I want The New York Times’s.

Another problem is that I don’t want to read just the articles that I think I want to read: I will want to read some articles that I do not yet know that I want to read. A system needs both agglomerating and variegating functions. That is to say, systems need both to bring together many things of like kind, but then need to introduce a measure of randomness. Currently, when I bring up The New York Times web page, an editor has selected a variety of stories to present to their readers. I zoom in on the international affairs and national political stories, but also notice other stories and occasionally see something that I wouldn’t have thought to read had not the editor of The New York Times presented it to me. But I couldn’t define my searches in the universal library and then have a ten percent random function: I don’t want absolute randomness. I don’t want to read any articles on transplanting hydrangeas or comparing automobile performance. But specifying what you don’t want is exceedingly difficult.
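
To see why, imagine the naive implementation (my own strawman, with made-up article titles): an agglomerating keyword filter plus a ten percent random slice. The random slice is drawn from everything, hydrangeas included, which is precisely the problem.

    import random

    def build_feed(articles, interests, variety=0.1):
        matched = [a for a in articles
                   if any(k in a["title"].lower() for k in interests)]
        rest = [a for a in articles if a not in matched]
        n_random = max(1, int(len(matched) * variety))
        return matched + random.sample(rest, min(n_random, len(rest)))

    articles = [{"title": "Senate Debates Test Ban Treaty"},
                {"title": "Attacks in Iraq Continue"},
                {"title": "Transplanting Hydrangeas"},
                {"title": "Comparing Automobile Performance"}]
    print(build_feed(articles, ["senate", "iraq"]))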

In such a future, consumers will come to rely on a network of trusted fellow consumers, rather than professional editors to select and vouch for the legitimacy of content and content providers. This is a future that is already rapidly arriving and I can think of a couple of ways that the Internet already achieves this.

One way that this is accomplished is the oldest Internet feature, e-mail. At this point nearly every web site has an “e-mail this to a friend” function that includes an “add your comments” box. People are constantly e-mailing articles to friends they think might be interested and affixing a few comments. People know what their friends are interested in and e-mail accordingly, but people also come across a lot of novel stuff and e-mail that too, hence agglomerating and variegating.

Another example is Amazon.com which allows its users to post reading lists and reviews. When one user identifies another whose reviews are useful or whose reading list contains works similar in interest, the first user can mark the second as a “trusted” user and Amazon will use the second’s habits as a basis for recommendations to the first. Amazon is packing this trick all over the site. When one looks at a book, there is usually a “Customers who bought this book also bought…” heading that lists a few books based on the theory that if two people like one book and one of them also likes another, then the first may like that other book as well.
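
The underlying trick is plain co-occurrence counting. A minimal sketch (my own guess at the idea, certainly not Amazon’s actual system, with invented purchase histories):

    from collections import Counter
    from itertools import combinations

    purchases = [{"Neuromancer", "Idoru", "I Am a Strange Loop"},
                 {"Neuromancer", "Idoru"},
                 {"Neuromancer", "A New Kind of Science"}]

    co_bought = Counter()
    for basket in purchases:
        for a, b in combinations(sorted(basket), 2):
            co_bought[(a, b)] += 1
            co_bought[(b, a)] += 1

    def also_bought(title, k=2):
        scores = Counter({b: n for (a, b), n in co_bought.items() if a == title})
        return [t for t, _ in scores.most_common(k)]

    print(also_bought("Neuromancer"))  # ['Idoru', ...]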

A third example brings us to the issue of blogging. Blog readers are largely anonymous companions that have trusted the blog writer to direct them to information that the two find mutually interesting. Most blogs are focused on a few, well advertised subjects, so one may choose blogs based on one’s interest, but most bloggers also serve the variegating function in that nearly all make occasional posts of things that vary from the main topic areas.

In a sort of synthesis, I both read J. Bradford DeLong’s blog on a daily basis and have marked him as a “trusted” user at Amazon.com. Think of this as query by ostension, which is a very complex form of specification that contains little abstraction or theory and one which captures the correct element of randomness that one would like.

With that theoretical component in place I now offer up the following list of reasons / explanations:

  1. Systematization. The Internet is about amateurs offering services, often of little earnestness, to a dispersed, frequently anonymous network of consumers. Two magazines, Foreign Policy and The Wilson Quarterly, have, in addition to standard book reviews, summaries of recent, noteworthy articles and essays from a breadth of sources that their readers might not get to on their regular reading circuit or may not have heard of, not being a part of a particular intellectual community. I find this highly useful and would like to offer the same in a more regular and accessible way. I have my fingers in a lot of pies (see What I read and Why) so I have a lot to offer here. I am constantly trying to skim off the cream of what I read and direct it to the attention of my friends. As I press the articles upon my friends, I frequently find myself mapping articles into the various debates in which they belong, pointing out the significance of the authors, connecting an article up with a few of its kind to give a good overview of a subject or debate or tracing the genealogy of ideas that led up to an article’s publication. A blog should allow me to systematize this behavior.
  2. Unobtrusiveness. I engage in my fair share of the e-mailing of articles as I discussed above and sometimes have a lot to say in that little “add your comments” box. In fact, sometimes the session has timed out on me by the time I click the send button. I am also constantly juggling a list of people to whom I send articles and remarks. “I am undoubtedly going to mention this point to so-and-so: she should get it too.” “Am I being too mean or smug if I send this to my parents?” “These people are casually interested; should I throw them in on a whim?” This push system has the shortcomings of being both too intrusive and too hit-and-miss. Do people really want my opinion cluttering up their inbox on my schedule? But what if I miss someone? So I am switching to a pull system.
  3. Trial by Fire. I have always been a part of what I consider a vibrant pseudo-intellectual community, constantly engaged in heated discussion of all manner of issues. As I have grown older, that group has become more dispersed and the arguments more tempered. Further, I worry that I am growing more complacent and ossified in my thinking. I am just a little too self-satisfied. An outstanding characteristic of my previous circle was the highly contentious disagreements among us. I am looking for more challenges and a more rarefied atmosphere. Now that I move in exclusively urban liberal circles, discussion consists of nothing more than congratulating one another on our equally enlightened opinions. The self-congratulation is becoming a little tiresome and I am hoping to find a few conservatives to lay into me. I am soliciting contention, for I love the dictum,
    Now if any man build upon this foundation gold, silver, precious stones, wood, hay, stubble; every man’s work shall be made manifest: for the day shall declare it, because it shall be revealed by fire; and the fire shall try every man’s work of what sort it is. (1 Corinthians 3:12-13)

    Or perhaps better, Nietzsche:

    If one wants to have a friend one must also want to wage war for him: and to wage war, one must be capable of being an enemy. In a friend one should still honor the enemy…In a friend one should have one’s best enemy. You should be closest to him with your heart when you resist him. (On the Friend, Thus Spoke Zarathustra, trans. Walter Kaufmann)

  4. Reading Discipline. I am hoping that this blog will bring a greater measure of discipline to my reading regimen. I frequently make my weekly magazine run, identify the most critical articles of the week and then proceed to meander through the pile, reading the more easily digested pieces in rapid sequence and never getting to the most important ones. Next thing I know, I have the next week’s magazines piled on top of those oh so important articles from last week. I don’t read with a plan. Further, I tend to neglect articles that seem to be the latest information on a story whose broad outlines are already well reported. I am hoping that the imperatives of writing make me more dogged about sticking with a story all the way to the end.
  5. Writing Mastery. I am interested in improving my writing ability — especially the speed at which I write — and in building up a cache of material to allow me to more quickly write longer pieces. For myself, I consider this blog a “half baked thought” database. Further, one frequently has not achieved mastery of a subject until one has put their knowledge of the subject into practice. For the intellectual pursuits, the practice is writing.
  6. Documentation. As I believe Andrew Sullivan is fond of pointing out, the Internet has a long collective memory. I would like to record the things that I thought while events were unfolding to see how they fare in light of the backward glance. For instance, I can recall saying in a particular argument during the lead-up to the second Iraq war that I was sure that Saddam Hussein had weapons of mass destruction that he was concealing from United Nations inspectors. I had read Scott Ritter and Richard Butler’s books and I confidently asserted that when the regime had been toppled, we would find all sorts of nasties beyond the most wild imaginings of Hans Blix. However, I also recall waking up early or not going out to run errands so that I could watch Blix and ElBaradei update the U.N. Security Council on the state of inspections and very clearly remember taking note of ElBaradei’s careful and surprisingly thorough debunking of the Bush administration’s case against Iraq. I remember watching Colin Powell’s presentation and thinking it somewhat less than conclusive. In fact, I thought it rather circumstantial. What I don’t remember is exactly how nuanced my assessment of this conflicting information was. Right now, I am predicting that John Kerry is a stinker, that I will not succumb to party loyalty and ever get enthusiastic about him and that he will talk his way into losing in November. This may change (see smarties epigram) and if it does, I want a record of my original thinking on the issue as well as an account of its subsequent evolution. On the other hand, the status field in the database underlying this blog allows a value of suppressed, so maybe I won’t own up to everything.
  7. Parents. Despite some problems covered in this rather amusing piece from The Onion (“Mom Finds Out About Blog,” 12 November 2004), I currently live all the way across the country from my parents and this will allow them a measure of unprompted news on me.
  8. Physics of Information. Among the issues in which I am interested is what I might term the physics of information, or perhaps general systems theory as applied to information. I don’t want to bamboozle anyone with fancy words: I don’t know much about the subject and probably never will. However, I am very interested in visualization, strange attractors, post hoc or atheoretic analysis, object oriented design, statistical control, organization of narrative information, metadata and other issues. A blog will give me opportunities (and inflict upon me the necessities) to explore these concepts and techniques.
  9. Technology and Society. I am very interested in issues of technology and society: surveillance, voyeurism, privacy and exhibitionism; accelerated productivity growth, corporate restructuring and globalization in the economy; the leveling of hierarchies in a more networked society; non-constructive uses of technology; open-source and the extension of the do-it-yourself movement. One can comment on these issues without knowing much about the underlying technologies, but to really understand the logic driving these changes and, hence, to engage in some measure of anticipation, a detailed understanding at the level of implementation is very important.
  10. The Hacker Ethic. For those who want to put out the effort, the Internet is very much a part of the do-it-yourself movement. I, like many, am deeply concerned about us low-rent occupants being driven out by the upscale, corporate types and the Internet turning into a one-way, powerful-to-pliant-masses communications medium. As long as one is a mere consumer, one will have little interest in the issues faced by small producers. Smarties is my Internet squat. There is also a deeply elitist strain to computer subculture — witness Saturday Night Live’s dead-on recurrent skit, “Your Company’s Computer Guy.” I am way too fond of telling people that my first Internet access was a dial-up shell account on a Unix machine in 1993 and that I first installed Linux on my computer when it was, I think, kernel version 0.92. I frequently reread the essay “Real Programmers Don’t Use PASCAL” (Ed Post, Datamation, vol. 29, no. 7, July 1983, pp. 263-265) and love it. Times have changed. The first language that I learned was Pascal (though I have toggled a program into a computer using a front panel) and I currently work in PHP and Perl, which are both hardly down on the bare metal. In fact, if I had to go back to a strongly typed language like C (yes, I know C is a debatable example, but it’s strong compared to PHP), I might go crazy and write a bunch of libraries to make it work like PHP. But I do prefer to be “close to the machine.” I am still sitting here at a terminal window on a Unix machine. I do all my work for smarties using Joe, about as vanilla a text editor as one can find. Knowing “how to beat the machine every time” makes for an entirely different relationship to technology, even if sometimes I just have to use more drastic means. Smarties is my project on which I might hack.
  11. Fun. Ideology and ambition are all fine and good, but computers are just plain fun. In the past I have been rather deeply involved in computers, whether as a computer science student or as a PC tech. Lately, I haven’t had any official reason to keep my skills sharp, so I need a computer-related avocation. For some time now, I have been wanting to explore databases and information interfaces. Smarties allows me to fuse my interest in computer technology with my interest in politics.

And now a little housekeeping.

  1. Technologies for the Future. As I said in reason number eleven, I am doing this partly as an excuse to play with computers. I wrote from scratch all the databases and the code that make smarties go. In my rush there is a lot of spaghetti code and some of the features don’t work yet (e.g. the search button above). I am going to try to go back and make the code more elegant so that (1) I can figure out what is going on if I revisit something after a long period of neglect and (2) I can serve my interest in design issues from reason number eight. Much of smarties is plain functional code and I would like more of it to be objects, so that the implementation is separated from the interface, allowing advancements in functionality to be more easily incorporated. Even now smarties spans multiple databases. I intend to add more (counter-intelligence and a news archive; today in history needs to be fully incorporated). Database interoperability and architectures to span data sources will be big research issues for me in the near future. One of my first goals will be to make separate development and production environments. Until then, I may break a few functions trying to improve them. Don’t be surprised if smarties doesn’t load or gives garbage on occasion. I’m working on it.
  2. Interface Design. The interface is very inconsistent right now. It appears that functional groupings are listed horizontally in the green menu bar and that topic filters run down the left-hand grey menu. If only there were some nice rule. Soap Box is the blog, but only the intellectual aspect of it (politics, history, economics, philosophy and culture). Personal is posts about my goings-on. News and about smarties are (will be) static pages. When in Soap Box, left-column options act as topic filters, but when in other function areas the filters act like links back into Soap Box. All very confusing. I need to bring some regularity to this mess. Also, I want the left-hand menu bar to do more. I want it to become more of an active navigation aid and to display information indicating the viewer’s position in the site map. Right now it is mostly empty screen space.
  3. Post Rate. A fair number of bloggers as of late have quit owing to burn-out. Even the mighty Andrew Sullivan recently said, “I’m unsure of how long I can keep this up.” I don’t want to post at such a high rate that I drive myself mad, but I do want to post frequently enough that people don’t forget about my page. I’m thinking about twice a week right now. These days I am almost exclusively interested in politics, but one of my longest-standing interests is economics, and I want to try to make a post on a periodic basis assessing the state and prospects of the national and international economy.
  4. The Third Person. I will frequently speak of myself in the third person. I know it is a little creepy, but it is also part of the tradition. Many bloggers do it. Joshua Marshall frequently says “Here at Talking Points Memo we’ve repeatedly noted…” Robert Caro never uses “I”, but always “the author of this book.” A whole episode of Seinfeld is devoted to George Costanza’s habit of referring to himself in the third person. “George is getting very angry.”
  5. Acknowledgements. Smarties isn’t created in splendid isolation. I discuss extensively with a select group of people and many of my ideas and rhetoric are borrowed when one of you plants an excellent idea in my head or expresses something better than I can. Thinking is alternately a solitary and a community activity. In order to avoid starting every post, “I was talking to so-and-so the other day…” and out of a policy of confidentiality — I have elected to make myself public, I am not going to drag everyone I know with me — I am going to largely avoid mentioning other people’s names. Certain of you will end up contributing almost as much to this site as will I. For that I am very grateful and the fact that I don’t mention you on each occasion is out of respect, not indifference.

As a final note, I suppose that an explanation of the site name is also required. Do I really feel sufficiently self-assured to promulgate my writings under the title smarties? Actually, the name was bestowed upon me by another: a frequent recipient of my editorial e-mails took to calling the recipient list — rather sardonically — “your smarties.” When I set up an e-mail group for my recipients, I named it “smarties” for lack of anything else. When brainstorming a name for the web site, I flippantly chose this barb for its self-deprecating value and as a tilt of the hat to the genesis of the site.