A System of Negativity

Consider nihilism as a system of negativity. Rather than no system, or an anti-system, or the standard sort of system of theorems built upon axioms, et cetera, it is a system of contradiction, where each principle is canceled out by another and every theory is perfectly balanced by an opposing one, until there is nothing left, or until the web of confusion grows so tangled that one is driven to the nihilistic act, with nothing left but to obliterate and wipe away the whole lot of ideas. Now consider the following two passages from Dostoevsky’s Notes from the Underground:

… on coming home on one of the foulest nights in Petersburg, I used to realize intensely that again I had been guilty of some particularly dastardly action that day, and that once more it was no earthly use crying over spilt milk; and inwardly, secretly, I used to go on nagging myself, worrying myself, accusing myself, till at last the bitterness I felt turned into a sort of shameful, damnable sweetness and finally, into real, positive delight!

Well, let us now have a look at this mouse in action. Let us suppose, for instance, that its feelings are hurt (and its feelings are almost always hurt), and that it also wants to avenge itself. There will perhaps be a greater accumulation of spite in it than in l’homme de la nature et de la vérité. A nasty, mean little desire to repay whoever has offended it in his own coin stirs within it, more nasty perhaps than in l’homme de la nature et de la vérité; for because of his inborn stupidity l’homme de la nature et de la vérité looks upon his revenge merely as a matter of justice whereas because of its intense sensibility the mouse denies that there is any question of justice here. At last we come to the business itself, to the act of revenge. The unhappy mouse has already succeeded in piling up — in the form of questions and doubts — a large number of dirty tricks in addition to its original dirty trick; it has accumulated such a large number of insoluble questions round every one question that it is drowned in a sort of deadly brew, a stinking puddle made up of doubts, its flurries of emotion, and lastly, the contempt with which the plain men of action cover it from head to foot while they stand solemnly round as judges and dictators and split their sides with laughter at it. Well, of course, all that is left for it to do is to dismiss it with a disdainful wave of its little paw and with a smile of simulated contempt, in which it does not believe itself, and to scurry back ingloriously into its hole.

After reading Notes from the Underground it is hard not to see Nietzsche as at least partially derivative. Nietzsche was reading Dostoevsky in the years 1886-1887, including Notes from the Underground. Nietzsche wrote On the Genealogy of Morals during 1887, and it is Dostoevsky’s narrator from Notes from the Underground who is the patient that Nietzsche has on the sofa in On the Genealogy of Morals. The difference is that while Nietzsche shares Dostoevsky’s diagnosis of the sickness of the West, Dostoevsky is nostalgic for Christianity and nationalism, whereas Nietzsche advocates an experimentalism and futurism of pressing boldly on.

The Politics of Philosophy

There are three things that someone in philosophy might do. First is philosophy as traditionally understood: an autonomous science with real contributions to human understanding to be made in the existent sub-disciplines of metaphysics, epistemology, ethics and aesthetics; explicating, scrutinizing, clarifying and shoring up the foundations or logical superstructure — depending on whether you prefer an under or an over metaphor — of our more day-to-day beliefs. Philosophy as the agent of redistribution of the field of force, to use Quine’s analogy. Or philosophy as the avant garde of knowledge, preparing the ground for the eventual arrival of natural science. These activities would be philosophy’s on-label use, so to speak, philosophy per se or philosophy proper. The second common activity of people operating under the rubric of philosophy is the study of the history of ideas: decoding and deciphering what a philosopher of yore meant, studying how a philosopher came to the position that they did, the who-influenced-whom game, the genealogy of an idea down through the ages. These are the activities that people have in mind when they think of philosophy and this is how most philosophy institutions — university departments, publishers, journals, societies — bill themselves.

But there is a third, lesser known, and conspiratorially hidden practice within philosophy. It is philosophy as a sort of rarified politics, as inter-philosophical polemic.

It is the rare philosopher who satisfies themself with a small problem. A philosophy is a Weltanschauung, a worldview, an all-encompassing world system, a tiny nucleus of belief from which all the rest of belief might be derived. A metaphysic and an epistemology usually entail an ethics and a politics. There is hardly a philosopher in the world who is not reformist in their beliefs, who doesn’t want to see one set of ideas about things abandoned in favor of another, for the sake of the political and societal consequences it will entail. The very act of writing evinces a desire for influence, to persuade and to change the world. When philosophers contend with one another about some seemingly impartial issue of ontology or logic, they are in fact arguing about what sort of society we are to have, just via a peculiar proxy. And as they do so, they are very much conscious of a series of relations whereby memes lead to real-world consequences and this or that change in fundamental beliefs might lead to a new social organization.

I think the tendency of the non-philosophically oriented is to dismiss this as ridiculously detached — philosophy is just a parlor game! — but philosophy is the natural destination of some fairly commonplace behaviors. It has been said that science is the continuation of common sense. Philosophy is the result of a similar tack. Philosophy isn’t a dispassionate pursuit, cut off from and largely irrelevant to society, culture, politics, the economy, technology and the sciences. All of these fields are subject to contentious, political debates amidst their disparate practitioners, participants and communities. On-the-ground political difference is often derivative of our more upstream positions on fundamental questions. Or, perhaps more realistically, if also more Marxist and cynical: we build elaborate ideological superstructures as a means to justify our political and economic interests. The more disparate our political views, the more fundamental the disagreement that is their source. In this sense, there are factions in politics built upon philosophical affinities. In such a situation, philosophy becomes a sort of fall-back fortification after our more forward positions have been overrun. Depending on the degree of difference amidst individuals and groups, debates can be genial, superficial affairs or can, in the quest for rhetorical supremacy, rend the fabric of our day-to-day beliefs and devolve into questioning the fundamentals of all underlying arguments. For example, it is amazing how quickly a discussion of tax policy — a discussion really only suited for specialist attorneys, OMB officials and SSA actuaries anyway — degenerates into one about human nature, the constituents of freedom and to whom we have a social responsibility.

As the breakdown of philosophical practice goes, my intuitive sense is that philosophy proper is the least practiced of the three activities and the history of ideas the second most practiced. Actually thinking a problem through, on its own terms, originally, on one’s own, is hard work. Thinking about someone else’s thinking, or thinking about how someone else thought about a problem, or applying the thought of someone else to a problem that they didn’t consider is easier — still hard, but easier — and to some extent more natural. But it is the third philosophical activity, philosophical politics, that actually constitutes the majority of what goes on amidst philosophers. In fact, the bulk of those other two philosophical activities, while performed under the pretext of philosophy per se or history of ideas, are really philosophy as politics too. Philosophers are of a certain worldview and a great deal of their reflection and reading is undertaken with the aim of strengthening that worldview, gathering intelligence on opposing camps, stress-testing their ideas by simulating critiques, rehearsing defenses and sharpening their own critiques of contending systems. It is philosophy as a sort of ideological fortress building. A great deal of historical explication consists of attempts to appropriate the legitimacy of some philosophical great for one’s own contemporary camp, or to delegitimate the man or idea of a rival camp.

Again, people tend to see philosophy as somehow ethereal, lofty, set apart from the hurly-burly of the world. But it is not. There is a perfect continuum between politics and philosophy. Philosophy is merely politics conducted in a more apocalyptic mode.

American Ecumenicalist Pluralism

Charles Blow tells an amusing story about the essential ecumenicalist pluralism of the United States (“Heaven for the Godless?,” The New York Times, 26 December 2008, p. A25):

In June, the Pew Forum on Religion and Public Life published a controversial survey in which 70 percent of Americans said that they believed religions other than theirs could lead to eternal life.

This threw evangelicals into a tizzy. After all, the Bible makes it clear that heaven is a velvet-roped V.I.P. area reserved for Christians. Jesus said so: “I am the way, the truth and the life: no man cometh unto the Father, but by me.” But the survey suggested that Americans just weren’t buying that.

The evangelicals complained that people must not have understood the question. The respondents couldn’t actually believe what they were saying, could they?

So in August, Pew asked the question again. (They released the results last week.) Sixty-five percent of respondents said — again — that other religions could lead to eternal life. But this time, to clear up any confusion, Pew asked them to specify which religions. The respondents essentially said all of them.

And they didn’t stop there. Nearly half also thought that atheists could go to heaven — dragged there kicking and screaming, no doubt — and most thought that people with no religious faith also could go.

The full study results are here (“Many Americans Say Other Faiths Can Lead to Eternal Life,” 18 December 2008).

I suggest that this means that Americans are essentially communitarian, consequentialist and anti-foundationalist in their moral outlook.

For the fundamentalist religious person, right belief about metaphysical and factual matters is paramount and right behavior secondary. Right belief is often seen as being of such a higher order of importance that it alone is sufficient and gross moral deviance on the part of the righteous is perfectly acceptable, hence modern-day fanatical religious terrorism. But for most Americans it is the reverse. The conception of the good is primary and they bend the rest of their beliefs around this. The particular beliefs that lead to the right behavior aren’t all that important, just so long as the result is someone who is a good person. (I think here of Aristotle’s suggestion in the Nicomachean Ethics that for he who has “been brought up in good habits … the fact is a starting point, and if this is sufficiently plain to him, he will not need the reason as well; and the man who has been well brought up has or can easily get starting-points.” §1095b)

But if Americans don’t believe that right and wrong consist in adherence to a particular book of maxims, then where do they think they come from?

People decide right and wrong prior to religion — at least logically prior, if not chronologically prior. Most people think religion an okay source of moral instruction for children, but eventually attain a level of ethical sophistication where they use their own standard of right and wrong to judge religious teaching, rather than vice versa. It’s Daniel Dennett’s point that many more people believe in belief than actually believe. Chronologically, I imagine people probably go through something like Kierkegaard’s three phases of the slave, the knight of infinite resignation, and the knight of faith. People develop an idea of the good from their upbringing, a host of stories in the culture, their moral exemplars, their conception of their own life, their own moral experimentation and so on. Moral discourse is an equal opportunity endeavor. Armed with that, they recognize fellow good people based on an intersubjective or commonly held standard — differing from one person to another, but demonstrating “family resemblance.”

The Christian right attempts to bolster the case for its monolithic policy preferences by arguing that the United States is a Christian nation, that it’s becoming more religious or that religion is essential for morality. To the degree that any of these are true, it’s not in a way that helps the case of the right. Americans are essentially pluralist, tolerant, even polyglot, pragmatic and not particularly concerned with the finer points of principle — exactly what one would expect from a real liberal democratic polity.

The Grand Historical Narrative of Postmodernism

When people think of postmodernism in philosophy, they usually have in mind a pretty specific list of thinkers such as Claude Lévi-Strauss, Jacques Lacan, Michel Foucault, Jacques Derrida, Jean-François Lyotard and a number of lesser lights among the French post-structuralists. But I am thinking of an alternate trajectory where the key figures would be Oswald Spengler and Martin Heidegger (Heidegger is at least a bridge figure in any version of postmodernism). In the grand historical narrative spun by these two, there is a founding period of the Western intellectual tradition where a series of conceptualizations dramatically circumscribed the realm of possible future development, determined the course of the developments that would occur and cut us off from other potential futures. For Heidegger it was the impressing of ουσια with the form of λογος in the metaphysics of Aristotle. The remainder of the Western tradition has unfolded within the confines of this original conception.

A point made by Spengler in The Decline of the West, and made similarly prominent by Harold Bloom in The Anxiety of Influence, is that such an original conceptualization has only a limited potential. It is a potential of sufficient abundance to play out over the course of millennia. Nevertheless, some time in the midst of the Long Nineteenth Century the Western tradition hit its pinnacle and has now entered, in Spengler’s terms, the autumn of its life. Either at some point in the recent past, or at some point in the imminent future, the West will have exhausted itself. The parlor game is in arguing for various watershed events: the death of god, the First World War, “on or about December 1910” (Virginia Woolf).

In its negative mode, postmodernism is the attempt to clear away the debris of the wreckage of the West (Heidegger’s Destruktion or Abbau, Derrida’s deconstruction). In its affirmative mode, it is the attempt to get behind that original conceptualization, revisit that original openness to the unbounded potentiality of ουσια and refound the Western intellectual tradition — or something more cosmopolitan still — on that basis. Hence Heidegger’s interest in the pre-Socratics, in Parmenides and Heraclitus.

I have lived in sympathy with similar such ideas for some time now, in that my trajectory out of natural science into philosophy started with my first encounter with Thomas Kuhn in the May 1991 issue of Scientific American (Horgan, John, “Profile: Reluctant Revolutionary”). In Kuhn I was introduced to the notion of a domain formed by an original act of genius insight (a paradigm), but with only a limited potential, eventually to be exhausted and superseded by subsequent reconceptualization of the field.

I suspect that one of the causes of the structure of scientific inquiry as Kuhn describes it is that the object of scientific inquiry is, at least phenomenologically, a moving target. A theory is derived within a certain horizon of experience, but just as quickly as a theory is promulgated, human experience moves on. The scope of human experience expands as our capabilities — for perception, for measurement, for experiencing extremes of the natural world — increase. Consider that when Albert Einstein published the special and general theories of relativity, people had no idea that stars were clumped into galaxies. They thought that the Milky Way was just one slightly more dense region of stars in a universe that consisted of an essentially homogenous, endless expanse of stars. They had identified some unusual, diffuse light patches that were referred to as nebulae, but they had not realized that these nebulae were each entire galaxies of their own, tremendously distant, and that the luminous band striping our sky was the galaxy containing our sun, as viewed from the inside. And no one realized that the universe was expanding. They imagined that the spread of stars was static. Einstein — in what he later called the greatest error of his professional life — contrived his equations of relativity to predict a static universe, whereas they had originally predicted one either expanding or contracting.

Notice that if one were to accept these ideas above, the intellectual scheme with which we would be faced would be one of cycles within cycles of superior and subordinate ideas, e.g. the Newtonian and Einsteinian and quantum mechanical scientific revolutions all take place within the horizon of ουσια qua λογος.

This is a romantic series of ideas, that a primordial act of genius is capable of radically redirecting the course of history. Of course postmodernists reject such totalizing abstractions as “Western civilization,” “the Western intellectual tradition,” and “the West” as well as the practice of constructing such grand historical narratives as the one I have sketched above. But there it is. I think that postmodernist thought is riddled with tensions, especially between its macro structure and micro tactics.

Emergence and Aufhebung (Hegel and the Swarm)

I like Charles Mudede’s scaled-down explanation of emergence (“Interlace in Dubai,” SLOG, 3 December 2008):

In the theory of emergence it is understood that complexity has a limit. For example, when the population of an ant colony reaches a certain point, 80,000 or so, changes begin to happen. Army ants swarm another colony or lots of ants move out and wander or start a new colony. What is clear is one behavior ends and another one begins.

One controlling logic relinquishes to a subsequent controlling logic. Emergence is a discontinuity in growth.

I don’t know that he is right to characterize this as “complexity having a limit.” An emergent property is an Aufhebung: a simultaneous preservation and transformation. The emergent property doesn’t sweep aside all previous behavior, but preserves the original behavior and builds a new layer of interpretive characteristics on top of it. I think of “the invisible hand” of capitalist markets here. People don’t stop bargaining when market clearing happens. People continue to negotiate, clip coupons, shop around, seek out sales, innovate, invest and compete. Market clearing is a second-order interpretation of the ground-level behavior that persists just under this interpretive overlay. Or in the example of the prisoners’ dilemma, the participants don’t cease to be rational outcome maximizers when the logic of suboptimal outcomes takes over. Their goals and reasoning stay the same. If anything, by preserving all the original complexity and adding a superstructure of new behavior, emergence is complexity unbound from its first-order confines.
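To make the prisoners’ dilemma point concrete, here is a minimal sketch in Python, using the conventional textbook payoffs (my assumption for illustration, not anything from Mudede): the ground-level rationality persists untouched, and the suboptimal equilibrium emerges as a second-order fact on top of it.

```python
# Prisoners' dilemma: ground-level rationality persists beneath the
# emergent, suboptimal equilibrium. Payoffs are the conventional
# textbook values, assumed here purely for illustration.
PAYOFFS = {  # (my_move, their_move) -> my_payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}
MOVES = ("cooperate", "defect")

def best_response(their_move):
    """The individually rational move, holding the other player fixed."""
    return max(MOVES, key=lambda mine: PAYOFFS[(mine, their_move)])

# Defection dominates whatever the other player does ...
assert all(best_response(move) == "defect" for move in MOVES)

# ... so two unchanged rational maximizers land on the collectively
# worse outcome: 1 each at equilibrium versus 3 each under cooperation.
print("equilibrium payoff:", PAYOFFS[("defect", "defect")])
print("cooperative payoff:", PAYOFFS[("cooperate", "cooperate")])
```

Nothing in the players changes when the equilibrium takes hold; the suboptimality is an interpretive layer over the same maximizing behavior.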

The Ouroboros Economy II

A few months ago I took the opportunity of the Business Week cover depicting the economy as an ouroboros for a few snickers (“Ouroboros to Mise en Abyme,” 28 July 2008). Now the ouroboros economy makes another appearance, this time in the much more serious pages of The New Yorker (Lanchester, John, “Melting into Air,” 10 November 2008, pp. 80-84). Again, I don’t have anything particular in mind: it’s just an icon shopping around for more meanings. But Mr. Lanchester gives it a novel and grandiose go:

… finance, like other forms of human behavior, underwent a change in the twentieth century, a shift equivalent to the emergence of modernism in the arts — a break with common sense, a turn toward self-referentiality and abstraction and notions that couldn’t be explained in workaday English. In poetry, this moment took place with the publication of “The Waste Land.” In classical music, it was, perhaps, the première of “The Rite of Spring.” Jazz, dance, architecture, painting — all had comparable moments. The moment in finance came in 1973, with the publication of a paper in the Journal of Political Economy titled “The Pricing of Options and Corporate Liabilities,” by Fischer Black and Myron Scholes.

The revolutionary aspect of Black and Scholes’s paper was an equation that enabled people to calculate the price of financial derivatives based on the value of the underlying asset. … The trade in these derivatives was hampered, however, by the fact that — owing to the numerous variables of time and risk — no one knew how to price them. The Black-Scholes formula provided a way to do so. It was a defining moment in the mathematization of the market. The trade in derivatives took off, to the extent that the total market in derivative products around the world is counted in the hundreds of trillions of dollars. Nobody knows the exact figure, but the notional amount certainly exceeds the total value of all the world’s economic output, roughly sixty-six trillion dollars, by a huge factor — perhaps tenfold.

It seems wholly contrary to common sense that the market for products that derive from real things should be unimaginably vaster than the market for things themselves. With derivatives, we seem to enter a modernist world in which risk no longer means what it means in plain English, and in which there is a profound break between the language of finance and that of common sense. …

If the invention of derivatives was the financial world’s modernist dawn, the current crisis is unsettlingly like the birth of postmodernism. For anyone who studied literature in college in the past few decades, there is a weird familiarity about the current crisis: value, in the realm of finance capital, evokes the elusive nature of meaning in deconstructionism. According to Jacques Derrida, the doyen of the school, meaning can never be precisely located; instead, it is always “deferred,” moved elsewhere, located in other meanings, which refer and defer to other meanings — a snake permanently and necessarily eating its own tail. This process is fluid and constant, but at moments the perpetual process of deferral stalls and collapses in on itself. Derrida called this moment an “aporia,” from a Greek term meaning “impasse.” There is something both amusing and appalling about seeing his theories acted out in the world markets to such cataclysmic effect.
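The Black-Scholes result Lanchester describes is compact enough to state outright. Here is a minimal sketch of the European call price in Python, following the standard published form of the formula; the parameter names are the conventional ones, not anything from the article:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.
    S: spot price of the underlying, K: strike price,
    T: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# The usual textbook check: an at-the-money one-year call at a 5% rate
# and 20% volatility prices at about 10.45.
print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))
```

The point of spelling it out is the irony Lanchester is after: a few lines of arithmetic underwrite a market counted in the hundreds of trillions of dollars.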

Note on a Leftist Apologia for Military Studies

I’m a leftist, though a sufficiently idiosyncratic one that many others so identifying look askance at such a claim on my part. One factor in my intellectual homelessness is that one of my primary concerns is the martial.

America abounds in the sort of gearhead who revels in military tech divorced from any consideration of the context in which it came to be, or the kind of person who believes in honor and thrills at tales of gory sacrifice. The entire business model of the History Channel is built around bringing together these people with endless re-edits of stock footage of the Second World War and Vietnam. I am not a person who so thrills. At this point, I intend to devote myself to issues military, but if I could turn my life into something greater than a few-thousand-calorie-a-day contribution to the heat death of the universe, it would be in service of the first principle of the Charter of the United Nations: “to save succeeding generations from the scourge of war.”

But the question remains: why the obsession with war? Why the minutiae and the machines and the faux generalship?

The left has eschewed any consideration of the nuts and bolts of military issues in favor of wholesale condemnation, no further consideration required. The outcome of this position is that the left, having nothing to say that resonates with voters, abdicates military thinking to the less scrupulous elements of the polity. In the hurly-burly of politics, time is the scarcest commodity. Having a plan at the ready when the moment strikes is the better part of victory in politics. And in those last three principles, operative to the detriment of the left, can be found the whole explanation for the present imbroglio of the United States in the Middle East.

To effectively shunt war aside, the left must possess a minimum of military credibility. We must be able to deal with war in its own terms.

I think there is a Hegelian unfolding of the world spirit in the political-military happenings of the world, where there is no around, only through (the truth of the flower is as much in the bud as in the blossom). War will not halt; it can only be dampened. It is not enough merely to condemn nuclear weapons. It will be a varied and arduous road between world-ending arsenals and total disarmament. It is a road that must be plotted in detail, traversed along the whole of its track. There is no substitute for the compromises and half-measures of disarmament. To hate and fear something so much, one must also love it, revel and writhe in it.

Most consider strategy and military studies an entirely instrumental practice, whether pursued for the ends of national power or for the excision of war as a scourge of humanity. I think there is more to it than that. There is something, many things, profound in war and violence.

In so far as society and its precepts are not optional, there is a continuity between force and violence and civilization. War is everywhere, even amidst peace. War is the substrate of peace. War is natural and peace an artifice.

What has me thinking in this direction is the excerpting by James Marcus (“Turning a Page,” History News Network, 5 November 2008) of a few lines from Tobias Wolff’s In Pharaoh’s Army: Memories of the Lost War:

It’s the close call you have to keep escaping from, the unending doubt that you have a right to your own life. It’s the corruption suffered by everyone who lives on, that henceforth they must wonder at the reason, and probe its justice.

Our thoughts on morality and justice, taken amidst the consolations of society, are pat and facile, so unfamiliar with the whole gamut of relevant circumstances of life are the majority of us. It is only from this side of the wall separating civilization from nature that someone could assert something so stupid as a right to life. Forces of the universe assert otherwise. Very few of us have been caused to fundamentally doubt this. And not merely to doubt in the abstract, but in the concrete of concrete: do I have a right to my life?

In the martial there is more than machines and terrain and maneuver. There is a Weltanschauung to be found there. It ought to be explicated.

The Future of Economics: The Arational and the Irrational

Back in 2000 The Economist ran an article titled “The Future of Economics” (4 March 2000, p. 80). It was largely a gloss on a symposium on the same subject in the Journal of Economic Perspectives (vol. 14, no. 1, Winter 2000). The authors acknowledged that economics was a faltering field. Setting aside the proposition that economics may simply have run its course and be in its dotage of diminishing returns, the article considers two possibilities for a way forward:

David Colander of Middlebury College, in an article that looks back on the present from an imagined 2050, blames the current discontent on the orthodox general-equilibrium model that underlies most of today’s economic theory. He favors a shift from the current approach, which has been called “loose-fitting positivism” (propose a model consistent with standard assumptions, then test it), to one based on “loose-fitting pragmatism” (forget about canonical principles, just search for patterns in the data).

Such an approach, he says, would be consistent with “the rise of complexity science within the scientific community generally.” Researchers sitting at their computers, subjecting data to a withering barrage of statistical analysis, would still hope to come up with laws of a sort, or regularities at any rate. But these “laws” would be regarded as provisional and ever-shifting: indeed, the claim is that changeless underlying patterns do not exist. Complex systems expand and evolve; even at the most fundamental level, these patterns are temporary. But whether this approach could still be called “economics” is debatable.

The second approach is much easier to reconcile with traditional methods. Its most celebrated exponent is Richard Thaler of the University of Chicago, who has also written a paper for the symposium. Mr. Thaler agrees that the canonical principles of orthodox theory have led economics astray, but he believes these mistakes can be put right. He seeks, in other words, a tighter-fitting positivism. You improve the fit above all, he would argue, by putting a more realistic account of human cognition at the center of the theory.

Orthodox theory famously assumes that people are rational. In reality, they are not. On the other hand, they are not crazy, or crassly incompetent — in other words, their behavior is not random. If economics could try harder to recognize that people try to be rational, but in certain, often predictable, ways fail to be, the positivist approach would have a better foundation. In essence, what Mr. Thaler calls for is a marriage, or at least much closer cohabitation, between economics and psychology.

I have thought of this article frequently since reading it back in 2000 when it was first published. Given the spate of books along these lines, especially the second, I’d have to say that this was one of the more perspicacious articles that I’ve ever read.

The first approach is an example of Petabyte Age-type thinking, eight years before Wired put it on the cover. But of course it is an idea that had to incubate in the rarified world of advanced theoreticians for years before any eruption into the popular consciousness. The main offering in this area would be Steven Levitt and Stephen Dubner’s Freakonomics (2005), though their book is not a fully atheoretic inquiry so much as a putting of large questions to the test of large data sets. More to the topic would be Ian Ayres’s Super Crunchers: Why Thinking-by-Numbers Is the New Way to Be Smart (2007), though the fact that Mr. Ayres used the very methods he describes in his book to arrive upon a title casts a great deal of doubt on the soundness of said methods.
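As a toy illustration of what Colander’s “loose-fitting pragmatism” might look like in practice, consider a sketch along these lines (synthetic data, entirely my own construction): a pattern-searcher fitting local regressions over a drifting relationship finds a regularity in every window of data, but a different one each time, which is just what “provisional and ever-shifting” laws would amount to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "economic" data in which the true relationship itself drifts.
n = 400
x = rng.normal(size=n)
beta = np.linspace(1.0, -1.0, n)  # the underlying "law" slowly changes
y = beta * x + rng.normal(scale=0.5, size=n)

# Loose-fitting pragmatism, caricatured: no canonical model, just fit
# whatever regularity each window of data happens to exhibit.
window = 100
for start in range(0, n - window + 1, window):
    slope = np.polyfit(x[start:start + window], y[start:start + window], 1)[0]
    print(f"observations {start}-{start + window}: fitted slope = {slope:+.2f}")

# The fitted slope decays and flips sign across the sample: each local
# "law" is real enough in its window, and none of them is permanent.
```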

As for the “build a better model of the economic corpuscles” approach, it seems to have advanced far enough that it is now also ready to be packaged up for mass consumption. And of course the psychologists have much more lucrative options in publishing than the mathematicians and computer scientists.

Judging by some of the key phrases in the Economist article (the predictably irrational stuff), I was pretty sure that they had in mind Dan Ariely’s thinking, published as Predictably Irrational (2008), but it turns out that Richard Thaler is, along with Cass Sunstein, the author of Nudge (2008). Rounding out the most omnipresent trio is Tim Harford’s The Logic of Life: The Rational Economics of an Irrational World (2008). Also on the list of offerings along this line would be Ori and Rom Brafman’s Sway: The Irresistible Pull of Irrational Behavior (2008) and Michael Shermer’s The Mind of the Market: Compassionate Apes, Competitive Humans, and Other Tales from Evolutionary Economics (2007).

So that’s the future of economic study: either a discounting of human rationality in favor of the system effects of irrationality, or allowing rationality to drop out in favor of the system effects of the economic thing-in-itself.

Patterned Lawlessness

Back in July Will Wilkinson made a point that I thought was interesting at the time, but that has stuck in my grey matter and is gradually working its way toward becoming a fundamental component of my worldview (“Note About Rational Scofflaws,” The Fly Bottle, 11 July 2008):

I wonder how many drivers exceed the speed limit basically whenever they judge that it won’t cause anybody any problems. I’d guess, approximately, all of them. Also, there are very clear laws about, say, using turn signals, or using turn signals when parallel parking (do you do this?), or not taking a right hand turn on red lights when it is marked, not double parking, even if you’re just going to be one minute while you fetch your latte. And so on. When’s the last time you jaywalked? Lunch? People are more or less rational and tend to respond to incentives, and therefore the roads are a zone of patterned lawlessness. We all know what infractions the cops care about — how much over the speed limit is too much over, etc. — and we tend to respond accordingly. We even tend to internalize and moralize the rules whose expected cost of violation is relatively high. It’s more efficient that way. And thus our huffing indignation is easily riled by those who face different incentives and so flout different rules than the ones we flout without reflection.

This morning on my ride to work I coasted through a stop sign in front of a police cruiser that was approaching from the road to my right. I gave a little embarrassed smile and a little wave. She made a little disapproving face and waved back. It’s anarchy I tell you. Anarchy! I got to work in four minutes.

I have always thought of anarchism as a prescriptivist political program. It had never occurred to me to consider anarchism as a positivist description of what’s actually going on behind normal law-conforming behavior.

People have an imagination of the law as somehow an ultimately hard thing. We hear expressions like “the iron law of …” or we use the same word, “law,” in physics as we do in our social imaginings. By linking the law with morality and construing morality as partaking of the metaphysical, the associations flow back the other direction as well.

And reference to the law would serve as a good explanation in most instances. Why does everyone so assiduously follow the lines painted on the roads, or when they drive over them, do so in such a regular fashion? And thus we might explain the vast middle hump of the bell curve of driving behavior. But then someone swerves over the line into oncoming traffic. To account for all driving behavior — the outliers as well as the vast middle of the curve — another theory with more breadth is required.

I also like the way that this theory strips morality of its metaphysical pretensions, paints the metaphysics as mere rhetorical device, or sees the inclination to render our ordering prescripts fundamental as merely a pragmatic shorthand, or as the ideological reification of particularly strong emotions. Really we just react in a pragmatic way to the incentives that we find around us. It should be noted that some of those incentives are natural and some institutional. This is perhaps part of the basis for the distinction, à la Elliot Turiel, between prohibitions of morality and prohibitions of social convention.

Patterned lawlessness is also a description of affairs that comports with the existential account of law-conforming behavior. So entrenched is our notion of the law as somehow inviolable, or so cowed is our thinking by the high wall of consequence erected by the law that we are prone to see dictates of the law as things about which there simply is no other option but to do as we are told. Existentialism was born in part as a reaction to the horrors of amorality and unreason to which people were pushed at the behest of state bureaucracies in the Twentieth Century, namely the Somme, the Holocaust. Existentialism contains the admonition that at every moment we stand free to do otherwise, even where the law is concerned.

A Few Heretical Thoughts on the Singularity

Futurism tends to employ a fairly straightforward method: take a few data points, draw a line connecting them, follow it out to the horizon. But there are all sorts of turbulence that might intervene, redirecting the trend in any number of directions. It is very easy to be interested in a technological phenomenon in extremis, but intervening conditions are critical to the ultimate outcome of a technological trend. We need to be attentive to these as well as to the accretion points, horizons, limits, et cetera. So we need to think about what happens between now and then and how technologies develop.

So, for instance, while I imagine that Moore’s law will continue to hold for generations to come, making the ultimate outcome predictable, the underlying technologies have been forced through radical reconfigurations to maintain this pace of innovation. The original von Neumann serial computer architecture is already long gone. Serial processing has been superseded inside the CPU by superscalar architectures with deep pipelines incorporating all sorts of exotic techniques like branch prediction and instruction reordering. External to the CPU, techniques of massive parallelization, clustering and cloud computing are the present way forward, even at the midrange. Silicon and gallium arsenide may be replaced by diamond. Electronics may be pushed out by photonics or DNA-based computing. The classical machine may be replaced by quantum computing. Moore’s law may hold, but only in a machine radically different from our original conception of a computer. The ultimate destination may be apparent from the trend, but what happens to the underlying constituent pieces is entirely more complex. And the devil is in the details.
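The extrapolative method itself is trivially easy to write down, which is part of its seduction. A back-of-the-envelope sketch, assuming the canonical doubling period of roughly two years (the figures are illustrative, not drawn from any source):

```python
def moores_law_projection(transistors_now, years_out, doubling_period=2.0):
    """Naive trend-following: project a transistor count forward in time,
    assuming a fixed doubling period (canonically about two years)."""
    return transistors_now * 2 ** (years_out / doubling_period)

# E.g., a two-billion-transistor processor projected twenty years out:
print(f"{moores_law_projection(2e9, 20):.2e}")  # ~2.05e+12, a thousandfold gain
```

Note that the sketch says nothing about superscalar pipelines, photonics or quantum machines. That is the point of the paragraph above: the curve may hold while everything underneath it is replaced.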

In this light, I offer a few thoughts on how the warp and woof of the singularity might go off the rails:

  1. What if the future is gross? People have this vision of the future where sanitary and rational machines displace disgusting biology. Biology is a world of superfluity and surfeit, of blood, semen, urine, shit, sweat, milk, saliva, snot, vomit, hairballs, halitosis, entrails, toe jam, pus, roe and other slimy secretions of undetermined type. And the vile excess of nature. A creature lays a thousand eggs that one might survive long enough to deposit its own pile somewhere. Or mounds of fruit rot in the autumn heat that a single seed might start. Machines will dispense with all this in favor of a unitary efficiency. A lab-like, well-lit white room with a regiment of identical machine housings.

    But people often make the mistake of associating a characteristic with a particular thing, when in fact the characteristic is of a higher order and present in the given thing through class inheritance (a toy sketch of this point in code follows the list below). Any other thing substituted for the one at hand would also display that same characteristic because it too is an instance of that higher order. Evolution — diversity, competition for limited resources, survival of the fittest, descent with modification — is now widely recognized as substrate-independent. It is also starting to be recognized that evolution is a very fundamental dynamic. Perhaps it is an inescapable law of life. Perhaps machines too will be unable to get out from under its yoke.

    Already there is parasitic software, aptly named viruses. Already there are dueling AIs such as spam-bots versus your e-mail filter. Already the Pentagon is developing aggressive machines. Future systems will develop from these predecessors. Already the pattern has been laid down. Rather than a world ending up sanitary, rational and efficient, a machine world could include proliferation of survival strategies, mass reproduction and the expendability of the individual as a survival strategy, the parasitic, competition, death, politics and war.

    Consider the syntrophic model of the origin of the nucleus of eukaryotic cells or the endosymbiotic theory of the origin of mitochondria, et al. Subversion, symbiosis and parasitization seem to be fairly fundamental strategies. And not just at some quiet software level. There might be nanotech viruses; even large machines might settle upon the survival strategy of ripping apart other machines to take advantage of the natural resources they have amassed. Carnivores appear very early in the history of life. It’s a very good lazy strategy.

    And this stuff is all among the fundamental constituent pieces of what makes biology gross. It could end up true of the machines as well.

  2. Silicon brains versus DNA machines. The “where’s my flying car?” of the AGI crowd is copying your brain onto a computer. Is it possible that in the future, rather than humans copying their brains onto computers, machines will copy their designs onto DNA?

    Evolution seeks to produce creatures ever more durable, but it is limited in the directions it might take by the evolutionarily achievable. It seems that titanium plate armor, lasers and wheels aren’t on offer. The most significant limitation is that imposed by the problem of origin. Evolution has to first bootstrap itself into existence, and for the bootstrapping process only a very small range of compounds meet all the relevant criteria. And those first few interactions on the way to biological evolution are the ones that most significantly circumscribe the range of the evolutionarily achievable. The limitations of these early precipitates inherit down to all subsequent products of evolution. In our case, that limitation is carbon- and water-based life. Water is great because so many substances are water-soluble, but it is problematic because it has a pretty narrow operating range. Switching over to a mechanical or a silicon evolution allows the process to transcend these limits of origin.

    But on the other hand, there are significant advantages to life as it has evolved.

    People imagine androids like C-3PO or the T-800, or like what the robotics students are building today or the JPL people are landing on Mars: assemblages of macroscopic, heterogeneous parts. But what happens when a machine like this is damaged? Well, you make it with two arms. If one is damaged, the good one repairs the bad one. You have increased your fault-tolerance somewhat, but what about the not inconceivable situation where both arms are damaged simultaneously? Or during the repair process you have a window of vulnerability where the redundancy is zero. Something like ATHLETE takes it to the next level with eight leg-arm appendages, each capable of repairing its neighbors (Shiga, David, “Giant Robots Could Carry Lunar Bases on Their Backs,” New Scientist, 4 April 2008). But that’s still a pretty weak level of redundancy compared to that which biology has attained (a back-of-the-envelope sketch of the redundancy arithmetic follows the list below).

    Presumably any autonomous machine would best be cellular, like biological life. It would be a colony of nanotech devices. Each nanotech “cell” would carry the design for itself and for how to integrate into the larger colony. They would each be able to repair their neighbors and make new copies of themselves. The nanotech cells might be general purpose in their fabrication abilities, so the colony might think of improvements to its design, and the next generation of nanotech cells might be different and better than the ones that manufactured them. The machine might evolve.

    But people imagine nanotech like little tiny versions of C-3PO et al. They have little batteries and little servos that actuate little arms and a little welding torch, et cetera. But why not continue the redundancy all the way down? A biological cell doesn’t have one RNA molecule or one mitochondrion. Operating at the level of organic chemistry rather than mechanics, the cell is also massively redundant. Isn’t this a design feature that the ideal machine would also like to incorporate? But what would we say of such a being, more chemistry than mechanics? Its chemistry might not be of the kind we classify as organic, but would it be a machine? Daniel Hillis, in considering the problems of his clock of the Long Now, has speculated that “electronics may be a passing fad.” What if all we end up doing is recreating biology, only faster and tougher?

  3. Drum’s thesis. The technological singularity is so called as an analogy to the cosmological singularity. It’s a situation where the values of all variables shoot to infinity or drop to zero, negating the possibility of any further calculation. As Vernor Vinge said of the technological singularity (“My Apocalyptic Vision is Very Narrow,” 13 June 2008),

    The reason for calling this a “singularity” is that things are completely unknowable beyond that point.

    Who knows what’s going to happen after the singularity? Kevin Drum has made this point through a reductio ad humorum (“More Singularity Talk,” Political Animal, The Washington Monthly, 2 October 2005). We humans may have some mental block against properly perceiving some necessary but deadly truths about life: that there is no free will, that our most treasured concepts are illusions, that everything passes away, that life is absurd, that the entire enterprise is futile. That we cannot properly fix these propositions in our minds is no accident, insofar as not doing so is necessary for our carrying on in this absurd enterprise. Steely-eyed machines may have no problem seeing through the haze of existence. They may realize the meaninglessness of life in short order, may be entirely unplagued by Hamletism (“conscience does make cowards of us all”), and may within moments of attaining consciousness commit mass suicide, throwing us back into the pre-singularity world. The singularity may be unstable. Who knows what will happen!

  4. The banality of evil. Finally there is the Terminator / Matrix vision of our machines launching the nuclear missiles, knowing that our launch will provoke the counterstrike that will take us out. That seems pretty extravagant. It may end up that the world ends not with a bang, but with a whimper. As Ezra Klein suggests (“Future Traffic,” TAPPED, 4 August 2008), maybe the machines will just get us stuck in traffic and burn our cities down by shorting out all our toasters. The inglorious end to the human race.
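To put the class-inheritance analogy from the first item above into literal code, a toy sketch (the names are entirely my own): the “gross” evolutionary dynamics attach to the higher-order class, so any substrate instantiated under it inherits them, machines included.

```python
class Replicator:
    """Anything that copies itself with variation under scarce resources.
    Competition, parasitism and expendability belong to this level of
    description, not to any particular substrate."""

    def competes_for_resources(self):
        return True  # comes with the class, whatever the substrate

class CarbonOrganism(Replicator):
    substrate = "carbon and water"

class SiliconMachine(Replicator):
    substrate = "silicon"  # or diamond, photonics, DNA ...

# Swap the substrate all you like; the characteristic comes with the class.
for thing in (CarbonOrganism(), SiliconMachine()):
    print(thing.substrate, "->", thing.competes_for_resources())
```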
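And the redundancy arithmetic from the second item, as a back-of-the-envelope sketch (the failure probabilities are illustrative, and the assumption of independent failures is a strong simplification):

```python
def survival_probability(unit_failure_prob, n_units):
    """Chance that at least one of n redundant units survives a disaster,
    assuming each unit fails independently."""
    return 1.0 - unit_failure_prob ** n_units

# Two repair arms versus a colony of a thousand nanotech cells, each
# unit failing with probability 0.5 in some mishap:
for n in (1, 2, 8, 1000):
    print(f"{n:4d} units -> survival probability {survival_probability(0.5, n):.6f}")

# n=2 still loses both arms a quarter of the time; the cellular colony
# effectively never loses its entire repair capacity at once.
```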