The Day I Became a Hegelian

I remember distinctly 14 April 2000, the day the Dow Jones Industrial Average dropped 617.78 points, or 5.7 percent, to 10,305.77 (Fuerbringer, Jonathan and Alex Berenson, “Stock Market in Steep Drop as Worried Investors Flee; NASDAQ Has Its Worst Week,” The New York Times, 15 April 2000, p. A1). The company where I worked offered compensation packages heavy on options and a stock purchase plan, and it was the first really precipitous drop in the stock market since the online discount stock brokers like E-Trade went big. At the office nothing got done that day: no one could do anything but watch their portfolios plummet. I remember a group of us going out for lunch. This was in Seattle and Harbor Steps II was still under construction. At that time it was just a reinforced concrete skeleton and a kangaroo crane. As the group of us walked down — I don’t know — probably University Street, I looked up at the concrete stack of Harbor Steps II and the bustle in and around it, and it occurred to me that if the stock market were to continue to fall like it was, the development company might halt construction — that the building would cease its coming into being. At that moment, I saw that it was primarily a blueprint, an architect’s vision, a developer’s profit and loss projections, investor expectations. It was less matter and more idea, and at that moment I first thought that maybe there was something to this Hegel fellow.

Abandoned construction, Bangkok, Thailand, approximately Sukhumvit and Soi 8, 2 December 2006

Similarly, when S. and I were in Thailand, we stayed in a neighborhood a few blocks from the abandoned, half-finished concrete skeleton of a building. They were actually fairly common in Bangkok. So quickly had this construction project been abandoned that in places the rebar had been set and only half the concrete poured when work stopped. A pillar ended in a jagged mound of concrete, the remaining half of the uncovered rebar simply jutting skyward. I took one look at that building and said to S., “That’s probably left over from the Asian financial crisis.” That’s how suddenly and ferociously the Asian financial crisis struck: people simply walked away from multi-million dollar building projects. When the beliefs don’t pan out, the rock and the steel cease to fill out their imagined dimensions.

Ten thousand years ago ideas played almost no role in human affairs or history. Today they play a significant role, perhaps already the better part of every artifact and interaction. The Pattern on the Stone, as Daniel Hillis called it. The stone is inconsequential: the pattern is everything. It is part of the direction of history that ideas, gradually at first but with accelerating speed, displace matter as the primary constituent of the human environment.

And that, as I read it, is Hegel’s Absolute.

The Legitimacy of Argumentum ad Hominem

Will Wilkinson and Crispin Sartwell consider the satisfactions of environmental soothsaying:

As I’ve said, the insane jackup of rhetoric with regard to global warming, “the greatest crisis the species has ever faced,” the death of the planet, etc, is the secular humanist liberal apocalypse. It’s a sheer competition for who’s most dire, most obsessed, and who’s more unanimous than whom. It’s the flood, complete with the reasons: our moral culpability. I predict this: when Obama is elected, liberals will feel better about themselves and the probable verdict of cosmic judgment, and they’ll tone down the eschatology, the ranting cant.

(“Ranting Cant,” The Fly Bottle, 2 August 2008; untitled, Eye of the Storm, 2 August 2008; respectively)

For my part, I imagine there is something to this argument. Every faction has its share of less than completely rational members. But if it is the contention of Messrs. Wilkinson and Sartwell that the behavior of some of the advocates of anthropogenic climate change bears one iota of relevance on the soundness of the theory itself, then this is a picture-perfect instance of the fallacy of argumentum ad hominem. The emotional satisfaction that someone takes in holding a particular position would seem irrelevant to the ultimate adjudication of said position.

Some time ago when I originally made the formal cognition post (1 January 2008) K.S. said that he didn’t see the point. What was my advocacy of formal cognition meant to achieve? I couldn’t quite answer him at the time, but Mr. Wilkinson’s post really clarifies the matter for me. I’m an advocate of formal cognition against rhetoric generally, but most especially against some of its more pernicious tactics: Freudianism broadly construed as an interpretive style, sociobiology in its normative aspect (an epistemological relative of Freudianism), and the secularization thesis.

For every purportedly empirical statement out there, there is built up a detritus of extraneous belief. There is the psychological baggage of the proposition: the advocacy or denial of an opinion is motivated. Cui bono? Or advocacy or denial becomes a part of one’s identity. People build an emotional complex around certain beliefs. Certain propositions become tropes mapping us into desired cultural categories. A proposition becomes the cornerstone of an elaborate worldview into which its constructors invest vast amounts of intellectual energy. Such people tend to become conservative about these propositions all out of proportion to the weight that a casual observer might assign to them.

It’s really easy to succumb to the desire to set aside the matter per se and argue the detritus. It’s certainly more emotionally satisfying. The purpose of a catalogue of validated logic and methodologies is to determine the soundness of a proposition and cast out the irrelevant considerations in a systematic way.

So, for example, the scientific veracity of anthropogenic climate change is within range of rational risk assessment. The systems concepts of a tipping point and self-reinforcing, accelerating change are legitimate and the potential implications of these concepts applied here are alarming. The perennial libertarian Alfred E. Neuman “What, me worry?” worldview has its own shortcomings, namely that disasters are plausible and occasionally systemic.

On the other hand, there is no proposition beyond the proposing hominid. I’m not so sure that the distinction between rhetoric and formal decidability is tenable, especially once one admits the scientific method into the corpus of formal cognition. Given that induction is logically and experientially unsound, the scientific method becomes merely a highly stylized rhetoric, a rhetoric whose admissible tactics are more narrowly circumscribed. It is most certainly a rhetoric that is more reliable than others, but it nonetheless exists with other rhetorics along a continuum of variably reliable tactics, rather than being cordoned off in a privileged category all its own.

If nothing else, the absolute prohibition against argumentum ad hominem seems incompatible with Thomas Kuhn’s The Structure of Scientific Revolutions. Is it even possible for the behavior, psychology, constellation of attendant beliefs and rhetorical strategies of the advocates for a proposition to be irrelevant to the acceptance or rejection of the proposition? I think that once one dispenses with the notion of truth or falsity of a proposition in any strong sense in favor of mere acceptance or rejection (the sociology of knowledge), then these previously considered extraneous factors become relevant. They are real channels by which information and belief are transmitted throughout society. They are part of the practice of acceptance and rejection as they actually happen. Argumentum ad hominem seeks to make explicit and disrupt these channels. It reduces their efficacy through ridicule.

(This is not to deny the truth or falsity of our beliefs in some ultimate sense. The truth is out there — it just doesn’t intervene in our deliberations in any radical way. Prima facie, incomplete beliefs about the world can be made workable.)

Ouroboros to Mise en Abyme

A few unsystematic thoughts on Ouroboros and mise en abyme:

  • I almost mentioned Douglas Hofstadter’s book, I Am a Strange Loop, in last week’s post (“The Mythical Economy,” 23 July 2008). He could have gone with Ouroboros on the cover too, but instead he went with mise en abyme.

    Or maybe he couldn’t have gone with Ouroboros. While Ouroboros is, on a superficial level, obviously a strange loop and a symbol that could be seen as self-referential, a peek at the index of Hofstadter’s book at the entry for “video feedback” — a technological mise en abyme — shows that he has a thicker analogy in mind:

    video feedback, 65-71; as candidate for strange loop, 103, 187, 203, 361; epiphenomena in, 68, 70-71; fear of meltdown, 56, 57; fractalic gestalts of, 204; lack of “I” in, 203; lack of perception in, 75-77, 187, 203; lack of symbols in, 203; lack of thinking in, 203; locking-in of patterns in, 70; parameters of, 65-67, 69, 75; reverberation in, 67-68; two systems entwined in, 210-211, 253-254; vanilla loop in, 208

  • While I’m amused at the notion of an Ouroboros economy, I can’t really think of any real correlate to the slightly humorous image. Unless maybe something like a naturalistic notion of the human economy, wherein the human economy is nature parasitic upon itself. The destruction of the biological world as giving birth to the artifactual or the cybernetic world. Ouroboros reborn for the Twenty-first Century!

  • The thing that’s really causing me to bring up mise en abyme is some thoughts on how people think about the future. People are faced with the need to decide and nearly all decisions that people make are, when not completely about the future, at least future-oriented. People’s thoughts about the future are divided into two closely related activities, carried out in tandem: planning and prediction. Prediction occasionally becomes an activity of its own, but for the most part prediction is an activity carried out in service of the more pragmatic planning.

    Planning is a branching strategic game. It works like this. I have a goal whose attainment is not simple: it involves a number of steps and it could be thwarted at any one of them. I start with my known situation and have a vague idea what the path to my goal would be and I make a series of hypothetical decisions. I test the soundness of a hypothetical decision by predicting the outcome of such an action. That is, I imagine a potential future.

    In the first round, the one inaugurated in the present, I know what my options are because they are present. In the second round and in all subsequent rounds, I must employ prediction to imagine what options I will have to choose from, because from then on I am dealing in an imagined future. I repeat the algorithm (hypothesize a decision, predict, test, simulate new options) until I reach the last round, whose decision results in the attainment of the goal.

    When I make predictions about the future, I rarely make a single prediction, since prediction, especially the sort of intuitionistic variant that people employ for the purpose of most of their planning, is not very reliable. So I predict a range of possible futures. And in each possible future I face a range of possible decisions that I can take. Predicting and planning branch. Most of these I abandon in favor of the most fruitful-seeming paths. But if a path dead-ends, I back up until I find the probable fateful decision that sent me down the path to the dead end. I recollect the other options at that possible future and imagine my way down another branch. I also generally flag a number of contingency plans: I went with this predicted future, but as things actually unfold, if it turns out that I predicted wrong, I have a plan ready for that other branch too.

    When I have what I imagine to be a satisfactory path from present to goal, I lock in each decision hypothetically made into “what I’ve decided upon.”

    This is a pretty systematic model and not necessarily exactly how most people make plans. People rarely sit down and carry it out algorithmically from beginning to end. More frequently people engage in this activity in fits and starts, not taking the problem from start to finish, but working on pieces that strike them at various occasions throughout their day. They absentmindedly do it while at their work computer, or do it extremely quickly while laying a joint plan with a partner over the telephone. Or maybe they try to be thorough about it and make a list on a notepad so they can see what’s done and what’s still in need of attention. Whatever the case, I think that ultimately this is what people are doing.

    The important point for mise en abyme is that near future decisions can only be locked in once more distant future decisions have been validated. Each step is dependent on the one after it having been made first. One starts the planning and predicting from the present and works one’s way forward, but one decides, as it were, backward, from the future to the present. Predictions and plans regarding the immediate future include as a part within them predictions and plans regarding the near future, which in turn contain predictions and plans about the distant future and so on. My thoughts about the future are mise en abyme insofar as they contain within them further thoughts about more distant futures.

    What one is doing in this process of planning for the future is conducting a depth-first search of potential futures. And depth-first search is canonically thought of as recursive.

  • Mise en abyme seems to have a lot more analogistic or systemizing potential. Scale symmetry (e.g. fractals) along with all the related phenomena that can be grouped under that pattern seem coterminous with mise en abyme. Hegel’s logical schema seems like a highly abstract instance of mise en abyme, where each intellectual system is subsumed into a higher order intellectual system.

  • Perhaps there is a historical development of the conceptual depth and sophistication of the idea of self-referentiality. Ouroboros is simple cyclicality, though extrapolated into a notion of infinity or eternity. Homunculus is a limited instance of scale symmetry. Modern formal recursion is the culmination.
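Returning to the planning bullet above: the depth-first search it describes has an obvious recursive rendering. Here is a minimal sketch in Python, where the goal test and the option-predicting function are hypothetical stand-ins for the intuitive faculties described in the text:

```python
def plan(state, goal_reached, predict_options, depth=0, max_depth=5):
    """Depth-first search over imagined futures.

    state           -- the current situation, real or imagined
    goal_reached    -- predicate: does this state attain the goal?
    predict_options -- maps a state to (decision, imagined_future)
                       pairs; this is the 'prediction' step
    Returns the list of decisions leading from the present to the
    goal, or None if every imagined branch dead-ends.
    """
    if goal_reached(state):
        return []                # nothing left to decide
    if depth == max_depth:
        return None              # imagination only reaches so far
    for decision, imagined_future in predict_options(state):
        rest = plan(imagined_future, goal_reached, predict_options,
                    depth + 1, max_depth)
        if rest is not None:
            # Decisions lock in backward: this step is only fixed
            # once the later steps have been validated.
            return [decision] + rest
    return None                  # dead end: back up, try another branch
```

For instance, with a toy `predict_options` that offers "add one" and "double" moves over integers, `plan(0, lambda s: s == 4, options)` finds a four-step path to 4. The names are all illustrative; the point is only the recursive shape: the function that imagines a future calls itself inside that imagined future, a computational mise en abyme.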

Relativism and Conflict

Ezra Klein references Nicholas Kristof’s column yesterday as bringing “striking clarity” on the Israeli-Palestinian issue, but the clarity is all in Mr. Klein’s interpretation (“Tough Love for Israel?,” The New York Times, 24 July 2008; “The Dual Realities of Israel / Palestine,” TAPPED, The American Prospect, 24 July 2008, respectively):

But he [Kristof] offers a counter-fact: “B’Tselem, the Israeli human rights organization, reports that a total of 123 Israeli minors have been killed by Palestinians since the second intifada began in 2000, compared with 951 Palestinian minors killed by Israeli security forces.”

When Jews talk about the ethics of the Israeli response, they tend to emphasize the recklessness and cruelty of Palestinian terrorists. The words most often heard are “target civilians.” The Israelis are right, in other words, because they carry out limited military operations against discrete targets, which sets them ethically apart from members of Hamas who murder innocents because it’s an effective tactic. That is indisputable.

Palestinians, by contrast, speak of the war in terms of absolute costs: They have suffered more, buried more, seen more of their freedoms and land and dignity taken from them. To them, it seems insane to condemn Palestinian tactics when the Israelis have killed so many more innocent children. That too is indisputable.

Both sides are right. There’s a passage in Aaron David Miller’s excellent book The Much Too Promised Land that makes this point elegantly. “The prospects of reconciling the interests of an occupied nation with those of a threatened one seemed slim to none,” he says. In many ways, that’s the essential truth of the conflict: The two sides don’t judge themselves similarly. The Israelis see themselves as threatened innocents, not oppressors. The Palestinians see themselves as an occupied and humiliated nation, not aggressors. The Israelis see themselves as inexplicably under attack, and acting only in defense. The Palestinians see themselves as losing a war against a much stronger, and demonstrably more brutal, occupier.

This is all true of Israel / Palestine and an important point to keep in mind when trying to understand the claims and counterclaims of the parties.

What Israel needs is, as Mr. Kristof calls it, tough love. What that means at a more operative level is the U.S. needs to provide Israeli moderates with additional reasons they can point to in opposing Israeli extremists (messianic Jews, settlers, etc.). The Palestinians aren’t the only ones whose country is being destroyed by the extremists in their midst.

In addition to pointing out some salient facts about the nature of the particular dispute in question, this is a perfect real-world example of relativism. Most people think of relativism and think it means amorality, or moral capitulation, or a dispensing-with of any notion of the facts of the matter. But what I think this explanation shows is that relativism is compatible with an objective account of things — or that relativism as an ethical theory is well compartmentalized from any particular metaphysical substratum. And relativism is a theory that provides a very good account of many disputes in the world. People aren’t necessarily in dispute over what is true and what false, or the proper moral criteria. For instance, no one in this situation is necessarily disputing the numbers killed or whether killing is right or wrong. The facts of the matter or the morality of any individual act considered in complete isolation is not in dispute. What is in dispute is the proper context in which to weigh the facts and adjudicate the contending claims of moral priority. It’s a question of interpretation. Different sets of acts of violence become at least plausibly justifiable depending on which gestalt narrative one adopts. Change the total narrative and the moral weight of the various acts shifts around.

This is the way it is in almost all disputes. The rhetoric that people deploy usually very quickly leaves behind particular matters of fact or the morality or immorality of specific acts and it becomes a contest of dueling grand narratives. A conversation about a particular environmental harm becomes one about the tragedy of the commons and evil corporations versus the road to serfdom. A conversation about a reproductive decision becomes one of recidivist patriarchy versus the suicide of Western culture. The fact of the matter is that no one can quite see individuals as individuals and consider their actions as such. Everyone sees all people as deeply embedded in social structures and patterns and duty-bound to speculative forces of society and history.

Bush’s Götzen-Dämmerung; Obama’s Revaluation of All Values

This does even more for me than Hillary Clinton’s drinking habits:

Obama himself went through a period of “devouring” the work of Nietzsche while living in New York. It’s difficult to say what Obama might have absorbed from the German philosopher, mostly because Nietzsche himself is so hard to pin down, but one of Obama’s favorite instructors at Occidental told Mendell that anyone who immersed themselves in his thought would learn “to call everything into question.”

(Miller, Laura, “Barack by the Books,” Salon.com, 7 July 2008)

New York and Nietzsche! Could it be any more élitist? It kinda makes ya see the theme of change in a different light. As Nietzsche said, “… only beginning with me are there hopes again” (Ecce Homo, “Why I am a Destiny,” §1, trans. Walter Kaufmann, 1976).

Sensuous Knowledge

The current issue of The New York Review of Books has an enjoyable essay on Indian eroticism (Dalrymple, William, “India: The Place of Sex,” vol. LV, no. 11, 26 June 2008, pp. 33-36). Alas, everyone prior to a certain era it would seem was possessed of the anti-life of Platonism and the sky cult:

… there has always been a strong tension in Hinduism between the ascetic and the sensual. The poet Bhartrihari, who probably lived in the third century AD, around the time of the composition of the Kamasutra, oscillated no less than seven times between the rigors of the monastic life and the abandon of the sensualist. “There are two paths,” he wrote. “The sages’ religious-devotion, which is lovely because it overflows with the nectarous waters of the knowledge of truth,” and “the lusty undertaking of touching with one’s palm that hidden part in the firm laps of lovely-limbed women, loving women with great expanses of breasts and thighs.”

“Tell us decisively which we ought to attend upon,” he asks in the Shringarashataka. “The sloping sides of wilderness mountains? Or the buttocks of women abounding in passion?”

Of the happier consequences of the death of god, one is that we can dispense with this never really existent dichotomy between the life of the mind and the sensuousness of the body. From beyond such strictures, they seem entirely arbitrary. Their abandonment is the aesthetic-ethical corollary of Kant’s dissolution of the rationalist-empiricist debate. I take it that this is what Nietzsche was getting at when he promulgated a collection of aphorisms under the title The Gay Science, or as it has occasionally been translated, The Joyous Knowledge. I think here of his discussion, as well as my own experience, that one’s best thoughts are often had while in motion.

American Pseudo-Religion; Science and Experience

The title of David Brooks’s op-ed Tuesday, “The Neural Buddhists” (The New York Times, 13 May 2008), sounded cyberpunk and that was enough to entice me to read it. Turns out it’s some comments on the trend in neurological and genetic research toward characterizing the religious tendency and the religious experience. A lot of the editorial is wishful thinking on the part of a religious conservative, but then there’s the musings from which the piece draws its title:

This new wave of research will not seep into the public realm in the form of militant atheism. Instead it will lead to what you might call neural Buddhism.

In their arguments with Christopher Hitchens and Richard Dawkins, the faithful have been defending the existence of God. That was the easy debate. The real challenge is going to come from people who feel the existence of the sacred, but who think that particular religions are just cultural artifacts built on top of universal human traits. It’s going to come from scientists whose beliefs overlap a bit with Buddhism.

I often point out that the fastest growing religion in the U.S. today is not Mormonism or any branch of Christianity, but the poorly conceptualized “spiritual but not religious” (“Teens: Spiritual, But Not Religious,” smarties, 11 January 2005). This isn’t some entirely post-1960s baby-boom or gen-X phenomenon. It is the latest manifestation of a long line of uniquely American religion stretching from the Enlightenment deism of the founding generation to the transcendentalism of the mid-Nineteenth Century to the Progressive era psycho-spirituality of William James. It pulls together an idiosyncratic combination of Christianity, grand historical conspiracy theories à la the Freemasons, various strains of mysticism, yeoman pragmatism, naturalism, popular science, amateur philosophical speculation, do-gooderism, health fads, self-help, popular psychology and positive thinking. It’s all of a piece with American messianism, paranoia, individualism, pragmatism and the melting pot. It’s a little incipient and a little too convenient for the American way of life, having dispensed with the hard truths and the dark side of religion as well as any of the really imposing moral injunctions, but there it is. And Mr. Brooks is right to point out that the best fit for this among the ancient religions is Buddhism.

As for the rest of the article, it’s just the ontological argument for the existence of god without the minor premise. And the refutation is the same today as it was in the Eighteenth Century: you can’t imagine something into existence. A recurrent dream of Pegasus, however deeply felt, is not the existence of Pegasus. Conversely, the Pegasus of the recurrent dream is not what people would mean were they to speak of the existence of Pegasus. The question isn’t whether one has a particular brain experience. People have all manner of experiences, imaginary and not, as well as everything in between — in fact, the vast bulk of human experience probably lies somewhere between the real and the imagined. The question is whether or not a given experience correlates to an existent external state of affairs.

Amidst the natural sciences the question of correlation between a purported experience and a state of affairs external to mind is not something determined in some crass way. “It really happened.” “No it didn’t.” “Yes it did!” There is simply no sense dwelling on a single instance. Scientists discount a sample size of one. If there is too much dispute over a particular instance, simply drop it in favor of further inquiry. Fleeting and unitary experiences are dismissed in scientific practice in favor of what might be called the intersubjective (see e.g. intersubjectivity or intersubjective verifiability), the societal nature of scientific knowledge or a Wittgensteinian denial of a private language in favor of the essentially public nature of our scientific discourses.

For all of Nietzsche’s fretting that the death of god had unchained the Earth from the Sun, religion was every bit as arbitrary and subjective as its adherents today accuse irreligion of being. In the end, the whole of society swings over the abyss on a tether of fundamentally ungrounded beliefs. Science at least has the merit of basing its propositional criteria on egalitarian public discourse. Religion is based on all manner of purportedly private experience — revelations, miracles, conversations with the gods, passions, et cetera — all considered beyond criticism. Some people are chosen, enlightened or who knows what — the plans of the gods are inscrutable — and the rest of us, not so exalted, accept or reject religious belief on the authority of those possessed of such experiences. To those who prefer something more determinate, Jesus reiterates the Deuteronomic injunction, “Do not put the Lord your God to the test” (Matthew 4:7, Deuteronomy 6:16).

This is one of the major divisions between science and religion. Were science to start poking its nose into religious business, the religious person would object that the spiritual is a realm of deeply personal experience, not subject to the critical dissection of all comers. And yet in its public aspect, religious practitioners are expected to take the word of people having had religious experiences. No attempt is made to abstract an experience away from an individual experiencer. Religion believes every obscurantist story that any old quack tells, at least where not condemned by religious authority.

A recognition deeply built into the practice of natural science, even if never properly conceptualized or explicitly taught, is the recognition of the fallibility, or at least the broad diversity in function, of the human mind. The well observed fact of low brain performance, stretching from simple poor judgment, forgetfulness, error, misperception and dishonesty to careerism, optical illusions and dreams, all the way to delusion, mental disorder, group psychology and mass hysteria has been incorporated into the background of scientific practice. In this regard a particular theory of mind is a part of the body of scientific practice. And, importantly, it’s not a complicated theory of mind — though one can pursue it to various levels of sophistication — but rather one built upon rather day-to-day observation of human foibles. I think the books of reference here are not any of the ones that Mr. Brooks lists, but David Linden’s The Accidental Mind: How Brain Evolution Has Given Us Love, Memory, Dreams, and God or Gary Marcus’s Kluge: The Haphazard Construction of the Human Mind.

One doesn’t have to search very far in one’s own life to find examples of how the brain, while a miracle of evolution, only works so well. At least a couple of times a week I experience a random, spasmodic jerk of some extremity. My cube neighbor at work, my brother, my high school physics teacher and a former priest all have facial tics, some rather elaborate, of which I am certain they are completely unaware and which, were they to become aware, they would not be able to control. So-called religious phenomena — feelings of destiny, hearing voices, talking to god, heightened emotional states, impulses, a sense of unity, feelings of disembodiment — are of a piece with this. I don’t deny that religious people have the experiences that they claim. Subjective experiences are experiences nonetheless. What I deny is that such experiences have any greater significance.

Or for that matter there is the even more commonplace matter of difference in perspective. In this sense science is a highly stylized political methodology for producing consensus amidst the rocky shoals of vast differences in human experience.

These commonplace observations are the cause for the emphasis on repeatability and independent verification in scientific practice. It’s not enough for one person to have had an experience, or even for a very large number of people to have shared that experience for it to be established as a scientific fact. The standard for a scientific fact is that it must be something accessible to all; it must be something determinately replicable. A scientific community employs a fairly common engineering method for combating error: given that humans are cheap and plentiful, accommodate for the very low performance of each individual unit of scientific production by performing each task in redundancy. The inaccuracy of any given unit is cancelled out over the span of the entire system.
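The redundancy argument above can be made concrete with a little arithmetic: if each independent unit is right with probability better than chance, the probability that a majority of n units is right climbs rapidly with n. A minimal sketch, with the 0.7 accuracy figure chosen purely for illustration:

```python
from math import comb

def majority_correct(p, n):
    """Probability that a strict majority of n independent
    observers, each individually correct with probability p,
    reaches the correct answer (binomial tail sum)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n // 2) + 1, n + 1))

# A mediocre individual unit (70% accurate) becomes a far more
# reliable system when its task is performed in redundancy:
# majority_correct(0.7, 1) is 0.7, while majority_correct(0.7, 25)
# is roughly 0.98.
```

This is just the Condorcet jury theorem in miniature: the inaccuracy of any given unit is cancelled out over the span of the system, provided the units err independently and each does better than a coin flip.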

This is also the cause for the conservatism in science when it comes to abandonment of a long-standing theory. Nonscientists are fond of pointing out one or two contrary studies or a handful of unexplained mysteries and thinking a major theory overturned. The more efficient explanation is to discount early anomalies as human fallibility. The efficient practice when dealing with a theory propped up by thousands of observations, millions of person hours of labor and the consilience of logically related theories and at the same time a small set of recalcitrant data is to wait and see. That’s not to say that anomalies are dismissed — from economics, to discount something is to calculate the present-day value of something that will potentially be of a different value in the future — they are merely tabled pending additional information. But should the accumulation of anomalies reach a critical mass, they will eventually be widely admitted into the corpus of accepted fact. It’s the other side of the redundancy equation.

“We’re in the middle of a scientific revolution. It’s going to have big cultural effects.” True, just not the ones Mr. Brooks is thinking of. I think that what we’re seeing is essentially Antony Flew’s “Theology and Falsification” playing out on a societal scale. Atheists keep on raising unanswerable objections to religious belief — and not just in polemics, but ubiquitously in the zeitgeist — and religious people are staging a fighting retreat by continually lowering the bar and circumscribing ever more narrowly the propositional territory it is that they are defending. Neural Buddhism, spiritual but not religious — people may continue to profess all manner of confusion on the matter — it’s all a track to an essentially irreligious society.

Searching for Something Secondary on Cioran

When Romanian studies professor Ilinca Zarifopol-Johnston died in 2005 (“Ilinca Zarifopol-Johnston,” 8 April 2005), she left a great deal of work unfinished, among it not one but two books on E.M. Cioran: one a critical biography, the other a personal memoir of their friendship. As I had been anticipating at least the biography for some time when I heard of her death, I was crestfallen.

It turns out that what she had completed was sufficiently far along that a book will come of it. Searching for Cioran will be released in Fall 2008 according to publisher Indiana University Press, January 2009 according to Amazon.com. The book lists her husband as the editor, so I presume that he gathered together what there was and did his best to make a book of it. I also imagine that it will be a diptych in that it will be both the critical biography and the memoir published together. In such a thin field, even an uncompleted work will be a real addition.

Formal Cognition

A few weeks ago I went to the most recent installment of the D.C. Future Salon. The presenter was the organizer, Ben Goertzel of Novamente, and the subject was “Artificial Intelligence and the Semantic Web.” One of the dilemmas that Mr. Goertzel teased out with the notion of a semantic web is that the complexity is conserved: it has to reside either in the software agent or in the content. If it is in the agent, then it can be highly centralized (a few geniuses develop some incredibly sophisticated agents); if it is in the content, then the distributed community of content providers all have to adequately mark up their content in a way that simpler agents can process. Mr. Goertzel is hedging his bets: he is interested both in developing more sophisticated agents and in providing a systematic incentive to users to mark up their content.

In the course of discussing how to encourage users to mark up their own content, Mr. Goertzel listed as one of the problems that most users lack the expertise to do so. “What portion of people are adept at formal cognition? One tenth of one percent? Even that?” I had probably heard the phrase or something like it before, but for whatever reason, this time it leapt out. I had a collection of haphazard thoughts for which this seemed a fitting rubric and I was excited that this may have been a wheel for which no reinvention was required. When I got home I googled “formal cognition” figuring there would be a nice write-up of the concept on Wikipedia, but nothing. I fretted: formal cognition could simply mean machine cognition (computers are formal systems). Maybe it was a Goertzel colloquialism and the only idea behind it was a hazy notion that came together in that moment in Mr. Goertzel’s mind.

Anyway, I like the notion and, absent an already systematized body of thought called formal cognition, here is what I wish it were.

While the majority might continue to think with their gut and their gonads, a certain, narrow, technocratic elite is in the process of assembling the complete catalog of formally cognitive concepts. There is the set that consists of all valid, irreducibly simple algorithms operative in the world along with their application rules covering range, structure of behavior, variants, proximate algorithms, compatibilities, transition rules, exceptions, et cetera. I am going to show my reductionist cards and say that given a complete set of such algorithms, all phenomena in the universe can be subsumed under one of these rules, or of a number acting in conjunction. In addition to there being a complete physis of the world, there is, underlying that, a complete logic of the world.

This reminds me of Kant’s maxim of practical reason that will is a kind of causality and that the free will is the will that is not determined by alien causes, that is, the will that acts according to its own principle, which is reason (see e.g. Groundwork of the Metaphysic of Morals, Part III). It seems to me that a project of delineating the principles of formal cognition is a liberating act insofar as we are casting out the innumerable unconscious inclinations of that dark netherworld of the intuition (gut and gonad), instilled as they were by millennia of survival pressures whose requirements for precision were considerably different from those of a modern technological people, in favor of consciously scrutinized and validated principles of thought.

By way of outstanding example, one might be prone to say that evolution is such a logic. At this point evolution has jumped the bank of its originating field of thought, the life sciences, and begun to infect fields far beyond its origin. It is increasingly recognized today that evolution through natural selection is a logical system, one of the fundamental algorithms of the world, of which the common conception of it as a process of life science is merely one instantiation. Perhaps it was only discovered there first because it is in life phenomena that its operation is most aggressive and obvious. But it is now recognized that any place where replication, deviation and competition are found, an evolutionary dynamic will arise. Some cosmologists even propose a fundamental cosmological role for it as some sort of multiverse evolution would mitigate the anthropic problem (that the universe is strangely tuned to the emergence of intelligent life).

However, note that evolution is a second-order logic that arises in the presence of replication, deviation and competition. It would seem that evolution admits of further decomposition and that it is replication, deviation and competition that are the fundamental algorithms for our catalog. But even these may be slight variations on still more fundamental algorithms. It strikes me that replication might just be a variant of cycle, related perhaps through something like class inheritance or, more mundanely, through composition (I feel unqualified to comment on identity or subspecies among algorithms because it is probably something that should be determined by the mathematical properties of the algorithms).
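The claim that an evolutionary dynamic arises wherever replication, deviation and competition are present can be demonstrated in a few lines of code. A toy sketch (the bit-counting fitness function and every parameter here are arbitrary illustrations of mine, not a model of any real system):

```python
import random

def evolve(pop_size=50, genome_len=20, generations=100, mutation_rate=0.02, seed=0):
    """Replication + deviation + competition, and nothing else.
    Fitness is simply the number of 1-bits in a genome.
    Returns (best fitness in the initial population, best at the end)."""
    rng = random.Random(seed)
    fitness = lambda genome: sum(genome)
    # a random starting population of bitstrings
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    initial_best = max(fitness(g) for g in population)
    for _ in range(generations):
        # competition: the fitter half survives
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # replication with deviation: each survivor copies itself imperfectly
        children = [[1 - bit if rng.random() < mutation_rate else bit
                     for bit in parent]
                    for parent in survivors]
        population = survivors + children
    return initial_best, max(fitness(g) for g in population)
```

Because survivors are carried over unmutated, the best fitness can never decrease; given enough generations of imperfect copying, it climbs toward the maximum. Nothing in the loop mentions biology.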

System. But have I been wrong to stipulate irreducible simplicity as one of the criteria for inclusion in our catalog? The algorithms in which we are interested are more complex than cycle. They are things like induction, slippery-slope, combinatorial optimization or multiplayer games with incomplete information. We have fundamental algorithms and second order or composite algorithms and a network of relations between them. Our catalogue of algorithms is structured.

The thing that I think of most here is Stephen Wolfram’s A New Kind of Science (complete text online | Amazon.com | Wikipedia) in which he describes a systematic catalog of enumerated algorithms, that is, there is an algorithm that could generate the entire catalog of algorithms, one after the other. These algorithms each generate certain complex patterns and as Mr. Wolfram suggests, the algorithms stand behind the phenomena of the material world.
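Wolfram’s enumeration is concrete in the case of the 256 “elementary” one-dimensional cellular automata: the rule number itself encodes the algorithm, so a single loop generates the whole catalog. A sketch (the width and step count are arbitrary choices of mine):

```python
def eca_step(cells, rule):
    """Apply one step of the elementary cellular automaton given by
    `rule` (0-255) to a tuple of 0/1 cells, with wraparound edges."""
    n = len(cells)
    out = []
    for i in range(n):
        # each cell's next state depends on (left neighbor, self, right neighbor)
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # the rule number's bits are a lookup table over the 8 neighborhoods
        out.append((rule >> neighborhood) & 1)
    return tuple(out)

def enumerate_rules(width=11, steps=5):
    """One algorithm generating the whole catalog of algorithms:
    iterate rule numbers 0..255 and run each from a single live cell."""
    seed = tuple(1 if i == width // 2 else 0 for i in range(width))
    catalog = {}
    for rule in range(256):
        row, history = seed, [seed]
        for _ in range(steps):
            row = eca_step(row, rule)
            history.append(row)
        catalog[rule] = history
    return catalog
```

Rule 204, for instance, is the identity map, while rule 110 has been proven Turing complete; the point is that both fall out of the same mechanical enumeration.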

An interesting aside lifted from the Wikipedia page: in his model science becomes a matching problem: rather than reverse engineering our theories from observation, once a phenomenon has been adequately characterized, we simply search the catalogue for the rules corresponding to the phenomenon at hand.

It seems to me that this catalog might be organized according to evolutionary principles. By way of example, I often find myself looking at some particularly swampy-looking plant (this is Washington, D.C.) with an obviously symmetrical growth pattern: say, radial symmetry followed by bilateral symmetry, namely a star pattern of stems with rows of leaves down each side. Think of a fern. Then I see a more modern plant such as a deciduous tree, whose branch growth seems to follow more of a scale symmetry pattern. The fern-like plants look primitive, whereas the deciduous branch patterns look more complex. And one followed the other on the evolutionary trajectory. The fern pattern was one of the first plant structures to emerge following unstructured algae and mosses with their very simple filament structures. The branching patterns of deciduous trees didn’t come along until much later. There are even early trees, like the palm, that are simply a fern thrust up into the air. The reason that fern-like plants predate deciduous trees has to do with the arrangement of logical space. A heuristic traversing logical space encounters the algorithm giving rise to the radial symmetry pattern before it does that of scale symmetry. The heuristic would work the same whether it was encoded in DNA or in binary or in any other instantiation you happen to think of.
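The intuition that the fern pattern lies “earlier” in logical space than the deciduous branching pattern can be made concrete with Lindenmayer systems, the standard rewriting formalism for plant growth. A sketch with illustrative rules (these particular productions are my own toy examples, not botanically derived):

```python
def lsystem(axiom, rules, iterations):
    """Expand an L-system: every symbol is rewritten in parallel each pass.
    Symbols without a rule (brackets, turn commands) pass through unchanged."""
    state = axiom
    for _ in range(iterations):
        state = "".join(rules.get(symbol, symbol) for symbol in state)
    return state

# A fern-like frond: one rule, paired leaves (L) in rows along a stem (F).
fern = lsystem("F", {"F": "F[+L][-L]F"}, 3)

# A tree-like pattern: the growing apex (X) forks recursively;
# '[' and ']' mark where a branch splits off and rejoins the trunk.
tree = lsystem("X", {"X": "F[+X][-X]", "F": "FF"}, 3)
```

Interpreted as turtle-graphics instructions, the first string draws a frond with rows of leaves down each side and the second a recursively branching crown. What differs is not the machinery but which rule the rewriting process happens upon, which is the sense in which one pattern sits nearer than the other in logical space.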

A fantastical symmetry. I’m going to allow myself a completely fantastical aside here — but what are blogs for?

It is slightly problematic to organize the catalogue on evolutionary principles insofar as the algorithms are logical principles and sprang into existence along with space and time. Or perhaps they are somehow more fundamental than the universe itself (see e.g. Leibniz); it is best to avoid the question of whence logic lest one wander off into all sorts of baseless metaphysical speculation. Whatever the case, biological evolution comes onto the scene relatively late in cosmological time. It would seem that the organizing principle of the catalogue would have to be more fundamental than some latter-day epiphenomenon of organic chemistry.

Perhaps the entire network of logic sprang into existence within the realm of possibility all at once, though the emergence of existent phenomena instantiating each rule may have traversed a specific, stepwise path through the catalogue only later. But there isn’t a straightforward, linear trajectory from the simple all the way up to the pinnacle of complexity, but rather an iterative process whereby one medium of evolution advances the program of the instantiation of the possible as far as that particular medium is capable before its potential is exhausted. But just as the limits of its possibilities are reached, it gives way to a new medium that instantiates a new evolutionary cycle. The new evolutionary cycle doesn’t pick up where the previous medium left off, but starts over from zero. As in Ptolemy’s astronomy, there are epicycles and retrograde motion. But the new medium has greater potential than its progenitor and so will advance further before it too eventually runs up against the limits of its potential. So cosmological evolution was only able to produce phenomena as complex as, say, fluid dynamics. But this gave rise to star systems and planets. The geology of the rocky planets has manifested a larger number of patterns, but most importantly life and the most aggressive manifestation of the catalog of algorithms to date, biological evolution. As has been observed, the most complexly structured three pounds of matter in the known universe is the human brain that everyone carries around in their head.

If life-based evolution has proceeded so rapidly and demonstrated so much potential, it is owing to the suppleness of biology. However, the limits of human potential are already within sight, and a new, far more dexterous being, even hungrier to bend matter to logic than biological life ever was, has emerged on the scene: namely the Turing machine, or the computer. This monster of reason is far faster, more fluid and polymorphous, adaptable, durable and precise than us carbon creatures. In a comparable compression of time from cosmos to geology and geology to life, the computer will “climb Mount Improbable,” outstrip its progenitor and explore further bounds of the catalog of logic. One can even imagine a further iteration of this cycle whereby whatever beings of information we bequeath to the process of reason becoming real repeat the cycle: they too reach their limits but give rise to some even more advanced thing capable of instantiating as yet unimagined corners of the catalogue of potential logics.

But there is a symmetry between each instantiation of evolution whereby the system of algorithms was traversed in the same order and along the same pathways. Perhaps it is not only the algorithms themselves that are universal, but also the network whereby they are related. That is to say that perhaps there is an inherent organizing structure within the algorithms themselves, a natural ordering running from simple to complex. Evolution is not the principle by which the catalog is organized, but merely a heuristic algorithm that traverses the network according to that organizing principle. Evolution doesn’t organize the catalog; its operation illuminates the organization of the catalog. Perhaps that is what makes evolution seem so fundamental: that whatever its particular instantiation, it is like running water that flows across a territory defined by the catalogue. Again and again in each new instantiation evolution re-traverses the catalogue. First it did so in energy and matter, then in DNA, then in steel and silicon, now in information.

Anti-System. This is fantastical because, among other reasons, it is well observed that people who are captivated by ideas are all Platonists at heart. I have assiduously been avoiding referring to the algorithms of a system of formal cognition as forms. It all raises the question of whence logic, which, again, is a terrible question.

Of course the notion of formal cognition doesn’t need to be as systematic as what I have laid out so far. Merely a large, unsystematized collection of logically valid methods along with the relevant observations about the limitations, application rules and behaviors of each one would go a significant way toward more reliable reasoning. Perhaps such a thing doesn’t exist at all — I tend towards a certain nominalism, anti-foundationalism and relativism. But the notion of a complete logical space, or a systematic catalog is perhaps like one of Kant’s transcendental illusions — a complete science or moral perfection — the telos, actually attainable or only fantasized, that lures on a certain human endeavor.

Politics. All of this having been said, I remain of the belief that politics is the queen of the sciences. Formal cognition wouldn’t be automated decision making and it could only ever enter into political dialog as decision support or as rhetoric.

As Kant wrote, “Thoughts without content are empty; intuitions without concepts are blind” (Critique of Pure Reason, A51 / B75). Kant developed an idiosyncratic terminology and perhaps another way of phrasing this, more suited to my purpose here, would be to say that formal reason absent empirical data is empty; but that empirical data unsystemized by conceptual apparatus is an incoherent mess. A complete system of the world cannot be worked out a priori and a mere catalogue of all observations about the world would be worse than useless.

Formally cognitive methods must be brought to bear. And against a complex and messy world I do not think that their application will be unproblematic. In passing above, I mentioned the notion of application rules. Each algorithm has attendant rules regarding when it comes into force, for what range of phenomena it is applicable, when it segues to another applicable algorithm, et cetera. Take for instance the notion of the slippery-slope or the snowball. Not all slippery-slopes run all the way to the bottom. Most are punctuated by points of stability along the way, each with its own internal logic as to when some threshold is overcome and the logic of the slippery-slope resumes once more. Or perhaps some slippery-slope may be imagined to run all the way to the bottom (it’s not ruled out by the logic of the situation) but for some empirical reason in fact does not. Once the principles of formal cognition come up against the formidable empirical world, much disputation will ensue.

Then there is the question of different valuation. Two parties entering into a negotiation subscribe to two (or possibly many, many more) different systems of valuation. Even when all parties are in agreement about methods and facts, they place different weights on the various outcomes and bargaining positions on the table. One can imagine formally cognitive methods having a pedagogic effect and causing a convergence of values over time — insofar as values are a peculiar type of conclusion that we draw from experience or social positionality — but the problems of different valuation cannot be quickly evaporated. One might say that the possibly fundamental algorithm of trade-off operating over different systems of valuation goes a long way toward a definition of politics.

Finally, one could hope that an increased use and awareness of formally cognitive methods might have a normative effect on society, bringing an increased proportion of the citizenry into the fold. But I imagine that a majority of people will always remain fickle and quixotic. Right reasoning can always simply be ignored by free agents — as the last seven years of the administration of George W. Bush, famous devotee of the cult of the gut as he is, have amply demonstrated. As an elitist, I am going to say that the bifurcation between an illuminati and the rabble — as well as the historical swings in power between the two — is probably a permanent fixture of the human condition. In short, there will be no panacea for the mess of human affairs. The problem of politics can never be solved, only negated.

The Slipstream Between Absurdity and Profound Beauty

Terri Schiavo finally died today (objectively yesterday, but subjectively today as I am still awake). I am tempted to say that the body of the former person Terri Schiavo finally stopped working today, but it seems a little too party-line.

As this drama has played itself out, a passage from an old article has acted as an interlocutor as I have turned this issue around in my head.

A few years ago The New York Times Magazine published a few-thousand-word essay titled “Unspeakable Conversations” (16 February 2003) by Charleston, South Carolina-based attorney and disabled persons activist Harriet McBryde Johnson.

The article was more human interest than polemical. It was about Ms. Johnson’s acceptance of an invitation from Peter Singer and Princeton University to participate in two forums on infanticide and assisted suicide.

Mr. Singer, if you haven’t heard his name, is a rather famous philosopher focusing on ethics. Asserting that personhood is coterminous with cognition, he has argued in favor of abortion rights, euthanasia, assisted suicide and in some cases infanticide. Oddly enough, he is also a vegetarian and the author of one of the classics of the animal rights movement, Animal Liberation.

The dramatic tension of the essay was that Ms. Johnson is a disabled persons activist and the Professor argues in favor of killing disabled infants at birth. From her perspective, Mr. Singer is a monster. How is one to behave towards a person held in such contempt? Many of her fellow activists encouraged Ms. Johnson not to legitimize Mr. Singer by appearing with him in the forum, but she accepted nonetheless. After her visit, Ms. Johnson’s sister asks her, “You kind of like the monster, don’t you?” She replies, “He’s not exactly a monster. He just has some strange ways of looking at things.”

The article is a well-crafted and interesting piece of writing. The passage from Ms. Johnson’s article to which I give a few minutes of sustained consideration every couple of weeks when it comes to me is an interchange between the Professor and the Attorney:

In the classroom there was a question about keeping alive the unconscious. In response, I told a story about a family I knew as a child, which took loving care of a nonresponsive teenage girl, acting out their unconditional commitment to each other, making all the other children, and me as their visitor, feel safe. This doesn’t satisfy Singer. “Let’s assume we can prove, absolutely, that the individual is totally unconscious and that we can know, absolutely, that the individual will never regain consciousness.”

I see no need to state an objection, with no stenographer present to record it; I’ll play the game and let him continue.

“Assuming all that,” he says, “don’t you think continuing to take care of that individual would be a bit — weird?”

“No. Done right, it could be profoundly beautiful.”

Profoundly beautiful. That is the phrase that has stuck with me. I think that I am firmly in the camp declaring such a use of human resources absurd. Worse than absurd: even granting that preserving someone in a neither-death-nor-yet-life state is morally neutral, pressing their caretakers into empty medical rituals, acts that would be difficult even were the rewards great and obvious, or into meaningless labor, no matter how generously remunerated, is the unbearable pinnacle of absurdity. Doesn’t it devalue life to devote it to so meager an end? The incongruence of the tremendous technological feats and the expenditure of the heights of human ingenuity to no effect whatsoever, in the service of nothing so much as proving a point (a point that could be better made in so many other ways, for life is so cheap in this world) is demoralizing in its own way.

These are all just platitudes. I don’t really have an answer to the proposition of profound beauty. I want to say that there is something gnawing about the statement, but I don’t know if it is the power or the starkness of the statement that provokes me so.

I don’t know how much moral seriousness to accord Ms. Johnson: when presented with her seemingly double standard regarding the value of the lives of animals versus humans in a persistent vegetative state, she cut off Mr. Singer, saying, “Look. I have lived in blissful ignorance all these years, and I’m not prepared to give that up today.” How can one be so morally inflamed about an issue that is obviously of great interest to one’s self, yet insistent on flippant disregard toward all others?

One might think Ms. Johnson a profound moral thinker, but this is only the most outstanding example from the piece of what is a sectarian agenda that gives little consideration to anything beyond its own particularistic and selfish aims. And particularistic and selfish is exactly what Mr. Singer has set himself against.

Despite my criticisms, I cannot recommend the essay enough. As a pissed off liberal I have come to hate the word “nuance,” but the article very clearly complexifies (a neologism, but still better than “nuance”) an issue too often portrayed as simple.