Sociobiology as the Transcendence of Biological Ecology

The central idea of sociobiology is that the emergence of social creatures (herd animals) coincided with the creation of what might be termed a socio-cultural environment. This socio-cultural environment is as much an environment that social creatures inhabit as the material environment. As social creatures evolve, two things happen to the socio-cultural environment:

  1. As with the evolution of species morphology, the maximum complexity of socio-cultural environments increases (there is selection pressure on the entire socio-cultural environment as, say, predators develop ways of thwarting or exploiting the social aspect of their prey and the social species evolve to countervail this development, e.g. parasitic cordyceps and ants; that is, there is species-level selection, since a social characteristic unrecognized by a counterpart comes to nothing; consider this as an analogy for fourth generation warfare).

  2. Subsequent generations of herd animals come to rely ever more heavily upon social cohesion — as opposed to horns, honed perceptive apparatus, efficient digestion, et cetera — as their primary means of survival.

As the socio-cultural environment becomes more sophisticated and intricate and increasingly important as a means of survival, it grows in importance among the universe of factors shaping the evolution of social animals, while the objective geological, hydrological and biological environment recedes in its evolutionary force.

Sexual selection (a type of sociobiological selection, as opposed to natural or Malthusian selection) is the sort of selection pressure that a species faces when its fellows, rather than the environment, become the main challenge to getting its genes into the future. The shifting balance of natural selection and sexual selection in the play of evolutionary forces is meta-evolutionary. Evolution is recursive, with developments in the subjects of evolution backpropagating into the mechanism itself. In this respect every new thing in the universe (or at least in the effective realm) can potentially alter the functioning of the evolutionary dynamic. In this broadened perspective, the idea of machine or meme evolution supplanting biological evolution should not be so surprising.

Among a certain sector of the wildly technologically enthusiastic (among whom I count myself, though Matthew Crawford’s Shop Class as Soulcraft: An Inquiry Into the Value of Work is presently doing a lot to kick the piss out of this pretension), there is a notion that humans are rapidly disencumbering themselves of the material world and constructing for ourselves a world of pure ideas, information, mind-stuff. At some point in human history saber-toothed tigers, virulent microbes, droughts and tar pits ceased to be the primary challenge to humans seeking to survive and reproduce. Such extra-human threats were replaced as the primary danger by human-originating threats such as careening contraptions, shoddy construction techniques, insufficient precaution with the kitchen fire, marauding hoplites, jilted dagger-wielding lovers, corrupt institutions and flawed regimes of succession in governance. Today it is plausible that the human socio-cultural environment has attained a level of preponderance where even an environmental catastrophe on the order of the asteroid strikes that caused the mass extinctions of the past might be thwarted by the constituents of the human socio-cultural environment (on the other hand, the complexity of our socio-cultural environment might be just the sort of run-away biological factor that caused past mass extinctions such as the oxygen catastrophe or the Canfield ocean thesis of the Permian–Triassic extinction event). In this conception, it is usually the information revolution, the invention of the computer — a brain-like device — that is credited with this transcending of matter. But the advent of technology was not the key turning point. The recognition of sociobiology is that this trend is an aspect of evolution; that it long predates not only technology, but even humans. In this way, we are not unique, not the final branch of the tree of life, but only the latest in a succession of forms.

Update, 15 September 2009: It’s worth noting that while computers are not the revolution, nor the source of the revolution, they do form a paradigm, shaping our conceptualizations in ways that allow us to perceive the revolution.

The Positive Side of the Fallacy of Composition

It’s one thing to recognize that the properties of the component aren’t those of the composite. But on the positive side of this seemingly commonplace observation, and in the interstitiary, lies an entire world. The fallacy of composition is perhaps the first and most important recognition on the way to systems science. Characteristics don’t scale in a straightforward or linear way, and emergent properties, complex systems, adaptation and evolution, ecology, intelligence and society[1] are all the positive side of the recognition of the fallacy of composition.

I mention the notion of an interstitiary between components and composites because this distinction is often thought of as discrete and binary. That is, people imagine components and composites and no intermediate states. This is because when people think about the fallacy of composition, they tend to think of a machine-type example, or one of substance and form, e.g. just because an ingot of metal is pretty durable doesn’t mean that a precision machine made of metal parts is too. But emergence-type phenomena are a unique kind of composition not captured in the machine analogy of component and composition.

A machine is made of heterogeneous parts, and that is misleading here. Each part is relatively property-thin; by itself it is just an odd shape. It is only when completely assembled that the function of the whole is manifest. Structure is everything in the machine. Intermediate states between parts strewn on the table and complete assembly accomplish nothing. Minor deviations from prescribed structure accomplish nothing. And the whole is efficient: that is, it is specifically designed to minimize emergent or any other extraneous characteristics. Design and emergence are at odds here.[2]

Emergence, complex systems, et cetera come about from collections of more or less homogeneous corpuscles interacting in a largely unstructured way.[3] The major difference here is that while the machine has no or few remarkable intermediary states, scale is all-important in systems phenomena, which tend to display differing and unique behavior at various scales. And where the machine is highly structure-sensitive, complex systems tend to be robust in their various states.[4]

Continuous variable change

The key is that more is different (nearly the motto of the discipline).[5] The common-sense imagining, prior to the fallacy of composition, is that things scale in a linear way. More means more of the same. Then there is a second order sophistication about scale. People add in the notion of a discontinuity. Things scale until they reach a certain threshold, at which point there is a discontinuity. More is different. But this is still a relatively binary conception with simply two sides of the discontinuity: one side where the logic of linear growth prevails and the other where that of emergence or complexity prevails. There is a third order sophistication about scale where the binary view of second order thinking is still too simple; where the relationship between scale and properties is continuously variable.

As opposed to these simpler first and second order notions, continuously variable growth and development happen through many phases distinguished by differing logics or marked by different epiphenomena (something more like the imaginary graph above). As a system or network gains in members or in interactions, one set of behaviors becomes strained and unstable as it approaches the upper-bound threshold of the given logic. The threshold crossed, the previous logic gives way to a new logic. The pattern repeats, with the new logic holding for a period, but it too peters out in favor of a subsequent logic. And so on. Or perhaps there are tiers of controlling logics operative simultaneously. There may be meta-epi-phenomena as the various transitions admit a pattern of cycling logics or a second order logic to their relations.[6] No behavior or sequence is fundamental to systems, or to the process of growth, or to any particular phenomenon. We should dispense with the idea of a sine qua non behavior of systems in favor of a continuum of potentially many different behaviors as a system grows and changes.[7]
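A toy illustration of this point (my own sketch, not an example from the text): the logistic map is a single rule whose long-run behavior passes through qualitatively distinct regimes (fixed point, 2-cycle, 4-cycle, chaos) as one parameter grows, with no single logic fundamental to it.

```python
# Logistic map x -> r * x * (1 - x): one rule, different prevailing
# "logics" at different parameter values.
def trajectory(r, x0=0.5, warmup=500, keep=8):
    x = x0
    for _ in range(warmup):          # discard transient behavior
        x = r * x * (1 - x)
    seen = []
    for _ in range(keep):            # sample the long-run attractor
        x = r * x * (1 - x)
        seen.append(round(x, 6))
    return sorted(set(seen))

for r in (2.0, 3.2, 3.5, 3.9):
    print(f"r={r}: {len(trajectory(r))} distinct long-run value(s)")
```

At r = 2.0 the system settles on a fixed point, at 3.2 a 2-cycle, at 3.5 a 4-cycle, and by 3.9 it is chaotic: each regime holds over a range of the parameter and then gives way to the next.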

I will offer a few examples to demonstrate the changing relation of prevailing logic and a system phenomenon under growth.

Bak-Tang-Wiesenfeld sandpile growth model

  1. Multiple logics. Consider that paradigmatic example of complexity: grains of sand falling onto a pile (a Bak-Tang-Wiesenfeld sandpile). As grains fall onto the pile, they tumble down the side and come to rest according to a standard distribution. Most come to rest close to the drop point, so the pile grows in height, but only a few roll all the way to the bottom, so the base remains largely the same area and the incline of the sides grows ever steeper. But there is a range of values for which the ratio of base to height is stable. When the pile reaches its upper bound of stability, the pile collapses. But it doesn’t collapse all the way to a layer of sand one grain thick. The collapse comes to a halt once the pile has reached the lower bound of the ratio of base to height (the lower bound being where the slope of the side of the pile is such that friction overcomes gravity and momentum). The pile recommences its growth in height. With a new, broader base having been established, it will achieve a greater height this time before reaching the critical threshold.

    The pile doesn’t grow continuously until collapse, though. It is constantly experiencing local collapse. The sand pile demonstrates scale symmetry insofar as there are slides of all sizes, starting with a single previously stable grain coming loose and tumbling a little further down a side. Single grains frequently start a chain reaction leading to larger slides. The collapse of the whole pile isn’t a phenomenon of its own, unrelated to the logic of local slides, as the catastrophic collapse of the pile is usually the result of a local slide running away to the scale of the entire pile.

    But fractal collapse isn’t a constant. Rather, its probability is a function of where the pile is in the growth-collapse cycle. At the lower bound of the range of stability any collapse is almost impossible. A single grain may break loose and tumble to a more stable location, but owing to the widespread state of stability, it is not likely to set off a chain reaction leading to a large slide. At the top end, catastrophic collapse approaches certainty. A growing proportion of sand grains are in unstable positions. More are likely to break loose and, having broken loose, collide with other unstable grains, setting off chain reactions. As the pile goes through the positive phase of the growth-collapse cycle, it traverses a range of increasing collapse probability from almost impossible to almost certain, and its fractal dimension grows.

    Tally the sheer number of logics at work in this seemingly unremarkable phenomenon. The overall pattern is linear growth — the pile grows in mass in equal units in equal time. The shape of the pile demonstrates an oscillating pattern, between a steep cone and a shallow cone. The height of the pile superficially demonstrates cyclical growth. Upon closer inspection, the growth of the height of the pile turns out to be fractal. But even that fractal growth isn’t constant, but one of increasing fractal dimension.

  2. A complicated interstitiary. A discontinuity is probably always preceded by some sort of turbulence, a breakdown in the previously prevailing relationship. A prevailing logic doesn’t yield instantaneously to its successor. In materials science, at least, this is well documented in the form of the microstructure of phase transitions.

    As a social scientific example, nuclear weapons represent a discontinuity in the growth of military power, but how much of one? Countries had already acquired “city-busting” and potentially civilization-destroying capability using the regular old Newtonian technologies of modern air power and chemical explosives. The U.S. imagined “bombing Vietnam back to the stone age” employing conventional methods. So it would seem that military power (technology) had already crossed an inflection point and gone non-linear in a way that was necessitating major strategic reconsideration prior to the discontinuity of nuclear weapons.

  3. Many discontinuities. People have a tendency to conceptualize consciousness on the model of the binary discontinuity. Neural networks grow in size from tiny nerve bundles in nematode worms up to pre-hominids, and then somewhere in the Pleistocene, whammo, the discontinuity and consciousness, and that’s the pinnacle. Some people imagine the difference between regular people and geniuses to be a little discontinuity, rather than a difference of quantity. Hence the obsession with Einstein’s brain, Descartes’s skull, and so on.

    But I imagine consciousness not as a big bang but as a development with perhaps a number of discontinuities and plateaus — the first reflex, the first pain, the first internal world model, the first declarative memory, the first language of discrete symbols, the first syntax, the first theory of mind — on the way to humans. And our intelligence is just another plateau on the path. There is the singularity somewhere beyond us and still more discontinuities, unimaginable to us, even beyond that. As intelligence is not one thing, I imagine each of the many faculties having its own trajectory, each with multiple discontinuities of its own, the faculties being of uneven development and their synergy and interplay subject to system effects.[8]
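The sandpile dynamics of item 1 above can be sketched in a few lines. The following is a minimal abelian-sandpile simulation (my own illustrative code; the grid size and drop count are arbitrary choices), in which a cell topples when it accumulates four grains, passing one to each neighbor:

```python
import random

# Minimal Bak-Tang-Wiesenfeld sandpile on an n x n grid: a cell topples
# when it holds four or more grains, sending one grain to each neighbor;
# grains that fall off the edge are lost. Returns the avalanche size
# (number of topplings) triggered by dropping one grain at (i, j).
def drop(grid, i, j):
    n = len(grid)
    grid[i][j] += 1
    topplings = 0
    unstable = [(i, j)]
    while unstable:
        x, y = unstable.pop()
        while grid[x][y] >= 4:
            grid[x][y] -= 4
            topplings += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n:
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= 4:
                        unstable.append((nx, ny))
    return topplings

random.seed(0)
n = 20
grid = [[0] * n for _ in range(n)]
sizes = [drop(grid, random.randrange(n), random.randrange(n))
         for _ in range(20000)]
slides = [s for s in sizes if s > 0]
print(f"slides: {len(slides)}, largest: {max(slides)} topplings")
```

Tallying the slide sizes shows the scale symmetry described above: single-grain topplings are by far the most common, but chain reactions produce slides at every scale up to system-wide cascades.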

Instead of relying on the sort of intuitionism that says something like “We know that there has been a discontinuity in strategic thought because we read Thomas Schelling now instead of Helmuth von Moltke,” we should adopt systematic mathematical methods to analyze change — to determine when and where system effects are at work, which ones are at work, the extent of their range, the exact location of inflection points or discontinuities, where the logic of a tipping point takes over and the pace of accelerating assertion of the logic of the new régime; to study turbulence, islands of stability, equilibria, feedback and other patterns that can develop.[9]


  1. I am prone to write “collective intelligence” in place of “society,” but an ancillary point here is that all intelligence is collective (composite), whether it’s a society, so to speak, of individually dumb neurons in the brain of an individual or the more commonplace idea of a society, namely wisdom of crowds, invisible hand, social networking, superorganism type phenomena.

  2. Though not necessarily so: we are learning how to harness emergent properties to our purposes, to design them in.

  3. Though structure might be a concomitant emergent property, e.g. ecologies might exist in equilibrium, but that’s not because competition among species is well ordered.

  4. This category error of misconstruing different component-composition behavior types, or not even identifying the second type at all, perhaps illuminates the overarching issue of intelligent design’s constant irreducible complexity mongering. One of the favorite analogies from their rhetoric is that of the mousetrap: remove a single component and it isn’t a slightly less effective mousetrap, it isn’t an anything, and natural selection can’t function on constellations that display no function. But notice how this relies entirely on the machine type of component-composition thinking. If one instead sees phenotypic expression as an emergent property of genotypic change, then one might shift to the latter version of the component-composition model. A heavily determined trait will be stable over many changes in the underlying genes, until a critical threshold is reached, at which time rapid change in phenotype (the emergent property) could occur. A perfect example of the contrast between the two types of thinking that I am describing here is the version of evolution as the long, slow accumulation of mutation down through the millennia, which sees a more or less consistent, linear change, versus something like punctuated equilibrium, which imagines different logics of change predominating under different circumstances. I would go further and say that punctuated equilibrium is only a theory of discontinuity. A more elaborate theory of the deep-history dynamics of evolution — a macrotheory of evolution, as opposed to the microtheory we have at present — is required.

  5. More is not binary, though more is frequently discrete — neural networks grow in whole neurons, societies grow in whole people. But not always. Systems happen as much in analog as in digital phenomena, e.g. strategy must adapt as power or capability increases, but power does not grow in discrete units.

  6. Meta-epi-phenomena would be something observable only over extremely long timeframes for most phenomena with which we are familiar. Kondratiev cycles (45-60 years) and Forrester waves (200 years) would be examples from economics and sociology. The existence of such long patterns, whose period is outside the scope of our limited record, is part of the explanation for black swan type phenomena. I imagine that there are small models, probably in physics and biology, where the dynamics of such systems could be worked out in detail, then recognized in longer term examples based on partial data. As the patterns of systems theory are largely substratum independent, patterns discovered in one area are relevant to another. A well organized sub-discipline would better allow for the abstraction of higher-order patterns from their original fields of discovery and facilitate interdisciplinary transmission of knowledge. A pattern well characterized in observations of one phenomenon might be revolutionary to another area where the phenomenon is insufficiently well observed to know the entire continuum of behavior, but observed well enough to recognize that it is an instance of the former pattern. Systems-type phenomena are present throughout the various sciences. Just as mathematics is used throughout the sciences, but is consolidated into a discipline of its own, so too systems science should become a better organized and consolidated sub-discipline. As it involves such a diverse range of the sciences, it would be an interdisciplinary field, not one as autonomous as mathematics.

  7. While none is fundamental, there is, presumably, a finite set of possible system patterns. I wrote on the possibility of cataloguing and organizing them in some sort of periodic table of logics in “Formal Cognition,” 1 January 2008.

  8. Animals have developed a variety of different types and levels of intelligence; there’s no reason to think that machine intelligence will be any different.

  9. The problem is that the data sets we have aren’t big enough or granular enough, and are riddled with bad data. With two points you can’t tell linear from geometric growth. With three points you can’t tell geometric growth from linear growth with noise. And of course it gets much worse than this. And we don’t even know how to operationalize a large segment of the really interesting phenomena. How would we quantify something as elusive as national power — something international relations theorists are trying to do all the time — when even the component statistics are a mess?
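Footnote 9’s point about sparse data can be made concrete (hypothetical numbers of my own choosing): two observations are fit exactly by both a linear and a geometric model, and only a further observation separates them.

```python
# Two observations, two models that both fit them exactly.
linear    = lambda x: 2 * x        # y = 2x
geometric = lambda x: 2 ** x       # y = 2^x

observed = [(1, 2), (2, 4)]
for x, y in observed:
    assert linear(x) == y and geometric(x) == y   # indistinguishable

# Only data beyond the observed range separates the hypotheses:
print(linear(10), geometric(10))   # 20 versus 1024
```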

Formal Cognition

A few weeks ago I went to the most recent installment of the D.C. Future Salon. The presenter was the organizer, Ben Goertzel of Novamente, and the subject was “Artificial Intelligence and the Semantic Web.” One of the dilemmas that Mr. Goertzel chased out with the notion of a semantic web is that the complexity is conserved: it has to be either in the software agent or in the content. If it is in the agent, then it can be highly centralized — a few geniuses develop some incredibly sophisticated agents. If it is in the content, then the distributed community of content providers all have to adequately mark up their content in a way that simpler agents can process. Mr. Goertzel is hedging his bets: he is interested both in developing more sophisticated agents and in providing a systematic incentive to users to mark up their content.

In the course of discussing how to encourage users to mark up their own content, Mr. Goertzel listed as one of the problems that most users lack the expertise to do so. “What portion of people are adept at formal cognition? One tenth of one percent? Even that?” I had probably heard the phrase or something like it before, but for whatever reason, this time it leapt out. I had a collection of haphazard thoughts for which this seemed a fitting rubric and I was excited that this may have been a wheel for which no reinvention was required. When I got home I googled “formal cognition” figuring there would be a nice write-up of the concept on Wikipedia, but nothing. I fretted: formal cognition could simply mean machine cognition (computers are formal systems). Maybe it was a Goertzel colloquialism and the only idea behind it was a hazy notion that came together in that moment in Mr. Goertzel’s mind.

Anyway, I like the notion and absent an already systemized body of thought called formal cognition, here is what I wish it was.

While the majority might continue to think with their gut and their gonads, a certain, narrow, technocratic elite is in the process of assembling the complete catalog of formally cognitive concepts. There is the set that consists of all valid, irreducibly simple algorithms operative in the world along with their application rules covering range, structure of behavior, variants, proximate algorithms, compatibilities, transition rules, exceptions, et cetera. I am going to show my reductionist cards and say that given a complete set of such algorithms, all phenomena in the universe can be subsumed under one of these rules, or of a number acting in conjunction. In addition to there being a complete physis of the world, there is, underlying that, a complete logic of the world.

This reminds me of Kant’s maxim of practical reason that will is a kind of causality and that the free will is the will that is not determined by alien causes, that is, the will that acts according to its own principle, which is reason (see e.g. Groundwork of the Metaphysic of Morals, Part III). It seems to me that a project of delineating the principles of formal cognition is a liberating act insofar as we are casting out the innumerable unconscious inclinations of that dark netherworld of the intuition (gut and gonad), instilled as they were by millennia of survival pressures — the requirements for precision of which were considerably different from those of a modern technological people — in favor of consciously scrutinized and validated principles of thought.

By way of outstanding example, one might be prone to say that evolution is such a logic. At this point evolution has jumped the bank of its originating field of thought, the life sciences, and begun to infect fields far beyond its origin. It is increasingly recognized today that evolution through natural selection is a logical system, one of the fundamental algorithms of the world, of which the common conception of it as a process of life science is merely one instantiation. Perhaps it was only discovered there first because it is in life phenomena that its operation is most aggressive and obvious. But it is now recognized that any place where replication, deviation and competition are found, an evolutionary dynamic will arise. Some cosmologists even propose a fundamental cosmological role for it as some sort of multiverse evolution would mitigate the anthropic problem (that the universe is strangely tuned to the emergence of intelligent life).

However, note that evolution is a second order logic that arises in the presence of replication, deviation and competition. It would seem that evolution admits of further decomposition and that it is replication, deviation and competition that are fundamental algorithms for our catalog. But even these may be slight variations on still more fundamental algorithms. It strikes me that replication might just be a variant of cycle, related perhaps through something like class inheritance or, more mundanely, through composition (I feel unqualified to comment on identity or subspecies with algorithms because it is probably something that should be determined by the mathematical properties of algorithms).
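The claim that replication, deviation and competition suffice to produce an evolutionary dynamic can be sketched minimally (all names and parameters here are my own illustrative choices, not a canonical algorithm):

```python
import random

random.seed(1)
GENOME_LEN = 20

def fitness(genome):
    return sum(genome)                      # competition criterion

def mutate(genome, rate=0.05):
    # deviation: each bit flips with a small probability during copying
    return [b ^ (random.random() < rate) for b in genome]

population = [[0] * GENOME_LEN for _ in range(30)]
for generation in range(200):
    # competition: the fitter half survives
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    # replication with deviation: survivors copy themselves imperfectly
    population = survivors + [mutate(g) for g in survivors]

best = max(population, key=fitness)
print(f"best fitness after 200 generations: {fitness(best)}/{GENOME_LEN}")
```

Nothing in the loop mentions biology; supply any substrate with copying, noisy copying, and differential survival and the same dynamic arises, which is the point of the decomposition above.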

System. But have I been wrong to stipulate irreducible simplicity as one of the criteria for inclusion in our catalog? The algorithms in which we are interested are more complex than cycle. They are things like induction, slippery-slope, combinatorial optimization or multiplayer games with incomplete information. We have fundamental algorithms, second order or composite algorithms, and a network of relations between them. Our catalogue of algorithms is structured.

The thing that I think of most here is Stephen Wolfram’s A New Kind of Science (complete text online | Wikipedia) in which he describes a systematic catalog of enumerated algorithms; that is, there is an algorithm that could generate the entire catalog of algorithms, one after the other. These algorithms each generate certain complex patterns and, as Mr. Wolfram suggests, the algorithms stand behind the phenomena of the material world.

An interesting aside lifted from the Wikipedia page: in his model, science becomes a matching problem: rather than reverse-engineering our theories from observation, once a phenomenon has been adequately characterized, we simply search the catalogue for the rules corresponding to the phenomenon at hand.
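Wolfram’s enumeration can be made concrete with the 256 elementary cellular automata, a catalog a trivial loop can generate rule by rule (a standard construction; the display choices here are mine):

```python
# Elementary cellular automaton: each cell's next state is looked up from
# the rule number's bits, indexed by the 3-cell neighborhood read as a
# binary number (left * 4 + center * 2 + right).
def step(cells, rule):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4
                      + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, width=31, steps=15):
    cells = [0] * width
    cells[width // 2] = 1                  # single seed cell
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

for rule in (30, 90, 110):                 # famous entries in the catalog
    rows = run(rule)
    print(f"rule {rule}:")
    print("\n".join("".join("#" if c else "." for c in row)
                    for row in rows[:5]))
```

The integers 0 through 255 enumerate every possible rule of this form, so the catalog is exhaustively generable, and yet it contains members as different as the trivial rule 0, the fractal rule 90, and the famously complex rule 110.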

It seems to me that this catalog might be organized according to evolutionary principles. By way of example, I often find myself looking at some particularly swampy-looking plant — this is Washington, D.C. — with an obviously symmetrical growth pattern — say, radial symmetry followed by bilateral symmetry, namely a star pattern of stems with rows of leaves down each side. Think of a fern. Then I see a more modern plant such as a deciduous tree, whose branch growth pattern seems to follow more of a scale symmetry. The fern-like plants look primitive, whereas the deciduous branch patterns look more complex. And one followed the other on the evolutionary trajectory. The fern pattern was one of the first plant structures to emerge, following unstructured algae and the very simple filament structures of mosses. The branching patterns of deciduous trees didn’t come along until much later. There are even early trees, like the palm, that are simply a fern thrust up into the air. The reason that fern-like plants predate deciduous trees has to do with the arrangement of logical space. A heuristic traversing logical space encounters the algorithm giving rise to the radial symmetry pattern before it does that of scale symmetry. The heuristic would work the same whether it was encoded in DNA or in binary or any other instantiation you happen to think of.
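Growth patterns like these are commonly modeled as L-systems (Lindenmayer systems): rewrite rules applied in parallel to a string of symbols. A sketch using a standard textbook branching rule set (the specific rules are a common example from the L-system literature, not mine):

```python
# An L-system rewrites every symbol of the string simultaneously each
# generation, much as growth happens everywhere on a plant at once.
def lsystem(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A classic branching rule set: X drives growth, F draws a segment,
# + and - turn, [ and ] push/pop a branch (turtle-graphics reading).
rules = {"X": "F[+X]F[-X]+X", "F": "FF"}
for i in range(4):
    s = lsystem("X", rules, i)
    print(f"iteration {i}: {len(s)} symbols")
```

The same rewrite rules produce a fern-like frond or a branching tree depending only on which rules are in the table, which is the sense in which a heuristic traversing a space of such rules would encounter the simpler symmetries first.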

A fantastical symmetry. I’m going to allow myself a completely fantastical aside here — but what are blogs for?

It is slightly problematic to organize the catalogue on evolutionary principles insofar as its entries are logical principles that sprang into existence along with space and time. Or perhaps they are somehow more fundamental than the universe itself (see e.g. Leibniz) — it is best to avoid the question of whence logic, lest one wander off into all sorts of baseless metaphysical speculation. Whatever the case, biological evolution comes onto the scene relatively late in cosmological time. It would seem that the organizing principle of the catalogue would have to be more fundamental than some latter-day epiphenomenon of organic chemistry.

Perhaps the entire network of logic sprang into existence within the realm of possibility all at once, though the emergence of existent phenomena instantiating each rule may have traversed a specific, stepwise path through the catalogue only later. But there isn’t a straightforward, linear trajectory from the simple all the way up to the pinnacle of complexity; rather, there is an iterative process whereby one medium of evolution advances the program of the instantiation of the possible as far as that particular medium is capable before its potential is exhausted. But just as the limits of its possibilities are reached, it gives way to a new medium that instantiates a new evolutionary cycle. The new evolutionary cycle doesn’t pick up where the previous medium left off, but starts all the way from zero. As in Ptolemy’s astronomy, there are epicycles and retrograde motion. But the new medium has greater potential than its progenitor and so will advance further before it too eventually runs up against the limits of its potential. So cosmological evolution was only able to produce phenomena as complex as, say, fluid dynamics. But this gave rise to star systems and planets. The geology of the rocky planets has manifested a larger number of patterns, but most importantly life and the most aggressive manifestation of the catalog of algorithms to date, biological evolution. As has been observed, the most complexly structured three pounds of matter in the known universe is the human brain that everyone carries around in their head.

If life-based evolution has proceeded so rapidly and demonstrated so much potential, it is owing to the suppleness of biology. However, the limits of human potential are already within sight, and a new, far more dexterous being, even more hungry to bend matter to logic than biological life ever was, has emerged on the scene: namely the Turing machine, or the computer. This monster of reason is far faster, more fluid and polymorphous, adaptable, durable and precise than us carbon creatures. In a comparable compression of time from cosmos to geology and geology to life, the computer will “climb Mount Improbable,” outstrip its progenitor and explore further bounds of the catalog of logic. One can even imagine a further iteration of this cycle whereby whatever beings of information we bequeath to the process of reason becoming real repeat the cycle: they too reach their limits but give rise to some even more advanced thing capable of instantiating as yet unimagined corners of the catalogue of potential logics.

But there is a symmetry between each instantiation of evolution whereby the system of algorithms was traversed in the same order and along the same pathways. Perhaps it is not only the algorithms themselves that are universal; perhaps the network whereby they are related is as well. That is to say that perhaps there is an inherent organizing structure within the algorithms themselves, a natural ordering running from simple to complex. Evolution is not the principle by which the catalog is organized, but merely a heuristic algorithm that traverses this network according to that organizing principle. Evolution doesn’t organize the catalog, but its operation illuminates the organization of the catalog. Perhaps that is what makes evolution seem so fundamental: that whatever its particular instantiation, it is like running water that flows across a territory defined by the catalogue. Again and again in each new instantiation evolution re-traverses the catalogue. First it did so in energy and matter, then in DNA, then in steel and silicon, now in information.

Anti-System. This is fantastical because, among other reasons, it is well observed that people who are captivated by ideas are all Platonists at heart. I have assiduously avoided referring to the algorithms of a system of formal cognition as forms. It all raises the question of whence logic comes — which, again, is a terrible question.

Of course the notion of formal cognition doesn’t need to be as systematic as what I have laid out so far. Merely a large, unsystematized collection of logically valid methods, along with the relevant observations about the limitations, application rules and behaviors of each one, would go a significant way toward more reliable reasoning. Perhaps such a thing doesn’t exist at all — I tend towards a certain nominalism, anti-foundationalism and relativism. But the notion of a complete logical space, or a systematic catalogue, is perhaps like one of Kant’s transcendental illusions — a complete science or moral perfection — the telos, actually attainable or only fantasized, that lures on a certain human endeavor.

Politics. All of this having been said, I remain of the belief that politics is the queen of the sciences. Formal cognition wouldn’t be automated decision-making, and it could only ever enter into political dialogue as decision support or as rhetoric.

As Kant wrote, “Thoughts without content are empty; intuitions without concepts are blind” (Critique of Pure Reason, A51 / B75). Kant developed an idiosyncratic terminology, and perhaps another way of phrasing this, more suited to my purpose here, would be to say that formal reason absent empirical data is empty, but empirical data unsystematized by a conceptual apparatus is an incoherent mess. A complete system of the world cannot be worked out a priori, and a mere catalogue of all observations about the world would be worse than useless.

Formally cognitive methods must be brought to bear. And against a complex and messy world, I do not think that their application will be unproblematic. In passing above, I mentioned the notion of application rules. Each algorithm has attendant rules regarding when it comes into force, for what range of phenomena it is applicable, when it segues to another applicable algorithm, et cetera. Take for instance the notion of the slippery-slope or the snowball. Not all slippery-slopes run all the way to the bottom. Most are punctuated by points of stability along the way, each with its own internal logic as to when some threshold is overcome and the logic of the slippery-slope resumes once more. Or perhaps some slippery-slope may be imagined to run all the way to the bottom — it’s not ruled out by the logic of the situation — but for some empirical reason in fact does not. Once the principles of formal cognition come up against the formidable empirical world, much disputation will ensue.
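The punctuated slippery-slope can be sketched as a tiny model. The ledges, their thresholds, and the pressure scale are all invented assumptions; the only point is that the slippery-slope "algorithm" applies just where its application rule is satisfied, and halts at each point of stability otherwise.

```python
# Toy model of a punctuated slippery-slope. Ledges and thresholds are
# hypothetical values chosen purely for illustration.
LEDGES = [("status_quo", 2), ("first_concession", 5), ("second_concession", 9)]
BOTTOM = "bottom"

def slide(pressure):
    """Apply the slippery-slope logic only while its application rule
    (pressure exceeds the ledge's threshold) holds; otherwise stop at
    that ledge, a point of stability with its own internal logic."""
    for name, threshold in LEDGES:
        if pressure <= threshold:
            return name
    return BOTTOM

print(slide(1))    # status_quo
print(slide(4))    # first_concession
print(slide(100))  # bottom
```

Whether any real slope runs all the way to the bottom is then an empirical question about the thresholds, not something the bare logic of the slope decides.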

Then there is the question of different valuation. Two parties entering into a negotiation subscribe to two (or possibly many, many more) different systems of valuation. Even when all parties are in agreement about methods and facts, they place different weights on the various outcomes and bargaining positions on the table. One can imagine formally cognitive methods having a pedagogic effect and causing a convergence of values over time — insofar as values are a peculiar type of conclusion that we draw from experience or social positionality — but the problems of different valuation will not quickly evaporate. One might say that the possibly fundamental algorithm of trade-off, operating over different systems of valuation, goes a long way toward a definition of politics.
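The trade-off algorithm operating over different valuation systems can be shown in miniature. The outcomes, the scoring dimensions, and both parties' weights below are invented for illustration: the parties agree entirely on the facts (the outcome scores) and on the method (weighted sums), yet rank the outcomes differently because their weights differ.

```python
# Shared facts: how each outcome scores on each dimension.
# All numbers are hypothetical, chosen only to illustrate the point.
OUTCOMES = {
    "tariff_cut":      {"jobs": -1, "prices": 2, "prestige": 0},
    "subsidy":         {"jobs": 2,  "prices": -1, "prestige": 0},
    "joint_statement": {"jobs": 0,  "prices": 0,  "prestige": 1},
}

# Divergent valuations: same dimensions, different weights per party.
VALUATIONS = {
    "party_a": {"jobs": 3.0, "prices": 1.0, "prestige": 0.5},
    "party_b": {"jobs": 1.0, "prices": 3.0, "prestige": 2.0},
}

def utility(outcome, weights):
    """A shared, agreed-upon method: a weighted sum over dimensions."""
    return sum(weights[dim] * score for dim, score in outcome.items())

def preferred(party):
    """Each party ranks the same outcomes by its own valuation."""
    weights = VALUATIONS[party]
    return max(OUTCOMES, key=lambda o: utility(OUTCOMES[o], weights))

# Agreement on facts and methods does not yield agreement on ranking.
print(preferred("party_a"))  # subsidy
print(preferred("party_b"))  # tariff_cut
```

Nothing in the shared method adjudicates between the two weight vectors; reconciling them is exactly the residue left over for politics.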

Finally, one could hope that an increased use and awareness of formally cognitive methods might have a normative effect on society, bringing an increased proportion of the citizenry into the fold. But I imagine that a majority of people will always remain fickle and quixotic. Right reasoning can always simply be ignored by free agents — as the last seven years of the administration of George W. Bush, famous devotee of the cult of the gut as he is, have amply demonstrated. As an elitist, I am going to say that the bifurcation between an illuminati and the rabble — as well as the historical swings in power between the two — is probably a permanent fixture of the human condition. In short, there will be no panacea for the mess of human affairs. The problem of politics can never be solved, only negotiated.