Sociobiology as the Transcendence of Biological Ecology

The central idea of sociobiology is that the emergence of social creatures (herd animals) coincided with the creation of what might be termed a socio-cultural environment. The socio-cultural environment is as much an environment that social creatures inhabit as the material environment. As social creatures evolve, two things happen to the socio-cultural environment:

  1. As with the evolution of species morphology, the maximum complexity of socio-cultural environments increases (there is selection pressure on the entire socio-cultural environment as, say, predators develop ways of thwarting or exploiting the social aspect of their prey and the social species evolve to countervail this development, e.g. parasitic cordyceps and ants; that is, there is species-level selection, since a social characteristic unrecognized by a counterpart comes to nothing; consider this as an analogy for fourth-generation warfare).

  2. Subsequent generations of herd animals come to rely ever more heavily upon social cohesion — as opposed to horns, honed perceptive apparatus, efficient digestion, et cetera — as their primary means of survival.

As this socio-cultural environment becomes more sophisticated and intricate, and increasingly important as a means of survival, it grows in importance as the universe of factors shaping the evolution of social animals, while the objective geological, hydrological and biological environment recedes in its evolutionary force.

Sexual selection (a type of sociobiological selection, as opposed to natural or Malthusian selection) is the sort of selection pressure that a species faces when its fellows, rather than the environment, become the main challenge to getting its genes into the future. The shifting balance of natural selection and sexual selection in the play of evolutionary forces is meta-evolutionary. Evolution is recursive, with developments in the subjects of evolution backpropagating into the mechanism itself. In this respect every new thing in the universe (or at least in the effective realm) can potentially alter the functioning of the evolutionary dynamic. In this broadened perspective, the idea of machine or meme evolution supplanting biological evolution should not be so surprising.

Among a certain sector of the wildly technologically enthusiastic (among whom I count myself, though Matthew Crawford’s Shop Class as Soulcraft: An Inquiry Into the Value of Work is presently doing a lot to kick the piss out of this pretention), there is a notion that we humans are rapidly disencumbering ourselves of the material world and constructing for ourselves a world of pure ideas, information, mind-stuff. At some point in human history saber-toothed tigers, virulent microbes, droughts and tar pits ceased to be the primary challenge to humans seeking to survive and reproduce. Such extra-human threats were replaced as the primary danger by human-originating threats such as careening contraptions, shoddy construction techniques, insufficient precaution with the kitchen fire, marauding hoplites, jilted dagger-wielding lovers, corrupt institutions and flawed regimes of succession in governance. Matters have reached the point where it is today plausible that the human socio-cultural environment is preponderant enough that even an environmental catastrophe on the order of the asteroid strikes that caused the mass extinctions of the past might be thwarted by the constituents of the human socio-cultural environment (on the other hand, the complexity of our socio-cultural environment might be just the sort of run-away biological factor that caused past mass extinctions such as the oxygen catastrophe or, on the Canfield ocean thesis, the Permian–Triassic extinction event). In this conception, it is usually the information revolution, the invention of the computer — a brain-like device — that is credited with this transcending of matter. But the advent of technology was not the key turning point. The recognition of sociobiology is that this trend is an aspect of evolution; that it long predates not only technology, but even humans. In this way, we are not unique, not the final branch of the tree of life, but only the latest in a succession of forms.

Update, 15 September 2009: It’s worth noting that while computers are not the revolution, nor the source of the revolution, they do form a paradigm, shaping our conceptualizations in ways that allow us to perceive the revolution.

Singulitarian Panic Goes Mainstream

Last week both the New York Times and New Scientist featured articles seriously speculating about the danger posed to humans by intelligent, belligerent robots. This week The Telegraph follows suit (Markoff, John, “Scientists Worry Machines May Outsmart Man,” The New York Times, 26 July 2009, p. A1; Campbell, MacGregor, “Artificial Intelligence Researchers Confront Sci-Fi Scenarios,” New Scientist, 29 July 2009; “Military Killer Robots ‘Could Endanger Civilians’,” The Telegraph, 3 August 2009). Also this week, an older story made the rounds about a Swedish company that was fined 25,000 kronor ($3,000) after a malfunctioning robot lashed out and nearly killed a maintenance worker (“Robot Attacked Swedish Factory Worker,” The Local, 28 April 2009). The prosecutor stated, “I’ve never heard of a robot attacking somebody like this,” but as Matthew Yglesias points out, it’s not just the proletariat that’s under attack: Japanese Prime Minister Junichiro Koizumi was attacked by a robot in 2005 (“Robot Attacks Aren’t Just for Comedy,” ThinkProgress, 29 July 2009; “Robot Attacks Japanese Prime Minister,” we make money not art, 21 August 2005). Finally, not only do we fret and fantasize over disaster, we make satire of it as well, as in this extreme anti-smoking video (Woerner, Meredith, “Smoking: It’s Only Enabling The Machines,” io9, 3 August 2009).

That’s a lot of fretting over robots for one week. Even 25 years after The Terminator (IMDB | Wikipedia), the robot apocalypse has remained a pretty geeky fantasy / disaster, but I would say that, as a destroyer of humanity, it is approaching the level of consciousness occupied by NBC (nuclear, biological, chemical) weapons or CERN accidentally creating a mini black hole. Now if we could just gin up a little more fear over nanotech / grey goo / ecophagy / ice-9.

The Internet is Still Very, Very New

The Stranger “reviews” twitter and makes the obvious, though necessary point (Constant, Paul, “Paul Constant Reviews Twitter,” The Stranger, 30 June 2009):

So I’m going to say something that might strike you as weird and naive, but it’s true. Listen: The internet is still very, very new.

Most people haven’t even been on the internet for 10 years yet. Ten years! Every technology is lawless frontier after just 10 years.

Television was still radio with scenery 10 years after its inception. People pointed, awestruck, at planes 10 years after Kittyhawk.

We’re just learning what the internet can do, and we’ll learn a lot more once children born today grow up with today’s internet.

For the first three years of twitter, it was easy to lampoon the service as the ultimate medium for whining about first world problems. But then the Iranian election happened and overnight it became a tool for unleashing social transformation and an indispensable news medium. The Internet is still new. Many potential services lie as yet unimplemented. Many will at first seem trivial or demeaning of this or that high value (“Is Google making us stupid?”). They will seem so until the moment when they transform into something utterly other than their original intention, specification, design.

Good point aside, can we have no more articles about twitter written entirely in 140-character paragraphs? It was cute at first, but now it’s just gimmicky. It was worth it once for the style of the thing, but now doing so only detracts from your larger point. The 140-character message has its place, and it is not the short-form essay.

Kurzweil Will Die

Apropos the latest Terminator film, The New York Times has a decent rundown of singularitarianism, transhumanism, A.I. and so on that touches on most of the figures in the field (Markoff, John, “The Coming Superbrain,” 24 May 2009). The conclusion:

“Kurzweil will probably die, along with the rest of us, not too long before the ‘great dawn,’” said Gary Bradski, a Silicon Valley roboticist. “Life’s not fair.”

Moses never gets to enter the Promised Land. Such a shame — to be the last generation to die.

A Cyberpunk Rereading of German Idealism

Pattie Maes's TED talk demonstrating an enhanced reality device, February 2009

Over at SLOG a bit of a conversation is going on about Pattie Maes’s recent TED talk in which she demonstrated what she calls “sixth sense,” but what I would call “augmented reality” (with Pranav Mistry, “Unveiling the ‘Sixth Sense,’ Game-Changing Wearable Tech,” TED, February 2009; Hecht, Anthony, “Holy Freaking Crap,” SLOG, The Stranger, 3 April 2009).

Today Charles Mudede, one of the thinkers to whom I consider myself most close, comments on the significance of Ms. Maes’s innovations along a line similar to my own project (“The Near Future,” SLOG, The Stranger, 8 April 2009):

It’s as if Hegel’s geist in his grand narrative of the history of consciousness, Phenomenology of the Spirit, actually came true. We can laugh at Hegel and his impossible absolute spirit, but we cannot laugh at Pattie Maes and her wearable tech.

For some time now I have been thinking that a cyberpunk rereading of the German Idealists is necessary. I have made a number of posts along this line (see Related Posts below). One of the themes of this blog — one that has emerged accidentally — is the hard materiality of that which we call “ideal”: the degree to which mind is in the world, and not just statically so; the degree to which the balance of matter and information is shifting toward information; processes of reification, the “imperialism of information”; the meme, that tool for rendering the study of ideology a material science; and those twain machines which bridge the gap, brains and computers.

My contributions to the project to date:

“The Deus ex Machina of Economic Crisis,” 25 March 2009
“The Noosphere Visualized,” 1 January 2009
“Emergence and Aufhebung (Hegel and the Swarm),” 5 December 2008
“The Day I Became a Hegelian,” 18 August 2008
“Imagination Unmoored,” 8 August 2008

“What is rational is actual, and what is actual is rational”!

Preface to The Philosophy of Right (1821)

CS, AI, T&A

A bit of a discussion broke out at this morning’s session over Shane Legg and Marcus Hutter’s paper, “Universal Intelligence: A Definition of Machine Intelligence” (Minds & Machines, vol. 17, no. 4, 2007, pp. 391–444, arXiv:0712.3329v1). Following the convention of abbreviated reference to a paper by its authors’ last names, and as Hutter is pronounced “hooter,” this paper is referred to as “legs and hooters.” So there was this back and forth: “As the legs and hooters paper shows …” “You should look more carefully at legs and hooters.” “It can be hard to get your head around legs and hooters.” “We shouldn’t rush to embrace legs and hooters.” I exaggerate slightly, but I would imagine that there are better papers than Legg and Hutter’s on the subject of the definition of machine intelligence; it’s just that those other papers get passed over in favor of one granting a computer nerd the opportunity to say “legs and hooters” in all seriousness in front of a room full of people. I’ll bet that Legg and Hutter decided to collaborate on the basis that such a winning name combination guaranteed their rocket-like ascension in the ranking of most oft-cited papers.
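For what it’s worth, the content of the paper is more staid than its nickname. If I have it right, Legg and Hutter define the universal intelligence of an agent as its expected performance summed over all computable environments, weighted so that simpler environments count for more:

```latex
% Legg & Hutter's universal intelligence measure, as I understand it:
% E is the class of computable reward-bearing environments, K(\mu) is the
% Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is the
% expected total reward agent \pi earns in environment \mu.
\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Intuitively, an agent scores high only by doing well across a great many environments, with an Occam’s razor built in: performance in simple environments dominates, since complex ones are discounted by the 2^(-K) weighting.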

Disciplinary Normativeness and the Artificial General Intelligence Conference

Ben Goertzel and Jürgen Schmidhuber, Artificial General Intelligence Conference 2009, keynote question and answer, 6 March 2009

S. and I are spending the weekend volunteering at the Artificial General Intelligence Conference 2009. Last night we saw organizer Ben Goertzel’s introductory talk and Jürgen Schmidhuber’s talk on various aspects of intelligence as compression and formalism in AGI (post-talk discussion, Goertzel left, Schmidhuber to the right). Today we attended Cognitive Architectures I and II and the poster session. Matthew Ikle and Ben Goertzel’s discussion of using formal economic models as a means to generate attention in the OpenCog framework and Eric Baum’s presentation on the relation between evolution and intelligence both blew my mind. I can’t wait for these talks and their attendant slideshows to be up on the website.

For now the most interesting thing about the conference, from the standpoint of a social scientist, is the degree to which the organizer, Ben Goertzel, is a Kuhnian revolutionary disciplinarian. His talk on the challenges of AGI was a perfect demonstration of the problems of prerevolutionary, or pre-paradigmatic, science. AGI research is currently pre-paradigmatic, and it would make an excellent candidate for a history-of-science study, as it will probably remain so for many years to come; but its revolution is coming.

It has gradually become clear to me the degree to which Mr. Goertzel is a leader in the field, by which I do not mean his role as an innovator — though he is definitely that — but that he is someone drawing the discipline together from its disparate strands and goading it on in its proper objectives. The problems that he identified in his opening talk — the lack of a common language, of a dominant model shared by at least a plurality of researchers, of a road-map for future problem identification (again, shared by at least a plurality), and of any metric of progress — are all classically Kuhnian problems. The conference obviously serves a number of objectives, many very traditional, such as professional networking and facilitation of communication of findings. But unlike what one might expect from a conference of a more mature science, there was a considerable amount of normative, discipline-definitional activity. First is the very conference itself. There is clearly no well-defined research area of artificial general intelligence. The bizarre diffusion of backgrounds and affiliations represented displayed no coherence or institutional establishment. Participants had backgrounds in neurology, cognitive science, anesthesiology, evolutionary biology, bioinformatics, mathematics, logic, computer science and various strands of engineering. This creates the problem of a shared language: people had to be fluent in the languages of multiple disciplines and were mixing and matching as well as engaging in isolated terminological innovation. People worked as academics, corporate researchers and developers, engineers, entrepreneurs and so on.

Ill-definition means that things don’t cohere, or that what has come together naturally dissipates. It is in this sense that Mr. Goertzel is a disciplinary revolutionary. He really has a personal goal and a vision regarding AGI. At one point in his opening talk he actually delivered a brief lecture to conference participants on the problem of focusing on sub-general-level intelligences for the expedience that they are achievable and money-making, though he admitted culpability in that respect as well. It is also clear what a small clique of researchers constitutes the AGI world, as well as Mr. Goertzel’s position as a hub of the social and intellectual network. During the question and answer, he was able to call on most people in the room by first name. And he is clearly an intellectual celebrity with few peers. As Kuhn argued, non-scientific factors feature more prominently in the definition and direction of a science than the rhetoric of objectivity would lead one to expect.

Group Proprioception Goes Interspecies

Some Seattle artist and I aren’t the only ones who think your pet should be life logging: the British government does too. Reading University has been commissioned to conduct a study of how much wildlife is being destroyed by domestic cats (McKie, Robin, “Special Tags to Measure How Often Cats Kill,” The Observer, 15 February 2009):

“For the first time, cats will be fitted with data loggers that will show their movements, range and behaviour 24 hours a day. We will know when one kills an animal — typically by the way it plays with its prey.

“We will then be able to work out precisely how many animals a cat is killing every year, and from that estimate a national figure. It will be a pretty formidable number.”

Now if they could just get some sort of pattern recognition software to read the live GPS data stream coming off your cat and tweet his kills to your cell phone, then your cat would be twittering too.
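That pipeline is easier to caricature in code than to build. Here is a minimal sketch (the function names, thresholds and the kill heuristic itself are my own invention, not anything from the Reading study) that flags a window of GPS fixes as a possible kill when the cat stays confined to a small circle, unlike a cat travelling, while covering a lot of ground inside it, unlike a cat sleeping:

```python
import math

EARTH_RADIUS_M = 6_371_000


def dist_m(p, q):
    """Approximate distance in metres between two (lat, lon) fixes.

    Equirectangular approximation; accurate enough at back-garden scale.
    """
    mean_lat = math.radians((p[0] + q[0]) / 2)
    dx = math.radians(q[1] - p[1]) * math.cos(mean_lat) * EARTH_RADIUS_M
    dy = math.radians(q[0] - p[0]) * EARTH_RADIUS_M
    return math.hypot(dx, dy)


def looks_like_kill(track, radius_m=5.0, min_path_m=30.0):
    """Guess whether a window of GPS fixes shows a cat playing with prey.

    Heuristic: every fix lies within radius_m of the window's centroid
    (rules out travelling), yet the total path length exceeds min_path_m
    (rules out sleeping). Thresholds are pure guesswork.
    """
    if len(track) < 2:
        return False
    centroid = (sum(p[0] for p in track) / len(track),
                sum(p[1] for p in track) / len(track))
    confined = all(dist_m(p, centroid) <= radius_m for p in track)
    darting = sum(dist_m(track[i], track[i + 1])
                  for i in range(len(track) - 1)) >= min_path_m
    return confined and darting
```

In practice you would run this over a sliding window of the live stream and hand any positives off to whatever posts the tweet; the interesting engineering is all in picking thresholds that don’t mistake a nap in the sun for a murder.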

Life Logging: Not Just for Human Life Anymore

Not only should you be thinking about life logging, but you should also be thinking about it for your pet (Chansanchai, Athima, “Cooper the Cat Shows His Stuff in Photo Exhibit,” Seattle Post-Intelligencer, 13 February 2009):

For this Seattle cat, photography is his medium, a gift from his “parents” — filmmakers Michael and Deirdre Cross, who gave him a very small and light digital camera that hung from his collar one day a week for a year. It was programmed to take a picture every two minutes.

They wanted the answer to a question many pet lovers have asked themselves: What does he do all day?

He came back with thousands of answers — 16 of which are framed and on display at the Urban Light Studios in the Greenwood Collective. The exhibit opens with a reception tonight as part of the Greenwood Art Walk. The show runs through March 10.

Cooper the cat photographer has a blog dedicated to his exploits at http://cooper-catphotographer.blogspot.com/.

And while you’re at it, you may want to survey your environment for any particularly interesting non-living things (appliances, informational or gameworld agents, et cetera) whose activities you might want to see in your Facebook feed.

Update, 15 September 2011: Cooper the cat photographer’s blog has been relocated. It can now be found at http://www.photographercat.com/.

Machine Ethics

Two great new books on the future of robots, Moral Machines: Teaching Robots Right from Wrong and Wired for War: The Robotics Revolution and Conflict in the 21st Century, are out right now. I’m not going to have time for either, but in the meantime, the New York Times constantly runs articles on this subject, most recently “A Soldier, Taking Orders From Its Ethical Judgment Center” (Dean, Cornelia, 25 November 2008, p. D1). To the list of all the things that robots will be better at than humans, we can add that they will be more ethical than us:

“My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can,” said Ronald C. Arkin, a computer scientist at Georgia Tech, who is designing software for battlefield robots under contract with the Army.

In a report to the Army last year, Dr. Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, no tendency to lash out in fear. They can be built without anger or recklessness, Dr. Arkin wrote, and they can be made invulnerable to what he called “the psychological problem of ‘scenario fulfillment,’ ” which causes people to absorb new information more easily if it agrees with their pre-existing ideas.

His report drew on a 2006 survey by the surgeon general of the Army, which found that fewer than half of soldiers and marines serving in Iraq said that noncombatants should be treated with dignity and respect, and 17 percent said all civilians should be treated as insurgents. More than one-third said torture was acceptable under some conditions, and fewer than half said they would report a colleague for unethical battlefield behavior.

Troops who were stressed, angry, anxious or mourning lost colleagues or who had handled dead bodies were more likely to say they had mistreated civilian noncombatants, the survey said [Mental Health Advisory Team IV, FINAL REPORT, Office of the Surgeon General, United States Army Medical Command, 17 November 2006].

It is incorrect to imagine machines behaving more ethically than humans, insofar as that construes humans and machines as occupying the same ethical continuum. We may program machines to have human-compatible ethics, but that shouldn’t confuse us; the same ethical prohibitions that apply to us will not apply to robots.

Right and wrong aren’t something floating out there on the other side of the sphere of the stars. Right and wrong are derived from the characteristics of the human body, human tastes and tendencies as endowed in us by our natural history, the structure of the human lifecycle, our conceptions of the good life, shared human experience, and communal mythos. Creatures for whom these factors are different will have different ideas about right and wrong. As the last three items on the list — conceptions of the good life, shared experience and communal mythos, our public reference symbols — differ among people, we have different ideas about right and wrong. A creature with a transferable consciousness won’t have an essentialist view of the relation of body to self and hence won’t take moral exception to bodily damage. A creature with a polymorphous consciousness wouldn’t disparage even psychic damage (though the question of identity for such a creature would be even more difficult than it already is with us, elusive as we are).

Creatures with different conceptions interacting have to develop ethical interfaces. The minimalist limitations of rights-based liberalism and the law of nations are to some extent that: interfaces between differing moral systems — the former an interface between people within a society, the latter between different societies. What an interface between different species, or between different types of life, would look like, I have no idea. Whether such an interface is even possible is perhaps the more pressing question: such interfaces only hold up so well even amidst humans.

Neil Sinhababu, “the Ethical Werewolf,” and Ramesh Ponnuru had a go-round back in 2006 that touched on the ethical status of non-human creatures, but I don’t think it really goes beyond the natural extension of liberalism to different physical morphologies, with which liberalism has an extensive history in the various rights movements. And different physical morphologies are all that aliens and other mythological creatures, as conventionally conceived, amount to (Sinhababu, Neil, “Mind Matters,” The American Prospect, 23 August 2006; Ponnuru, Ramesh, “Fear Not, Frodo,” National Review Online, 28 August 2006; Sinhababu, Neil, “More on Minds,” TAPPED, 30 August 2006).