A Cyberpunk Rereading of German Idealism

Pattie Maes's TED talk demonstrating an enhanced reality device, February 2009

Over at SLOG a bit of a conversation is going on about Pattie Maes’s recent TED talk in which she demonstrated what she calls “sixth sense,” but which I would call “augmented reality” (with Pranav Mistry, “Unveiling the ‘Sixth Sense,’ Game-Changing Wearable Tech,” TED, February 2009; Hecht, Anthony, “Holy Freaking Crap,” SLOG, The Stranger, 3 April 2009).

Today Charles Mudede, one of the thinkers to whom I consider myself most close, comments on the significance of Ms. Maes’s innovations along a line similar to my own project (“The Near Future,” SLOG, The Stranger, 8 April 2009):

It’s as if Hegel’s geist in his grand narrative of the history of consciousness, Phenomenology of the Spirit, actually came true. We can laugh at Hegel and his impossible absolute spirit, but we cannot laugh at Pattie Maes and her wearable tech.

For some time now I have been thinking that a cyberpunk rereading of the German Idealists is necessary. I have made a number of posts along this line (see Related Posts below). One of the themes of this blog — one that has emerged accidentally — is the hard materiality of that which we call “ideal”: the degree to which mind is in the world, and not just statically so; the degree to which, in the balance of matter and information, matter is giving ground to information; processes of reification, the “imperialism of information”; the meme, that tool for rendering the study of ideology a material science; and those twain machines which bridge the gap, brains and computers.

My contributions to the project to date:

“The Deus ex Machina of Economic Crisis,” 25 March 2009
“The Noosphere Visualized,” 1 January 2009
“Emergence and Aufhebung (Hegel and the Swarm),” 5 December 2008
“The Day I Became a Hegelian,” 18 August 2008
“Imagination Unmoored,” 8 August 2008

“What is rational is actual, and what is actual is rational”!

Preface to The Philosophy of Right (1821)

CS, AI, T&A

A bit of a discussion broke out at this morning’s session over Shane Legg and Marcus Hutter’s paper, “Universal Intelligence: A Definition of Machine Intelligence” (Minds & Machines, vol. 17, no. 4, 2007, pp. 391-444, arXiv:0712.3329v1). Following the convention of abbreviated reference to a paper by its authors’ last names, and as Hutter is pronounced “hooter,” this paper is referred to as “legs and hooters.” So there was this back and forth: “As the legs and hooters paper shows …” “You should look more carefully at legs and hooters.” “It can be hard to get your head around legs and hooters.” “We shouldn’t rush to embrace legs and hooters.” I exaggerate slightly, but I would imagine that there are better papers than Legg and Hutter’s on the subject of the definition of machine intelligence; it’s just that those other papers get passed over in favor of one granting a computer nerd the opportunity to say “legs and hooters” in all seriousness in front of a room full of people. I’ll bet that Legg and Hutter decided to collaborate on the basis that such a winning name combination guaranteed their rocket-like ascension in the ranking of most oft-cited papers.
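For what it’s worth, the substance of the paper is a single compact definition. As I recall it, Legg and Hutter measure an agent’s universal intelligence as its expected performance across all computable reward-bearing environments, weighted by each environment’s simplicity:

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

where $\pi$ is the agent, $E$ the set of computable environments, $K(\mu)$ the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ the agent’s expected total reward in $\mu$. Simpler environments count for more; do check my recollection against the paper itself before relying on it.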

Disciplinary Normativeness and the Artificial General Intelligence Conference

Ben Goertzel and Jürgen Schmidhuber, Artificial General Intelligence Conference 2009, keynote question and answer, 6 March 2009

S. and I are spending the weekend volunteering at the Artificial General Intelligence Conference 2009. Last night we saw organizer Ben Goertzel’s introductory talk and Jürgen Schmidhuber’s talk on various aspects of intelligence as compression and formalism in AGI (post-talk discussion pictured above, Goertzel on the left, Schmidhuber on the right). Today we attended Cognitive Architectures I and II and the poster session. Matthew Ikle and Ben Goertzel’s discussion of using formal economic models as a means to allocate attention in the OpenCog framework and Eric Baum’s presentation on the relation between evolution and intelligence both blew my mind. I can’t wait for these talks and their attendant slideshows to be up on the website.

For now the most interesting thing about the conference, from the standpoint of a social scientist, is the degree to which the organizer, Ben Goertzel, is a Kuhnian revolutionary disciplinarian. His talk on the challenges of AGI was a perfect demonstration of the problems of a prerevolutionary, or pre-paradigmatic, science. AGI research is currently pre-paradigmatic, and it would be an excellent candidate for a history of science study, as it will probably remain so for many years to come; but its revolution is coming.

It has gradually become clear to me the degree to which Mr. Goertzel is a leader in the field, by which I do not mean his role as an innovator — though he is definitely that — but that he is someone drawing the discipline together from its disparate strands and goading it on toward its proper objectives. The problems that he identified in his opening talk — the lack of a common language, of a dominant model shared by at least a plurality of researchers, of a road map for future problem identification (again, shared by at least a plurality), and of any metric of progress — are all classically Kuhnian problems. The conference obviously serves a number of objectives, many very traditional, such as professional networking and the communication of findings. But unlike what one might expect from a conference of a more mature science, there was a considerable amount of normative, discipline-definitional activity. First is the very conference itself. There is clearly no well-defined research area of artificial general intelligence. The bizarre diffusion of backgrounds and affiliations represented displayed no coherence or institutional establishment. Participants had backgrounds in neurology, cognitive science, anesthesiology, evolutionary biology, bioinformatics, mathematics, logic, computer science and various strands of engineering. This creates the problem of a shared language: people had to be fluent in the languages of multiple disciplines and were mixing and matching, as well as engaging in isolated terminological innovation. People worked as academics, corporate researchers and developers, engineers, entrepreneurs and so on.

Ill-definition means that things don’t cohere, or that what has come together naturally dissipates. It is in this sense that Mr. Goertzel is a disciplinary revolutionary. He really has a personal goal and a vision regarding AGI. At one point in his opening talk he actually delivered a brief lecture to conference participants on the problem of focusing on sub-general-level intelligences for the expedience that they are achievable and money-making, though he admitted culpability in that respect as well. It is also clear how small a clique of researchers constitutes the AGI world, as well as Mr. Goertzel’s position as a hub of its social and intellectual network. During the question and answer, he was able to call on most people in the room by first name. And he is clearly an intellectual celebrity with few peers. As Kuhn argued, non-scientific factors feature more prominently in the definition and direction of a science than the rhetoric of objectivity would lead one to expect.

Group Proprioception Goes Interspecies

Some Seattle artist and I aren’t the only ones who think your pet should be life logging: the British government does too. Reading University has been commissioned to conduct a study of how much wildlife is being destroyed by domestic cats (McKie, Robin, “Special Tags to Measure How Often Cats Kill,” The Observer, 15 February 2009):

“For the first time, cats will be fitted with data loggers that will show their movements, range and behaviour 24 hours a day. We will know when one kills an animal — typically by the way it plays with its prey.

“We will then be able to work out precisely how many animals a cat is killing every year, and from that estimate a national figure. It will be a pretty formidable number.”

Now if they could just get some sort of pattern recognition software to read the live GPS data stream coming off your cat and tweet his kills to your cell phone, then your cat would be twittering too.
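A minimal sketch of how such a kill detector might work, under the (entirely hypothetical) assumption that the “playing with its prey” signature shows up in GPS data as many rapid direction reversals within a small area; the `tweet` function is a stand-in, not a real Twitter API call:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Fix:
    t: float  # seconds since start
    x: float  # metres east of a reference point
    y: float  # metres north of a reference point

def looks_like_a_kill(track, radius=3.0, min_reversals=4):
    """Heuristic: flag a segment of the GPS track as a kill when the cat
    stays inside a small circle while repeatedly reversing direction."""
    if len(track) < 3:
        return False
    # Centroid of the segment; bail out if the cat ranged too widely.
    cx = sum(f.x for f in track) / len(track)
    cy = sum(f.y for f in track) / len(track)
    if any(hypot(f.x - cx, f.y - cy) > radius for f in track):
        return False
    # Count heading reversals (consecutive displacement vectors that
    # point in opposing directions, i.e. negative dot product).
    reversals = 0
    for a, b, c in zip(track, track[1:], track[2:]):
        v1 = (b.x - a.x, b.y - a.y)
        v2 = (c.x - b.x, c.y - b.y)
        if v1[0] * v2[0] + v1[1] * v2[1] < 0:
            reversals += 1
    return reversals >= min_reversals

def tweet(message):
    # Hypothetical stub; a real version would call a Twitter client.
    print(message)

def monitor(track):
    if looks_like_a_kill(track):
        tweet("Your cat appears to have made a kill.")
```

The thresholds are invented for illustration; the Reading study would presumably calibrate its classifier against observed kills.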

Life Logging: Not Just for Human Life Anymore

Not only should you be thinking about life logging, but you should also be thinking about it for your pet (Chansanchai, Athima, “Cooper the Cat Shows His Stuff in Photo Exhibit,” Seattle Post-Intelligencer, 13 February 2009):

For this Seattle cat, photography is his medium, a gift from his “parents” — filmmakers Michael and Deirdre Cross, who gave him a very small and light digital camera that hung from his collar one day a week for a year. It was programmed to take a picture every two minutes.

They wanted the answer to a question many pet lovers have asked themselves: What does he do all day?

He came back with thousands of answers — 16 of which are framed and on display at the Urban Light Studios in the Greenwood Collective. The exhibit opens with a reception tonight as part of the Greenwood Art Walk. The show runs through March 10.

Cooper the cat photographer has a blog dedicated to his exploits at http://cooper-catphotographer.blogspot.com/.

And while you’re at it, you may want to survey your environment for any particularly interesting non-living things (appliances, informational or gameworld agents, et cetera) whose activities you might want to see in your Facebook feed.

Update, 15 September 2011: Cooper the cat photographer’s blog has been relocated. It can now be found at http://www.photographercat.com/.

Machine Ethics

Two great new books on the future of robots, Moral Machines: Teaching Robots Right from Wrong and Wired for War: The Robotics Revolution and Conflict in the 21st Century, are out right now. I’m not going to have time for either, but in the meantime, the New York Times constantly runs articles on this subject, most recently “A Soldier, Taking Orders From Its Ethical Judgment Center” (Dean, Cornelia, 25 November 2008, p. D1). To the list of all the things that robots will be better at than humans, we can add that they will be more ethical than we are:

“My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can,” said Ronald C. Arkin, a computer scientist at Georgia Tech, who is designing software for battlefield robots under contract with the Army.

In a report to the Army last year, Dr. Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, no tendency to lash out in fear. They can be built without anger or recklessness, Dr. Arkin wrote, and they can be made invulnerable to what he called “the psychological problem of ‘scenario fulfillment,’ ” which causes people to absorb new information more easily if it agrees with their pre-existing ideas.

His report drew on a 2006 survey by the surgeon general of the Army, which found that fewer than half of soldiers and marines serving in Iraq said that noncombatants should be treated with dignity and respect, and 17 percent said all civilians should be treated as insurgents. More than one-third said torture was acceptable under some conditions, and fewer than half said they would report a colleague for unethical battlefield behavior.

Troops who were stressed, angry, anxious or mourning lost colleagues or who had handled dead bodies were more likely to say they had mistreated civilian noncombatants, the survey said [Mental Health Advisory Team IV, FINAL REPORT, Office of the Surgeon General, United States Army Medical Command, 17 November 2006].

It is incorrect to imagine machines as behaving more ethically than humans insofar as it construes humans and machines as occupying the same ethical continuum. We may program machines to have human-compatible ethics, but that shouldn’t confuse us; the same ethical prohibitions that apply to us will not apply to robots.

Right and wrong aren’t something floating out there on the other side of the sphere of the stars. Right and wrong are derived from the characteristics of the human body, human tastes and tendencies as endowed in us by our natural history, the structure of the human lifecycle, our conceptions of the good life, shared human experience, and communal mythos. Creatures for whom these factors are different will have different ideas about right and wrong. As the last three items on the list — conceptions of the good life, shared experience, and communal mythos — differ among people, we have different ideas about right and wrong. A creature with a transferable consciousness won’t have an essentialist view of the relation of body to self and hence won’t take moral exception to bodily damage. A creature with a polymorphous consciousness wouldn’t disparage even psychic damage (though the question of identity for such a creature would be even more difficult than it already is for us, elusive as we are).

Creatures with different conceptions interacting have to develop ethical interfaces. The minimalist limitations of rights-based liberalism and the law of nations are to some extent just that: interfaces between differing moral systems — the former an interface for people within a society, the latter between different societies. What an interface between different species, or between different types of life, would look like, I have no idea. Whether such an interface is even possible is perhaps the more pressing question: such interfaces only seem to hold up so well even amidst humans.

Neil Sinhababu, “the Ethical Werewolf,” and Ramesh Ponnuru had a go-round back in 2006 that touched on the ethical status of non-human creatures, but I don’t think it really goes beyond the natural extension of liberalism to different physical morphologies, with which liberalism has an extensive history in the various rights movements. And different physical morphologies is all that aliens and other mythological creatures, as conventionally conceived, are (Sinhababu, Neil, “Mind Matters,” The American Prospect, 23 August 2006; Ponnuru, Ramesh, “Fear Not, Frodo,” National Review Online, 28 August 2006; Sinhababu, Neil, “More on Minds,” TAPPED, 30 August 2006).

The Napoleon Dynamite Problem

After casing Ben Goertzel’s blog today, the point that I find myself really chewing on is this one (“The Increasing Value of Peculiar Intelligence,” The Multiverse According to Ben, 26 November 2008):

What occurs to me is that in a transparent society, there is massive economic value attached to peculiar intelligence. This is because if everyone can see everything else, the best way to gain advantage is to have something that nobody can understand even if they see it. And it’s quite possible that, even if they know that’s your explicit strategy, others can’t really do anything to thwart it.

Yes, a transparent society could decide to outlaw inscrutability. But this would have terrible consequences, because nearly all radical advances are initially inscrutable. Inscrutability is dangerous. But it’s also, almost by definition, the only path to radical growth.

I argued in a recent blog post [“The Inevitable Increase of Irrationality,” 25 November 2008] that part of the cause of the recent financial crisis is the development of financial instruments so complex that they are inscrutable to nearly everyone — so that even if banks play by the rules and operate transparently, they can still trick shareholders (and journalists) because these people can’t understand what they see!

But it seems that this recent issue with banks is just a preliminary glimmering of what’s to come.

Inscrutability, peculiarity, the idiosyncratic are already creeping in. Mr. Goertzel is right to point to the rise of the quants and mathematical finance as an example. The one that comes to mind for me is the Napoleon Dynamite problem.

Netflix has announced a million-dollar prize for anyone who can improve the precision of its recommendation engine by ten percent. The New York Times Magazine and NPR’s On the Media both did stories about it back in November (Thompson, Clive, “If You Liked This, You’re Sure to Love That,” 23 November 2008; Gladstone, Brooke, “Knowing Me, Knowing You,” 21 November 2008). It turns out that improving the quality of this sort of singular value decomposition algorithm becomes geometrically more difficult as it advances. For most movies it is easy to predict whether someone will like or dislike them, but a small number of odd movies thwart the algorithm. Chief among them is Napoleon Dynamite. For the research group profiled in The New York Times piece, Napoleon Dynamite was responsible for a whopping fifteen percent of all recommendation errors. There is no telling, on the basis of people’s past movie rating history, whether or not they’ll like this movie.
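The family of algorithms in question can be sketched in a few lines. What follows is a toy latent-factor model of my own, not any Netflix Prize entrant’s actual code: each rating is approximated as the dot product of a learned user vector and a learned movie vector, trained by stochastic gradient descent. A movie like Napoleon Dynamite is precisely one whose residual error refuses to shrink under such a model.

```python
import random

# Toy data: (user, movie, rating) triples; unobserved pairs are simply absent.
ratings = [
    (0, 0, 5), (0, 1, 4), (1, 0, 4), (1, 1, 5),
    (2, 0, 1), (2, 1, 2), (2, 2, 5),
]

def factorize(triples, n_users, n_items, k=2, steps=3000, lr=0.01, reg=0.05):
    """Plain SGD matrix factorization: learn k-dimensional taste vectors
    for users and movies so that their dot products approximate ratings."""
    rng = random.Random(0)
    U = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(steps):
        for u, i, r in triples:
            pred = sum(U[u][f] * V[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                uf, vf = U[u][f], V[i][f]
                # Gradient step with L2 regularization on both factors.
                U[u][f] += lr * (err * vf - reg * uf)
                V[i][f] += lr * (err * uf - reg * vf)
    return U, V

def predict(U, V, u, i):
    return sum(U[u][f] * V[i][f] for f in range(len(U[u])))
```

On this tiny, consistent data set the model fits easily; the Napoleon Dynamite phenomenon is the opposite case, where no low-dimensional taste vector explains who will like the movie.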

But the Napoleon Dynamite problem isn’t a solitary anomaly; rather, it is the paradigm of a trend. What we have is a Hollywood focused on monster, expensive productions. Increasingly the movies that Hollywood makes are global products, with as much revenue coming from abroad as from the U.S. audience, so Hollywood is careful to strip its movies of any dialogue, humor or situations which are culturally nuanced and might not translate well. The plot and dialogue that we get in big Hollywood movies today are thus only the most broadly recognized and basic cultural tropes. Also, Hollywood has jacked the price of a movie up to the point where viewers now almost universally make a theatre-versus-rental division: big special effects movies that they want to see in the theatre, and dramas for which screen size isn’t a factor. It is a division with a positive feedback loop, in that moviemakers are aware of it and now shape their product offerings around it.

For a particularly depressing take on this, give a listen to Malcolm Gladwell’s 2006 New Yorker Festival talk on the use of machines to produce blockbuster scripts. At the same time that institutions like Netflix are using computers to match customers to movies with increasing efficiency on the consumer end, Hollywood is using computers to make films increasingly easy to pigeonhole and match to demographics on the production side. It’s post-Fordist cultural production perfected. Soon we will be able to take the human out of the equation and the entertainment industry will just garnish our wages.

But there is — as is always the case — a countervailing motion. Just as Hollywood productions become increasingly trite and formulaic, there is the rise of wildly bizarre and idiosyncratic films like Zero Effect, Adaptation, Eternal Sunshine of the Spotless Mind, Lost in Translation, The Royal Tenenbaums, I ♥ Huckabees, Burn After Reading and so on. There is this sort of shadow Hollywood with its own set of stars and directors branding the alt-film genre: Wes Anderson, Charlie Kaufman, the Coen brothers, Catherine Keener, John Malkovich, William H. Macy, Frances McDormand. I would be remiss if I didn’t mention Steve Buscemi here.

What we have is a hollowing out of the middle. Along a spectrum, films range from obscurantia to the formulaic. In the past, most movies probably fell in some broad middle: accessible, but unique. And most movie watchers probably fell there too. But increasingly movies and the movie-watching audience are being polarized into the genre constellations at one end and the difficult-to-categorize peculiarities at the other. Notice that the ambiguity of suspense has been replaced by the spectacle of gore in horror; that the sort of romantic comedy for which Drew Barrymore was designed and built has completely driven the older adult romantic drama to extinction. Similarly, the sort of accessible, quirky artiness represented by Woody Allen has moved much further down the spectrum of the idiosyncratic. The people who didn’t like Woody Allen are utterly baffled by Wes Anderson.

To generalize: hitherto we have been a normal distribution society. The majority of people fall into the broad middle and are closely related. But increasingly we are on the way toward a parabolic, or inverse normal distribution, society, where the preponderance resides at the antipodes and people are separated by wide gulfs. This is true across the cultural spectrum, whether in politics, religion, the professions, and so on. In the United States it is almost happening physically, with the coastal regions swelling as the center of the country is abandoned to satellite-guided tractors and migrant labor. Some might call this the condition of postmodernity; some might call it the dissolution of Western Civilization.
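The distributional claim can be made concrete with a toy illustration of my own construction: a unimodal “normal distribution society” versus a two-camp society with the same mean but an emptied-out middle.

```python
import random
import statistics

rng = random.Random(42)

# A "normal distribution society": most people cluster near the middle.
centrist = [rng.gauss(0.0, 1.0) for _ in range(10_000)]

# A polarized society: same overall mean, but two well-separated camps.
polarized = [rng.gauss(-3.0 if rng.random() < 0.5 else 3.0, 1.0)
             for _ in range(10_000)]

def share_near_middle(xs, width=1.0):
    """Fraction of the population within `width` of the center."""
    return sum(abs(x) < width for x in xs) / len(xs)

# The means are nearly identical, but the middle hollows out: roughly
# two-thirds of the centrist society sits near zero, versus only a few
# percent of the polarized one.
print(statistics.mean(centrist), statistics.mean(polarized))
print(share_near_middle(centrist), share_near_middle(polarized))
```

Averages alone conceal the change; it is the mass at the antipodes, not the mean, that distinguishes the two societies.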