Two great new books on the future of robots, Moral Machines: Teaching Robots Right from Wrong and Wired for War: The Robotics Revolution and Conflict in the 21st Century, are out right now. I’m not going to have time for either, but in the meantime, the New York Times constantly runs articles on this subject, most recently “A Soldier, Taking Orders From Its Ethical Judgment Center” (Dean, Cornelia, 25 November 2008, p. D1). To the list of all the things that robots will be better at than humans, we can add that they will be more ethical than we are:
“My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can,” said Ronald C. Arkin, a computer scientist at Georgia Tech, who is designing software for battlefield robots under contract with the Army.
…
In a report to the Army last year, Dr. Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, no tendency to lash out in fear. They can be built without anger or recklessness, Dr. Arkin wrote, and they can be made invulnerable to what he called “the psychological problem of ‘scenario fulfillment,’ ” which causes people to absorb new information more easily if it agrees with their pre-existing ideas.
His report drew on a 2006 survey by the surgeon general of the Army, which found that fewer than half of soldiers and marines serving in Iraq said that noncombatants should be treated with dignity and respect, and 17 percent said all civilians should be treated as insurgents. More than one-third said torture was acceptable under some conditions, and fewer than half said they would report a colleague for unethical battlefield behavior.
Troops who were stressed, angry, anxious or mourning lost colleagues or who had handled dead bodies were more likely to say they had mistreated civilian noncombatants, the survey said [Mental Health Advisory Team IV, FINAL REPORT, Office of the Surgeon General, United States Army Medical Command, 17 November 2006].
It is incorrect to imagine machines behaving more ethically than humans insofar as doing so construes humans and machines as occupying the same ethical continuum. We may program machines to have human-compatible ethics, but that shouldn’t confuse us: the ethical prohibitions that apply to us will not be the ones that apply to robots.
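The point can be put in programming terms. Below is a minimal toy sketch, under the assumption that ethics can be caricatured as a checklist of prohibitions; every name in it is hypothetical, and it is not Arkin’s architecture or any real system. It shows “human-compatible ethics” as constraints bolted onto the machine from our side: the machine can pass every check without occupying our ethical continuum.

```python
# Toy sketch (all names hypothetical, not Arkin's system or any real
# architecture): "human-compatible ethics" as a rule set imposed on the
# machine from the human side. The machine can satisfy every rule
# without sharing the concerns the rules encode.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    description: str
    harms_noncombatant: bool = False
    damages_own_body: bool = False

# Prohibitions written for our sake, not the machine's.
HUMAN_COMPATIBLE_RULES: List[Callable[[Action], bool]] = [
    lambda a: not a.harms_noncombatant,
]
# Note what is absent: a rule against damaging its own body. A machine
# with replaceable hardware has no reason to adopt that prohibition.

def permitted(action: Action) -> bool:
    """An action is permitted only if it violates no imposed rule."""
    return all(rule(action) for rule in HUMAN_COMPATIBLE_RULES)

print(permitted(Action("advance under fire", damages_own_body=True)))  # True
print(permitted(Action("fire on a crowd", harms_noncombatant=True)))   # False
```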
Right and wrong aren’t something floating out there on the other side of the sphere of the stars. Right and wrong are derived from the characteristics of the human body, human tastes and tendencies as endowed in us by our natural history, the structure of the human lifecycle, our conceptions of the good life, shared human experience, and communal mythos. Creatures for whom these factors are different will have different ideas about right and wrong. Because the last three items on the list (conceptions of the good life, shared experience, and the public reference symbols of a communal mythos) differ among people, we already have different ideas about right and wrong among ourselves. A creature with a transferable consciousness won’t have an essentialist view of the relation of body to self and hence won’t take moral exception to bodily damage. A creature with a polymorphous consciousness might not take exception even to psychic damage (though the question of identity for such a creature would be even more difficult than it is for us, elusive as it already is).
Creatures with different conceptions interacting have to develop ethical interfaces. The minimalist limitations of rights-based liberalism and the law of nations are to some extent exactly that: interfaces between differing moral systems, the former an interface for people within a society, the latter one between different societies. What an interface between different species, or between different types of life, would look like, I have no idea. Whether such an interface is even possible is perhaps the more pressing question: even among humans, these interfaces hold up only so well.
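To formalize the metaphor in a toy way (again, all names hypothetical, and assuming a moral system can be caricatured as a set of prohibitions): the interface would be something like the minimal contract both parties can honor, for instance the intersection of what each already prohibits.

```python
# Toy sketch: an "ethical interface" as the overlap between two moral
# systems, each caricatured as a set of prohibitions. Hypothetical names;
# a formalization of the metaphor, not of any actual moral theory.

human_prohibitions = {"kill", "torture", "destroy_body", "deceive"}
transferable_mind_prohibitions = {"delete_backup", "corrupt_memory", "deceive"}

# The interface is only what both systems independently prohibit;
# everything outside it has to be negotiated.
interface = human_prohibitions & transferable_mind_prohibitions
print(interface)  # {'deceive'}
```

The thinness of that intersection is the worry: rights-based liberalism works among humans because the shared set is large; between different types of life it may be nearly empty.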
Neil Sinhababu, “the Ethical Werewolf,” and Ramesh Ponnuru had a go-round back in 2006 that touched on the ethical status of non-human creatures, but I don’t think it really goes beyond the natural extension of liberalism to different physical morphologies, with which liberalism has an extensive history in the various rights movements. And a different physical morphology is all that aliens and other mythological creatures, as conventionally conceived, are (Sinhababu, Neil, “Mind Matters,” The American Prospect, 23 August 2006; Ponnuru, Ramesh, “Fear Not, Frodo,” National Review Online, 28 August 2006; Sinhababu, Neil, “More on Minds,” TAPPED, 30 August 2006).