The First Non-Trivial Cyborg

There are all sorts of cyborgs already among us: my dad has plastic irises, my mom has a metal hip. But these are trivial. A team of researchers at the University of Reading in the United Kingdom has produced the first non-trivial cyborg, a robot controlled entirely by living neural circuitry (“A ‘Frankenrobot’ with a Biological Brain,” Agence France-Presse, 13 August 2008):

… Gordon has a brain composed of 50,000 to 100,000 active neurons. Once removed from rat foetuses and disentangled from each other with an enzyme bath, the specialised nerve cells are laid out in a nutrient-rich medium across an eight-by-eight centimetre (five-by-five inch) array of 60 electrodes.

This “multi-electrode array” (MEA) serves as the interface between living tissue and machine, with the brain sending electrical impulses to drive the wheels of the robots, and receiving impulses delivered by sensors reacting to the environment. Because the brain is living tissue, it must be housed in a special temperature-controlled unit — it communicates with its “body” via a Bluetooth radio link. The robot has no additional control from a human or computer.

From the very start, the neurons get busy. “Within about 24 hours, they start sending out feelers to each other and making connections,” said Warwick. “Within a week we get some spontaneous firings and brain-like activity” similar to what happens in a normal rat — or human — brain, he added. But without external stimulation, the brain will wither and die within a couple of months.

“Now we are looking at how best to teach it to behave in certain ways,” explained Warwick. To some extent, Gordon learns by itself. When it hits a wall, for example, it gets an electrical stimulation from the robot’s sensors. As it confronts similar situations, it learns by habit. To help this process along, the researchers also use different chemicals to reinforce or inhibit the neural pathways that light up during particular actions.

Gordon, in fact, has multiple personalities — several MEA “brains” that the scientists can dock into the robot. “It’s quite funny — you get differences between the brains,” said Warwick. “This one is a bit boisterous and active, while we know another is not going to do what we want it to.” [reparagraphed]

See also Marks, Paul, “Rise of the Rat-Brained Robots,” New Scientist, 13 August 2008, pp. 22-23.
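To make the loop concrete, here is a minimal Python sketch of the architecture the article describes: the robot’s distance sensor triggers stimulation of the culture over the Bluetooth link, and the culture’s firing drives the wheels. Everything specific in it (the class and method names, which electrode gets stimulated, the update rate) is invented for illustration; only the overall structure comes from the article.

```python
# Illustrative sketch only: the loop structure described in the article, with
# made-up class and method names, not the Reading team's actual software.

import time

class MEAInterface:
    """Hypothetical wrapper around the 60-electrode multi-electrode array."""

    def read_firing_rates(self):
        # Sample recent spiking activity per electrode; stubbed out here.
        return [0.0] * 60

    def stimulate(self, electrode, pulses):
        # Deliver a short train of electrical pulses to one electrode.
        pass

class RobotBody:
    """Hypothetical Bluetooth link to the wheeled robot."""

    def read_distance(self):
        # Range, in centimetres, from whatever proximity sensor the robot carries.
        return 100.0

    def set_wheel_speeds(self, left, right):
        pass

def control_loop(mea: MEAInterface, body: RobotBody):
    # Closed loop: sensor events become stimulation, neural activity becomes motion.
    while True:
        # 1. Environment -> brain: an approaching wall triggers stimulation.
        if body.read_distance() < 30.0:
            mea.stimulate(electrode=12, pulses=5)  # electrode choice is arbitrary here

        # 2. Brain -> environment: map activity on two electrode groups to the wheels.
        rates = mea.read_firing_rates()
        left_drive = sum(rates[:30]) / 30.0
        right_drive = sum(rates[30:]) / 30.0
        body.set_wheel_speeds(left_drive, right_drive)

        time.sleep(0.05)  # roughly 20 Hz; a guess at a plausible cycle time
```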

One possibility mentioned, without being made entirely explicit, is that these small brain models will hasten the pace of discovery in brain research. One of the obstacles in neurology is the sheer scale of the problem. With tools like this, neurology becomes considerably more experimental than merely observational. And it potentially unleashes the hacker ethic on the problem: the challenge of creation can be a powerful addition to that of unalloyed comprehension. One wonders when the first trained, rather than remote-controlled, BattleBot will make its debut, or when Survival Research Labs will get in on the act.

It’s also worth noting that the lead scientist on the project is Kevin Warwick of Project Cyborg, and that the team will be writing up some results in the Journal of Neural Engineering. Can you believe that such a journal even exists? Going by work like this, neural engineering will be a growth field.

Enough of the messianism, time for the snark.

1991, Terminator 2: Judgment Day, Linda Hamilton

They should just have made it look more like a T-800 than Wall-E. But when you see research like this, ya gotta wonder if these people have ever watched any of the Terminator films. And I guess the Wall-E-like exterior is necessary for the next round of grants. And if you make it look like a T-800, then some Linda Hamilton / Ted Kaczynski type is going to show up at your door with an AK-47 and a grenade belt across her chest. On the other hand, if I could concoct a plan whereby Linda Hamilton would show up at my door with a grenade belt strapped across her chest, that would be awesome.

The Thinking Cap

Scientific American has an article on how the mind-machine interface is about to go commercial with a wearable EEG game controller that reads your mind (Sergo, Peter, “Head Games: Video Controller Taps into Brain Waves,” Scientific American, 14 April 2008). How’d they do it? Exactly the way the people at Wired would imagine. Rather than developing a series of hard-won, determinate correlations between identified brain waves and intentions, they just brute-forced it: they recorded a gigantic quantity of sample data and processed it with a bank of computers to find the patterns:

Emotiv solved this brain-computer interface problem with the help of a multidisciplinary team that included neuroscientists, who understood the brain at a systems level (rather than individual cells), and computer engineers with a knack for machine learning and pattern recognition. Over the last four years, the company has conducted thousands of EEG recordings on hundreds of volunteers — not all gamers — as they experienced virtual scenarios that elicited various emotions, facial expressions and cognitive demands. The aim was to find a revealing brain activity that many people shared — a needle in a haystack of frenzied signals. Now, the EPOC allows users to fine-tune settings that allow it to pick up on even the subtlest of smirks.

When building these algorithms commenced two years ago, it had taken up to 72 hours for a bank of powerful computers to run through a mere 10 seconds of individual brain data and extract important features. Sorting through a seemingly endless stream of recordings eventually led Emotiv to find consistent signal patterns that revealed specific mental experiences. “Through a large enough sample size,” Le says, “we were able to get some consistency around the population to attain a high degree of confidence that it accurately measures an emotional state.”
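As a rough sketch of what that kind of brute-force pipeline looks like in practice: collect a pile of labelled recordings, reduce each one to a feature vector, and let a generic classifier hunt for whatever is shared across people. The band-power features, the logistic regression, and the random stand-in data below are illustrative choices, not anything Emotiv has disclosed; on random data the accuracy will sit at chance, since the point is only the shape of the pipeline.

```python
# Illustrative sketch only, not Emotiv's pipeline: lots of labelled recordings,
# a generic feature extractor, and an off-the-shelf classifier.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def band_power_features(epoch, fs=128):
    """Average power in a few conventional EEG bands, per channel.

    `epoch` is a (channels, samples) array; the band edges are the usual
    delta/theta/alpha/beta/gamma conventions, nothing Emotiv-specific.
    """
    freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / fs)
    power = np.abs(np.fft.rfft(epoch, axis=1)) ** 2
    bands = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 45)]
    feats = [power[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) for lo, hi in bands]
    return np.concatenate(feats)  # channels * bands features

# Random data standing in for "thousands of recordings on hundreds of volunteers":
# 2000 ten-second epochs of 14-channel EEG, each labelled with the elicited state.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((2000, 14, 1280))
labels = rng.integers(0, 3, size=2000)  # e.g. 0 = calm, 1 = excited, 2 = frustrated

X = np.array([band_power_features(e) for e in epochs])
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```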

And in dispensing with theoretical purity and just going with base correlation, the engineers at Emotiv didn’t even have to concern themselves with the signal-to-noise ratio of the data:

Buch also suspects that the facial expressions that the EPOC detects are based more on the electrical activity of facial and scalp muscles than the brain per se. Although the electrical activity of muscles, he explained, is normally considered as artifact noise that needs to be filtered out to attain clean EEG signals that are of interest, they are still informative about how facial muscles move, such as during a wink. Tan agrees, saying that in their classification strategy some of the EPOC’s detections are based on muscle movements.

It’s all just correlation, and if the noise helps identify the correlation, then it’s just as good as signal. In the petabyte age there is no issue of what counts as the phenomenon under consideration and what doesn’t: any possible interference will be defeated by the sheer size of the data set.
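For example, a wink or smirk detector does not have to scrub out the scalp-muscle activity that an EEG purist would call artifact; it can treat the power in that “noise” band as a feature in its own right. A toy sketch, with a made-up frequency split and threshold rather than anything from the EPOC:

```python
# Illustrative sketch only: treating "artifact" as signal. Scalp-muscle (EMG)
# activity mostly lives above the usual EEG bands, so a conventional pipeline
# filters it out; a facial-expression detector can keep it as a feature instead.

import numpy as np

def frontal_features(epoch, fs=128, emg_cutoff=30.0):
    """Return (low-band power, high-band power) for one frontal channel.

    The 30 Hz split is a rough conventional boundary, not Emotiv's; the high
    band is exactly what would normally be discarded as muscle artifact.
    """
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    eeg_power = power[(freqs >= 1) & (freqs < emg_cutoff)].mean()
    emg_power = power[(freqs >= emg_cutoff) & (freqs < 60)].mean()
    return eeg_power, emg_power

def looks_like_wink(epoch, fs=128, threshold=5.0):
    # Crude detector: a burst of high-band "noise" power relative to the EEG band.
    eeg_power, emg_power = frontal_features(epoch, fs)
    return emg_power > threshold * eeg_power  # threshold is made up for illustration
```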

Now if they would just make a model that looks like this:

Robotech, Rick Hunter in the thinking cap

And maybe have it control an F-14 that transforms into a 50-foot-tall robot instead of stupid games.

The State of Brain-Machine Interface

Given that in the last five days it has been reported that scientists were able to read signals off the brain of a monkey and use the information to control a robotic arm in a useful manner, that Dean Kamen has done the same for humans by reading the signals off nerve endings, and that another team has been able to use fMRI and some adaptive software to guess which noun a subject is thinking of with 77 percent accuracy, I’d say that we’re on the cusp of a sudden leap forward in the field of brain-machine interface, no?

Carey, Benedict, “Monkeys Think, Moving Artificial Arm as Own,” The New York Times, 29 May 2008

Tweney, Dylan, “Dean Kamen’s Robot Arm Grabs More Publicity,” Wired, 29 May 2008

Randerson, James, “Scientists Move a Step Closer to Mind-Reading,” The Guardian, 30 May 2008
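None of these write-ups goes into implementation detail, but the arm-control line of work is usually described as a linear, population-vector style read-out: each recorded motor-cortex cell fires fastest for movement in a preferred direction, and the rate-weighted sum of those directions gives a velocity command for the arm. A toy sketch with invented numbers, not the cited teams’ actual decoders:

```python
# Illustrative sketch only: a population-vector style decoder of the kind
# reported for motor-cortex arm control. Preferred directions, baseline rates,
# and gain are invented; the real studies fit these from recorded data.

import numpy as np

rng = np.random.default_rng(1)
n_cells = 100

# Each recorded cell is assumed to fire fastest for movement in one
# "preferred direction" in 3-D space.
preferred = rng.standard_normal((n_cells, 3))
preferred /= np.linalg.norm(preferred, axis=1, keepdims=True)
baseline = rng.uniform(5.0, 15.0, size=n_cells)  # spikes/s at rest (made up)

def decode_velocity(firing_rates, gain=0.02):
    """Map instantaneous firing rates to a 3-D arm velocity.

    Velocity is the rate-weighted sum of preferred directions, i.e. a plain
    linear read-out; real systems recalibrate these weights continuously.
    """
    modulation = firing_rates - baseline
    return gain * (modulation @ preferred)  # (3,) velocity in arbitrary units

# One simulated time step: cells modulated around baseline drive the arm.
rates_now = baseline + rng.standard_normal(n_cells) * 3.0
print("decoded velocity:", decode_velocity(rates_now))
```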