To the left, the Venus of Willendorf, a 10.8 cm high stone statuette dating from the Paleolithic era, 30-27 kyr. To the right, the first anthropomorphic design for an atmospheric diving suit, built by the Carmagnolle brothers of Marseilles, France in 1882.
The Carmagnolle brothers' diving suit is the stuff of steampunk fantasy, but perhaps so situating it is time-out-of-joint in the wrong direction. Instead of designating it as a paleo-future, it is more properly a parachronism: perhaps we should call it the Venus (Ares) of Marseilles (with the Venus of Willendorf as old stone punk).
Seeing what my partner sees might make me never want to have sex again. I am reminded of Steve’s disquisition on “nether freckling” from Coupling s3:e5: “There are … angles. In a relationship you get seen from certain … angles.”
But more seriously, total immersion in the experience of another ranks along with immortality, total recall, omniscience, radical subjectivity, demediation of desire and fulfillment and a few others as ultimate goals of technology.
What I’m saying is that I think technology is teleological. The evolution of technology is not a random walk or a function of reachability or the traversal of a dependency network. Well, it is all those things, but it is not only those things. There are ends or extreme outer limits toward which technology is evolving. I think I listed a few off-the-cuff. Some systematic and dedicated attention to a fuller list is warranted.
But whence do the ends come? As I have framed them, they could be construed as merely the desiderata of the human makers of technology (technology has no end of its own: they are bestowed by its makers). But perhaps technology as a continuation of life, as a fourth domain, inherits these ends. Or perhaps these ends admit of a more objective formulation: eternity instead of immortality, idealist anarchy for radical subjectivity. Or perhaps, for Kantian cyborgs, they are the transcendental illusions of technology.
Also, as if my digression hasn't already gone far enough, there's this as a longing to supersede individuation:
“This kind of innovative thinking is exactly what we need more of when it comes to finding solutions to human-wildlife conflict,” says Alison Kock, research manager of Shark Spotters, apparently a shark social media PR firm in South Africa (Alan Yu, “More Than 300 Sharks In Australia Are Now On Twitter“, NPR, 1 January 2014). This is, of course, the sort of utopian thinking rampant among tech enthusiasts. But it’s only a matter of time before these vital shark voices are shouted down by a bunch of galeophobes, reminding us of Internet Rule 14.1: Don’t chum the trolls.
There is constant talk of the Internet of Things, about how all our devices are getting on-line and being internetworked with one another. But what about an Internet of All Creatures Great and Small? Instead of just us humans getting on-line, living in augmented reality, having brain computer interfaces, being enhanced by cognitive prostheses, we need to get all the other animals on-line too.
And look, we humans can’t stave off the robot apocalypse by ourselves. Presumably SkyNet and the Matrix don’t stop with the humans. Both SkyNet and the Matrix blot out the sun as a tactic. The other animals have a stake in the outcome of this as well. We animals need to pull together. Maybe even the plants too. Already we’re developing thought-controlled power suits for monkeys and cockroaches that can interface with your iPhone (there’s an app for that). Raspberry crazy ants can detect electromagnetic fields and already have a vendetta against electronics. Now we just need to upgrade them to Raspberry Pi Ants. Maybe once we get this computation using protein folding and DNA, Craig Venter will engineer protozoa and yeast with wireless access and IP addresses to fight the nanobots.
The Google Glass feature that (almost) no one is talking about is the experience not of the user, but of everyone other than the user. A tweet by David Yee introduces it well:
There is a kid wearing Google Glasses at this restaurant which, until just now, used to be my favorite spot.
The key experiential question of Google Glass isn’t what it’s like to wear them, it’s what it’s like to be around someone else who’s wearing them. I’ll give an easy example. Your one-on-one conversation with someone wearing Google Glass is likely to be annoying, because you’ll suspect that you don’t have their undivided attention. And you can’t comfortably ask them to take the glasses off (especially when, inevitably, the device is integrated into prescription lenses). Finally (and here’s where the problems really start) you don’t know if they’re taking a video of you.
Now pretend you don’t know a single person who wears Google Glass … and take a walk outside. Anywhere you go in public (any store, any sidewalk, any bus or subway) you’re liable to be recorded: audio and video. Fifty people on the bus might be Glassless, but if a single person wearing Glass gets on, you and all 49 other passengers could be recorded. Not just for a temporary throwaway video buffer, like a security camera, but recorded, stored permanently, and shared to the world.
Now, I know the response: “I’m recorded by security cameras all day, it doesn’t bother me, what’s the difference?” Hear me out; I’m not done. What makes Glass so unique is that it’s a Google project. And Google has the capacity to combine Glass with other technologies it owns.
First, take the video feeds from every Google Glass headset, worn by users worldwide. Regardless of whether video is only recorded temporarily, as in the first version of Glass, or always-on, as is certainly possible in future versions, the video all streams into Google’s own cloud of servers. Now add in facial recognition and the identity database that Google is building within Google Plus (with an emphasis on people’s accurate, real-world names): Google’s servers can process video files, at their leisure, to attempt identification on every person appearing in every video. And if Google Plus doesn’t sound like much, note that Mark Zuckerberg has already pledged that Facebook will develop apps for Glass.
Finally, consider the speech-to-text software that Google already employs, both in its servers and on the Glass devices themselves. Any audio in a video could, technically speaking, be converted to text, tagged to the individual who spoke it, and made fully searchable within Google’s search index.
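The combination described in the last two paragraphs amounts to a simple pipeline: video in, faces identified, speech transcribed, everything indexed. Here is a minimal sketch of that shape in Python. Every name in it is an invented placeholder standing in for the services the excerpt describes; none of this is a real Google API.

```python
# Hypothetical sketch of the pipeline described above: ingest a clip,
# identify faces, transcribe speech, and return a searchable record.
# All function and service names are invented placeholders.

from dataclasses import dataclass, field


@dataclass
class IndexedClip:
    video_id: str
    people: list = field(default_factory=list)      # identities from face recognition
    transcript: list = field(default_factory=list)  # (speaker, text) pairs


def process_clip(video_id, frames, audio, recognize_face, transcribe):
    """Tag every face and utterance in a clip, then return an indexable record."""
    clip = IndexedClip(video_id)
    for frame in frames:
        for face in frame:                  # each detected face region in the frame
            identity = recognize_face(face)
            if identity and identity not in clip.people:
                clip.people.append(identity)
    for speaker, segment in audio:          # audio assumed already split by speaker
        clip.transcript.append((speaker, transcribe(segment)))
    return clip


# Toy stand-ins for the recognition and transcription services:
clip = process_clip(
    "bus-ride-001",
    frames=[["face-a"], ["face-a", "face-b"]],
    audio=[("Alice", "seg-1")],
    recognize_face={"face-a": "Alice", "face-b": "Bob"}.get,
    transcribe={"seg-1": "see you at six"}.get,
)
print(clip.people)      # ['Alice', 'Bob']
print(clip.transcript)  # [('Alice', 'see you at six')]
```

The unsettling part of the excerpt's argument is visible even in the toy: once `recognize_face` and `transcribe` exist as services, the indexing step is trivial glue code that can run long after the recording was made.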
Now our stage is set: not for what will happen, necessarily, but what I just want to point out could technically happen, by combining tools already available within Google.
Let’s return to the bus ride. It’s not a stretch to imagine that you could immediately be identified by that Google Glass user who gets on the bus and turns the camera toward you. Anything you say within earshot could be recorded, associated with the text, and tagged to your online identity. And stored in Google’s search index. Permanently.
I’m still not done.
The really interesting aspect is that all of the indexing, tagging, and storage could happen without the Google Glass user even requesting it. Any video taken by any Google Glass, anywhere, is likely to be stored on Google servers, where any post-processing (facial recognition, speech-to-text, etc.) could happen at the later request of Google, or any other corporate or governmental body, at any point in the future.
Remember when people were kind of creeped out by that car Google drove around to take pictures of your house? Most people got over it, because they got a nice StreetView feature in Google Maps as a result.
Google Glass is like one camera car for each of the thousands, possibly millions, of people who will wear the device every single day, everywhere they go: on sidewalks, into restaurants, up elevators, around your office, into your home. From now on, starting today, anywhere you go within range of a Google Glass device, everything you do could be recorded and uploaded to Google’s cloud, and stored there for the rest of your life. You won’t know if you’re being recorded or not; and even if you do, you’ll have no way to stop it.
And that, my friends, is the experience that Google Glass creates. That is the experience we should be thinking about. The most important Google Glass experience is not the user experience; it’s the experience of everyone else. The experience of being a citizen, in public, is about to change.
I guess I will temper my enthusiasm for life logging accordingly.
There are two major points (one lesser, one greater) to Neal Stephenson’s In the Beginning … was the Command Line (Wikipedia | Amazon). The lesser point serves to feed the superiority complex of computer geeks, namely that people who work closer to the machine are more awesome than people who work with their machines through layers of mediation. The greater point is that maintaining a high degree of control of the devices that shape our lives is a critical element of freedom in the information age.
It’s not about turning your nose up at GUIs and other user-friendly efforts in favor of arcane monochrome text interfaces. The point is that when you cede control of the devices that comprise your environment, the devices that serve as the basis of your personal capabilities, when you cede these to manufacturers, marketers, designers, content providers, legislators, then the limits they seek to impose become your limits as well.
It is in extending and amplifying this point that I think Cory Doctorow’s talk, “The Coming War on General Computation,” is so important (28th Chaos Communication Congress, Berliner Congress Center, Berlin, Germany, 27 December 2011).
You should definitely watch the whole thing: it’s entertaining as well as one of the most cogent talks I’ve heard in some time. To me his outstanding points are two:
1. The never-ending desire for a certain kind of ease of use that comes through circumscribed functionality is an invitation to, a kind of lazy collusion with, the likes of Apple who are more than happy to sell you a device hobbled in a way that maximizes corporate returns (the walled garden):
So today we have marketing departments who say things like “we don’t need computers, we need … appliances. Make me a computer that doesn’t run every program, just a program that does this specialized task, like streaming audio, or routing packets, or playing Xbox games, and make sure it doesn’t run programs that I haven’t authorized that might undermine our profits”. And on the surface, this seems like a reasonable idea (just a program that does one specialized task); after all, we can put an electric motor in a blender, and we can install a motor in a dishwasher, and we don’t worry if it’s still possible to run a dishwashing program in a blender. But that’s not what we do when we turn a computer into an appliance. We’re not making a computer that runs only the “appliance” app; we’re making a computer that can run every program, but which uses some combination of rootkits, spyware, and code-signing to prevent the user from knowing which processes are running, from installing her own software, and from terminating processes that she doesn’t want. In other words, an appliance is not a stripped-down computer; it is a fully functional computer with spyware on it out of the box.
2. Media copyright is just the tip of the iceberg when it comes to the incentive of corporations to turn to political-legislative attempts to prevent the disruptions to their business models that result from technological change:
And even this is a shadow of what is to come. After all, this was the year in which we saw the debut of open-sourced shape files for converting AR-15s to full automatic. This was the year of crowd-funded open-sourced hardware for gene sequencing. And while 3D printing will give rise to plenty of trivial complaints, there will be judges in the American South and Mullahs in Iran who will lose their minds over people in their jurisdiction printing out sex toys. The trajectory of 3D printing will most certainly raise real grievances, from solid state meth labs, to ceramic knives.
And it doesn’t take a science fiction writer to understand why regulators might be nervous about the user-modifiable firmware on self-driving cars, or limiting interoperability for aviation controllers, or the kind of thing you could do with bio-scale assemblers and sequencers. Imagine what will happen the day that Monsanto determines that it’s really… really… important to make sure that computers can’t execute programs that cause specialized peripherals to output organisms that eat their lunch… literally. Regardless of whether you think these are real problems or merely hysterical fears, they are nevertheless the province of lobbies and interest groups that are far more influential than Hollywood and big content are on their best days, and every one of them will arrive at the same place: “can’t you just make us a general purpose computer that runs all the programs, except the ones that scare and anger us? Can’t you just make us an Internet that transmits any message over any protocol between any two points, unless it upsets us?”
The way to think of all of this is as akin to the transition from feudalism to capitalism. There’s no reason to think that an information economy will be just more capitalism (to think so is a contribution to capitalism as the end of history). That a growing list of industries face disruption on a scale where it’s hard to see their business model surviving in absence of ever escalating state measures to construct markets that otherwise would fail (a point well made by Mr. Doctorow with his wheels analogy) suggests significant incompatibility between capitalism and the information economy.
The retort of the defender of capitalism here would be that the information economy is a creature of capitalism: without chip fabricators and integrators and intellectual property and venture capital and server farms, the information economy doesn’t happen. But of course the feudal baron would have said the same of the capitalist upstart. History is a realm of contingency. It is not a logical system. Contradictions and the deleterious eddies that result are perfectly possible. That the information economy might end up destroying the very basis for its existence is within the realm of possibility.
Or perhaps this is perfectly compatible with capitalism, and affected sectors are merely the cracks through which we can see the lie of laissez-faire throughout the rest of the economy. The government takes a heavy hand in constructing markets everywhere they exist.
But the point is that previous economic transformations weren’t tranquil evolutions, didn’t happen in a discrete moment. The social transformations that we today package under the rubric “capitalism” benefitted some, but came at terrible consequence to others. Those who stood to lose prestige, revenue, power, opposed these changes, frequently by violence. For them, capitalism wasn’t just social change, it was immoral. Ownership of property by those who did not directly fight for it, property as a transferable abstraction, rootlessness, equality among the classes, attacks upon the ancient privileges of nobility, the undermining of seigniorial obligation, the money economy, the violations of guild oaths, the codification of techne (craft), the insolence of entrepreneurs: these were violations of the moral order of society.
The practices that have grown up around the frictionlessness of the information economy’s core commodities are called piracy by the partisans of our present order. It is immoral. It is theft of property (property is here an analogy growing threadbare at the margins from being stretched too much). It is the collapse of the basis of prosperity. But how is a system of constant content theft to be harnessed to our system of material progress? I haven’t the foggiest notion. But capitalism too was a post hoc ideological construct. At the time it seemed like the end of the world. Remember that by the time Adam Smith wrote The Wealth of Nations, such processes were already far along. Smith wasn’t envisioning future pin factories: he was describing existent ones that he had recently visited.
Besides, if it is not within the scope of power to achieve these things, it does not matter the machinations of ideology. Ideology adapts. Moral and immoral will be renamed to accommodate the new arrangement of factors.
I’m going to excerpt about fifty percent of David Frum’s review from this weekend’s New York Times Book Review (“Unhappy Days,” 5 September 2010, p. BR20):
Art historians tell us that photography revolutionized painting. Suddenly there was a better way of recording the physical appearance of things, and artists had to discover new purposes for brush and pigment. But for those living through the revolution, the process must have seemed more gradual. Long after the Impressionists and Cubists and Futurists, there must have been serious portraitists who continued to earn a living depicting brides on their wedding day or businessmen made good.
I kept thinking of those backward-looking artists all the way through Laura Kalman’s “Right Star Rising.” As a work of history about the Ford and Carter years, there is nothing seriously wrong with it. The facts are accurate, the writing is clear and the point of view is not tendentious. Once upon a time, such a book might have been useful to somebody.
But the question it raises — and it’s not a question about this book alone — is: What’s the point of this kind of history in the age of the Internet? Suppose I’m an undergraduate who stumbles for the first time across the phrase “Proposition 13.” I could, if I were minded, walk over to the university library, pull this book from the shelf and flip to the index. Or I could save myself two hours and Google it. I wouldn’t learn more from a Google search than I’d learn in these pages. But I wouldn’t learn a whole lot less either.
He gets a little more specific than this, makes a few examples, but that’s about all he has to say about the book. It’s nothing against Ms. Kalman — as Mr. Frum writes, “it’s not a question about this book alone.” The analogy to painting in an era of photography is apt. We live in a time in which our relation to information is changing. Problems of availability have — at least in the developed world — been for the most part solved. So like the painter, how are information workers to make their way in this world?
I’m not going to wind this post up with some pat answer. I think that Mr. Frum is also correct in not making a teleologically overdetermined analogy. “For those living through the revolution, the process must have seemed more gradual,” he writes. Painters only found a post-photography life through protracted experimentation.
I think of Harold Bloom’s idea of the anxiety of influence as much more than a theory of poetry. In an age of mass information, all information workers labor under the anxiety of influence (Jimmy Wales is our Milton). No one should think that a publisher is going to cut down a few hundred acres of trees for more of the same.
We have been inadvertently betraying our presence for 60 years with our television, radio and radar transmissions. The earliest episodes of I Love Lucy have washed over 6,000 or so star systems, and are reaching new audiences at the rate of one solar system a day. If there are sentient beings out there, the signals will reach them.
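Those figures can be sanity-checked on the back of an envelope: the signal front expands one light-year per year, and the number of stars it sweeps depends only on the local stellar density. The sketch below assumes a commonly quoted density of roughly 0.004 stars per cubic light-year for the solar neighborhood; treat every number as order-of-magnitude only.

```python
# Back-of-the-envelope check: how many star systems has a 60-year-old
# broadcast reached, and how fast does it sweep up new ones?
# DENSITY is an assumed round figure for the solar neighborhood.

import math

DENSITY = 0.004          # stars per cubic light-year (assumed)
years_broadcasting = 60  # the wavefront expands one light-year per year

radius = years_broadcasting                       # light-years
volume = (4 / 3) * math.pi * radius ** 3          # cubic light-years
stars_reached = DENSITY * volume

# New systems per day: the thin spherical shell crossed in one day.
shell_per_year = DENSITY * 4 * math.pi * radius ** 2
stars_per_day = shell_per_year / 365.25

print(f"stars reached so far: ~{stars_reached:,.0f}")
print(f"new systems per day:  ~{stars_per_day:.1f}")
```

The arithmetic lands in the same neighborhood as the quoted figures: a few thousand systems reached, with new ones arriving on the order of one every day or two, the rate climbing as the sphere grows.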
Charles Mudede’s explanation for why the slave becomes the thesis of the next order dialectic in Hegel’s Phenomenology of Spirit is surprisingly straightforward and elegant (“Marxism and Insects: Slave-Making Ants,” SLOG, The Stranger, 13 May 2010):
Hegel argues that because the world is more and more made and shaped by slave labor — serving, building, putting “all to rights” — the world makes more and more sense to slaves and less and less sense to the masters (“so utterly helpless are the masters”). The masters only know how to destroy; the slaves know how to create.
If you follow the link and read the entire post, know that it is the latest installment in Mr. Mudede’s recent ant phase. His explanation of Hegel quoted above is a takeoff from a description of slavery amidst the ants found in Darwin’s On the Origin of Species. For the slave revolt among the ants, definitely read the article that commenter @10 recommends (Rodríguez, Álvaro, “Enslaved Ants Revolt, Slaughter Their Captors’ Children,” DiscoBlog, Discover, 18 August 2008).
In reaction to last week’s The New Yorker article on synthetic biology (Specter, Michael, “A Life Of Its Own,” 28 September 2009, pp. 56-65):
The objective of synthetic biology is the final subsumption of the logic of nature into the logic of capitalism. Capitalism being the logic of human desire, the objective of synthetic biology is — as with the whole of the technological endeavor — the elimination of all intercession between desire and its fulfillment. It is the attempt to return to the purity of the hallucination of the breast, to do away with despised reality testing, the creation of a world of pure subjectivity.
A few weeks ago I met up with some friends and we were walking through the busy Gallery Place / Chinatown area, all three of us heads down studying our various hand-helds (two iPhones and an Android). I joked that the app that we need is something like the range-finders from the Alien movies, only that does picture-in-picture on our phones so we can see what’s coming without having to look up from our immersion in our respective virtual worlds as we walk through heavy pedestrian traffic.
The absurd extent of the anxiety of influence: not only if you’ve had a good idea can you count on someone having already had it, but if you make a joke about something absurd, you can rest assured that someone is already doing that too. It turns out there is already a sonar app for the iPhone (Frucci, Adam, “iPhone’s Sonar Ruler App Measures Distance Using Sound,” Gizmodo, 21 August 2009).
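The physics behind a sonar ruler is a single line: sound travels at roughly 343 m/s in room-temperature air, and an echo covers the distance twice. A minimal sketch, with the timing value in the usage example invented for illustration:

```python
# Sonar ruler in one function: emit a chirp, time the echo,
# halve the round trip. The speed of sound is an approximation
# for dry air at about 20 C.

SPEED_OF_SOUND = 343.0  # metres per second (approximate)


def echo_distance(round_trip_seconds):
    """Distance to the reflecting surface, given the echo's round-trip time."""
    return SPEED_OF_SOUND * round_trip_seconds / 2


# An echo heard 23.3 ms after the chirp puts the far wall about 4 m away:
print(round(echo_distance(0.0233), 2))  # 4.0
```

The hard part of such an app is not this formula but reliably picking the echo out of a phone microphone's noisy signal.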
(Jokes about iPhone apps follow the same formula as jokes about hitherto unnamed but always Johnny-on-the-spot-when-convenient members of the Smurf village: think of an absurd or inappropriate function, append “smurf”; e.g. Cuckolding Smurf finds life in the Smurf village paradisiacal, or smurfs keep themselves free of tropical disease by regularly licking Quinine Smurf. In the case of iPhone apps, name an absurd function, then say “There’s an app for that.”)