Paleolithic Punk and the Venus of Marseilles

Venuses of Willendorf (30-27ky) and Marseilles (1882)

To the left, the Venus of Willendorf, a 10.8 cm high stone statuette dating from the Paleolithic era, 30-27 kyr. To the right, the first anthropomorphic design for an atmospheric diving suit, built by the Carmagnolle brothers of Marseilles, France, in 1882.

The Carmagnolle brothers’ diving suit is the stuff of steampunk fantasy, but perhaps so situating it is time-out-of-joint in the wrong direction. Instead of designating it as a paleo-future, it is more properly a parachronism: perhaps we should call it the Venus (Ares) of Marseilles (with the Venus of Willendorf as old stone punk).

What Does Technology Lust After?

A Wearable Tech Hackathon sex app for Google Glass

So this was inevitable, right?

First thoughts:

  1. Really? Picture-in-picture?
  2. Seeing what my partner sees might make me never want to have sex again. I am reminded of Steve’s disquisition on “nether freckling” from Coupling s3:e5: “There are … angles. In a relationship you get seen from certain … angles.”
  3. With all the photos being stolen off of hacked mobile devices and posted to revenge porn websites, how could this possibly go wrong?
  4. Wasn’t Strange Days (1995) a great / terrible sci-fi film?

But more seriously, total immersion in the experience of another ranks along with immortality, total recall, omniscience, radical subjectivity, demediation of desire and fulfillment and a few others as ultimate goals of technology.

What I’m saying is that I think technology is teleological. The evolution of technology is not a random walk or a function of reachability or the traversal of a dependency network. Well, it is all those things, but it is not only those things. There are ends or extreme outer limits toward which technology is evolving. I think I listed a few off-the-cuff. Some systematic and dedicated attention to a fuller list is warranted.

But whence do the ends come? As I have framed them, they could be construed as merely the desiderata of the human makers of technology — technology has no ends of its own: they are bestowed by its makers. But perhaps technology as a continuation of life, as a fourth domain, inherits these ends. Or perhaps these ends admit of a more objective formulation: eternity instead of immortality, idealist anarchy for radical subjectivity. Or perhaps for Kantian cyborgs, they are the transcendental illusions of technology.

Also, as if my digression hasn’t already gone far enough, there’s this as a longing to supersede individuation:

An Internet of Creatures

Keanu Reeves in Johnny Mnemonic getting ghost-hacked by Ice-T’s cyborg dolphin

Non-human animals are late adopters. In 2009 cats started life logging. In 2012 dogs got into biometrics. Now sharks are getting on twitter. The Western Australia Department of Fisheries has started tagging sharks with radio transmitters. When the tagged sharks are detected within half a mile of beaches, the monitoring system updates the Surf Life Saving WA twitter feed with the shark’s species, size and location (“Roll Out of Tagged Shark Monitors Continues,” 14 December 2013). The sharks do not yet post selfies to instagram.
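The detection-to-feed loop described above can be sketched in a few lines. Everything here (class names, fields, the way the half-mile threshold is applied) is an invented stand-in for illustration, not the actual Fisheries or Surf Life Saving WA system:

```python
import datetime

# Hypothetical sketch of a tag-detection-to-alert pipeline, loosely modeled
# on the Surf Life Saving WA shark feed; all names and values are invented.

ALERT_RADIUS_MILES = 0.5  # alert when a tagged shark is within half a mile of a beach

class Detection:
    """One ping from a tagged shark picked up by a beach receiver."""
    def __init__(self, species, size_m, beach, distance_miles):
        self.species = species
        self.size_m = size_m
        self.beach = beach
        self.distance_miles = distance_miles
        self.time = datetime.datetime.utcnow()

def format_alert(d):
    """Render a detection as a short status update, feed-style."""
    return (f"{d.species}, approx {d.size_m} m, detected near "
            f"{d.beach} at {d.time:%H:%M} UTC")

def process(detections):
    """Return alerts only for detections inside the alert radius."""
    return [format_alert(d) for d in detections
            if d.distance_miles <= ALERT_RADIUS_MILES]

alerts = process([
    Detection("White shark", 3.5, "Cottesloe Beach", 0.4),
    Detection("Tiger shark", 2.8, "Scarborough Beach", 1.2),  # too far out; no alert
])
for a in alerts:
    print(a)
```

In the real system the last step would post to the feed rather than print; the filtering-and-formatting shape is the whole idea.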

“This kind of innovative thinking is exactly what we need more of when it comes to finding solutions to human-wildlife conflict,” says Alison Kock, research manager of Shark Spotters, apparently a shark social media PR firm in South Africa (Alan Yu, “More Than 300 Sharks In Australia Are Now On Twitter,” NPR, 1 January 2014). This is, of course, the sort of utopian thinking rampant among tech enthusiasts. But it’s only a matter of time before these vital shark voices are shouted down by a bunch of galeophobes, reminding us of Internet Rule 14.1: Don’t chum the trolls.

There is constant talk of the Internet of Things, about how all our devices are getting on-line and being internetworked with one another. But what about an Internet of All Creatures Great and Small? Instead of just us humans getting on-line, living in augmented reality, having brain computer interfaces, being enhanced by cognitive prostheses, we need to get all the other animals on-line too.

And look, we humans can’t stave off the robot apocalypse by ourselves. Presumably SkyNet and the Matrix don’t stop with the humans. Both SkyNet and the Matrix blot out the sun as a tactic. The other animals have a stake in the outcome of this as well. We animals need to pull together. Maybe even the plants too. Already we’re developing thought-controlled power suits for monkeys and cockroaches that can interface with your iPhone (there’s an app for that). Raspberry crazy ants can detect electromagnetic fields and already have a vendetta against electronics. Now we just need to upgrade them to Raspberry Pie Ants. Maybe once we get computation using protein folding and DNA working, Craig Venter will engineer protozoa and yeast with wireless access and IP addresses to fight the nanobots.

Addendum, 17 January 2014: I should add that hot on the heels of kittens, a lot of animals are life logging these days — alligators, falcons, halibut, dolphins, sperm whales, eagles, a caracara and turkey vultures, more eagles and penguins. And of course National Geographic is like the BuzzFeed of animal cams.

Addendum II, 17 January 2014: This scientist is fitting honey bees with transponders and is running some sort of bee Aegis radar-type system to monitor their bee flash mobs.

Through the Google Looking Glass

Back in 2008 I got really excited about life logging (“Life Logging: It’s All About the Metadata,” 11 September 2008; “The End of the Era of Orphanage,” 15 September 2008). I had all these exciting visions of all the things we could do if we were to dramatically increase capture. Now that Google Glass is close to bringing this geek fantasy to mass consumption, commentators are beginning to see the malign potential of such a technology. Mark Hurst of Creative Good imagines the transformation of the public that could result from such a degree of capture (“The Google Glass Feature No One is Talking About,” 28 February 2013):

The Google Glass feature that (almost) no one is talking about is the experience — not of the user, but of everyone other than the user. A tweet by David Yee introduces it well:

There is a kid wearing Google Glasses at this restaurant which, until just now, used to be my favorite spot.

The key experiential question of Google Glass isn’t what it’s like to wear them, it’s what it’s like to be around someone else who’s wearing them. I’ll give an easy example. Your one-on-one conversation with someone wearing Google Glass is likely to be annoying, because you’ll suspect that you don’t have their undivided attention. And you can’t comfortably ask them to take the glasses off (especially when, inevitably, the device is integrated into prescription lenses). Finally — here’s where the problems really start — you don’t know if they’re taking a video of you.

Now pretend you don’t know a single person who wears Google Glass … and take a walk outside. Anywhere you go in public — any store, any sidewalk, any bus or subway — you’re liable to be recorded: audio and video. Fifty people on the bus might be Glassless, but if a single person wearing Glass gets on, you — and all 49 other passengers — could be recorded. Not just for a temporary throwaway video buffer, like a security camera, but recorded, stored permanently, and shared to the world.

Now, I know the response: “I’m recorded by security cameras all day, it doesn’t bother me, what’s the difference?” Hear me out — I’m not done. What makes Glass so unique is that it’s a Google project. And Google has the capacity to combine Glass with other technologies it owns.

First, take the video feeds from every Google Glass headset, worn by users worldwide. Regardless of whether video is only recorded temporarily, as in the first version of Glass, or always-on, as is certainly possible in future versions, the video all streams into Google’s own cloud of servers. Now add in facial recognition and the identity database that Google is building within Google Plus (with an emphasis on people’s accurate, real-world names): Google’s servers can process video files, at their leisure, to attempt identification on every person appearing in every video. And if Google Plus doesn’t sound like much, note that Mark Zuckerberg has already pledged that Facebook will develop apps for Glass.

Finally, consider the speech-to-text software that Google already employs, both in its servers and on the Glass devices themselves. Any audio in a video could, technically speaking, be converted to text, tagged to the individual who spoke it, and made fully searchable within Google’s search index.

Now our stage is set: not for what will happen, necessarily, but what I just want to point out could technically happen, by combining tools already available within Google.

Let’s return to the bus ride. It’s not a stretch to imagine that you could immediately be identified by that Google Glass user who gets on the bus and turns the camera toward you. Anything you say within earshot could be recorded, associated with the text, and tagged to your online identity. And stored in Google’s search index. Permanently.

I’m still not done.

The really interesting aspect is that all of the indexing, tagging, and storage could happen without the Google Glass user even requesting it. Any video taken by any Google Glass, anywhere, is likely to be stored on Google servers, where any post-processing (facial recognition, speech-to-text, etc.) could happen at the later request of Google, or any other corporate or governmental body, at any point in the future.

Remember when people were kind of creeped out by that car Google drove around to take pictures of your house? Most people got over it, because they got a nice StreetView feature in Google Maps as a result.

Google Glass is like one camera car for each of the thousands, possibly millions, of people who will wear the device — every single day, everywhere they go — on sidewalks, into restaurants, up elevators, around your office, into your home. From now on, starting today, anywhere you go within range of a Google Glass device, everything you do could be recorded and uploaded to Google’s cloud, and stored there for the rest of your life. You won’t know if you’re being recorded or not; and even if you do, you’ll have no way to stop it.

And that, my friends, is the experience that Google Glass creates. That is the experience we should be thinking about. The most important Google Glass experience is not the user experience — it’s the experience of everyone else. The experience of being a citizen, in public, is about to change.
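The chain Hurst describes (video in; identified, searchable people out) is, mechanically, just a composition of stages. A deliberately naive sketch, in which every function is a placeholder rather than any real face-recognition, speech-to-text, or Google API:

```python
# Schematic of the surveillance pipeline described above. Each stage is a
# deliberately naive placeholder; no real recognition system works this way.

def detect_faces(video):
    """Pretend face detector: returns face 'signatures' embedded in the video."""
    return video.get("faces", [])

def identify(face, identity_db):
    """Pretend identity lookup against a social-graph database."""
    return identity_db.get(face, "unknown")

def transcribe(video):
    """Pretend speech-to-text: returns the audio track as text."""
    return video.get("audio", "")

def index_video(video, identity_db, search_index):
    """Tag every recognized person with the transcript and store it permanently."""
    transcript = transcribe(video)
    for face in detect_faces(video):
        person = identify(face, identity_db)
        search_index.setdefault(person, []).append(transcript)

# Toy data standing in for one uploaded head-mounted-camera recording.
identity_db = {"face-sig-42": "Alice Example"}
search_index = {}
index_video({"faces": ["face-sig-42"], "audio": "see you at the protest"},
            identity_db, search_index)
print(search_index)  # {'Alice Example': ['see you at the protest']}
```

The point of the sketch is that each stage is independently mundane; the menace Hurst identifies is in the composition, and in the storage being permanent and retroactively queryable.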

I guess I will temper my enthusiasm for life logging accordingly.

If You Can’t Beat ‘Em, Squelch ‘Em

There are two major points — one lesser, one greater — to Neal Stephenson’s In the Beginning … was the Command Line (Wikipedia | Amazon). The lesser point serves to feed the superiority complex of computer geeks, namely that people who work closer to the machine are more awesome than people who work with their machines through layers of mediation. The greater point is that maintaining a high degree of control of the devices that shape our lives is a critical element of freedom in the information age.

It’s not about turning your nose up at GUIs and other user-friendly efforts in favor of arcane monochrome text interfaces. The point is that when you cede control of the devices that comprise your environment — that serve as the basis of your personal capabilities — when you cede these to manufacturers, marketers, designers, content providers, legislators, then the limits they seek to impose become your limits as well.

It is in extending and deepening this point that I think Cory Doctorow’s talk, “The Coming War on General Computation,” is so important (28th Chaos Communication Congress, Berliner Congress Center, Berlin, Germany, 27 December 2011).

You should definitely watch the whole thing: it’s entertaining as well as one of the most cogent talks I’ve heard in some time. To me his outstanding points are two:

1. The never-ending desire for a certain kind of ease of use that comes through circumscribed functionality is an invitation to, a kind of lazy collusion with, the likes of Apple who are more than happy to sell you a device hobbled in a way that maximizes corporate returns (the walled garden):

So today we have marketing departments who say things like “we don’t need computers, we need … appliances. Make me a computer that doesn’t run every program, just a program that does this specialized task, like streaming audio, or routing packets, or playing Xbox games, and make sure it doesn’t run programs that I haven’t authorized that might undermine our profits”. And on the surface, this seems like a reasonable idea — just a program that does one specialized task — after all, we can put an electric motor in a blender, and we can install a motor in a dishwasher, and we don’t worry if it’s still possible to run a dishwashing program in a blender. But that’s not what we do when we turn a computer into an appliance. We’re not making a computer that runs only the “appliance” app; we’re making a computer that can run every program, but which uses some combination of rootkits, spyware, and code-signing to prevent the user from knowing which processes are running, from installing her own software, and from terminating processes that she doesn’t want. In other words, an appliance is not a stripped-down computer — it is a fully functional computer with spyware on it out of the box.

2. Media copyright is just the tip of the iceberg when it comes to the incentive of corporations to turn to political-legislative attempts to prevent the disruptions to their business models that result from technological change:

And even this is a shadow of what is to come. After all, this was the year in which we saw the debut of open sourced shape files for converting AR-15s to full automatic. This was the year of crowd-funded open-sourced hardware for gene sequencing. And while 3D printing will give rise to plenty of trivial complaints, there will be judges in the American South and Mullahs in Iran who will lose their minds over people in their jurisdiction printing out sex toys. The trajectory of 3D printing will most certainly raise real grievances, from solid state meth labs, to ceramic knives.

And it doesn’t take a science fiction writer to understand why regulators might be nervous about the user-modifiable firmware on self-driving cars, or limiting interoperability for aviation controllers, or the kind of thing you could do with bio-scale assemblers and sequencers. Imagine what will happen the day that Monsanto determines that it’s really… really… important to make sure that computers can’t execute programs that cause specialized peripherals to output organisms that eat their lunch… literally. Regardless of whether you think these are real problems or merely hysterical fears, they are nevertheless the province of lobbies and interest groups that are far more influential than Hollywood and big content are on their best days, and every one of them will arrive at the same place — “can’t you just make us a general purpose computer that runs all the programs, except the ones that scare and anger us? Can’t you just make us an Internet that transmits any message over any protocol between any two points, unless it upsets us?”

The way to think of all of this is as akin to the transition from feudalism to capitalism. There’s no reason to think that an information economy will be just more capitalism (to think so is a contribution to capitalism as the end of history). That a growing list of industries faces disruption on a scale where it’s hard to see their business models surviving absent ever-escalating state measures to construct markets that would otherwise fail (a point well made by Mr. Doctorow with his wheels analogy) suggests a significant incompatibility between capitalism and the information economy.

The retort of the defender of capitalism here would be that the information economy is a creature of capitalism — without chip fabricators and integrators and intellectual property and venture capital and server farms, the information economy doesn’t happen. But of course the feudal baron would have said the same of the capitalist upstart. History is a realm of contingency. It is not a logical system. Contradictions — and the deleterious eddies that result — are perfectly possible. That the information economy might end up destroying the very basis for its existence is within the realm of possibility.

Or perhaps this is perfectly compatible with capitalism, and the affected sectors are merely the cracks through which we can see the lie of laissez-faire throughout the rest of the economy. The government takes a heavy hand in constructing markets everywhere they exist.

But the point is that previous economic transformations weren’t tranquil evolutions, didn’t happen in a discrete moment. The social transformations that we today package under the rubric “capitalism” benefitted some, but came at terrible consequence to others. Those who stood to lose prestige, revenue, power, opposed these changes, frequently by violence. For them, capitalism wasn’t just social change, it was immoral. Ownership of property by those who did not directly fight for it, property as a transferable abstraction, rootlessness, equality among the classes, attacks upon the ancient privileges of nobility, the undermining of seigniorial obligation, the money economy, the violations of guild oaths, the codification of techne (craft), the insolence of entrepreneurs: these were violations of the moral order of society.

The practices that have grown up around the frictionlessness of the information economy’s core commodities are called piracy by the partisans of our present order. It is immoral. It is theft of property (property is here an analogy growing threadbare at the margins from being stretched too much). It is the collapse of the basis of prosperity. But how is a system of constant content theft to be harnessed to our system of material progress? I haven’t the foggiest notion. But capitalism too was a post hoc ideological construct. At the time it seemed like the end of the world. Remember that by the time Adam Smith wrote The Wealth of Nations, such processes were already far along. Smith wasn’t envisioning future pin factories: he was describing existent ones that he had recently visited.

Besides, if these things are not within the scope of power to achieve, the machinations of ideology do not matter. Ideology adapts. Moral and immoral will be renamed to accommodate the new arrangement of factors.

Information Work in an Era of Information Abundance

I’m going to excerpt about fifty percent of David Frum’s review from this weekend’s New York Times Book Review (“Unhappy Days,” 5 September 2010, p. BR20):

Art historians tell us that photography revolutionized painting. Suddenly there was a better way of recording the physical appearance of things, and artists had to discover new purposes for brush and pigment. But for those living through the revolution, the process must have seemed more gradual. Long after the Impressionists and Cubists and Futurists, there must have been serious portraitists who continued to earn a living depicting brides on their wedding day or businessmen made good.

I kept thinking of those backward-looking artists all the way through Laura Kalman’s “Right Star Rising.” As a work of history about the Ford and Carter years, there is nothing seriously wrong with it. The facts are accurate, the writing is clear and the point of view is not tendentious. Once upon a time, such a book might have been useful to somebody.

But the question it raises — and it’s not a question about this book alone — is: What’s the point of this kind of history in the age of the Internet? Suppose I’m an undergraduate who stumbles for the first time across the phrase “Proposition 13.” I could, if I were minded, walk over to the university library, pull this book from the shelf and flip to the index. Or I could save myself two hours and Google it. I wouldn’t learn more from a Google search than I’d learn in these pages. But I wouldn’t learn a whole lot less either.

He gets a little more specific than this, makes a few examples, but that’s about all he has to say about the book. It’s nothing against Ms. Kalman — as Mr. Frum writes, “it’s not a question about this book alone.” The analogy to painting in an era of photography is apt. We live in a time in which our relation to information is changing. Problems of availability have — at least in the developed world — been for the most part solved. So like the painter, how are information workers to make their way in this world?

I’m not going to wind this post up with some pat answer. I think that Mr. Frum is also correct in not making a teleologically overdetermined analogy. “For those living through the revolution, the process must have seemed more gradual,” he writes. Painters only found a post-photography life through protracted experimentation.

I think of Harold Bloom’s idea of the anxiety of influence as much more than a theory of poetry. In an age of mass information, all information workers labor under the anxiety of influence (Jimmy Wales is our Milton). No one should think that a publisher is going to cut down a few hundred acres of trees for more of the same.

The Electromagnetic Sediment of the Noosphere

Regarding the possibility of the Earth remaining hidden from detection by alien civilizations by running silent, New Scientist points out that it’s already too late (Shostak, Seth, “It’s Too Late to Worry That the Aliens Will Find Us,” 3 July 2010):

We have been inadvertently betraying our presence for 60 years with our television, radio and radar transmissions. The earliest episodes of I Love Lucy have washed over 6,000 or so star systems, and are reaching new audiences at the rate of one solar system a day. If there are sentient beings out there, the signals will reach them.
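A back-of-the-envelope check on those figures, assuming a local stellar density of about 0.004 stars per cubic light year (a rough ballpark for the solar neighborhood, not a figure from the article); it lands within a factor of two of the quoted 6,000 systems and one new system per day:

```python
import math

# Order-of-magnitude check on the 'radio bubble' figures quoted above.
# The stellar density is an assumed ballpark, not a figure from the article.

YEARS_BROADCASTING = 60   # radio/TV/radar leakage since roughly the 1950s
STAR_DENSITY = 0.004      # stars per cubic light year (assumption)

radius_ly = YEARS_BROADCASTING  # signals travel one light year per year
volume = (4 / 3) * math.pi * radius_ly ** 3
stars_reached = STAR_DENSITY * volume
print(f"Bubble radius: {radius_ly} ly")
print(f"Stars inside: ~{stars_reached:,.0f}")

# The bubble's surface sweeps outward one light year per year, so the number
# of newly reached stars per year is the shell's surface area times the density.
shell_per_year = 4 * math.pi * radius_ly ** 2 * STAR_DENSITY
print(f"Newly reached stars per day: ~{shell_per_year / 365:.1f}")
```

With these assumed numbers the bubble holds a few thousand star systems and grows by roughly one every day or two, the same order as the article's figures.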

(Related: “The Noosphere Visualized,” 1 January 2009)