To the left, the Venus of Willendorf, a 10.8 cm high stone statuette dating from the Paleolithic era, 30-27 kyr. To the right, the first anthropomorphic design for an atmospheric diving suit, built by the Carmagnolle brothers of Marseilles, France in 1882.
The Carmagnolle brothers’ diving suit is the stuff of steampunk fantasy, but perhaps so situating it is time-out-of-joint in the wrong direction. Instead of designating it as a paleo-future, it is more properly a parachronism: perhaps we should call it the Venus (Ares) of Marseilles (with the Venus of Willendorf as old stone punk).
Seeing what my partner sees might make me never want to have sex again. I am reminded of Steve’s disquisition on “nether freckling” from Coupling s3:e5: “There are … angles. In a relationship you get seen from certain … angles.”
With all the photos being stolen off of hacked mobile devices and posted to revenge porn websites, how could this possibly go wrong?
Wasn’t Strange Days (1995) a great / terrible sci-fi film?
But more seriously, total immersion in the experience of another ranks along with immortality, total recall, omniscience, radical subjectivity, demediation of desire and fulfillment and a few others as ultimate goals of technology.
What I’m saying is that I think technology is teleological. The evolution of technology is not a random walk or a function of reachability or the traversal of a dependency network. Well, it is all those things, but it is not only those things. There are ends or extreme outer limits toward which technology is evolving. I think I listed a few off-the-cuff. Some systematic and dedicated attention to a fuller list is warranted.
But whence do the ends come? As I have framed them, they could be construed as merely the desiderata of the human makers of technology; technology has no end of its own: its ends are bestowed by its makers. But perhaps technology as a continuation of life, as a fourth domain, inherits these ends. Or perhaps these ends admit of a more objective formulation: eternity instead of immortality, idealist anarchy for radical subjectivity. Or perhaps, for Kantian cyborgs, they are the transcendental illusions of technology.
Also, as if my digression hasn’t already gone far enough, there’s this as a longing to supersede individuation:
Keanu Reeves in Johnny Mnemonic getting ghost-hacked by Ice-T’s cyborg dolphin
Non-human animals are late adopters. In 2009 cats started life logging. In 2012 dogs got into biometrics. Now sharks are getting on twitter. The Western Australia Department of Fisheries has started tagging sharks with radio transmitters. When the tagged sharks are detected within half a mile of beaches, the monitoring system updates the Surf Life Saving WA twitter feed with the shark’s species, size and location (“Roll Out of Tagged Shark Monitors Continues”, 14 December 2013). The sharks do not yet post selfies to instagram.
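Purely as a sketch of how such a detection-to-tweet pipeline might be wired together (the tag registry, alert radius, and posting call below are hypothetical stand-ins, not the Fisheries department’s actual system):

```python
# Hypothetical sketch only: none of these names correspond to the actual
# WA Department of Fisheries receivers or the Surf Life Saving WA feed.
from dataclasses import dataclass

BEACH_ALERT_RADIUS_MILES = 0.5  # tagged sharks inside this radius trigger an alert

# Toy registry mapping transmitter IDs to shark details.
TAG_REGISTRY = {
    "TAG-0042": {"species": "white shark", "size_m": 3.2},
}

@dataclass
class Detection:
    tag_id: str
    beach: str
    distance_miles: float

def format_alert(d: Detection) -> str:
    shark = TAG_REGISTRY[d.tag_id]
    return (f"{shark['species'].title()}, approx. {shark['size_m']} m, "
            f"detected near {d.beach}")

def maybe_post(d: Detection, post) -> None:
    # Only post when the shark is within the alert radius of a beach.
    if d.distance_miles <= BEACH_ALERT_RADIUS_MILES:
        post(format_alert(d))

if __name__ == "__main__":
    # Stand in for the real tweet-posting call with print().
    maybe_post(Detection("TAG-0042", "Cottesloe Beach", 0.3), post=print)
```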
“This kind of innovative thinking is exactly what we need more of when it comes to finding solutions to human-wildlife conflict,” says Alison Kock, research manager of Shark Spotters, apparently a shark social media PR firm in South Africa (Alan Yu, “More Than 300 Sharks In Australia Are Now On Twitter”, NPR, 1 January 2014). This is, of course, the sort of utopian thinking rampant among tech enthusiasts. But it’s only a matter of time before these vital shark voices are shouted down by a bunch of galeophobes, reminding us of Internet Rule 14.1: Don’t chum the trolls.
There is constant talk of the Internet of Things, about how all our devices are getting on-line and being internetworked with one another. But what about an Internet of All Creatures Great and Small? Instead of just us humans getting on-line, living in augmented reality, having brain computer interfaces, being enhanced by cognitive prostheses, we need to get all the other animals on-line too.
And look, we humans can’t stave off the robot apocalypse by ourselves. Presumably SkyNet and the Matrix don’t stop with the humans. Both SkyNet and the Matrix blot out the sun as a tactic. The other animals have a stake in the outcome of this as well. We animals need to pull together. Maybe even the plants too. Already we’re developing thought-controlled power suits for monkeys and cockroaches that can interface with your iPhone (there’s an app for that). Raspberry crazy ants can detect electromagnetic fields and already have a vendetta against electronics. Now we just need to upgrade them to Raspberry Pi ants. Maybe once we get computation using protein folding and DNA working, Craig Venter will engineer protozoa and yeast with wireless access and IP addresses to fight the nanobots.
Addendum II, 17 January 2014: This scientist is fitting honey bees with transponders and is running some sort of bee Aegis radar-type system to monitor their bee flash mobs.
A certain sector of nostalgic curmudgeons among us is driven to distraction by the fact that many people today are engaged in a significant amount of interpersonal communication and interaction with their environment mediated by their mobile devices and web technologies. This annoyance that the young people today don’t interact in the time-honored ways is expressed in a number of criticisms: that they are anti-social, isolated, rude, sedentary, disengaged, aesthetically foreclosed, temporally scattered, attention deficient and consumed by trite distractions. Sherry Turkle, the dean of cellular woe, was tapped just last week by the New York Times to lament the presidential selfie. She goes so far as to suggest “device-free zones” as “sacred spaces” (“The Documented Life”, 16 December 2013, A25).
Back in June I wrote about two popular memes expressing this dismay and posed Apple’s then-airing ad for the iPhone 5 as a corrective (“Technology and the Profound”, 18 June 2013). Watching the ad again now, it does address a number of these criticisms, but it is unclear whether the creators were thinking of something else entirely and the episodes of the commercial line up with the criticisms only inadvertently; whether they were offering a very subdued response; or whether it was something in between: they were generally aware of some negative perceptions of their product and were attempting to show the iPhone in a sentimental, social, generative light without quite explicitly answering their critics.
With their new Christmas advertisement there’s no mistaking it: Apple is using its Madison Avenue genius to directly engage this debate. And for all its emotional delicacy, it’s quite a salvo.
We are presented with exactly the teen that critics of our obsession with technological mediation portray: bored, disengaged, one hand never free, constantly removing himself from important family events to fiddle with his device. But then, a third of the way through the commercial, the reveal: all those moments when he wouldn’t put down his phone, when he dropped out of family events, he was actually making a very personal video Christmas card for the entire family. We now rewatch all the moments from the first part of the commercial from a new perspective, both in the positioning of the camera and in our understanding of what’s going on. As the mise en abyme, our protagonist’s video within the video, ends, he signs off with a bit of video of himself, the teenage veneer of boredom now replaced by an unselfconscious, sheepish happiness and pride. The title of the commercial is even “Misunderstood”. Not only is the teenage experience misunderstood by the adults around him, not only are his actions misunderstood, but here is the cutting edge of this play of soft light and sentimentality. The title is not just descriptive of the events of the commercial: it is outwardly directed, an accusation against the critics of these technologies, for which the events of the ad are the argument: you misunderstand what we are doing with these technologies; you mischaracterize the effects they are having on us.
I’ll add a personal story here, lest you write Apple’s commercial off as a contrivance of corporate propaganda. Toward the end of my college years I attended a birthday party. It began, as such things often do, as a late afternoon back yard cookout. But just after sunset one of the organizers brought out a slide projector and for about a half-hour told stories and played music while projecting photographs from the recent life of our celebree onto the white wooden siding of the broad side of the house. It was beautiful and sentimental and poignant and really funny and just a wonderful celebration of this person, so much more so than had we just stood around in the yard eating hot dogs and getting slightly buzzed and then gone through the heavily scripted song and cake ritual of birthdays. In other words, had we all only lived in the moment, it would have been just another meaningless collegiate afternoon. It was specifically the documentary consciousness and all those interruptions over the years and the need to share and the clever exhibitionism and the devices that created that evening’s sacred space with its deeply focused consciousness, its break from the ordinary, its reflection and appreciation.
This was the late 1990s, so the technologies of this presentation were the old ones: gelatin emulsion film, shoebox archives dug through over the course of weeks, order forms printed on the back of envelopes, photo developing booths isolated in the middle of the shopping plaza parking lots, cardboard mounted diapositives, that beige slide projector with the torus of black slide slots protruding from the top. So the documentary intrusions were fewer, the pace of production and archiving less frenetic, the sharing less ubiquitous. But also less of the life was available, there was no parallax view, the required bravery of the performance was greater (a slideshow!? so hipster).
This is what I really like about Steven Johnson’s response to Nicholas Carr’s book, The Shallows: What the Internet Is Doing to Our Brains (“Yes, People Still Read, but Now It’s Social”, The New York Times, 20 June 2010, p. BU3): Johnson frankly concedes that yes, we are losing something. But loss is not the entirety of the transformation. We are also gaining something. And neither the loss of the detractor nor the gain of the enthusiast is to be weighed in isolation. The proper debate is: is what we have gained worth what we have lost?
There are two perennial memes, critical of web 2.0, social networking and mobile devices, that keep appearing in my various feeds (Um, kinda ironic that you’re expressing your Luddism through JPEGs). They are these two:
Of course as is the case with the web, the second one isn’t even true. Einstein did say something similar to this, but it was with respect to the atomic bomb — the legitimacy of which is beyond reproach — not cellular telephones.
To anyone who’s posted “the real zombie apocalypse” or this purported Einstein quote: nothing is profound or insipid on its own. We are capable of finding and making such experiences where we will. Technology is as capable of beauty and the sublime as any other experience in life.
In this respect, Apple’s recent advertising campaign serves as a wonderful reply:
Of course, this is corporate propaganda, but this is an instance where what makes it such an effective advertisement is just how well it has captured the truth of an experience. One of the things that I love about this commercial is how it is composed of a number of episodes, each of which shows an aspect of this particular technological experience: noticing in greater detail textures and objects that would have warranted less attention in the past but that are uniquely beautiful (weathered wood, oil slicks), being the laggard in a group for capturing an image, the foodie thing, handing cameras to strangers and retrieving them with gratitude, selfies, etc.
And before you go dismissing other people’s interests too quickly, I think your children and your pet are boring.
Back in 2008 I got really excited about life logging (“Life Logging: It’s All About the Metadata,” 11 September 2008; “The End of the Era of Orphanage,” 15 September 2008). I had all these exciting visions of all the things we could do if we were to dramatically increase capture. Now that Google Glass is close to bringing this geek fantasy to mass consumption, commentators are beginning to see the malign potential of such a technology. Mark Hurst of Creative Good imagines the transformation of the public that could result from such a degree of capture (“The Google Glass Feature No One is Talking About,” 28 February 2013):
The Google Glass feature that (almost) no one is talking about is the experience not of the user, but of everyone other than the user. A tweet by David Yee introduces it well:
There is a kid wearing Google Glasses at this restaurant which, until just now, used to be my favorite spot.
The key experiential question of Google Glass isn’t what it’s like to wear them, it’s what it’s like to be around someone else who’s wearing them. I’ll give an easy example. Your one-on-one conversation with someone wearing Google Glass is likely to be annoying, because you’ll suspect that you don’t have their undivided attention. And you can’t comfortably ask them to take the glasses off (especially when, inevitably, the device is integrated into prescription lenses). Finally, and here’s where the problems really start: you don’t know if they’re taking a video of you.
Now pretend you don’t know a single person who wears Google Glass … and take a walk outside. Anywhere you go in public (any store, any sidewalk, any bus or subway) you’re liable to be recorded: audio and video. Fifty people on the bus might be Glassless, but if a single person wearing Glass gets on, you and all 49 other passengers could be recorded. Not just for a temporary throwaway video buffer, like a security camera, but recorded, stored permanently, and shared to the world.
Now, I know the response: “I’m recorded by security cameras all day, it doesn’t bother me, what’s the difference?” Hear me out; I’m not done. What makes Glass so unique is that it’s a Google project. And Google has the capacity to combine Glass with other technologies it owns.
First, take the video feeds from every Google Glass headset, worn by users worldwide. Regardless of whether video is only recorded temporarily, as in the first version of Glass, or always-on, as is certainly possible in future versions, the video all streams into Google’s own cloud of servers. Now add in facial recognition and the identity database that Google is building within Google Plus (with an emphasis on people’s accurate, real-world names): Google’s servers can process video files, at their leisure, to attempt identification on every person appearing in every video. And if Google Plus doesn’t sound like much, note that Mark Zuckerberg has already pledged that Facebook will develop apps for Glass.
Finally, consider the speech-to-text software that Google already employs, both in its servers and on the Glass devices themselves. Any audio in a video could, technically speaking, be converted to text, tagged to the individual who spoke it, and made fully searchable within Google’s search index.
Now our stage is set: not for what will happen, necessarily, but what I just want to point out could technically happen, by combining tools already available within Google.
Let’s return to the bus ride. It’s not a stretch to imagine that you could immediately be identified by that Google Glass user who gets on the bus and turns the camera toward you. Anything you say within earshot could be recorded, associated with the text, and tagged to your online identity. And stored in Google’s search index. Permanently.
I’m still not done.
The really interesting aspect is that all of the indexing, tagging, and storage could happen without the Google Glass user even requesting it. Any video taken by any Google Glass, anywhere, is likely to be stored on Google servers, where any post-processing (facial recognition, speech-to-text, etc.) could happen at the later request of Google, or any other corporate or governmental body, at any point in the future.
Remember when people were kind of creeped out by that car Google drove around to take pictures of your house? Most people got over it, because they got a nice StreetView feature in Google Maps as a result.
Google Glass is like one camera car for each of the thousands, possibly millions, of people who will wear the device every single day, everywhere they go: on sidewalks, into restaurants, up elevators, around your office, into your home. From now on, starting today, anywhere you go within range of a Google Glass device, everything you do could be recorded and uploaded to Google’s cloud, and stored there for the rest of your life. You won’t know if you’re being recorded or not; and even if you do, you’ll have no way to stop it.
And that, my friends, is the experience that Google Glass creates. That is the experience we should be thinking about. The most important Google Glass experience is not the user experience; it’s the experience of everyone else. The experience of being a citizen, in public, is about to change.
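To make the mechanics of Hurst’s scenario a bit more concrete, here is a minimal sketch of the kind of pipeline he is describing: faces matched to identities, speech transcribed, and the result dropped into a permanently searchable index. Every class and function name in it is a hypothetical stand-in; none of this corresponds to a real Google API.

```python
# Hypothetical sketch of the pipeline Hurst describes; all names are stand-ins.
from dataclasses import dataclass, field

@dataclass
class Utterance:
    speaker: str      # identity guessed from face recognition
    text: str         # transcript produced by speech-to-text

@dataclass
class SearchIndex:
    entries: list = field(default_factory=list)

    def add(self, u: Utterance) -> None:
        # Permanently associate the words with the identified speaker.
        self.entries.append(u)

    def search(self, person: str) -> list:
        return [u.text for u in self.entries if u.speaker == person]

def process_clip(frames, audio, identify_face, transcribe, index: SearchIndex) -> None:
    """Identify whoever appears in a clip and index what they said."""
    speaker = identify_face(frames)   # face -> real-world identity
    transcript = transcribe(audio)    # audio -> text
    index.add(Utterance(speaker, transcript))

if __name__ == "__main__":
    # Toy usage with stand-in recognizers.
    index = SearchIndex()
    process_clip(
        frames=["frame-0"], audio="audio-0",
        identify_face=lambda f: "Jane Doe",
        transcribe=lambda a: "meet me at the bus stop at nine",
        index=index,
    )
    print(index.search("Jane Doe"))
```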
I guess I will temper my enthusiasm for life logging accordingly.
There are two major points, one lesser and one greater, to Neal Stephenson’s In the Beginning … was the Command Line. The lesser point serves to feed the superiority complex of computer geeks, namely that people who work closer to the machine are more awesome than people who work with their machines through layers of mediation. The greater point is that maintaining a high degree of control of the devices that shape our lives is a critical element of freedom in the information age.
It’s not about turning your nose up at GUIs and other user-friendly efforts in favor of arcane monochrome text interfaces. The point is that when you cede control of the devices that comprise your environment, that serve as the basis of your personal capabilities, when you cede these to manufacturers, marketers, designers, content providers, legislators, then the limits they seek to impose become your limits as well.
It is as an extension and intensification of this point that I think Cory Doctorow’s talk, “The Coming War on General Computation”, is so important (28th Chaos Communication Congress, Berliner Congress Center, Berlin, Germany, 27 December 2011).
You should definitely watch the whole thing: it’s entertaining as well as one of the most cogent talks I’ve heard in some time. To me his outstanding points are two:
1. The never-ending desire for a certain kind of ease of use that comes through circumscribed functionality is an invitation to, a kind of lazy collusion with, the likes of Apple who are more than happy to sell you a device hobbled in a way that maximizes corporate returns (the walled garden):
So today we have marketing departments who say things like “we don’t need computers, we need … appliances. Make me a computer that doesn’t run every program, just a program that does this specialized task, like streaming audio, or routing packets, or playing Xbox games, and make sure it doesn’t run programs that I haven’t authorized that might undermine our profits”. And on the surface, this seems like a reasonable idea: just a program that does one specialized task. After all, we can put an electric motor in a blender, and we can install a motor in a dishwasher, and we don’t worry if it’s still possible to run a dishwashing program in a blender. But that’s not what we do when we turn a computer into an appliance. We’re not making a computer that runs only the “appliance” app; we’re making a computer that can run every program, but which uses some combination of rootkits, spyware, and code-signing to prevent the user from knowing which processes are running, from installing her own software, and from terminating processes that she doesn’t want. In other words, an appliance is not a stripped-down computer; it is a fully functional computer with spyware on it out of the box.
2. Media copyright is just the tip of the iceberg when it comes to the incentive of corporations to turn to political-legislative attempts to prevent the disruptions to their business models that result from technological change:
And even this is a shadow of what is to come. After all, this was the year in which we saw the debut of open sourced shape files for converting AR-15s to full automatic. This was the year of crowd-funded open-sourced hardware for gene sequencing. And while 3D printing will give rise to plenty of trivial complaints, there will be judges in the American South and Mullahs in Iran who will lose their minds over people in their jurisdiction printing out sex toys. The trajectory of 3D printing will most certainly raise real grievances, from solid state meth labs, to ceramic knives.
And it doesn’t take a science fiction writer to understand why regulators might be nervous about the user-modifiable firmware on self-driving cars, or limiting interoperability for aviation controllers, or the kind of thing you could do with bio-scale assemblers and sequencers. Imagine what will happen the day that Monsanto determines that it’s really… really… important to make sure that computers can’t execute programs that cause specialized peripherals to output organisms that eat their lunch… literally. Regardless of whether you think these are real problems or merely hysterical fears, they are nevertheless the province of lobbies and interest groups that are far more influential than Hollywood and big content are on their best days, and every one of them will arrive at the same place: “can’t you just make us a general purpose computer that runs all the programs, except the ones that scare and anger us? Can’t you just make us an Internet that transmits any message over any protocol between any two points, unless it upsets us?”
The way to think of all of this is as akin to the transition from feudalism to capitalism. There’s no reason to think that an information economy will be just more capitalism (to think so is a contribution to capitalism as the end of history). That a growing list of industries faces disruption on a scale where it’s hard to see their business models surviving in the absence of ever-escalating state measures to construct markets that would otherwise fail (a point well made by Mr. Doctorow with his wheels analogy) suggests a significant incompatibility between capitalism and the information economy.
The retort of the defender of capitalism here would be that the information economy is a creature of capitalism: without chip fabricators and integrators and intellectual property and venture capital and server farms, the information economy doesn’t happen. But of course the feudal baron would have said the same of the capitalist upstart. History is a realm of contingency. It is not a logical system. Contradictions and the deleterious eddies that result are perfectly possible. That the information economy might end up destroying the very basis for its existence is within the realm of possibility.
Or perhaps this is perfectly compatible with capitalism, and the affected sectors are merely the cracks through which we can see the lie of laissez-faire throughout the rest of the economy. The government takes a heavy hand in constructing markets everywhere they exist.
But the point is that previous economic transformations weren’t tranquil evolutions, didn’t happen in a discrete moment. The social transformations that we today package under the rubric “capitalism” benefitted some, but came at terrible consequence to others. Those who stood to lose prestige, revenue, power, opposed these changes, frequently by violence. For them, capitalism wasn’t just social change, it was immoral. Ownership of property by those who did not directly fight for it, property as a transferable abstraction, rootlessness, equality among the classes, attacks upon the ancient privileges of nobility, the undermining of seigniorial obligation, the money economy, the violations of guild oaths, the codification of techne (craft), the insolence of entrepreneurs: these were violations of the moral order of society.
The practices that have grown up around the frictionlessness of the information economy’s core commodities are called piracy by the partisans of our present order. It is immoral. It is theft of property (property is here an analogy growing threadbare at the margins from being stretched too much). It is the collapse of the basis of prosperity. But how is a system of constant content theft to be harnessed to our system of material progress? I haven’t the foggiest notion. But capitalism too was a post hoc ideological construct. At the time it seemed like the end of the world. Remember that by the time Adam Smith wrote The Wealth of Nations, such processes were already far along. Smith wasn’t envisioning future pin factories: he was describing existent ones that he had recently visited.
Besides, if it is not within the scope of power to achieve these things, then the machinations of ideology do not matter. Ideology adapts. Moral and immoral will be renamed to accommodate the new arrangement of factors.
À la three posts ago (“Tithing for Metaphysics,” 23 July 2010), I was only using the James Webb Space Telescope as a pretext for a tirade on the political economy of big science and discovery being as much a product of labor and capital — just rarified forms — as other endeavors. The James Webb Space Telescope is starting to come together now and this unusual picture from NASA is getting a lot of play. Here are six out of the eighteen mirrors that will together comprise the main reflector of the telescope about to go into cryogenic testing at the Marshall Space Flight Center.
It’s worth noting here that science inadvertently results in a lot of images that could be considered as art, the various images generated by particle accelerators being a favorite here. It’s also worth noting that an independent review panel recently concluded that the project will go $1.5 billion over budget and run a year behind schedule, unless NASA comes up with $500 million more to get it back on schedule (Gupta, Sujata, “Over-Budget Telescope Threatens Other Projects,” New Scientist, 16 November 2010). That’s another $14.50 per taxpayer, bringing our total contributions up to $47.40 each: a small price to pay for photographs of infinity.
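For what it’s worth, a quick back-of-the-envelope check of those per-taxpayer figures (the taxpayer count below is simply what the quoted numbers imply, not an official statistic):

```python
# Back-of-the-envelope arithmetic implied by the figures quoted above.
overrun = 1.5e9          # projected overrun, in dollars
per_taxpayer = 14.50     # quoted share of the overrun per taxpayer
total_per_taxpayer = 47.40

implied_taxpayers = overrun / per_taxpayer                     # ~103 million
implied_total_cost = implied_taxpayers * total_per_taxpayer    # ~$4.9 billion

print(f"Implied taxpayer base: {implied_taxpayers / 1e6:.0f} million")
print(f"Implied total tab so far: ${implied_total_cost / 1e9:.1f} billion")
```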
Following on my previous post, my IRL friend Frank posted an excerpt from one of his class syllabuses that I think is a good, updated model of the role of the teacher (“A Note on My Teaching Philosophy,” Too Frank?, 29 August 2010):
I am much more interested in your ability to engage with such questions than I am in your ability to memorize series of facts. Unless specifically noted, you should feel free to consult your notes and texts for all assignments, including exams. Information is widely available. What is less common than access to information is the skill required to navigate, evaluate, curate, and interrogate that information. I am not here to dispense knowledge, but to facilitate learning.
I’m going to excerpt about fifty percent of David Frum’s review from this weekend’s New York Times Book Review (“Unhappy Days,” 5 September 2010, p. BR20):
Art historians tell us that photography revolutionized painting. Suddenly there was a better way of recording the physical appearance of things, and artists had to discover new purposes for brush and pigment. But for those living through the revolution, the process must have seemed more gradual. Long after the Impressionists and Cubists and Futurists, there must have been serious portraitists who continued to earn a living depicting brides on their wedding day or businessmen made good.
I kept thinking of those backward-looking artists all the way through Laura Kalman’s “Right Star Rising.” As a work of history about the Ford and Carter years, there is nothing seriously wrong with it. The facts are accurate, the writing is clear and the point of view is not tendentious. Once upon a time, such a book might have been useful to somebody.
But the question it raises — and it’s not a question about this book alone — is: What’s the point of this kind of history in the age of the Internet? Suppose I’m an undergraduate who stumbles for the first time across the phrase “Proposition 13.” I could, if I were minded, walk over to the university library, pull this book from the shelf and flip to the index. Or I could save myself two hours and Google it. I wouldn’t learn more from a Google search than I’d learn in these pages. But I wouldn’t learn a whole lot less either.
He gets a little more specific than this, gives a few examples, but that’s about all he has to say about the book. It’s nothing against Ms. Kalman; as Mr. Frum writes, “it’s not a question about this book alone.” The analogy to painting in the era of photography is apt. We live in a time in which our relation to information is changing. Problems of availability have, at least in the developed world, been for the most part solved. So, like the painter, how are information workers to make their way in this world?
I’m not going to wind this post up with some pat answer. I think that Mr. Frum is also correct in not making a teleologically overdetermined analogy. “For those living through the revolution, the process must have seemed more gradual,” he writes. Painters only found a post-photography life through protracted experimentation.
I think of Harold Bloom’s idea of the anxiety of influence as much more than a theory of poetry. In an age of mass information, all information workers labor under the anxiety of influence (Jimmy Wales is our Milton). No one should think that a publisher is going to cut down a few hundred acres of trees for more of the same.