Technology and the Profound, Part II: Apple’s Retort

A certain sector of nostalgic curmudgeons among us is driven to distraction by the fact that so much of people’s interpersonal communication and interaction with their environment today is mediated by mobile devices and web technologies. This annoyance that young people no longer interact in the time-honored ways is expressed in a number of criticisms: that they are anti-social, isolated, rude, sedentary, disengaged, aesthetically foreclosed, temporally scattered, attention deficient and consumed by trite distractions. Sherry Turkle, the dean of cellular woe, was tapped just last week by the New York Times to lament the presidential selfie. She goes so far as to suggest “device-free zones” as “sacred spaces” (“The Documented Life,” 16 December 2013, p. A25).

U.S. President Barack Obama, Danish Prime Minister Helle Thorning-Schmidt and U.K. Prime Minister David Cameron take a selfie at Nelson Mandela's memorial service, Johannesburg, South Africa, 10 December 2013, by Steve Harvey

Back in June I wrote about two popular memes expressing this dismay and posed Apple’s then-airing ad for the iPhone 5 as a corrective (“Technology and the Profound,” 18 June 2013). Watching the ad again now, I find it does address a number of these criticisms, but it is unclear whether the creators had something else in mind and the way the commercial’s episodes line up with the criticisms is merely inadvertent; whether their response was deliberate but very subdued; or whether it was something in between: a general awareness of some negative perceptions of their product and an attempt to show the iPhone in a sentimental, social, generative light without explicitly answering the critics.

With their new Christmas advertisement there’s no mistaking it: Apple is using its Madison Avenue genius to engage this debate directly. And for all its emotional delicacy, it’s quite a salvo.

We are presented with exactly the teen that critics of our technological mediation obsession portray: bored, disengaged, one hand never free, constantly removing himself from important family events to fiddle with his device. But then, a third of the way through the commercial, the reveal: all those moments when he wouldn’t put down his phone, when he dropped out of family events, he was actually making a very personal video Christmas card for the entire family. We now rewatch all the moments from the first part of the commercial from a new perspective — new both in the positioning of the camera and in our understanding of what’s going on. As the mise en abyme — our protagonist’s video within the video — ends, he signs off with a bit of footage of himself, the teenage veneer of boredom now replaced by an unselfconscious, sheepish happiness and pride. The title of the commercial is even “Misunderstood”. Not only is the teenager’s experience misunderstood by the adults around him, not only are his actions misunderstood, but here is the cutting edge of this play of soft light and sentimentality: the title is not just descriptive of the events of the commercial; it is outwardly directed, an accusation against the critics of these technologies, with the events of the ad as its argument: you misunderstand what we are doing with these technologies; you mischaracterize the effects they are having on us.

I’ll add a personal story here, lest you write Apple’s commercial off as a contrivance of corporate propaganda. Toward the end of my college years I attended a birthday party. It began, as such things often do, as a late-afternoon backyard cookout. But just after sunset one of the organizers brought out a slide projector and for about a half-hour told stories and played music while projecting photographs from the recent life of our honoree onto the white wooden siding of the broad side of the house. It was beautiful and sentimental and poignant and really funny and just a wonderful celebration of this person — so much more so than had we just stood around in the yard eating hot dogs and getting slightly buzzed and then gone through the heavily scripted song-and-cake ritual of birthdays. In other words, had we all only lived in the moment, it would have been just another meaningless collegiate afternoon. It was precisely the documentary consciousness and all those interruptions over the years and the need to share and the clever exhibitionism and the devices that created that evening’s sacred space, with its deeply focused consciousness, its break from the ordinary, its reflection and appreciation.

This was the late 1990s, so the technologies of this presentation were the old ones: gelatin emulsion film, shoebox archives dug through over the course of weeks, order forms printed on the backs of envelopes, photo-developing booths isolated in the middle of shopping plaza parking lots, cardboard-mounted diapositives, that beige slide projector with the torus of black slide slots protruding from the top. So the documentary intrusions were fewer, the pace of production and archiving less frenetic, the sharing less ubiquitous. But also less of the life was available; there was no parallax view; the required bravery of the performance was greater (a slideshow!? so hipster).

This is what I really like about Steven Johnson’s response to Nicholas Carr’s book, The Shallows: What the Internet Is Doing to Our Brains (“Yes, People Still Read, but Now It’s Social,” The New York Times, 20 June 2010, p. BU3): Johnson frankly concedes that yes, we are losing something. But loss is not the entirety of the transformation. We are also gaining something. And neither the loss of the detractor nor the gain of the enthusiast is to be weighed in isolation. The proper debate is: is what we have gained worth what we have lost?

Flaming December

Annie Leibovitz's December 2013 Vogue cover, Jessica Chastain as Frederic Leighton's Flaming June

I love Pre-Raphaelite women and I love homage, parody, covers, remix, mashup, etc., so I’m crazy for Annie Leibovitz’s photograph of Jessica Chastain as Frederic Leighton’s Flaming June (1895) for the cover of the December 2013 Vogue. The Vogue website has a slideshow counterposing the photo shoot with the inspirational source material.

This isn’t the first time I’ve posted about Annie Leibovitz and Vogue spoofing some vintage design. Perhaps the most popular post on this blog (not because anyone is interested in the rambling post; mostly because people are ripping off the image) is “Destroy This Mad Brute,” an analysis of the meaning of LeBron James and Gisele Bündchen mimicking an anti-old Europe First World War U.S. propaganda poster.

Through the Google Looking Glass

Back in 2008 I got really excited about life logging (“Life Logging: It’s All About the Metadata,” 11 September 2008; “The End of the Era of Orphanage,” 15 September 2008). I had all these visions of the things we could do if we were to dramatically increase capture. Now that Google Glass is close to bringing this geek fantasy to mass consumption, commentators are beginning to see the malign potential of such a technology. Mark Hurst of Creative Good imagines the transformation of the public that could result from such a degree of capture (“The Google Glass Feature No One is Talking About,” 28 February 2013):

The Google Glass feature that (almost) no one is talking about is the experience — not of the user, but of everyone other than the user. A tweet by David Yee introduces it well:

There is a kid wearing Google Glasses at this restaurant which, until just now, used to be my favorite spot.

The key experiential question of Google Glass isn’t what it’s like to wear them, it’s what it’s like to be around someone else who’s wearing them. I’ll give an easy example. Your one-on-one conversation with someone wearing Google Glass is likely to be annoying, because you’ll suspect that you don’t have their undivided attention. And you can’t comfortably ask them to take the glasses off (especially when, inevitably, the device is integrated into prescription lenses). Finally — here’s where the problems really start — you don’t know if they’re taking a video of you.

Now pretend you don’t know a single person who wears Google Glass … and take a walk outside. Anywhere you go in public — any store, any sidewalk, any bus or subway — you’re liable to be recorded: audio and video. Fifty people on the bus might be Glassless, but if a single person wearing Glass gets on, you — and all 49 other passengers — could be recorded. Not just for a temporary throwaway video buffer, like a security camera, but recorded, stored permanently, and shared to the world.

Now, I know the response: “I’m recorded by security cameras all day, it doesn’t bother me, what’s the difference?” Hear me out — I’m not done. What makes Glass so unique is that it’s a Google project. And Google has the capacity to combine Glass with other technologies it owns.

First, take the video feeds from every Google Glass headset, worn by users worldwide. Regardless of whether video is only recorded temporarily, as in the first version of Glass, or always-on, as is certainly possible in future versions, the video all streams into Google’s own cloud of servers. Now add in facial recognition and the identity database that Google is building within Google Plus (with an emphasis on people’s accurate, real-world names): Google’s servers can process video files, at their leisure, to attempt identification on every person appearing in every video. And if Google Plus doesn’t sound like much, note that Mark Zuckerberg has already pledged that Facebook will develop apps for Glass.

Finally, consider the speech-to-text software that Google already employs, both in its servers and on the Glass devices themselves. Any audio in a video could, technically speaking, be converted to text, tagged to the individual who spoke it, and made fully searchable within Google’s search index.

Now our stage is set: not for what will happen, necessarily, but what I just want to point out could technically happen, by combining tools already available within Google.

Let’s return to the bus ride. It’s not a stretch to imagine that you could immediately be identified by that Google Glass user who gets on the bus and turns the camera toward you. Anything you say within earshot could be recorded, associated with the text, and tagged to your online identity. And stored in Google’s search index. Permanently.

I’m still not done.

The really interesting aspect is that all of the indexing, tagging, and storage could happen without the Google Glass user even requesting it. Any video taken by any Google Glass, anywhere, is likely to be stored on Google servers, where any post-processing (facial recognition, speech-to-text, etc.) could happen at the later request of Google, or any other corporate or governmental body, at any point in the future.

Remember when people were kind of creeped out by that car Google drove around to take pictures of your house? Most people got over it, because they got a nice StreetView feature in Google Maps as a result.

Google Glass is like one camera car for each of the thousands, possibly millions, of people who will wear the device — every single day, everywhere they go — on sidewalks, into restaurants, up elevators, around your office, into your home. From now on, starting today, anywhere you go within range of a Google Glass device, everything you do could be recorded and uploaded to Google’s cloud, and stored there for the rest of your life. You won’t know if you’re being recorded or not; and even if you do, you’ll have no way to stop it.

And that, my friends, is the experience that Google Glass creates. That is the experience we should be thinking about. The most important Google Glass experience is not the user experience — it’s the experience of everyone else. The experience of being a citizen, in public, is about to change.
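
Part of what makes Hurst’s scenario so unsettling is how little it asks of the technology. Just to make the mechanics concrete, here is a minimal sketch of the pipeline he describes: uploaded clips, face recognition against a real-name identity database, speech-to-text, a permanent searchable index. Everything in it is a hypothetical stand-in (the function names, the data, the matching logic), not any real Google API; it is only meant to show how few moving parts the scenario needs.

```python
# Hypothetical sketch of the pipeline Hurst describes. None of this is a real
# Google API; the "recognition" and "transcription" are stubs standing in for
# the server-side processing he imagines.
from dataclasses import dataclass


@dataclass
class Clip:
    face_signature: str   # stand-in for whatever face recognition extracts
    audio: str            # stand-in for the clip's audio track
    location: str
    timestamp: str


@dataclass
class Sighting:
    person: str           # real-world identity matched from the face
    location: str
    timestamp: str
    transcript: str       # what the person was heard saying


def recognize_face(signature, identity_db):
    # Stand-in for matching a face against a real-name identity database.
    return identity_db.get(signature)


def speech_to_text(audio):
    # Stand-in for server-side transcription; here it just passes text through.
    return audio


def process_clip(clip, identity_db, index):
    # Post-process one uploaded clip and store the result permanently.
    person = recognize_face(clip.face_signature, identity_db)
    if person is not None:
        index.append(Sighting(person, clip.location, clip.timestamp,
                              speech_to_text(clip.audio)))


def search(index, person):
    # Everything a given person was ever recorded saying, and where.
    return [s for s in index if s.person == person]


# The bus ride: one Glass wearer, one overheard remark, one permanent record.
identity_db = {"face:alice": "Alice Example"}
index = []
process_clip(Clip("face:alice", "I'll be out of town next week",
                  "M4 bus, 16th St NW", "2013-03-01T08:15"), identity_db, index)
print(search(index, "Alice Example"))
```

Scale that loop across millions of headsets and Google-sized storage and you have Hurst’s bus ride.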

I guess I will temper my enthusiasm for life logging accordingly.

Martin van Creveld’s Simulacra and Simulation

Here is Martin van Creveld in The Transformation of War (1991) essentially agreeing with Jean Baudrillard that not just the Gulf War but nearly every conflict of the post-nuclear era did not take place:

One factor affecting conventional war as waged by both the super-powers and, increasingly, by other countries, is that nuclear weapons make their dampening effect felt in such wars even when nobody threatens their use. As a result, the United States for one has only been able to employ its conventional armed forces in cases where its vital interests were not at stake. The war fought in Korea, a small appendix of Asia several thousands of miles away, provides an excellent case in point. The American Chiefs of Staff recognized this even at that time, emphasizing the fact that the really significant areas were Japan and the Philippines. The same also applied to Lebanon (1958), Vietnam (1964-72), the Dominican Republic (1965), Cambodia (1972-75), Lebanon (1983), and the Persian Gulf (1987-88). Looking back, so microscopic were the stakes for which GI’s were supposed to shed their blood that most of the cases could hardly even be explained to the American people. On occasions such as the Mayaguez Affair (1975) and Grenada (1983), so puny were the opponents against which American forces pitted themselves that hostilities took on a comic-opera character. (p. 14)

In the convoluted logic of the post-nuclear world, if a state goes to war, it is prima facie because the objective is not a vital national interest. Any interest that is actually vital would involve levels of determination that are simply too dangerous to test. Vital national interests are those for which, in other ages, states were willing to pay prices that can no longer be afforded in an era of total annihilation.

If You Can’t Beat ‘Em, Squelch ‘Em

There are two major points — one lesser, one greater — to Neal Stephenson’s In the Beginning … was the Command Line (Wikipedia | Amazon). The lesser point serves to feed the superiority complex of computer geeks, namely that people who work closer to the machine are more awesome than people who work with their machines through layers of mediation. The greater point is that maintaining a high degree of control of the devices that shape our lives is a critical element of freedom in the information age.

It’s not about turning your nose up at GUIs and other user-friendly efforts in favor of arcane monochrome text interfaces. The point is that when you cede control of the devices that constitute your environment — that serve as the basis of your personal capabilities — when you cede these to manufacturers, marketers, designers, content providers, legislators, then the limits they seek to impose become your limits as well.

It is in extending and deepening this point that I think Cory Doctorow’s talk, “The Coming War on General Computation,” is so important (28th Chaos Communication Congress, Berliner Congress Center, Berlin, Germany, 27 December 2011).

You should definitely watch the whole thing: it’s entertaining as well as one of the most cogent talks I’ve heard in some time. To me, two of his points stand out:

1. The never-ending desire for a certain kind of ease of use that comes through circumscribed functionality is an invitation to, a kind of lazy collusion with, the likes of Apple who are more than happy to sell you a device hobbled in a way that maximizes corporate returns (the walled garden):

So today we have marketing departments who say things like “we don’t need computers, we need … appliances. Make me a computer that doesn’t run every program, just a program that does this specialized task, like streaming audio, or routing packets, or playing Xbox games, and make sure it doesn’t run programs that I haven’t authorized that might undermine our profits”. And on the surface, this seems like a reasonable idea — just a program that does one specialized task — after all, we can put an electric motor in a blender, and we can install a motor in a dishwasher, and we don’t worry if it’s still possible to run a dishwashing program in a blender. But that’s not what we do when we turn a computer into an appliance. We’re not making a computer that runs only the “appliance” app; we’re making a computer that can run every program, but which uses some combination of rootkits, spyware, and code-signing to prevent the user from knowing which processes are running, from installing her own software, and from terminating processes that she doesn’t want. In other words, an appliance is not a stripped-down computer — it is a fully functional computer with spyware on it out of the box.

2. Media copyright is just the tip of the iceberg when it comes to the incentive corporations have to turn to political-legislative means to prevent the disruption of their business models by technological change:

And even this is a shadow of what is to come. After all, this was the year in which we saw the debut of open sourced shape files for converting AR-15s to full automatic. This was the year of crowd-funded open-sourced hardware for gene sequencing. And while 3D printing will give rise to plenty of trivial complaints, there will be judges in the American South and Mullahs in Iran who will lose their minds over people in their jurisdiction printing out sex toys. The trajectory of 3D printing will most certainly raise real grievances, from solid state meth labs, to ceramic knives.

And it doesn’t take a science fiction writer to understand why regulators might be nervous about the user-modifiable firmware on self-driving cars, or limiting interoperability for aviation controllers, or the kind of thing you could do with bio-scale assemblers and sequencers. Imagine what will happen the day that Monsanto determines that it’s really… really… important to make sure that computers can’t execute programs that cause specialized peripherals to output organisms that eat their lunch… literally. Regardless of whether you think these are real problems or merely hysterical fears, they are nevertheless the province of lobbies and interest groups that are far more influential than Hollywood and big content are on their best days, and every one of them will arrive at the same place — “can’t you just make us a general purpose computer that runs all the programs, except the ones that scare and anger us? Can’t you just make us an Internet that transmits any message over any protocol between any two points, unless it upsets us?”

The way to think of all of this is as akin to the transition from feudalism to capitalism. There’s no reason to think that an information economy will be just more capitalism (to think so is to subscribe to capitalism as the end of history). That a growing list of industries faces disruption on a scale where it’s hard to see their business models surviving in the absence of ever-escalating state measures to construct markets that would otherwise fail (a point well made by Mr. Doctorow with his wheels analogy) suggests a significant incompatibility between capitalism and the information economy.

The retort of the defender of capitalism here would be that the information economy is a creature of capitalism — without chip fabricators and integrators and intellectual property and venture capital and server farms, the information economy doesn’t happen. But of course the feudal baron would have said the same of the capitalist upstart. History is a realm of contingency. It is not a logical system. Contradictions — and the deleterious eddies that result — are perfectly possible. That the information economy might end up destroying the very basis for its existence is within the realm of possibility.

Or perhaps this is perfectly compatible with capitalism, and the affected sectors are merely the cracks through which we can see the lie of laissez-faire throughout the rest of the economy. The government takes a heavy hand in constructing markets everywhere they exist.

But the point is that previous economic transformations weren’t tranquil evolutions, didn’t happen in a discrete moment. The social transformations that we today package under the rubric “capitalism” benefitted some, but came at terrible consequence to others. Those who stood to lose prestige, revenue, power, opposed these changes, frequently by violence. For them, capitalism wasn’t just social change, it was immoral. Ownership of property by those who did not directly fight for it, property as a transferable abstraction, rootlessness, equality among the classes, attacks upon the ancient privileges of nobility, the undermining of seigniorial obligation, the money economy, the violations of guild oaths, the codification of techne (craft), the insolence of entrepreneurs: these were violations of the moral order of society.

The practices that have grown up around the frictionlessness of the information economy’s core commodities are called piracy by the partisans of our present order. It is immoral. It is theft of property (property is here an analogy growing threadbare at the margins from being stretched too much). It is the collapse of the basis of prosperity. But how is a system of constant content theft to be harnessed to our system of material progress? I haven’t the foggiest notion. But capitalism too was a post hoc ideological construct. At the time it seemed like the end of the world. Remember that by the time Adam Smith wrote The Wealth of Nations, such processes were already far along. Smith wasn’t envisioning future pin factories: he was describing existent ones that he had recently visited.

Besides, if it is not within the scope of power to achieve these things, the machinations of ideology do not matter. Ideology adapts. Moral and immoral will be renamed to accommodate the new arrangement of factors.

Obama’s Debord-ian Dog Whistle

I have previously suggested that in not releasing the photographs of Osama bin Laden’s body, President Obama was deliberately seeking to break out of the logic of bin Laden and the Bush Administration’s war of dueling spectacles (“World History, As Pantomimed in the Facial Expressions of Hillary Clinton,” 20 October 2011). I made this suggestion somewhat farcically. Has President Obama set himself against the spectacle? Effectively he may have — and that’s intriguing in itself — but has he done so consciously, intentionally? Has the President read Guy Debord’s Society of the Spectacle? Or does the President have some ideas whose provenance is unknown to him? Are French Marxist theories of capitalist propaganda and false consciousness influencing U.S. strategy in the war on terrorism? It doesn’t even rise to the level of surmise.

But then last night I was listening again to then-candidate Senator Obama’s “A More Perfect Union” speech (Wikipedia | YouTube), delivered in response to the Reverend Jeremiah Wright controversy (National Constitution Center, Philadelphia, Pennsylvania, 18 March 2008), where he says the following about racial controversies (starting at 28:56 in the video):

For we have a choice in this country. We can accept a politics that breeds division, and conflict, and cynicism. We can tackle race only as spectacle — as we did in the OJ trial — or in the wake of tragedy, as we did in the aftermath of Katrina — or as fodder for the nightly news. We can play Reverend Wright’s sermons on every channel, every day and talk about them from now until the election, and make the only question in this campaign whether or not the American people think that I somehow believe or sympathize with his most offensive words. We can pounce on some gaffe by a Hillary supporter as evidence that she’s playing the race card, or we can speculate on whether white men will all flock to John McCain in the general election regardless of his policies.

We can do that.

But if we do, I can tell you that in the next election, we’ll be talking about some other distraction. And then another one. And then another one. And nothing will change.

Two points:

  1. In common use, the word “spectacle” takes an indefinite article. People who aren’t invoking the theories of Debord would say, “We can tackle race only as a spectacle.” “Spectacle” used without an article, or “the spectacle,” with the definite article treating it almost as a proper noun, is how people with Debord on the brain use the word.

  2. As a brief explanation of the machinations of the spectacle, one that appeals to common language and experience, this is not bad.

At this point I think it is within the realm of possibility that President Obama has consciously and intentionally set himself against the spectacle. Of course President Obama is not a radical, but a meliorist and an incrementalist (“The arc of the moral universe is long, but it bends toward justice”). He is not about to explode the spectacle tomorrow. Audacity is apparently formal; it is for hope, ambition, dreams. Small opportunistic (almost Clintonian) victories are for real-world policy. But I think when the opportunity presents itself, President Obama does seek to reject and counteract the logic of the spectacle.

I don’t want to throw fuel on the fire of the right-wing illuminati — this is like my own little D’Souza-esque conspiracy theory — but how strange would it be if President Obama were engaged in covert acts of sublimated high philosophy, if the ideas of Guy Debord were actually influencing the President’s thinking about strategy in the war on terrorism and cultural narratives in the media? If the U.S. had actually, explicitly (at least in the mind of the Commander-in-Chief) broken with the logic of dueling spectacles and moved on to — I don’t know what — something else, it would be as if Nietzsche’s “the greatest thoughts are the greatest events” (Beyond Good and Evil §285) were playing out right here in the capital today.

World History, As Pantomimed in the Facial Expressions of Hillary Clinton

The holiday from history ends and the war on terrorism begins with the spectacle of September 11th. The Bush administration decided to make dueling spectacles of the war on terrorism when it opened the war on Iraq with “shock and awe”. The logical conclusion of the first major arc of the war on terrorism would have been the spectacle of Osama bin Laden’s bloodied corpse, but President Obama decided to deny the world that spectacle. That bookend to the war on terrorism would remain unconceptualized in the spectacle (Barack Obama is “the first Jewish president”).

What we got instead of the image of the death of Osama bin Laden was the image of the death of Osama bin Laden reflected on the face of Secretary of State Hillary Clinton.

Hillary Clinton witnessing the death of Osama bin Laden, Situation Room, the White House, 1 May 2011

Secretary Clinton has tried to wave this image off, saying, “I am somewhat sheepishly concerned that it was my preventing one of my early spring allergic coughs. So, it may have no great meaning whatsoever.”

Is this the Clintonian reflex, or Obama’s postmodern commitment to non-representation and non-meaning? Or maybe it was a yawn?

She should own this moment: it’s one of the most amazing and iconic images to come out of the war on terrorism. And she is turning the office of Secretary of State into the U.S.’s emotional barometer.

Today, when Libyan rebels managed to locate and kill Muammar Gaddafi, one of the first vectors of the story was the moment when, while preparing for a series of pool interviews in Kabul, Afghanistan, Secretary Clinton was handed a BlackBerry with the news. Again, no image of the event itself, but the event reflected in Hillary Clinton’s reaction.

Hillary Clinton reacts to news of Muammar Gaddafi's capture, Kabul, Afghanistan, 20 October 2011

It’s like world history meets Andy Warhol’s Blow Job (Wikipedia | YouTube).

Also of note, that baby bump just over Secretary Clinton’s right shoulder is Deputy Chief of Staff Huma Abedin, wife of Anthony Weiner.

Information Work in an Era of Information Abundance

I’m going to excerpt about fifty percent of David Frum’s review from this weekend’s New York Times Book Review (“Unhappy Days,” 5 September 2010, p. BR20):

Art historians tell us that photography revolutionized painting. Suddenly there was a better way of recording the physical appearance of things, and artists had to discover new purposes for brush and pigment. But for those living through the revolution, the process must have seemed more gradual. Long after the Impressionists and Cubists and Futurists, there must have been serious portraitists who continued to earn a living depicting brides on their wedding day or businessmen made good.

I kept thinking of those backward-looking artists all the way through Laura Kalman’s “Right Star Rising.” As a work of history about the Ford and Carter years, there is nothing seriously wrong with it. The facts are accurate, the writing is clear and the point of view is not tendentious. Once upon a time, such a book might have been useful to somebody.

But the question it raises — and it’s not a question about this book alone — is: What’s the point of this kind of history in the age of the Internet? Suppose I’m an undergraduate who stumbles for the first time across the phrase “Proposition 13.” I could, if I were minded, walk over to the university library, pull this book from the shelf and flip to the index. Or I could save myself two hours and Google it. I wouldn’t learn more from a Google search than I’d learn in these pages. But I wouldn’t learn a whole lot less either.

He gets a little more specific than this, offers a few examples, but that’s about all he has to say about the book. It’s nothing against Ms. Kalman — as Mr. Frum writes, “it’s not a question about this book alone.” The analogy to painting in an era of photography is apt. We live in a time in which our relation to information is changing. Problems of availability have — at least in the developed world — for the most part been solved. So, like the painters, how are information workers to make their way in this world?

I’m not going to wind this post up with some pat answer. I think that Mr. Frum is also correct in not making a teleologically overdetermined analogy. “For those living through the revolution, the process must have seemed more gradual,” he writes. Painters only found a post-photography life through protracted experimentation.

I think of Harold Bloom’s idea of the anxiety of influence as much more than a theory of poetry. In an age of mass information, all information workers labor under the anxiety of influence (Jimmy Wales is our Milton). No one should think that a publisher is going to cut down a few hundred acres of trees for more of the same.

Vanity Fair

David Schmader’s description of Vanity Fair (“On Spoofing GOOP,” SLOG, The Stranger, 8 September 2009):

Vanity Fair, which was designed for me by God, who forced People and The New Yorker to have a baby, then swaddled the results in ambitiously art-directed fashion ads. It is one of America’s great narcotics …

I think that Vanity Fair is the best magazine in the U.S. today, a perfect combination of superficiality and seriousness, playfulness and piety.

Infinite Summer Informal-Irregular Get-Together IV

Infinite Summer Informal-Irregular Get-Together IV, Mr. Henry's, Capitol Hill, Washington, D.C., 6 August 2009

6 August 2009, Mr. Henry’s, Capitol Hill, Washington, D.C.: Infinite Summer Informal-Irregular Get-Together IV to discuss David Foster Wallace’s Infinite Jest. Clockwise from 12:00: typicalsquirrel, Miruna Stanica / Rrose Selavy, Daniel Ginsberg / NemaVeze, the photographer (off frame), Sarah Webster, Quinn Norton (blog | twitter | Wikipedia), Matt Dickerson / piscivorous. Oh, that’s right, piscivorous, you weren’t there.