The Future of Economics: The Arational and the Irrational

Back in 2000 The Economist ran an article titled “The Future of Economics” (4 March 2000, p. 80). It was largely a gloss on a symposium on the same subject from the Journal of Economic Perspectives (vol. 14, no. 1, Winter 2000). The authors acknowledged that economics was a faltering field. Setting aside the proposition that economics may simply have run its course and entered its dotage of diminishing returns, the article considers two possibilities for a way forward:

David Colander of Middlebury College, in an article that looks back on the present from an imagined 2050, blames the current discontent on the orthodox general-equilibrium model that underlies most of today’s economic theory. He favors a shift from the current approach, which has been called “loose-fitting positivism” (propose a model consistent with standard assumptions, then test it), to one based on “loose-fitting pragmatism” (forget about canonical principles, just search for patterns in the data).

Such an approach, he says, would be consistent with “the rise of complexity science within the scientific community generally.” Researchers sitting at their computers, subjecting data to a withering barrage of statistical analysis, would still hope to come up with laws of a sort, or regularities at any rate. But these “laws” would be regarded as provisional and ever-shifting: indeed, the claim is that changeless underlying patterns do not exist. Complex systems expand and evolve; even at the most fundamental level, these patterns are temporary. But whether this approach could still be called “economics” is debatable.

The second approach is much easier to reconcile with traditional methods. Its most celebrated exponent is Richard Thaler of the University of Chicago, who has also written a paper for the symposium. Mr. Thaler agrees that the canonical principles of orthodox theory have led economics astray, but he believes these mistakes can be put right. He seeks, in other words, a tighter-fitting positivism. You improve the fit above all, he would argue, by putting a more realistic account of human cognition at the center of the theory.

Orthodox theory famously assumes that people are rational. In reality, they are not. On the other hand, they are not crazy, or crassly incompetent — in other words, their behavior is not random. If economics could try harder to recognize that people try to be rational, but in certain, often predictable, ways fail to be, the positivist approach would have a better foundation. In essence, what Mr. Thaler calls for is a marriage, or at least much closer cohabitation, between economics and psychology.

I have thought of this article frequently since first reading it in 2000. Given the spate of books along these lines, especially the second approach, I’d have to say that this was one of the more perspicacious articles that I’ve ever read.

The first approach is an example of Petabyte Age type thinking, eight years before Wired put it on the cover. But of course it is an idea that had to incubate in the rarefied world of advanced theoreticians for years before any eruption into the popular consciousness. The main offering in this area would be Steven Levitt and Stephen Dubner’s Freakonomics (2005), though their book is not so much a fully atheoretic inquiry as a putting of large questions to the test of large data sets. More to the topic would be Ian Ayres’s Super Crunchers: Why Thinking-by-Numbers Is the New Way to Be Smart (2007), though the fact that Mr. Ayres used the very methods he describes in his book to arrive at a title casts a great deal of doubt on the soundness of said methods.

As for the “build a better model of the economic corpuscles” approach, it seems to have advanced far enough that it is now also ready to be packaged up for mass consumption. And of course the psychologists have much more lucrative options in publishing than the mathematicians and computer scientists.

Judging by some of the key phrases in the Economist article (the predictably irrational stuff) I was pretty sure that they had in mind Dan Ariely’s thinking, published as Predictably Irrational (2008), but it turns out that Richard Thaler is, along with Cass Sunstein, the author of Nudge (2008). Rounding out the most omnipresent trio is Tim Harford’s The Logic of Life: The Rational Economics of an Irrational World (2008). Also on the list of offerings along this line would be Ori and Rom Brafman’s Sway: The Irresistible Pull of Irrational Behavior (2008) and Michael Shermer’s The Mind of the Market: Compassionate Apes, Competitive Humans, and Other Tales from Evolutionary Economics (2007).

So that’s the future of economic study: either discount human rationality in favor of the system effects of irrationality, or let rationality drop out altogether in favor of the system effects of the economic thing-in-itself.

Life Logging: It’s All About the Metadata

Yes, yes, I agree with John that much of what you presently see that might fall under the rubric of life logging is either boring or pretentious or pornography. I really can’t even make it through the cream of such stuff, say, BloggingHeads. As for solipsism, there’s no sense in complaining: that’s our inevitable future. Suburbanization, materialism, the cultural conditions of capitalism et al. are merely the low tech predecessors to the coming introversion. But look past what it is today to the potential that it holds.

Don’t just imagine me sitting at home eating deli potato salad, watching on a screen as Frank sits at home web browsing and eating a microwave burrito. One person’s life log, 24/7: not so interesting. But let’s cut it up and remix it. Imagine if everyone’s life stream were well marked up with metadata. It’s all timestamped and geotagged. Face recognition software tags all the people; perhaps place recognition software adds even more specific location data (H.M.S. Victory instead of just 50° 48′ North Latitude, 1° 06′ West Longitude). All conversations are parsed through speech-to-text and indexed. Stats on SIPs (statistically improbable phrases) are tallied. User tags are attached to supplement the raw machine indexes. Viewer ratings and hit counts are recorded so we have some measure of quality or import. Now we’re ready for some serious use. (A rough sketch of what one such marked-up record might look like follows the list.) And what will that use consist of? Probably more than I can conceive, but just to toss off a few ideas:

  1. Hindsight is 20/20. There’s really little problem determining in retrospect what was important and what was not. The problem is having the foresight to know what’s important before the fact and being at the ready to capture it. If the technology is there (practically limitless storage), then dispense with the hard part of being clairvoyant about impending events and just record everything. We can edit it down later. And with no pressing limit, why not make it much, much later? Or why bother editing at all? In an earlier incarnation along this path, my thought was that what I wanted was complete sense data capture with, say, a ten-minute buffer so that I could go back and edit down. But when the trend in data storage struck me, I thought: why trade precious time for picayune space?

    But actually hindsight is not 20/20. It only seems so under the sway of dogma. Really the past is inscrutable. There’s almost no telling what revaluation the endless mulling of the past might produce. In the perennial purges to which the raging simplifiers are prone, the data trails of alternate narratives are eliminated. What seems inconsequential from one perspective is everything from another. The meager holdings of a peasant’s hovel, junk according to the great man theory of history, become the stuff of grand narrative at the hands of the archeologist. Who is to say what trend, indiscernible to us in the present, will grow to world-historical proportions in the future, or, for that matter, what minutiae will obsess future generations?

  2. If you build it they will come. One of the interesting phenomena of the budding age is the growing degree of unintended consequences. If you’ve got something, even something unrefined, then put it out there and a bunch of content scavengers will come along with a mash-up of some sort and put it to a heretofore unanticipated good use. One person’s trash is another person’s treasure. What do we do with all that stuff? I don’t know, but my not knowing is not sufficient. Someone else knows. And that right there is a solid gold law of the Internet age. In a system of synergy, where the whole is greater than the sum of its parts, even inert garbage is a contribution to the potential of the system.

  3. Rashomon. Human recall is notoriously unreliable. If you have five witnesses, you have five different versions of events. Life logging may bring some element of objectivity to things. And once you’ve set aside trying to figure out when to turn the recorder on and when to leave it off, catching the unexpected is less of a problem. Just think how much better When Animals Attack or Destroyed in Seconds television programming we’ll have.

  4. Audience. There is, of course, the blatant issue of audience. Who do we log our lives for? As S. e-mailed me,

    To the right audience, there is value. I would give a lot for the ability to look at a few moments (any moments) of the world from my eyes as a second grader. Or a few moments from the eyes of my great-great-grandmother.

    Maybe my two year old self is not of any interest to strangers, but to my parents, to myself, to my children, my great grand children, it would be more valuable than the Zapruder film, the Omaha beach photographs, anything. As a man with a first baby on the way, I anticipate a wholesale reevaluation of your opinion as to what’s dull and forgettable and what important and in need of preservation.

    And per observation number one, the audience is subject to change over time. If that second grader grows up to be an insurance salesman, then maybe interest remains confined to family. If he grows up to be the next Einstein or the next Hitler, then the audience becomes much larger and how valuable all those things are changes vastly.

  5. The human sciences. Imagine just a few of the questions that the sociologist, the historian, the linguist, the political scientist, the antiquarian might be able to address with such a wealth of information at their disposal. The study of linguistic and meme evolution, presently confined to source material consisting of that most contrived medium, writing, would have a plethora of data. If nothing else, the study of nonce words would be revolutionized. Or think what it would do for the preservation of dying languages and cultures. They could be preserved as they were lived, not as a graduate anthropology student’s field notes. As linguistic analysis tools become more sophisticated, the empirical study of the structure of belief, moral practice and reasoning would become possible without the taint of self-consciousness interfering. Perhaps rhetoric would become a hard science. Historians have shifted their interest from great man and political history to people’s or cultural history, but prior to the fairly recent past, there’s almost nothing for them to go on. For developments in culinary practice, foods, cooking and eating tools, historians have to turn to paintings of banquets and study what’s on the table. What furnishings could you expect to find in a peasant’s house in the thirteenth century? Almost a complete mystery. There is worth in the preservation of the quotidian.

  6. Searching and Sorting. Increasingly we will search and sort by ostension. And the join between me and what I’m looking for is other people. It’s Petabyte Age analytics applied to the issue of human interest. People are too complicated for a theory that delves into the internals, so just engage in atheoretic pattern matching, one person to another. This was damn near the first thing that I wrote about as a blogger (see the “theoretical discussion” of my “Inaugural Post,” 21 June 2004).

    Information isn’t just produced and distributed (as if distribution was unproblematic). It’s vouched for, it’s filtered, it’s branded, it’s packaged with other information, it’s marketed and it’s mapped into a network. As the traditional means of these functions — newspapers, magazines, books, television stations — break down, they are being replaced by newer, more individualized methods. It used to be that a person would turn to their established sources — The New York Times, CNN, Cambridge University Press, et cetera. The editors at these institutions served the role of guaranteeing the veracity of information, of assembling a package of information desirable to a certain market segment, of providing the correct admixture of variability. But these were rather dumb packages aimed at gigantic demographics: the readership of The New York Times or the listeners of NPR. With the tools that the information age is making available, people are able to cut out the editor and design their own customized, unique information agglomerations.

    There is so much going on out there in the world that I could never keep up on it all, so I rely on intermediaries. And really I don’t want to keep up on everything. I want intermediaries who are like me and so filter in things similar to the ones that I would select myself. But I don’t want people too much like me. I want some variety and I want to be exposed to new and unanticipated things. But not too much variety. There are some things that I’m absolutely not interested in. I want people who are different enough to introduce variety, but still sufficiently similar to introduce the right kind of variety. Specifying this in an abstract way is extremely difficult. What if you had to make up a list of tags or SIPs that you wanted to see? Could you think of them all? Do you have the time to peruse the Library of Congress subject catalog? And the problem of variety is that of an unknown unknown: most of the variety that I want is stuff in which I don’t yet know that I’m interested. To define this explicitly would be a chore and one that I probably couldn’t do very well through traditional means, so I do so by ostension.

    And the way to do this is with a personally determined network of trust relationships. I subscribe to RSS feeds, I follow certain bloggers, I read my Facebook News Feed, I add people to my “interesting library” list on LibraryThing, I trust people in recommendation systems like Amazon.com. Their purview becomes an extension of my own. Each node in my network of associations becomes like a radar picket, seeing out from one horizon to the next. They become my agents, recommending back to me the things in which I might be interested, in exchange for others doing the same for them.

    It’s an extension of what’s always gone on. People have always got together and swapped information. They’ve relayed news and gossip, passed on leads on cheap produce, swapped how-to tips. In the past it was rather easy to find people who were into what you were into because there simply wasn’t that much to be into. There weren’t many games, there wasn’t much by way of entertainment, there were fewer hobbies, there weren’t as many job opportunities because the scope of economic activity was narrower, the publishing industry was small. But just as our culture fractures into ever more narrow segments, so our ability to reach out broadens. Our capability to establish such relationships is no longer confined to our immediate surroundings and our geographic neighbors. It now extends over the globe and to our ideologically proximate neighbors.

    But if we are to apply Petabyte Age analytics to people, first what we require are the petabytes. In order for other people to serve an ostensive role, they have to make their information available: what they are doing, what events they are attending, what they are reading, what they are watching, what they are purchasing, what they think about all these things. Only then can one start to make determinations about whose life signature to include as part of one’s search criteria, and only then do those people produce the information to be drawn into the vortex that is you. Life logging-like behavior is a critical component of search by ostension. (The sketch following this list gestures at what such a match might look like.)

  7. Environmental awareness. Generalized sites like outside.in and EveryBlock try to provide people with information specific to their locality. Sites like Menu Pix or Indie Coffee Shops do the same with respect to particular categories of interest. This is an extension of searching by ostension, only instead of like-minded people, I am interested in like-located people.

    Imagine what life logging would mean for augmented reality. What happens to a cityscape when, standing in front of a building, I have the design discussions of the architect and the client, the experience of the laborers who built it, the reactions of architecture critics, and views of what preceded it on the lot all at my disposal? Imagine being in a new city and having the whisperings of previous visitors or longtime residents in your ear. People often say, “imagine if these walls could talk.” In the future, they will.

  8. The long tail of entertainment. To apply a straightforward materialist analysis to it, life logging is essentially a long tail phenomenon. Production and distribution of content — news, entertainment, educational, documentary — used to entail significant costs, both opportunity and financial. There was only a little bit of bandwidth, and fully equipped and staffed studios and broadcast stations were extremely expensive, so producers stuck to the safe side of the 80/20 rule. They went with the lowest common denominator of programming to maximize return on bandwidth expended. As the prices of production, storage and distribution fall and the learning curve flattens out, what makes the cut will move correspondingly further down the long tail. Do you think that a thousand television channels are too many? How about one for every man, woman and child in the world? How narrow will a niche become? It’s the other side of the question of how low production and distribution costs will go. Will they go so low that the niche audience shrinks to a single person? I don’t think that even that is the limit. Probably the remote possibility of a single view, or the incorporation of a minute fragment of one’s output into a larger work, is the limit.

    Of course people’s level of interest in participation will be a limit, but as it becomes ever easier — so easy that participation is almost indistinguishable from nonparticipation — it will eventually require active rejection to not participate. And then society might develop positive inducements to overcome even that. There are always the dreaded network effects, but one can imagine much more stringent inducements. Not having a life log might make a person a social pariah or a life log might serve in place of a curriculum vitae or a portfolio.

  9. Personality as entertainment. Already, I think, in programs like No Reservations with Anthony Bourdain, Man vs. Wild, MythBusters, The Rachael Ray Show, fill in your favorite example — and I know you have one — we are seeing the maturation of reality television into personality-based programming and hence a limited form of life logging. Already the focus of these shows isn’t a premise or a regular subject so much as the featured personality. Yeah, sure, each theme-based channel — the Food Network, HGTV, Discovery — picks someone relevant to their brand and that’s cute and all, but at this point I suspect unnecessary. For all your boredom at the medium, a person with a well-developed shtick is a person with entertainment potential. And already that’s widely the case with many a medium. Whether it’s Christiane Amanpour, Rick Steves, David Brooks, Matt Drudge, Ann Coulter or the Crocodile Hunter, people tune in for the personae as much as any of the other content.

    And regarding the expansion of personality-based programming into a more generalized life logging, is our meritocracy already so frictionlessly efficient that there are no overlooked talents, eccentrics, geniuses, subversives, whatnot left to be discovered? There’s a word for it already: micro-celebrity. It was the second thing I ever blogged about (“William Gibson’s Idoru and Blogging,” smarties, 21 June 2004). Yeah, sure, some of this is boring, but some shows get cancelled too.

  10. The Zeitgeist becomes tangible. Imagine being able to request a twenty minute medley of the top 100 conversations conducted at my comprehension level on a topic, say consciousness or string theory, over the last six months. You could scan the thoughts of humanity like you presently do radio stations on your car stereo. We’re stitching together the universal consciousness here. For that to happen our thoughts have to stop happening in isolation or small factions and occur in a unified intellectual space.
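To make the metadata markup described above, and the search by ostension of item six, a little more concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the fields of the record and the toy affinity measure are just one way such a system might be structured, not a description of any existing life logging tool.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class LogEntry:
    """One slice of a life log, annotated with machine- and user-generated metadata."""
    timestamp: float                                  # seconds since epoch
    lat: float                                        # geotag: latitude
    lon: float                                        # geotag: longitude
    place: str = ""                                   # e.g. "H.M.S. Victory", from place recognition
    people: Set[str] = field(default_factory=set)     # from face recognition
    transcript: str = ""                              # speech-to-text of any conversation
    tags: Set[str] = field(default_factory=set)       # user tags plus machine-extracted phrases
    rating: float = 0.0                               # viewer rating, a rough proxy for import
    hits: int = 0                                     # view count

def interest_signature(log: List[LogEntry]) -> Set[str]:
    """Collapse a person's log into the bag of tags they accumulate."""
    sig: Set[str] = set()
    for entry in log:
        sig |= entry.tags
    return sig

def affinity(mine: List[LogEntry], theirs: List[LogEntry]) -> float:
    """Jaccard overlap of tag signatures: a crude 'is this person worth following?' score.
    Search by ostension would rank strangers by something like this rather than by
    any explicit statement of what I am interested in."""
    a, b = interest_signature(mine), interest_signature(theirs)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Toy usage: two tiny logs that overlap on one interest.
me    = [LogEntry(0, 50.8, -1.1, tags={"life logging", "economics"})]
frank = [LogEntry(0, 47.6, -122.3, tags={"life logging", "burritos"})]
print(round(affinity(me, frank), 2))   # 0.33
```

The interesting intermediaries, per item six, would be the people whose affinity score is high but well short of 1.0: similar enough to filter in the right kind of variety, different enough to introduce any variety at all.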

Was that what you had in mind, John, when you wrote that you were taking a risk dissenting against me?

All Those Moments Will be Lost in Time

S. and I are very interested in lifecasting, life streaming, life logging, life blogging or whatever you want to call it. We are doing some early investigations of technologies, techniques and approaches. Meanwhile, I see that our friend Frank has moved on to the experimentation phase (“August 27, 2008: A Day in the Life,” Too Frank?, 27 August 2008). I also notice a number of visits to Starbucks throughout Frank’s day.

I’m completely unsure what to do at this point. I don’t think I want to have to explain in my next job interview why I’m wearing some gigantic helmet that makes me look like a Borg special child. Alternatively, it could be as simple as a palmtop with a built-in digital camera and a flickr account.

The lifecasting Wikipedia page has lots of leads to life logging resources, but my two favorites don’t make the list — they are pretty simple. First, I am deeply impressed by Noah Kalina’s Every Day, in which he made a six-minute stream out of 2,356 photos, taken one per day from 11 January 2000 to 31 July 2006. I think in his case it’s half the music that makes his project seem so profound. Mr. Kalina has prompted a number of both cool and humorous imitators, with a photo a day during the nine months of a pregnancy seeming a particularly poignant use of this documentary form. Second is Jamie Livingston, who took a Polaroid photo every day from 31 March 1979 until 25 October 1997, the day he died of a brain tumor at the age of 41. Mr. Livingston didn’t usually take self-portraits, just a picture of what was going on around him. Chris Higgins has a good digest of photos, especially the biographical ones, from the collection along with some background information (“He Took a Polaroid Every Day, Until the Day He Died,” mental_floss, 21 May 2008). Also a powerful collection.

I was overhearing someone the other day at the table next to me explain to his dining partner that it’s only a matter of time before data storage miniaturization allows us all to carry around enough storage to record our entire lives. Of course universal wireless will probably beat hard disks to the punch and anyway you wouldn’t want to be carrying around something that important and sensitive in your pocket. You’ll want something a little more secure and fault-tolerant. Whatever the case, the day is coming and I suspect that it will be epoch-defining. One day we will look back and marvel that at one time everyone just let their lives slip away into oblivion “like tears in rain.” I’d like to get a jump on it.

The Thinking Cap

Scientific American has an article on how the mind-machine interface is about to go commercial with a wearable EEG game controller that reads your mind (Sergo, Peter, “Head Games: Video Controller Taps into Brain Waves,” 14 April 2008). How’d they do it? Exactly the way the people at Wired would imagine. Rather than developing a series of hard-won determinate correlations between identified brain waves and intentions, they just brute-forced it. They recorded a gigantic quantity of sample data and processed it on a bank of powerful computers to find the patterns:

Emotiv solved this brain-computer interface problem with the help of a multidisciplinary team that included neuroscientists, who understood the brain at a systems level (rather than individual cells), and computer engineers with a knack for machine learning and pattern recognition. Over the last four years, the company has conducted thousands of EEG recordings on hundreds of volunteers — not all gamers — as they experienced virtual scenarios that elicited various emotions, facial expressions and cognitive demands. The aim was to find a revealing brain activity that many people shared — a needle in a haystack of frenzied signals. Now, the EPOC allows users to fine-tune settings that allow it to pick up on even the subtlest of smirks.

When building these algorithms commenced two years ago, it had taken up to 72 hours for a bank of powerful computers to run through a mere 10 seconds of individual brain data and extract important features. Sorting through a seemingly endless stream of recordings eventually led Emotiv to find consistent signal patterns that revealed specific mental experiences. “Through a large enough sample size,” Le says, “we were able to get some consistency around the population to attain a high degree of confidence that it accurately measures an emotional state.”

And in dispensing with theoretical purity and just going with base correlation, the engineers at Emotiv didn’t even have to concern themselves with the signal-to-noise ratio of the data:

Buch also suspects that the facial expressions that the EPOC detects are based more on the electrical activity of facial and scalp muscles than the brain per se. Although the electrical activity of muscles, he explained, is normally considered as artifact noise that needs to be filtered out to attain clean EEG signals that are of interest, they are still informative about how facial muscles move, such as during a wink. Tan agrees, saying that in their classification strategy some of the EPOC’s detections are based on muscle movements.

It’s all just correlation, and if the noise helps identify the correlation, then it’s just as good as signal. In the Petabyte Age there is no issue of separating the phenomenon under consideration from what is not under consideration: any possible interference will be defeated by the sheer size of the data set.
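Just to illustrate that logic (and only to illustrate it: the article gives no code-level detail of Emotiv’s pipeline, and the data below is synthetic), imagine handing an off-the-shelf classifier the raw recordings, muscle artifacts and all, labeled with what the subject was doing, and letting it decide for itself which channels carry the correlation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for recordings: 1,000 samples, 14 "electrode" channels.
# Channels 0-11 are noise; channels 12-13 crudely mimic facial-muscle (EMG)
# activity that co-occurs with the labeled expression ("smirk" vs. neutral).
n = 1000
X = rng.normal(size=(n, 14))
y = rng.integers(0, 2, size=n)           # 1 = smirk, 0 = neutral
X[:, 12] += 1.5 * y                      # the "artifact" channel carries the signal
X[:, 13] += 1.0 * y

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No filtering of "non-brain" channels: whatever correlates, correlates.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("channel weights:", np.round(clf.coef_[0], 2))   # weight piles onto 12 and 13
```

If the “noise” channels are what separate the classes, the classifier happily loads its weights onto them, which is all that matters when correlation is the whole game.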

Now if they would just make a model that looks like this:

[Image: Robotech’s Rick Hunter in the thinking cap]

And maybe control an F-14 that transforms into a 50-foot-tall robot instead of stupid games.

The Jules Verne of the Future Will be a Computer Scientist

Wired Magazine’s cover story this month on The End of Science / The Dawning of the Petabyte Age (Anderson, Chris, vol. 16, no. 7, July 2008, pp. 107–121) has a very mundane answer to John’s enthusiasm: just scoop up tons of seawater, sequence every piece of DNA that you find, and compare it to a database of known DNA. The system will be able to flag each strand as existing species / new species.

We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.

The best practical example of this is the shotgun gene sequencing by J. Craig Venter. Enabled by high-speed sequencers and supercomputers that statistically analyze the data they produce, Venter went from sequencing individual organisms to sequencing entire ecosystems. In 2003, he started sequencing much of the ocean, retracing the voyage of Captain Cook. And in 2005 he started sequencing the air. In the process, he discovered thousands of previously unknown species of bacteria and other life-forms.

Unfortunately this doesn’t do much to tell us about what the creature is like.

If the words “discover a new species” call to mind Darwin and drawings of finches, you may be stuck in the old way of doing science. Venter can tell you almost nothing about the species he found. He doesn’t know what they look like, how they live, or much of anything else about their morphology. He doesn’t even have their entire genome. All he has is a statistical blip — a unique sequence that, being unlike any other sequence in the database, must represent a new species.

This sequence may correlate with other sequences that resemble those of species we do know more about. In that case, Venter can make some guesses about the animals — that they convert sunlight into energy in a particular way, or that they descended from a common ancestor. But besides that, he has no better model of this species than Google has of your MySpace page. It’s just data.

But who knows, soon enough we’ll have software that will take a DNA sequence as input and produce a virtual model of a creature complete with visualization and tables of physiological data (bone density, blood chemistry, synapse count, etc.). We’ll never even have to find an instance of the creature.
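In the meantime, the mundane flagging step described at the top of this post (compare each sequenced strand against a database of known DNA, mark it as existing species or new) is simple enough to caricature in a few lines. This is a toy sketch only: the k-mer overlap test, the made-up database entries and the threshold are all stand-ins, nothing like the statistical machinery of an actual shotgun-sequencing pipeline.

```python
from typing import Dict, Set

def kmers(seq: str, k: int = 8) -> Set[str]:
    """Break a DNA sequence into overlapping k-letter substrings."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# A stand-in "database of known DNA"; in reality this would hold millions of entries.
KNOWN: Dict[str, Set[str]] = {
    "Prochlorococcus-like": kmers("ATGGCTAGCTAGGACTTACGATCGATCGGATC"),
    "Pelagibacter-like":    kmers("ATGCCGGTTAACCGTAGCTAGGCTAACGTTAG"),
}

def flag(read: str, threshold: float = 0.5) -> str:
    """Label a read 'existing species' if it shares enough k-mers with any known entry,
    otherwise 'new species': a unique sequence unlike anything in the database."""
    target = kmers(read)
    if not target:
        return "too short to call"
    best = max(len(target & ref) / len(target) for ref in KNOWN.values())
    return "existing species" if best >= threshold else "new species (candidate)"

print(flag("ATGGCTAGCTAGGACTTACGATCGATCGGATC"))   # matches a known entry
print(flag("TTTTGGGGCCCCAAAATTTTGGGGCCCCAAAA"))   # unlike anything known
```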

Update, 25 June 2008: I think I’ve got my references a little crossed here. I titled the post The Jules Verne of the Future Will be a Computer Scientist for symmetry with John’s post, but Jules Verne is the author of the exploration stories, not the explorer himself, whereas the hypothetical computer scientist to whom I am referring would be one of Jules Verne’s characters. The proper title should have been The Captain Nemo of the Future Will be a Computer Scientist.