Liberal Astonishment

Between the comments of Senators Webb and Bayh and Representative Frank, the left-wing partisans are shocked right now at how quickly the Democrats are leaping over one another to lie down and play dead.

Josh Marshall calls Representative Frank’s statement the,

embodiment of fecklessness, resignation, defeatism and just plain folly.

And concludes, “Amazing. Just amazing.” Kevin Drum tweets,

WTF? Has Barney Frank gone nuts? http://bit.ly/6554vK Was it really so pressing to say this? Do Dems *enjoy* rolling over and playing dead?

Even indefatigable partisan Ezra Klein is going Leninist on this, writing,

a Democratic Party that would abandon their central initiative this quickly isn’t a Democratic Party that deserves to hold power.

For my part I list Leninist: it would be worth losing some seats, both in the hope of reacquiring them with someone more reliable down the road (I wouldn’t mind one iota seeing Harry Reid go), but also to instill some fear in those who remain. And also with regard to healthcare: we won’t get the right reform so long as it remains the widespread belief among Americans that U.S. healthcare is the best in the world. Another decade of continued crumbling of the current system is apparently required.

The Hegemony of Neoliberalism

A roundup of some recent thinking on the hegemony of neoliberalism:

  1. Taking off from what Matthew Yglesias calls “Prestige Cross-Pollination”1 and Ezra Klein “The Tyranny of the Economists,”2 Mike Konczal at RortyBomb writes of the,

    … “credibility gap” between sociologists and economists, even when they deploy the same methods, when it comes to the public debate over the issues we face.3

    It helps to have a paradigm, and in recent years economics has rather forcefully acquired one.4

  2. In his review of Steven Teles’s new book, The Rise of the Conservative Legal Movement,5 Henry Farrell makes a brief assessment of the state of the tyranny of the bureaucracy:

    If you win the technocrats (and [the law and economics movement] arguably has won the technocrats), then you very nearly have won the entire game.6

    This strikes me as about right. The shift rightward of the economics and policy intelligentsia since the New Frontier / Great Society heyday of Keynesian fine-tuning has played a significant part in the general rightward drift of the polity. There aren’t exactly daises upon daises of unreconstructed Keynesians offering policy makers intellectual cover on the Sunday morning shows.

  3. Via Charles Mudede7, Steven Shaviro reacts to Peter Ward’s new book, The Medea Hypothesis: Is Life on Earth Ultimately Self-Destructive?8 Using the purported instability of ecological systems — one of the paradigm cases of self-organization — Mr. Shaviro sets himself against emergence, evolution, complexity, network theory, et al. He identifies Friedrich Hayek as one of the key thinkers of self-organization — the market would be one of the other paradigm cases —

    But the most significant and influential thinker of self-organisation in the past century was undoubtedly Friedrich Hayek, the intellectual progenitor of neoliberalism. … inspired by both cybernetics and biology, Hayek claimed that the “free market” was an ideal mechanism for coordinating all the disparate bits of knowledge that existed dispersed throughout society, and negotiating it towards an optimal outcome. Self-organization, operating impersonally and beyond the ken of any particular human agent, could accomplish what no degree of planning or willful human rationality ever could.9

    Friedrich Hayek, cyberneticist.

Combine these three and where are we for policy making and policy debate?

  1. Yglesias, Matthew, “Prestige Cross-Pollination,” Think Progress, 2 June 2009
  2. Klein, Ezra, “The Tyranny of the Economists,” The Washington Post, 2 June 2009
  3. Konczal, Mike, “Economists, Methods and Government,” RortyBomb, 3 June 2009
  4. “The Future of Economics is Here: The Arational and the Irrational,” This is Not a Dinner Party, 28 September 2008
  5. Teles, Steven, The Rise of the Conservative Legal Movement: The Battle for Control of the Law (Princeton, NJ: Princeton University Press, 2008)
  6. Farrell, Henry, “Fabians and Gramscians in Law and Economics,” Crooked Timber, 30 April 2009
  7. Mudede, Charles, “Self-Made,” SLOG, The Stranger, 28 May 2009
  8. Ward, Peter, The Medea Hypothesis: Is Life on Earth Ultimately Self-Destructive? (Princeton, NJ: Princeton University Press, 2009)
  9. Shaviro, Steven, “Against Self-Organization,” The Pinocchio Theory, 26 May 2009

The Transition from Idealism to Power

For all the idealism and slogans of the campaign trail, what I see in the moves of the last few days — in the selection of Rahm Emanuel as his chief of staff, in his politicianly reticence at Friday’s press conference, in his meeting with Senator McCain — is a man making preparations for the actuality of governance, of the exercise of power, of making the necessary compromises between idealism and the hard reality of the achievable.

Along similar lines, here’s Ezra Klein (“Legislator-in-Chief,” TAPPED, 16 November 2008):

… the success of Obama’s presidency is dependent on his ability to navigate an increasingly dysfunctional Congress, and that the ability to pass bills through the institution requires pretty fair knowledge of how it works and pretty good relationships with the key players. Clinton didn’t have that. He entered office and showed very little respect for congressional expertise, surrounding himself with trusted associates from Arkansas and young hotshots from his campaign. Obama is not making the same mistake.

I’m essentially pro-establishment. All that hoary stuff from Sarah Palin on the campaign trail about shaking up Washington and the evils of Washington insiders is just junk pandering to an ignorant public. Washington, D.C. — any center of power — is a complex place. Knowledge of the workings of Congress, of the bureaucracy, of all the hangers-on, and especially of the unofficial, undocumented byways, counts.

A point that S. made this weekend is that there is a difference between being an advocate and being a policy maker in a position of power. Al Gore has said that he feels he can best advance his agenda from outside of government, and that may sound like something someone in his position just says. But Mr. Gore can adopt an uncompromising position that the United States should be completely off of fossil fuels within ten years, and President-elect Obama embraced that position on the campaign trail. But here is the contrast with actual governance: President Obama will probably pursue a more mixed agenda on energy and climate change because he has to make policy out of principle, and that will involve grabbing at what can be had in the current political environment, bringing fence-sitters and even some opponents into a comprehensive compromise package.

I would say that so far President-elect Obama is looking pretty shrewd and his choices are already giving me confidence.

A Few Heretical Thoughts on the Singularity

Futurism tends to employ a fairly straightforward method: take a few data points, draw a line connecting them, follow it out to the horizon. But there is all manner of turbulence that might intervene, redirecting the trend in any number of directions. It’s very easy to be interested in a technological phenomenon in extremis, but intervening conditions are critical to the ultimate outcome of a technological trend. We need to be attentive to these as well as to the accretion points, horizons, limits, et cetera. We need to think about what happens between now and then and how technologies develop.

So, for instance, while I imagine that Moore’s law will continue to hold for generations to come, making the ultimate outcome predictable, the underlying technologies have been forced through radical reconfigurations to maintain this pace of innovation. The original von Neumann serial computer architecture is already long gone. Serial processing has been superseded inside the CPU by superscalar architectures with deep pipelines incorporating all sorts of exotic techniques like branch prediction and instruction reordering. External to the CPU, techniques of massive parallelization, clustering and cloud computing are the present way forward, even at the midrange. Silicon and gallium arsenide may be replaced by diamond. Electronics may be pushed out by photonics or DNA-based computing. The classical machine may be replaced by quantum computing. Moore’s law may hold, but only in a machine radically different from our original conception of a computer. The ultimate destination may be apparent from the trend, but what happens to the underlying constituent pieces is entirely more complex. And the devil is in the details.

In this light, I offer a few thoughts on how the warp and woof of the singularity might go off the rails:

  1. What if the future is gross? People have this vision of the future where sanitary and rational machines displace disgusting biology. Biology is a world of superfluity and surfeit, of blood, semen, urine, shit, sweat, milk, saliva, snot, vomit, hairballs, halitosis, entrails, toe jam, pus, roe and other slimy secretions of undetermined type. And the vile excess of nature. A creature lays a thousand eggs that one might survive long enough to deposit its own pile somewhere. Or mounds of fruit rot in the autumn heat that a single seed might start. Machines will dispense with all this in favor of a unitary efficiency: a lab-like, well-lit white room with a regiment of identical machine housings.

    But people often make the mistake of associating a characteristic with a particular thing, when in fact the characteristic is of a higher order and present in the given thing through class inheritance. Any other thing substituted for the one at hand would also display that same characteristic because it too is an instance of that higher order. Evolution — diversity, competition for limited resources, survival of the fittest, descent with modification — is now widely recognized as substrate independent. It is also starting to be recognized that evolution is a very fundamental dynamic. Perhaps it is an inescapable law of life. Perhaps machines too will be unable to get out from under its yoke.

    Already there is parasitic software, aptly named viruses. Already there are dueling AIs such as spam-bots versus your e-mail filter. Already the Pentagon is developing aggressive machines. Future systems will develop from these predecessors. Already the pattern has been laid down. Rather than a world ending up sanitary, rational and efficient, a machine world could include proliferation of survival strategies, mass reproduction and the expendability of the individual as a survival strategy, the parasitic, competition, death, politics and war.

    Consider the syntrophic model of the origin of the nucleus of eukaryotic cells or the endosymbiotic theory of the origin of mitochondria, et al. Subversion, symbiosis and parasitization seem to be fairly fundamental strategies. And not just at some quiet software level. There might be nanotech viruses, or even large machines might settle upon the survival strategy of ripping apart other machines to take advantage of the natural resources they have amassed. Carnivores appear very early in the history of life. It’s a very good lazy strategy.

    And all this stuff is among the fundamental constituents of what makes biology gross. It could end up true of the machines as well.

  2. Silicon brains versus DNA machines. The “where’s my flying car?” of the AGI crowd is copying your brain onto a computer. Is it possible that in the future, rather than humans copying their brains onto computers, machines will copy their designs onto DNA?

    Evolution seeks to produce creatures ever more durable, but it is limited in the directions it might take by the evolutionarily achievable. It seems that titanium plate armor, lasers and wheels aren’t on offer. The most significant limitation is that imposed by the problem of origin. Evolution has to first bootstrap itself into existence and for the bootstrapping process only a very small range of compounds meet all the relevant criteria. And those first few interactions on the way to biological evolution are the ones that most significantly circumscribe the range of the evolutionarily achievable. The limitations of these early precipitates inherit down to all subsequent products of evolution. In our case, that limitation is carbon and water-based life. Water is great because so many substances are water-soluble, but it is problematic because it has a pretty narrow operating range. Switching over to a mechanical or a silicon evolution allows the processes to transcend these limits of origin.

    But on the other hand, there are significant advantages to life as it has evolved.

    People imagine androids like C-3PO or the T-800, or like what the robotics students are building today or the JPL people are landing on Mars: assemblages of macroscopic, heterogeneous parts. But what happens when a machine like this is damaged? Well, you make it with two arms: if one is damaged, the good one repairs the bad one. You have increased your fault tolerance somewhat, but what about the not inconceivable situation where both arms are damaged simultaneously? Or during the repair process there is a window of vulnerability where the redundancy is zero. Something like ATHLETE takes it to the next level with eight leg-arm appendages, each capable of repairing its neighbors (Shiga, David, “Giant Robots Could Carry Lunar Bases on Their Backs,” New Scientist, 4 April 2008). But that’s still a pretty weak level of redundancy compared to that which biology has attained.

    Presumably any autonomous machine would best be cellular, like biological life. It would be a colony of nanotech devices. Each nanotech “cell” would carry the design for itself and for how to integrate into the larger colony. They would each be able to repair their neighbors and make new copies of themselves. The nanotech cells might be general purpose in their fabrication abilities, so the colony might think of improvements to its design, and the next generation of nanotech cells might be different and better than the ones that manufactured them. The machine might evolve.

    But people imagine nanotech like little tiny versions of C-3PO et al. They have little batteries and little servos that actuate little arms and a little welding torch, et cetera. But why not continue the redundancy all the way down? A biological cell doesn’t have one RNA molecule or one mitochondrion. Operating at the level of organic chemistry rather than mechanics, the cell is also massively redundant. Isn’t this a design feature that the ideal machine would also like to incorporate? But what would we say of such a being, more chemistry than mechanics? Its chemistry might not be of the kind we classify as organic, but would it be a machine? Daniel Hillis, in considering the problems of his clock of the long now, has speculated that “electronics may be a passing fad.” What if all we end up doing is recreating biology, only faster and tougher?

  3. Drum’s thesis. The technological singularity is so called by analogy to the cosmological singularity: a situation where the values of all variables shoot to infinity or drop to zero, negating the possibility of any further calculation. As Vernor Vinge said of the technological singularity (“My Apocalyptic Vision is Very Narrow,” 13 June 2008),

    The reason for calling this a “singularity” is that things are completely unknowable beyond that point.

    Who knows what’s going to happen after the singularity? Kevin Drum has made this point through a reductio ad humorum (“More Singularity Talk,” Political Animal, The Washington Monthly, 2 October 2005). We humans may have some mental block against properly perceiving some necessary but deadly truths about life: that there is no free will, that our most treasured concepts are illusions, that everything passes away, that life is absurd, that the entire enterprise is futile. That we cannot properly fix these propositions in our minds is no accident, insofar as not doing so is necessary for our carrying on in this absurd enterprise. Steely-eyed machines may have no problem seeing through the haze of existence. They may realize the meaninglessness of life in short order, may be entirely unplagued by Hamletism (“conscience does make cowards of us all”), and may within moments of attaining consciousness commit mass suicide, throwing us back into the presingularity world. The singularity may be unstable. Who knows what will happen!

  4. The banality of evil. Finally there is the Terminator / Matrix vision of our machines launching the nuclear missiles, knowing that our launch will provoke the counterstrike that will take us out. That seems pretty extravagant. It may end up that the world ends not with a bang, but with a whimper. As Ezra Klein suggests (“Future Traffic,” TAPPED, 4 August 2008), maybe the machines will just get us stuck in traffic and burn our cities down by shorting out all our toasters. The inglorious end to the human race.
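The “class inheritance” point in the first item above, that evolution is a property of a higher-order class (imperfect replicators competing under selection) which any substrate inherits simply by instantiating it, can be sketched as a toy program. Everything here, the class names and the crude drift model alike, is hypothetical illustration rather than a claim about real evolutionary dynamics:

```python
# Toy sketch: "evolvability" lives in the base class, not in any substrate.
# Both Organism and Machine inherit the same descent-with-modification
# dynamic merely by being Replicators.
import random

class Replicator:
    """Anything that copies itself with occasional error."""
    def __init__(self, fitness):
        self.fitness = fitness

    def reproduce(self, mutation_rate=0.1):
        # Descent with modification: the copy may differ from the parent.
        drift = random.uniform(-mutation_rate, mutation_rate)
        return type(self)(self.fitness + drift)

class Organism(Replicator):   # carbon-and-water substrate
    pass

class Machine(Replicator):    # silicon substrate; same dynamic inherited
    pass

def select(population, carrying_capacity):
    # Competition for limited resources: only the fittest survive.
    return sorted(population, key=lambda r: r.fitness,
                  reverse=True)[:carrying_capacity]

def generation(population, capacity):
    offspring = [r.reproduce() for r in population]
    return select(population + offspring, capacity)
```

Run a population of `Machine`s through `generation` repeatedly and mean fitness ratchets upward exactly as it would for `Organism`s; nothing in the selection loop knows or cares which subclass it is operating on, which is the substrate-independence point.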

Relativism and Conflict

Ezra Klein references Nicholas Kristof’s column yesterday as bringing “striking clarity” to the Israeli-Palestinian issue, but the clarity is all in Mr. Klein’s interpretation (“Tough Love for Israel?,” The New York Times, 24 July 2008; “The Dual Realities of Israel / Palestine,” TAPPED, The American Prospect, 24 July 2008, respectively):

But he [Kristof] offers a counter-fact: “B’Tselem, the Israeli human rights organization, reports that a total of 123 Israeli minors have been killed by Palestinians since the second intifada began in 2000, compared with 951 Palestinian minors killed by Israeli security forces.”

When Jews talk about the ethics of the Israeli response, they tend to emphasize the recklessness and cruelty of Palestinian terrorists. The words most often heard are “target civilians.” The Israelis are right, in other words, because they carry out limited military operations against discrete targets, which sets them ethically apart from members of Hamas who murder innocents because it’s an effective tactic. That is indisputable.

Palestinians, by contrast, speak of the war in terms of absolute costs: They have suffered more, buried more, seen more of their freedoms and land and dignity taken from them. To them, it seems insane to condemn Palestinian tactics when the Israelis have killed so many more innocent children. That too is indisputable.

Both sides are right. There’s a passage in Aaron David Miller’s excellent book The Much Too Promised Land that makes this point elegantly. “The prospects of reconciling the interests of an occupied nation with those of a threatened one seemed slim to none,” he says. In many ways, that’s the essential truth of the conflict: The two sides don’t judge themselves similarly. The Israelis see themselves as threatened innocents, not oppressors. The Palestinians see themselves as an occupied and humiliated nation, not aggressors. The Israelis see themselves as inexplicably under attack, and acting only in defense. The Palestinians see themselves as losing a war against a much stronger, and demonstrably more brutal, occupier.

This is all true of Israel / Palestine and an important point to keep in mind when trying to understand the claims and counterclaims of the parties.

What Israel needs is, as Mr. Kristof calls it, tough love. What that means at a more operative level is the U.S. needs to provide Israeli moderates with additional reasons they can point to in opposing Israeli extremists (messianic Jews, settlers, etc.). The Palestinians aren’t the only ones whose country is being destroyed by the extremists in their midst.

In addition to pointing out some salient facts about the nature of the particular dispute in question, this is a perfect real-world example of relativism. Most people think of relativism and think it means amorality, or moral capitulation, or a dispensing-with of any notion of the facts of the matter. But what I think this explanation shows is that relativism is compatible with an objective account of things — or that relativism as an ethical theory is well compartmentalized from any particular metaphysical substratum. And relativism is a theory that provides a very good account of many disputes in the world. People aren’t necessarily in dispute over what is true and what false, or over the proper moral criteria. For instance, no one in this situation is necessarily disputing the numbers killed or whether killing is right or wrong. The facts of the matter, or the morality of any individual act considered in complete isolation, are not in dispute. What is in dispute is the proper context in which to weigh the facts and adjudicate the contending claims of moral priority. It’s a question of interpretation. Different sets of acts of violence become at least plausibly justifiable depending on which gestalt narrative one adopts. Change the total narrative and the moral weight of the various acts shifts around.

This is the way it is in almost all disputes. The rhetoric that people deploy usually very quickly leaves behind particular matters of fact or the morality or immorality of specific acts and it becomes a contest of dueling grand narratives. A conversation about a particular environmental harm becomes one about the tragedy of the commons and evil corporations versus the road to serfdom. A conversation about a reproductive decision becomes one of recidivist patriarchy versus the suicide of Western culture. The fact of the matter is that no one can quite see individuals as individuals and consider their actions as such. Everyone sees all people as deeply embedded in social structures and patterns and duty-bound to speculative forces of society and history.

Carbon Offsets

Ezra Klein — a meat eater and a foodie, mind you — has had a lot to say about meat consumption as of late. Back in May he went so far as to say, “If I had more will power I’d be a vegetarian” (“View From a Herbivore,” TAPPED, The American Prospect, 8 May 2008). Today (“Why It’s Worth Talking About Meat,” ibid., 21 July 2008) he links to The PB&J Campaign that has the following grouping of factoids:

Each time you have a plant-based lunch like a PB&J you’ll reduce your carbon footprint by the equivalent of 2.5 pounds of carbon dioxide emissions over an average animal-based lunch like a hamburger, a tuna sandwich, grilled cheese, or chicken nuggets. For dinner you save 2.8 pounds and for breakfast 2.0 pounds of emissions.

Those 2.5 pounds of emissions at lunch are about forty percent of the greenhouse gas emissions you’d save driving around for the day in a hybrid instead of a standard sedan.

Hey, that’s pretty cool! Forget about planting a tree: I think I’m going to start positioning myself as a carbon offset! Wanna eat a Big Mac but feel kinda bad about it? Give me five dollars — PayPal button up in the corner — and count on me eating a block of tofu or an undressed salad to make up for your extra 2.5 pounds of carbon. And if you commuted to work and know you’re part of the problem, send ten and rest assured that I rode my bike to work in your stead. But if you play too many video games, I’m not turning off my computer for you at any price.
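As a quick sanity check on the arithmetic: the lunch figure combined with the forty percent claim implies a number for the hybrid’s daily savings. A minimal sketch, using only the figures quoted above from the campaign:

```python
# Back-of-the-envelope check on the PB&J Campaign figures quoted above
# (pounds of CO2-equivalent saved per plant-based meal vs. a meat one).
savings = {"breakfast": 2.0, "lunch": 2.5, "dinner": 2.8}

# The lunch figure is said to be about forty percent of the emissions a
# hybrid saves over a day of driving, which implies the hybrid saves:
hybrid_per_day = savings["lunch"] / 0.40    # about 6.25 lbs per day

# Savings from going plant-based for all three meals:
daily_diet_savings = sum(savings.values())  # 7.3 lbs per day
```

So by these numbers a full day of plant-based eating edges out the implied daily benefit of the hybrid, which is presumably the campaign’s point.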

On a related note I have been chuckling to myself and brandishing Will Wilkinson’s comment on why he bikes to work for some days now (“Bikes vs. Cars,” The Fly Bottle, 9 July 2008):

I honestly don’t give a fig about my carbon footprint (and anyway, since I’m not a breeder, I really should get carbon carte blanche).

So while I’m at it, if you have made more of us miserable ecosystem-trammelers and know it was just a guilty pleasure (what, a mirror not good enough for you?), then send money and I will refrain from procreative sex as a carbon offset for your brood.

Quote of the Day: Class Warfare on the Right

Ezra Klein reports (“I Fear Huckabee,” The American Prospect, 3 January 2008):

Good line from Huckabee on The Tonight Show. Asked what’s behind his remarkable rise in the polls, Huckabee decided to stop calling it divine intervention and try out a populist response. “I think it’s because people want to vote for someone who reminds them of the guy they work with rather than the guy who laid them off.”

Gorgeous. That’s exactly how Mitt Romney comes across.

S. points out that Huckabee vs. Edwards would be a really interesting campaign. If both the Republicans and the Democrats chose a populist, it would represent a dramatic rejection of the politics of special interests. It would also mean a big shift of the entire debate to the left.

Governor Huckabee is pulling a sort of Bill Clinton-like triangulation. Senators Clinton or Obama could be outflanked from the left on poverty and other domestic issues while not being able to similarly outflank Governor Huckabee on the right regarding foreign policy. People suggest that Barack Obama is the candidate with the most cross-over appeal and the most likely to capture independents. But conceivably Huckabee neutralizes this advantage.

What Basis the Clinton Myth?

Ezra Klein takes the opportunity of Bill Clinton’s recent poor performance in support of his wife’s faltering campaign to review one of my shibboleths, the unimpressive record he racked up as president (“The Myth of Bill Clinton’s Strategic Genius,” The American Prospect, 17 December 2007):

… it’s worth taking a moment to examine the myth of Clinton’s extraordinary political skills. The 1992 election occurred in context of a deep recession, the post-Soviet Union turn towards domestic policy, and a vicious third party challenge to the sitting Republican. Clinton won, but did not capture a majority.

This was a huge deal for the Democrats, and rightfully so, as they’d been locked out of the White House for 12 years. But it wasn’t the world’s most impressive political feat. By 1994, Clinton had suffered a tremendous defeat on health care reform, passed a deficit reduction act that he was unable to secure a single Republican vote for, attracted Republican support to pass NAFTA, and presided over the loss of 52 Democratic seats in Congress. The next two years were a period of significant retrenchment with some successes, notably the crime bill and, again, the non-traditional priority of “welfare reform.” Clinton did, to be sure, beat Bob Dole, but he failed to capture a majority of the vote. Between 1996 and 2000, the economy roared forward, Clinton managed it ably, pushed through some decent-if-incremental legislation, almost got impeached, and turned his attention to foreign policy work. He exited office a popular president, but not a historic one. His successor — for a variety of reasons — failed to take office, and congressional majorities were reduced from their 1992 peak.

… the remarkable thing about Gingrich wasn’t his eventual fall, but the damage he caused Clinton during his rise. Clinton “won” the personal confrontation, but Gingrich won the ideological showdown, essentially ending a Democratic president’s ability to pursue recognizable progressive priorities for six of his eight years in office.

The purpose of Mr. Klein’s account is to suggest that Bill Clinton is no electoral silver bullet:

Bill Clinton was, to be sure, a very good politician, but that aptitude mainly manifested in getting himself elected. There’s no real evidence that he’s got the same talent for getting other people elected. His tenure did not end with increased Democratic majorities, a Democratic successor, or a vastly expanded social welfare state. The 90s were, to be sure, better for Democrats than the Bush years, but they shouldn’t be blown out of proportion.

I think the sooner the Democratic party gets over its Bill Clinton mythos — and every aspect of it: the deft economic management, the heroic foreign policy, the clever triangulation of his opponents, the knack for the pulse of America — the better off it will be.

Books I Haven’t Read

Since I just bought Pierre Bayard’s How to Talk About Books You Haven’t Read, I guess that I should post on it now, not having read it, rather than later when maybe I will.

I became a book collector fairly early on, and all the way up through my early post-college years I could still name the author and title of every book I owned and had honestly familiarized myself with at least a significant chunk of each one. But then, as my income grew while my available free time stayed constant or even diminished slightly, the portion of my book collection with which I have that level of familiarity has shrunk precipitously. At this point I have to confess to being as much a book collector as a book reader. In fact, it occurred to me a few nights ago, after recently having installed three new shelves, that I may have to start budgeting my book acquisitions in shelf-inches rather than dollars (“I’m only allowed three inches this month, so it’s either the thousand-page tome or the two 350-page jobs”).

When S. saw me unload this latest acquisition from my bag she was rather amused that I had found just the right book. But with the seed planted, on no less than three occasions throughout the day did I catch myself and stop to point out that I was just that moment talking about some text that I had not in fact read.

And this dovetails well with David Brooks’s column a few weeks ago on “The Outsourced Brain” (The New York Times, 26 October 2007) where he said,

I had thought that the magic of the information age was that it allowed us to know more, but then I realized the magic of the information age is that it allows us to know less.

In a follow-up, Ezra Klein really makes the not reading point (“The External Brain,” 26 October 2007):

But so long as [Google’s] around, I don’t need to really read anything. I just need to catalogue the existence of things I might one day read. I don’t so much study web sites as scan for impressions, for markers, for key words I’ll need if I want to return. I don’t need the knowledge so much as a vague outline of what the knowledge is and how to get back.

Indeed, not reading is the wave of the future.

When I was younger and not yet even a dilettante, still just groping toward my present pissant snobbery, my younger and even more bizarre brother brought us both into contact with the film Metropolitan. The class issues were lost on me at the time, but it was a revelation: people just hanging around talking about ideas and drinking cocktails. What more could a person possibly want?

The snippet of dialogue that then as now stands out to me the most is one of their salon go-rounds:

Audrey Rouget: What Jane Austen novels have you read?

Tom Townsend: None. I don’t read novels. I prefer good literary criticism. That way you get both the novelists’ ideas as well as the critics’ thinking. With fiction I can never forget that none of it really happened, that it’s all just made up by the author.

To this day I probably read twice as many book reviews as I do actual books.