Tuesday, January 24, 2012

The Last Man

Last man

From Wikipedia:

The last man is a term used by the philosopher Nietzsche in Thus Spoke Zarathustra to describe the antithesis of the imagined superior being, the Übermensch, whose imminent appearance is heralded by Zarathustra.

The last man is tired of life, takes no risks, and seeks only comfort and security.
The last man's primary appearance is in "Zarathustra's Prologue." After having unsuccessfully attempted to get the populace to accept the Übermensch as the goal of society, Zarathustra confronts them with a goal so disgusting that he assumes that it will revolt them.


The last man is the goal that European civilization has apparently set for itself. The lives of the last men are comfortable. There is no longer a distinction between ruler and ruled, let alone political exploitation.

Social conflict is minimized.

Nietzsche said that the society of the last man would be too barren to support the growth of great individuals. The last man is possible only by mankind's having bred an apathetic creature who has no great passion or commitment, who is unable to dream, who merely earns his living and keeps warm. The last men claim to have discovered happiness, but blink every time they say so.

The last man, Nietzsche predicted, would be one response to nihilism. But the full implications of the death of God had yet to unfold. As he said, "the event itself is far too great, too distant, too remote from the multitude's capacity for comprehension even for the tidings of it to be thought of as having arrived as yet."

One Criticism of Shame

I think Shame is one of the best movies of 2011.

But I think it's marred.

Brandon is a sex addict, with, therefore, an uncontrollable need for sexual release in whatever form. But, at the end, his sister's attempted suicide seems emotionally to unlock him. He is tender and loving with her at her hospital bedside. Then, a lone figure who has brought himself to his knees, he has an emotional outburst at some New York riverside, a kind of expiation, as though with that expiation he allows love and emotion and tenderness in to crack the cocoon of his insular coldness.

The last scene is ambiguous.

The same married blonde from the first subway ride at the movie's beginning now, on the last ride, beckons to him again with her eyes and face and then gets out of her seat to stand invitingly, expecting him to join her, to rub up against her. But we have no evidence of his responsiveness to her. He seems to sit blankly, as though resisting his previous impulse toward her. Then the screen goes blank, the movie ends, and we don't know whether his newly unlocked self is sufficient to surmount his addiction.

We don't know, that is to say, whether Brandon has achieved, through his experience with his sister, some measure of self-transcendence by way of gaining some measure of emotional wholeness, which would allow him to resist the demands of his addiction--the addiction that has so imprisoned him.

This final ambiguous ending seems pat and contrived in my view and cuts against and diminishes what throughout the movie had been a rather unrelenting, remorseless, unflinching and unsentimental presentation of his addiction.

In a word, the ending, however left irresolute, is, given the thrust of what has gone before, a flinch.

There is a further point to be made given Shame’s ending.


It is in the nature of addiction, as I understand it, that emotional wholeness or unblocking is no "cure" for it: there is no cure for addiction, only the imposition of one's will on impelling need through the hard establishing of conditions that allow the will to prevail one day at a time. I'm not aware that the proposition that there is no cure for addiction varies depending on the addiction. If this is correct, then the implication in Shame that Brandon's recovery of some measure of emotional wholeness may of itself get him past his addiction seems muddled to me, and it informs what I see as the movie's ending being too pat even in its irresolution.

Sunday, January 22, 2012

Fred Siegel on Nietzsche

FRED SIEGEL
Nietzsche on Eggshells // City Journal
A new book on the philosopher’s American reception soft-pedals his dark influence.
American Nietzsche: A History of an Icon and His Ideas, by Jennifer Ratner-Rosenhagen (University of Chicago, 464 pp., $30)

Jennifer Ratner-Rosenhagen’s American Nietzsche is a 464-page footnote to Allan Bloom’s comment in The Closing of the American Mind that American readings of the German philosopher have produced “nihilism with a happy ending.” Her sense of Nietzsche is based heavily on the writings of the German-born Princeton scholar Walter Kaufmann, famed for softening the writings of the philosopher of the Übermensch. Like the apologists for jihad who portray it as an internal quest for purification, advocates for Nietzsche acrobatically rope off his praise for war and cruelty as matters of spiritual struggle.

Ratner-Rosenhagen begins with an examination of Ralph Waldo Emerson’s enormous influence. Emerson’s assertion that “permanence is but a word of degrees” becomes in Nietzsche’s writing what later thinkers would call perspectivism, the view that no firm footing exists for asserting the truth or falsity of any particular claim. Or, in Nietzsche’s words, “truths are illusions we have forgotten are illusions.” Presenting herself as an historian of the reception of ideas, Ratner-Rosenhagen traces the adoption of perspectivism in twentieth-century America. Perspectivism, she rightly notes—first as turn-of-the-century pragmatism and today in the form of postmodernism—has been central to the liberal critique of American ideals.

Ratner-Rosenhagen’s principles of selection seem askew. She discusses the plays of George Bernard Shaw, including Man and Superman, only in passing. But Shaw’s plays were one of the primary means through which Americans came to know of Nietzsche’s ideas. Instead, she devotes an entire chapter to letters from obscure Americans to Nietzsche and to his sister Elisabeth Förster-Nietzsche, who became the keeper of the Nietzschean flame after her brother’s death in 1900.

The book is stronger when it deals with Nietzsche’s influence on major figures such as H.L. Mencken and Randolph Bourne, both of whom devoted themselves to freeing the country from the strictures of Victorian morality and denouncing the American effort in World War I. “No author,” writes Ratner-Rosenhagen, “did more to establish the persona of Friedrich Nietzsche in America than H.L. Mencken.” Indeed, Mencken was Nietzsche’s first American popularizer. The sage of Baltimore followed his 1908 book, The Philosophy of Friedrich Nietzsche, written when he was only 28, with The Gist of Nietzsche, a collection of the German’s aphorisms, in 1910, and a translation of The Anti-Christ, published in the aftermath of World War I. Mencken, Ratner-Rosenhagen notes, told a friend that his denunciations of American life and culture “were plainly based on Nietzsche; without him, I’d never have come to them.”

Ratner-Rosenhagen complains that in the postwar period, the Nietzsche once celebrated by radicals such as Max Eastman and Margaret Sanger as a “crusader for truth, a debunker of superstition, and an iconoclast who placed conscience above convention” was now seen as “the martial ideal of imperial Germany.” But in her own version of Kaufmann’s softening, she insists—ignoring or failing to read Mencken’s writings on Nietzsche and World War I—that “there is not a straight trajectory from. . . Mencken’s aristocratic. . . Übermensch to the image of the Übermensch as a menace to democracy during the war.” She’s wrong.

Writing for The Atlantic in “The Mailed Fist and Its Prophet,” Mencken celebrated Nietzsche as the inspiration for the new Germany, which was “contemptuous of weakness.” Mencken wrote that Germany was a “hard” nation with no patience for politics, because it was governed by the superior men of its “superbly efficient ruling caste.” “Germany,” he concluded, “becomes Nietzsche; Nietzsche becomes Germany.” Mencken approvingly quotes Nietzsche to the effect that “the weak and the botched must perish. . . . I tell you that a good war hallows every cause.”

Surely, in writing a book on Nietzsche’s reception in America, Ratner-Rosenhagen is duty-bound to respond to Conor Cruise O’Brien’s well-regarded 1970 New York Review of Books essay, “The Gentle Nietzscheans.” Yet she ignores this, too. O’Brien tellingly quotes from Nietzsche’s posthumously published The Will to Power on the “annihilation of decaying races.” “The great majority of men,” Nietzsche wrote, “have no right to existence, but are a misfortune to higher men. . . . There are also peoples that are failures.” This was an argument that appealed to supporters of eugenics as well as to the Nazis. Walter Kaufmann explained it away by noting that Nietzsche hadn’t mentioned the Jews and Poles directly.

Moving into the contemporary era, Ratner-Rosenhagen cites, apparently without irony, the postmodernist literary critic Paul de Man and the Black Panther Huey Newton as examples of people who put Nietzsche to good use in liberating, respectively, literature and African-Americans from outdated prejudices. She declines to mention de Man’s collaboration with the Nazis during World War II. As for Newton, who thought of himself as a superman of sorts, the question is: did he murder three innocents? Or was it four, or five?

The book closes with a sympathetic discussion of the philosopher Richard Rorty’s attempt to reconcile pragmatism’s emphasis on political practicality with Nietzsche’s concern with states of being rather than outcomes. Rorty embraces the Nietzschean absence of truth as socially liberating—but advertisers and politicians find the absence of truth liberating, too. Trapped in his own perspectivist logic, Rorty invokes the sense of empathy, “the inspirational value” derived from reading great novels, as the basis for his intellectual and political viewpoints.

In Ratner-Rosenhagen’s sunny reading, Americans have managed to rope off Nietzsche’s anti-democratic, aristocratic radicalism while embracing his perspectivism, all without doing damage to the body politic. But as we’ve unfortunately become a far more hierarchical and unequal society, it’s difficult to see how that judgment can hold. The absence of commonly held truths is all too compatible with a less democratic society dominated by an aristocracy of the successful and politically connected. They’re only too happy to manufacture their own truths.

Fred Siegel, a contributing editor of City Journal, is scholar in residence at St. Francis College in Brooklyn.

On David Brooks's The Social Animal

Philosophy Is Here to Stay
Benjamin Storey / The New Atlantis, January 2012

Hairdressing is among the occupations most closely correlated with happiness. Women’s sexual tastes vary widely with culture and education, while men want the same things regardless of religion, education level, or the influence of culture. Introspection makes you depressed. In one study, babies as young as eight months seem to care about justice.

Such striking findings from cognitive and social science show up on almost every page of David Brooks’s new book The Social Animal, and make it a consistently illuminating read. The book’s novelistic form is pleasing, too: the informative gems from science are melded into the stories of two characters, Harold and Erica, who, while not great literary creations, are real enough to care about. In the moral and political counsel it offers, the book seems sensible as well: Success in marriage matters far more to our happiness than income level or professional status. Government cannot deal with poverty effectively without attempting to change the culture of poor communities. Terrorism should not be understood as a response to poverty, but as a nihilistic expression of a longing for purity common among young men “caught in the no-man’s-land between the ancient and modern.”

But all the arresting data, all the comic-sociological observations, all the insightful meditations on the moral struggles of everyday life are not the main point of The Social Animal. That main point is the momentous claim Brooks made most clearly in a New Yorker article adapted from the book: “Brain science helps fill the hole left by the atrophy of theology and philosophy.” On the authority of brain science, Brooks settles old philosophic quarrels, declaring that “the French Enlightenment, which emphasized reason, loses; the British Enlightenment, which emphasized sentiments, wins.” He compares the cognitive revolution to the most momentous occasions in the history of Western thought: “just as Galileo ‘removed the earth from its privileged position at the center of the universe,’ so this intellectual revolution [in brain science] removes the conscious mind from its privileged place at the center of human behavior.” In politics, Brooks wants to see that “the new knowledge about our true makeup is integrated more fully into the world of public policy.” If cognitive science can fill the hole left by the atrophy of philosophy and theology; if it can vindicate some philosophies and discredit others; if it can relocate the center of human self-understanding; and if our public policy makers should look to it for guidance, then the unmistakable implication is that we should defer to it as our highest intellectual, moral, and political authority. A new sheriff, it seems, is in town.

The Social Animal, then, is an argument for a new kind of scientism. Interestingly, Brooks himself criticizes scientism in the book. Expanding on Irving Kristol’s remark that scientism is the “elephantiasis of reason,” Brooks explains that scientism entails “taking the principles of rational inquiry, stretching them without limit, and excluding any factor that doesn’t fit the formulas.” Brooks attacks the scientism of the French Revolutionaries, who “brutalized ... society in the name of beginning the world anew on rational grounds,” of Frederick Taylor-inspired corporate managers who tried to turn human workers into “hyper-efficient cogs,” and of rationalist urban planners who destroyed old neighborhoods and the valuable social networks they contained so as to put efficient but anonymous housing projects in their place. At present, Brooks sees this scientism embodied in a public policy consensus that accepts “the shallow social-science model of human behavior,” interpreting us as rational, self-interested actors who respond predictably to material incentives. Against this form of scientism, Brooks uses the findings of the cognitive revolution to emphasize the dominance of the unconscious mind, which, he writes, is “most of the mind,” and which frequently causes us to behave in ways that make no sense from a rational, self-interested perspective. He wants us to see just how much of ourselves is anything but rational.

Brooks is no doubt correct to believe that the model of man as a rational self-interested actor who behaves in ways that can be explained in mathematical terms is a gross oversimplification of our nature. But by attempting to elevate the cognitive sciences to the status of a new Galileo, Brooks merely replaces one form of scientism with another: the economists, with their demand curves, are out; the neuroscientists, with their brain scans, are in. Treating emotional and social animals as rational self-interested actors is one way to stretch the principles of rational inquiry beyond their limits; treating us as social and emotional animals who are nonetheless fully intelligible to the scientific method is another. To truly avoid scientism, Brooks would need to articulate the limits of science in general and cognitive science in particular. But one will find no consideration of the limits of science in The Social Animal. While Brooks draws on philosophers, poets, and theologians in his book, he never allows them or anyone else to say to science: “hitherto shalt thou come, but no further.” In spite of Brooks’s celebration of “epistemological modesty,” there is nothing epistemologically modest about this book.

But can science — cognitive or otherwise — really bear the weight of so much authority? Can it really tell us what we need to know in order to live well? Does science really answer the questions asked by philosophy and theology? Can any science that defines itself in terms of the rigor of its methods really see the human phenomena in all their complexity, as philosophers, theologians, and poets aim to do? Can there really be a science of love, happiness, and nobility — the distinctly human concerns to which The Social Animal purports to speak? Can science really address the question of our origin, our end, and our place in the whole without which any knowledge of ourselves would be radically incomplete?

To answer these questions, we need to look at the scientific findings Brooks reports in The Social Animal from a perspective that does not take the authority of science for granted and is open to aspects of human experience that might be invisible to a methodical science bent on identifying the efficient causes of things.

Let us begin with love. When Harold and Erica first fall in love, Brooks invites us to look “inside Harold’s brain” to see love as it appears to the eye of the cognitive scientist. One method scientists use to understand love is to put a patient in a brain scanner, show the patient a photo of his or her beloved, and watch which areas of the brain “light up” in response to this stimulus. Such a method might tell us something, but its understanding of love will plainly be partial: any halfway-competent Don Juan knows that love loves a beach, a bottle of wine, and a sunset. One’s ardor might be dampened by the syringes, medical scrubs, and electrodes of the laboratory, distorting the very phenomenon the scientist seeks to study. Experimental science that seeks quantifiable results can perhaps grasp those aspects of an experience such as love that will submit to the apparatus of experimentation and permit of quantification, but the rest of that experience, and in particular the whole of that experience, will remain the domain of philosophers and poets.

Next, consider happiness. Brooks cites extensively from social-scientific happiness research, which is conducted “mostly by asking people if they are happy and then correlating their answers with other features of their lives.” Brooks acknowledges that this method “seems flimsy,” but argues that its results are “surprisingly stable and reliable.” The stability of the results, however, does not address the fundamental flimsiness of the method in question: the problem is not that one cannot establish a stable pattern of correlation between self-reported happiness and other aspects of life. The problem is the difficulty of measuring the correlation between self-reported happiness and actual happiness: the willingness to call oneself happy when asked by a researcher could be as much a sign of self-deception, vanity, or vapidity as it is of actual happiness. And to speak accurately of one’s own happiness, one would have to know what happiness is. As Brooks admits, this is “a subject of fierce debate among the experts,” which is no surprise, because any answer to the question of happiness depends on comprehensive reflection on the whole of human experience and aspiration, such as one finds in Aristotle’s Nicomachean Ethics. Such reflection may be the work of a lifetime.

With respect to nobility, Brooks speaks to the distinctly human concern with the noble in a fascinating passage on thumos, the ancient Greek word for “spiritedness,” which he uses in the broader meaning of “ambition” — as he puts it, “the desire to have people recognize your existence, not only now but for all time.” Brooks is clearly cognizant of the power of thumos for explaining human behavior, but has little to say about the explosive question upon which the notion of thumos opens, as Socrates pointed out long ago: What should we recognize? What deserves to be celebrated as human virtue so excellent that it should never be forgotten? What, in short, truly deserves to be called noble? Is it the warrior’s courage and martial prowess? The statesman’s capacity for superintending the political whole and leading it to greatness? Or is it the philosopher’s unstinting dedication to understanding the truth about justice, human nature, and happiness, and his willingness to live in the light of that truth? On the questions of what human activities are most worthy of respect, of which human exemplars should command our admiration and emulation, cognitive research is necessarily silent. While it might find some way to analyze respect — perhaps scanning our brains while we look at photos of Lincoln — when it comes to deciding what is respectable, the question is unintelligible from the point of view of the necessary relations of cause and effect which science studies.

Finally, we must consider a curious lacuna in The Social Animal: the question of our place in the whole. Brooks occasionally refers to evolutionary explanations for our preferences and predilections. For example, he notes that “evolutionary psychologists argue that people everywhere prefer paintings that correspond to the African savanna, where humanity emerged.” He has little to say, however, about the mechanism thought to be at the root of evolution itself: a brutal struggle for genetic survival — “Nature, red in tooth and claw,” in Tennyson’s words. If that is the fundamental truth of our being, then happiness, so central to Brooks’s argument, is a delusion: nature makes us what we are and it wants the species to evolve, without the slightest concern for the happiness of individuals.

Evolution’s account of the nature of nature, though pitiless, at least speaks to the philosophic question of the character of the whole within which we find ourselves. Beyond this natural question, however, looms the theological question of the origins of the natural whole — the question of who, or what, is God. God makes an occasional appearance in The Social Animal, as when Harold reflects on his soul on the day of his death:

The brain was physical meat, but out of the billions of energy pulses emerged spirit and soul. There must be some supreme creative energy, he thought, that can take love and turn it into synapses and then take a population of synapses and turn it into love. The hand of God must be there.

This passage nicely encapsulates how science’s account of the brain as “physical meat” can be compatible with an affirmation of spirit or soul. But even should we accept Brooks’s argument for the compatibility of soul and synapse, and Harold’s musing that “the hand of God” must be responsible for the enlivening of matter with spirit, we arrive only at the beginning of the theological questions: Why would God do such a thing? What kind of a God is God, anyway? Is God the God of love and mercy we know from the Gospels? Is God the radically mysterious and terrifying God who speaks to Job from the whirlwind? Or is God the God of the philosophers — not a person, and above concern with human affairs? Answers to such questions are, of course, intractably elusive. However, we cannot understand ourselves without at least attempting to face them, because the answers to them are dispositive for how we understand our place in the world, our happiness, and our end.

These are the questions about which we have become grossly inarticulate because of the undeniable atrophy of philosophy and theology from which Brooks begins. But our science recused itself from even asking such questions at its inception in the seventeenth century, when it narrowed its purview to the world of efficient causality, and thereby attained the precision and predictive power from which it derives its technical prowess and public authority. Insofar as the truly fundamental questions are questions about wholes — from the question of a happy human life as a whole to the question of the nature and origin of the world as a whole — science, which, in the words of Francis Bacon, requires a method “which shall analyse experience and take it to pieces,” cannot tackle such questions without ceasing to be what it is. (Recent scientific talk of “emergent systems” — where, as Brooks puts it, “different elements come together and produce something that is greater than the sum of their parts” — implicitly acknowledges that the world contains phenomena that do not permit of the precise causal explanation that makes science science.) For all of the empirical precision it derives from its methods, unless science is supplemented and corrected with the holistic reflections characteristic of theology and philosophy, it is and will remain humanistically and cosmologically naïve.

None of this means that the striking findings Brooks reports from cognitive and social science are irrelevant to the question of how we should live, and Brooks should be praised for making so many new insights available to non-scientists. But his presentation of cognitive science as the decisive voice on these questions encourages our already deep-seated habit of passive deference to scientific authority, and implicitly encourages the further atrophy of philosophy and theology Brooks laments by suggesting that science can replace them.

One sometimes wonders whether Brooks is aware that he cedes too much ground to scientism. A striking passage in the middle of The Social Animal suggests that he might be. Erica has started a consulting firm, hoping to use what she learned about the importance of culture from her childhood and in college to help businesses match their marketing to the cultural predilections of their customers. She finds, however, that her insights on culture get no traction with business executives, who are not attentive to the language of culture. She therefore reluctantly decides to present her work in the language of behavioral economics, which sounds, at least, like “rigorous, tough-minded science.” “Her clients,” Brooks writes, “respected science” — particularly behavioral economics, which is “hot and in demand” — and Erica yields to the predilections of her audience. One cannot help but wonder if a similar calculation took place in the mind of David Brooks. Neuroscience is hot and in demand; Edmund Burke and Alexander Hamilton, the deepest sources of Brooks’s political philosophy, are not.

In truth, the vision of human life presented in The Social Animal is a vision drawn from an enormous variety of sources, humanistic as well as scientific. Henry Adams-esque musings on the spirit of the Middle Ages and Allan Bloom-ite ruminations on love and friendship are so commingled with the findings of cognitive science that the book’s humanistic sources often seem at least as important as its scientific ones. Perhaps Brooks exaggerates the novelty and authority of the findings of the “cognitive revolution” to help some old insights find a wider audience in our pop-science-addled age.

Authorial responsibility, however, requires minding the difference between taking stock of the prejudice of one’s audience as a rhetorical starting point and reinforcing those prejudices. Brooks rarely if ever questions the findings of cognitive or social scientists in The Social Animal. It was not always thus; as recently as in On Paradise Drive (2004), Brooks was willing to counter a Gallup poll that reported that “96 percent of teenagers said they got along with their parents” with a demur based on his own experience: “I’m not sure families are quite that healthy.” He reported that “college students talk about prudential sex — the kind you have for leisure without any of that romantic Sturm und Drang, as a normal part of life,” but again demurred in his own voice that “many of them are lying.” One wishes that that David Brooks had been at the switch when the happiness researchers told him that hairdressers were among the happiest people in the world. If he had been, we might have gained from science without being asked to leave our judgment behind.

If, however, we set aside Brooks’s puffed-up claims on behalf of the cognitive revolution, we can begin to see the true merits of his book. The Social Animal is an astonishing feat of research, and rescues countless important discoveries about our nature from the purgatory of specialist literature. Brooks puts this research in the service of a sensible and humane teaching about moral and political life, alternately highly serious and gently comic. His novelistic imagining of the inner lives of Harold and Erica takes us inside the struggle to be moral when our will and reason are of limited power; to find work that “absorbs all [our] abilities” and satisfies our desire for recognition without being subsumed by it; to make the sacrifices necessary to truly care for another human being and unite with that person in love; and to face death with the consolation that our conduct and character have been a serious response to the demands life makes of us.

While the story of Harold and Erica has been criticized as lacking in the high drama that is the stuff of great novels, The Social Animal doesn’t pretend to be a great novel. Instead, Brooks imagines his characters facing the kind of moral dilemmas his readers are likely to experience: the tension between work and family, the temptations of booze and boredom-induced adultery, the struggle to focus one’s attention and really think while surrounded by “the normal data smog of cyber-connected life.” Brooks has a powerful grasp of how his professional-class readers live and think, and I saw much of myself in the mirror of the book.

Insofar as there is a little bit of Harold and Erica in each of us, The Social Animal can thus help us to know ourselves — an effort in which we can use all the help we can get. But help is one thing, and authority another. On the question of ourselves, we can have philosophers, theologians, poets, and, yes, scientists, for our companions and conversation partners. But the question, in the end, should — must — rest with us.

Benjamin Storey is an assistant professor of political science at Furman University.

Wednesday, January 18, 2012

On Niall Ferguson's Civilization

“Civilization: The West and the Rest,” by Niall Ferguson

By Steven Pearlstein, Published: January 13, 2012, WAPO

Niall Ferguson doesn’t hide the fact that he means for his latest work of meta-history to take its place on the global bookshelf next to the rise-and-fall works of Edward Gibbon, Kenneth Clark, Francis Fukuyama, Jared Diamond, Samuel P. Huntington and Paul Kennedy. “Civilization” tackles the big questions: What is a civilization? Why do they decline? Why, for the past 500 years, have Europe and the United States dominated everyone else? And — here’s the payoff — is the West now destined to take a back seat to Asia?

Surely there was nothing foreordained about the West’s economic, scientific, cultural or military superiority, as Ferguson reminds us. At various times, the Chinese, Aztec and Ottoman civilizations boasted the world’s highest standards of living, the best infrastructure, the fiercest armies, the largest cities, the most productive fields, the longest lifespans, the deepest understanding of the natural world, the best technology, the wisest rulers.

And yet beginning in about 1500, it was Europe, led by England, that began to pull ahead, to break out of the Malthusian trap that, up to that point, had sentenced most of humanity to short lives lived in poverty. Ferguson’s thesis — not entirely original — is that Europe’s success came not as the result of any natural advantages but because it was able to develop just the right mix of political, legal and social institutions that made it resilient enough to withstand the inevitable plagues, natural disasters, failed leaders or just plain bad luck.

The book, ostensibly, is organized around what Ferguson considers the six most vital of those institutional arrangements:

Competition, meaning a decentralization of power among nations and within them, necessary to create the right environment for capitalism;

Science, whose discoveries laid the basis not only for the Industrial Revolution but also for overwhelming military advantage;

Property rights, which provided a framework for the rule of law and laid the foundation for shared political and economic power;

Medicine, which led to a dramatic rise in life expectancy;

A consumer focus to economic life that fueled demand for modern industrial products; and

A work ethic that provided a moral framework for savings, investment and hard toil.

This topic and this structure play to Ferguson’s skills as an economic historian known for the breadth of his knowledge, the clarity and pithiness of his prose, and the originality of his analysis. His knack is for translating academic history into accessible concepts and concrete examples, setting them in the grand sweep of history and making them relevant to our present-day circumstances.

Always the intellectual provocateur, Ferguson also means to challenge the insidious dogma, now ascendant on university campuses, that holds that the “triumph of the West” was nothing more than a self-centered fiction concocted by European and American scholars to justify centuries of brutal colonialism and oppression. And while he stops short of arguing that the West’s decline is inevitable, he warns that it has become a real possibility that could unfold rather quickly.

While the basic outline of Ferguson’s argument is sound, the book itself is something of a disappointment. It reveals the strains on an ambitious academic who has churned out nine books in 13 years, all while hosting five series for British television, holding down two appointments at Harvard — one in the history department and one at the business school — and part-time fellowships at Stanford and Oxford, and writing a regular column for Newsweek. For the past eight years, he has also been working on a biography of Henry Kissinger.

The result of this prodigious over-scheduling is a book that is a mishmash of disconnected and sometimes contradictory riffs held together by faulty logic, inapt metaphors and clever turns of phrase. Instead of presenting himself as the well-read and widely traveled polymath he genuinely is, Ferguson comes off as an intellectual showoff who couldn’t be bothered to edit his own ideas.

The chapter on science, for example, opens with the rout of the Ottoman armies from the gates of Vienna in 1683 after they had laid siege to the capital of the Hapsburg empire. Ferguson wants us to ponder how the application of science to weaponry provided the West with a crucial military advantage over the rest of the world. Yet despite his entertaining stroll through the court of Frederick the Great, a lengthy discourse on the military precision of the Prussian army and a brief history of the secularization of Turkey in the 20th century, it’s hard to recall what the point is. Surely it is not the connection between science and weaponry, inasmuch as there is but a single passing reference to Napoleon-era howitzers.

In the chapter on property rights, Ferguson contrasts the different approaches taken by England and Spain toward their colonies in the New World. For England, North America became not only a source of raw materials, but a place where thousands of politically enfranchised citizens could go to stake a claim to land and begin moving up the social and economic ladder. In the Spanish south, by comparison, where there was a single-minded focus on extracting gold, silver and other natural resources, it was hardly surprising that almost all the land was held by the king, with most of the work done by subjugated natives and slaves imported from Africa.

Having made his point, convincingly, Ferguson can’t resist embarking on a long, rambling discourse on different rates of miscegenation in the United States and Brazil, complete with bar charts on the differing racial makeup of the two societies. What lesson this holds for the success of a civilization remains a mystery.

Even more bizarre is the chapter on the role of medicine in the rise of the West, in which Ferguson manages to avoid the subject of health care almost entirely. Instead, he hops from Napoleon to the revolution of 1848 to France’s management of its colonies in Africa. In his mind, this leads naturally to an exposition of Germany’s experiments with eugenics in its African colonies, from which a direct line must therefore be drawn to Nazi genocide in Eastern Europe. It was no mere coincidence, Ferguson assures us, that Hermann Goering, Hitler’s confidant and the head of the Nazi Luftwaffe, was the son of the German high commissioner in southwest Africa.

Ah, so that explains it!

In the chapter on work, Ferguson claims that the primary cause of Europe’s declining work ethic is the decline in Europe’s religious faith, which at one point he appears to blame on John Lennon. And applying the same logic, drawn from the sociologist Max Weber, he argues that the seeds of China’s economic miracle were planted by generations of Protestant missionaries who taught the Chinese the value of literacy, thrift and hard work. He even has a map to prove it.

The high point of “Civilization” may be the chapter on the importance of consumers in fostering economic progress — something that Marx never foresaw, Henry Ford never forgot, Hitler and Stalin never understood, and Japan and China came to embrace. “Perhaps the greatest mystery in the entire Cold War,” Ferguson writes, “is why the [Soviet Union] could not manage to produce a decent pair of blue jeans.” But even this line of inquiry is badly muddied. I have no idea whether it is true, as Ferguson asserts with great authority, that the research into body sizes done by the uniform division of the U.S. military during World War II laid the foundation for the boom in off-the-rack clothing in the postwar period, but it’s the kind of anecdote that is a Ferguson specialty and tends to underpin his analysis.

Ferguson’s take-away is that if there is a threat to the West’s continued dominance in the world, it comes not from China or Islam, but from ourselves — from our lack of knowledge of history and our lack of faith in the civilization we inherited. It’s a powerful theme and a great ending, alas, for a book that is still to be written.


Steven Pearlstein is a Washington Post business and economics columnist and the Robinson professor of public and international affairs at George Mason University.

Sunday, January 15, 2012

Obama's Recess Appointments

The OLC Opinion on Recess Appointments

MICHAEL MCCONNELL | JANUARY 12, 2012 | 8:33 PM

On January 10, I published an op-ed in the Wall Street Journal stating that I could see no plausible legal argument to support President Obama’s recent recess appointments to the Consumer Financial Protection Bureau and the National Labor Relations Board. I noted that the Administration had not relied on any opinion from the Office of Legal Counsel, and inferred that it must not have obtained such an opinion. http://www.advancingafreesociety.org/2012/01/10/democrats-and-executive-outreach/

Today, January 12, 2012, the Administration released an Office of Legal Counsel opinion, dated January 6, opining that the recess appointments were constitutional. The Opinion concludes that the pro forma sessions of the Senate conducted every three days during the December and January holiday are not sufficiently substantive to interrupt a Senate recess, meaning that the Senate was in recess from December 17 well into January.

I compliment the Administration for releasing the opinion, while still wondering what its reason was for the delay. It is reassuring that in this instance the Administration followed proper legal channels before taking a controversial constitutional position at odds with recent precedent (precedent established in 2007 by Senate Democrats, including then-Senator Obama).

I have not had time to give careful study to the 23-page OLC Opinion, but my preliminary reaction is not to be convinced. The Opinion makes arguments that are not frivolous, but it seems to me the counterarguments are more powerful.

In particular, the Opinion places enormous weight on the fact that the Senate’s resolution providing for pro forma sessions declared that there would be “no business conducted.” There are two problems with this, as a legal matter. First, as the Opinion concedes, the important question is whether at these sessions the Senate is “capable” of exercising its constitutional functions – not whether, on any particular occasion, it has chosen not to do so.

Second, in actual fact the Senate has conducted major business during these sessions, including passing the payroll tax holiday extension during a pro forma session on December 23. The Opinion weakly responds that, notwithstanding this evidence of actual practice, the President “may properly rely on the public pronouncements of the Senate that it will not conduct business.” It is hard to see why the Senate’s stated intention not to do business takes legal and constitutional precedence over its manifest ability to do so. The President is well aware the Senate is doing business on these days, because he has signed two pieces of legislation passed during them.

More fundamentally, the Opinion creates an implausible distinction between the legal efficacy of pro forma sessions for various constitutional purposes. According to the Opinion, a pro forma session is not sufficient to interrupt a recess for purposes of the Recess Appointments Clause, but it is sufficient to satisfy the constitutional command that neither house adjourn for more than three days without the consent of the other (Art. I, § 5, cl. 4) and that Congress convene on January 3 unless a law has provided for a different day. There is longstanding precedent that pro forma sessions are sufficient to satisfy these constitutional requirements. Why a pro forma session would count for some purposes and not others is a mystery. It is difficult to escape the conclusion that OLC is simply fashioning rules to reach the outcomes it wishes.

Finally, it bears mention that a great deal of the authority OLC cites in support of the President’s authority to make recess appointments during intrasession recesses in the first place – wholly apart from the pro forma issue – consists of prior executive branch pronouncements that are at odds with both the language and the history of the constitutional text. It would not be surprising if the judiciary were to reject these self-serving executive interpretations in favor of more straightforward ones. In particular, courts might rule that the Recess Appointments Clause applies only when a vacancy “happens” during a recess, as the text of Art. II, § 2, cl. 3, says, and that “the recess” of the Senate occurs only between sessions, and not (as here) in the midst of a session. The OLC Opinion acknowledges as much, when it says that the appointments face “some litigation risk.”

But the Obama Administration cannot be faulted for following longstanding executive precedent, which has been used by past Presidents both Republican and Democrat. It is only the novel arguments that I criticize here. It seems to me that the Administration is under special obligation to provide a bullet-proof legal argument when it declares invalid a strategy devised by Majority Leader Harry Reid in 2007, supported by then-Senator Barack Obama, and successfully used by them to stymie President George W. Bush’s recess appointment power. The law cannot change just because the shoe is on the other foot.

The fact that the Administration obtained an OLC opinion in advance of the appointments (the Opinion is dated two days after the appointments, but presumably it reflects the advice given to the President in advance) shows that they were not made, as initially appeared, without benefit of independent legal analysis. And the public should welcome the release of the opinion itself, so that we can know the official legal basis for the President’s acts, rather than having to guess. On the merits, though, at least on first study, the Opinion does not have the better of the argument.

Friday, January 13, 2012

Eagleton on De Botton on the Usefulness of Atheism (Not)

The novels of Graham Greene are full of reluctant Christians, men and women who would like to be rid of God but find themselves stuck with him like some lethal addiction. There are, however, reluctant atheists as well, people who long to dunk themselves in the baptismal font but can't quite bring themselves to believe. George Steiner and Roger Scruton have both been among this company at various stages of their careers. The agnostic philosopher Simon Critchley, who currently has a book in the press entitled The Faith of the Faithless, is one of a whole set of leftist thinkers today (Slavoj Žižek, Alain Badiou, Giorgio Agamben) whose work draws deeply on Christian theology. In this respect, the only thing that distinguishes them from the Pope is that they don't believe in God. It is rather like coming across a banker who doesn't believe in profit.
Such reluctant non-belief goes back a long way. Machiavelli thought religious ideas, however vacuous, were a useful way of terrorising the mob. Voltaire rejected the God of Christianity, but was anxious not to infect his servants with his own scepticism. Atheism was fine for the elite, but might breed dissent among the masses. The 18th-century Irish philosopher John Toland, who was rumoured to be the bastard son of a prostitute and a spoilt priest, clung to a "rational" religion himself, but thought the rabble should stick with their superstitions. There was one God for the rich and another for the poor. Edward Gibbon, one of the most notorious sceptics of all time, held that the religious doctrines he despised could still be socially useful. So does the German philosopher Jurgen Habermas today.

Diderot, a doyen of the French Enlightenment, wrote that the Christian gospel might have been a less gloomy affair if Jesus had fondled the breasts of the bridesmaids at Cana and caressed the buttocks of St John. Yet he, too, believed that religion was essential for social unity. Matthew Arnold feared the spread of godlessness among the Victorian working class. It could be countered, he thought, with a poeticised form of a Christianity in which he himself had long ceased to believe. The 19th-century French philosopher Auguste Comte, an out-and-out materialist, designed an ideal society complete with secular versions of God, priests, sacraments, prayer and feast days.

There is something deeply disingenuous about this whole tradition. "I don't believe myself, but it is politically prudent that you should" is the slogan of thinkers supposedly devoted to the integrity of the intellect. If the Almighty goes out of the window, how are social order and moral self-discipline to be maintained? It took the barefaced audacity of Friedrich Nietzsche to point out that if God was dead, then so was Man – or at least the conception of humanity favoured by the guardians of social order. The problem was not so much that God had inconveniently expired; it was that men and women were cravenly pretending that he was still alive, and thus refusing to revolutionise their idea of themselves.

God may be dead, but Alain de Botton's Religion for Atheists is a sign that the tradition from Voltaire to Arnold lives on. The book assumes that religious beliefs are a lot of nonsense, but that they remain indispensable to civilised existence. One wonders how this impeccably liberal author would react to being told that free speech and civil rights were all bunkum, but that they had their social uses and so shouldn't be knocked. Perhaps he might have the faintest sense of being patronised. De Botton claims that one can be an atheist while still finding religion "sporadically useful, interesting and consoling", which makes it sound rather like knocking up a bookcase when you are feeling a bit low. Since Christianity requires one, if need be, to lay down one's life for a stranger, he must have a strange idea of consolation. Like many an atheist, his theology is rather conservative and old-fashioned.

De Botton does not want people literally to believe, but he remains a latter-day Matthew Arnold, as his high Victorian language makes plain. Religion "teaches us to be polite, to honour one another, to be faithful and sober", as well as instructing us in "the charms of community". It all sounds tediously neat and civilised. This is not quite the gospel of a preacher who was tortured and executed for speaking up for justice, and who warned his comrades that if they followed his example they would meet with the same fate. In De Botton's well-manicured hands, this bloody business becomes a soothing form of spiritual therapy, able to "promote morality (and) engender a spirit of community". It is really a version of the Big Society.

Like Comte, De Botton believes in the need for a host of "consoling, subtle or just charming rituals" to restore a sense of community in a fractured society. He even envisages a new kind of restaurant in which strangers would be forced to sit together and open up their hearts to one another. There would be a Book of Agape on hand, which would instruct diners to speak to each other for prescribed lengths of time on prescribed topics. Quite how this will prevent looting and rioting is not entirely clear.

In Comtist style, De Botton also advocates secular versions of such sacred events as the Jewish Day of Atonement, the Catholic Mass and the Zen Buddhist tea ceremony. It is surprising he does not add Celtic versus Rangers. He is also keen on erecting billboards that carry moral or spiritual rather than commercial messages, perhaps (one speculates) in the style of "Leave Young Ladies Alone" or "Tortoises Have Feelings As Well". It is an oddly Orwellian vision for a self-proclaimed libertarian. Religious faith is reduced to a set of banal moral tags. We are invited to contemplate St Joseph in order to learn "how to face the trials of the workplace with a modest and uncomplaining temper". Not even the Walmart management have thought of that one. As a role model for resplendent virtue, we are offered not St Francis of Assisi but Warren Buffett.

What the book does, in short, is hijack other people's beliefs, empty them of content and redeploy them in the name of moral order, social consensus and aesthetic pleasure. It is an astonishingly impudent enterprise. It is also strikingly unoriginal. Liberal-capitalist societies, being by their nature divided, contentious places, are forever in search of a judicious dose of communitarianism to pin themselves together, and a secularised religion has long been one bogus solution on offer. The late Christopher Hitchens, who some people think is now discovering that his broadside God Is Not Great was slightly off the mark, would have scorned any such project. He did not consider that religion was a convenient fiction. He thought it was disgusting. Now there's something believers can get their teeth into …

Monday, January 9, 2012

Michael Gerson on Romney

Real Clear Politics, January 10, 2012.

WASHINGTON -- It is commonly argued that Mitt Romney has benefited from a weak Republican field, which is true. And that the attacks of his opponents have been late and diffuse. True, and true.


But the political accomplishment of Willard Mitt Romney should not be underestimated. The moderate, technocratic former governor of a liberal state is poised to secure the nomination of the most monolithically conservative Republican Party of modern history.

Some of this improbable achievement can be attributed to Romney's skills as a candidate. In 14 debates, he delivered one gaffe (the $10,000 bet) and once lost his temper (with Rick Perry) -- neither lapse particularly damaging. Under a barrage of awkward formats and dopey questions, Romney has been calm, knowledgeable and reassuring. The slickest network anchor could not have done better.


Romney is the varsity -- a far better candidate than, say, Bob Dole or John McCain. A Republican nominating process that swerved again and again toward silliness -- alternately elevating for consideration Donald Trump, Michele Bachmann and Herman Cain -- seems ready to settle on a serious, accomplished, credible candidate. Republicans, it turns out, are choleric and fractious -- but not suicidal.


The nominating process has also revealed Romney's limitations. It would be awkward for anyone this stiff to pose as a working-class stiff, and Romney should not try. But if he gains the nomination, Romney's rival in connecting with average voters will not be Bill Clinton. It will be professor Barack Obama. Again, Romney benefits from the luck of the draw.


Romney has paired his skills with a sophisticated political strategy. His campaign team learned something from the failures of four years ago. Last time, Romney flooded the early states with money and personal attention. In Iowa, his limited return on investment made him a political punch line. This time, Romney rationed both his money and his presence -- lowering expectations and generating genuine enthusiasm when he finally arrived to campaign. When a late political opportunity presented itself -- in the form of a persistently divided Republican field -- the Romney campaign skillfully ramped up for a narrow win. Adding a victory in New Hampshire is an achievement that Ronald Reagan never managed as a challenger.


Ideology has always been Romney's main vulnerability. Running and winning in Massachusetts before running twice for the Republican presidential nomination is a process best described by biologists -- a story of adaptation and evolution.


Other candidates have naturally carried more vivid ideological messages. In the end, the intra-Republican argument has come down to Ron Paul versus Rick Santorum -- both effective spokesmen for their views. Paul, by his own description, is preaching the pure "gospel of liberty." He carries the hopes of libertarians and those who seek a return to the federal government of an 18th-century agrarian republic. Santorum stands more in the empowerment tradition of Jack Kemp or George W. Bush. On the whole, he is reconciled to the goals of modern government -- encouraging equal opportunity and care for the elderly, sick and vulnerable -- but not to the bureaucratic methods of modern government. Santorum's lot would encourage the provision of services through credits, vouchers and defined contributions.


I come down on the empowerment side of this divide. But maybe, at this moment, the Republican Party doesn't need a clear decision on its identity (which might not be possible anyway). Romney has this advantage: In supporting him, no Republican is called upon to surrender his or her deepest ideological convictions. Romney is temperamentally conservative but not particularly ideological. He reserves his enthusiasm for quantitative analysis and organizational discipline. He seems to view the cultural and philosophic debates that drive others as distractions from the real task of governing -- making systems work.


His competitors have attempted to portray Romney's ideological inconsistency over time as a character failure. It hasn't worked, mainly because Romney is a man of exemplary character -- deeply loyal to his faith, his family and his country. But he clearly places political ideology in a different category of fidelity. Like Dwight Eisenhower, Romney is a man of vague ideology and deep values. In political matters, he is empirical and pragmatic. He studies problems, assesses risks, calculates likely outcomes. Those expecting Romney to be a philosophic leader will be disappointed. He is a management consultant, and a good one.


Has the moment of the management consultant arrived in American politics? In our desperate drought of public competence, Romney has a strong case to make.

michaelgerson(at)washpost.com

Wednesday, January 4, 2012

The Fat Lady Has Sung?

Jacob Weisberg, Slate, January 4, 2012

Is there anyone not annoyed by Mitt Romney’s narrow win in the Iowa caucus? Conservatives are disappointed because they recognize that the former Massachusetts governor, who used to be pro-choice and was for Obamacare before it was called that, is only pretending to be one of them.

Seventy-five percent of Iowa’s Republican voters wanted someone further to the right. But because their votes were divided among too many weak and weird candidates, the only moderate running in their state came out on top.
Liberals are bummed because Romney is the strongest potential challenger to President Obama.


This shows up clearly in head-to-head polls, which put Romney tied with or slightly ahead of Obama, while other Republican contenders trail by 10 points or more. It was hard for Obama campaign officials to suppress their glee last month when Newt Gingrich, the only even remotely plausible alternative to Romney, briefly ran at the head of the pack. But even they knew this was a momentary aberration.

Short of Republicans committing collective suicide by picking someone else, Democrats would like to see Romney win the nomination after a protracted, costly struggle that would deplete his financial resources, sully his image, and drag him further to the right. Today, that scenario looks less likely.

We journalists are sorriest of all, because Romney coasting to victory is a weak story. Were the press any other industry, cynicism about its self-interest in promoting marginal challengers would prevail. Local television stations (many of them owned by media conglomerates such as Slate’s owner, the Washington Post Company) count on election-year revenue bumps from political advertising in important primary states.

If the nomination contest is effectively over by, say, the time of the Michigan primary on Feb. 28, valuable money will be left on the table. But for reporters, rooting for the underdog, any underdog, is really a matter of wanting a more dramatic story. The strait-laced front-runner winning Iowa and New Hampshire before securing the nomination early on does not count as a compelling narrative. Hence the media’s pretense of taking seriously a succession of nonviable candidates with outlandish views. Rick Santorum is not, under any circumstances, going to be the GOP nominee.

This confluence of motives amounts to an insider conspiracy to resist the obvious.

So expect to hear more and more about less and less likely alternatives to a Romney victory in the coming weeks. Jon Huntsman, the only candidate yet to enjoy a moment of popular enthusiasm, could do better than expected in New Hampshire.

Once Rick Perry joins Michele Bachmann in dropping out, conservative sentiment could coalesce around the unlikely survivor Rick Santorum. Chris Christie could still change his mind! Anything could happen, of course, but it won’t. In the end, the GOP is overwhelmingly likely to nominate Romney because he is the most electable candidate available, and at this point no one else can beat him.

The Republican Party Romney is likely to lead into battle has, however, revealed itself in a diminished state—dominated by its activist extreme, focused on irrelevancy, and deaf to reason about the country’s fiscal choices. To survive a Republican debate you are required to hold the incoherent view that the budget should be balanced immediately, taxes cut dramatically, and the major categories of spending (the military, Social Security, Medicare) left largely intact. There is no way to make these numbers add up, and the candidates do not try, relying instead on focus-group-tested denunciations of Obama and abstract hostility to the ways of Washington.


Above every other issue, the candidates in Iowa pandered about how thoroughly and completely they would ban abortion. Paul, an obstetrician by training, blanketed the state with ads making the dubious claim that he once saw doctors dispose of a live baby in a trash bin. (If so, why did he not intervene?) Gingrich proposed throwing out the Constitution to defy judges who invalidate anti-abortion legislation.

In the closing days of the campaign, Perry extended his opposition to abortion to include cases of rape or incest. Santorum toured with members of the Duggar family, who are featured in the TLC reality show 19 Kids and Counting. Like the Duggars, Santorum believes contraception is “not okay.”

The notion that the Tea Party stood for something new on the American right has now dissolved in favor of a familiar range of radical, not really conservative tendencies. Iowa clarifies this factionalism by presenting it in exaggerated form.


There is juvenile libertarianism, represented by Ron Paul. There is theocratic moralism, offered in evangelical Protestant flavors by Bachmann and Perry, and in a Catholic version by Santorum. There is the idea of ideas-based politics, represented by Gingrich.

When all of these alternatives finish falling by the wayside, what will remain is the attempt to actually win a national election, represented by one Willard Mitt Romney.

Tuesday, January 3, 2012

Knowledge and the Internet

What Lies Beneath
By Evgeny Morozov, Sunday, January 1, 2012

"Too Big to Know: Rethinking Knowledge Now that the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room"
by David Weinberger
Basic Books, $14.29


Karl Popper, a towering figure in 20th century philosophy of science, firmly opposed the view that theories emerge from random observations. Once he even ridiculed a hypothetical scientist “who dedicated his life to natural science, wrote down everything he could observe, and bequeathed his priceless collection of observations to the Royal Society to be used as inductive evidence.” For Popper, “though beetles may profitably be collected, observations may not.”

David Weinberger, the author of “Too Big to Know” and a Harvard researcher, doesn’t mention Popper. But his rejoinder is easy to predict: Popper’s theory of knowledge was conditioned and constrained by the medium he used to develop it. Had Popper stopped worrying about limited paper supply and embraced today’s world of hyperlinks and infinite storage, even the most inconsequential of observations would look like knowledge to him.

Weinberger wants to be the Marshall McLuhan of knowledge management. Where McLuhan claimed that the medium shapes the reception of the message, Weinberger claims that the medium also shapes what counts as knowledge. Or, as he himself puts it, “transform the medium by which we develop, preserve, and communicate knowledge, and we transform knowledge.”

How does this happen? Weinberger argues that on the Internet facts are born “linked,” pointing to other facts and opinions. With time, other entities start linking to them, creating digital traces that can be used to scrutinize and even revise original facts.

On paper, facts look firm and reliable; online, they are always in flux. Furthermore, the Internet, unlike your local library, is infinite. Librarians choose which books to acquire; books that don’t make the cut become invisible. Not so with search engines. What they filter out doesn’t disappear — it stays in the background. New filters, Weinberger claims, don’t “filter out” but “filter forward.”

This triumph of the “networked” and the “hyperlinked” unsettles everything: facts (those who think that Barack Obama was born in Kenya also have facts), books (they are unable to contain “linked” and infinite knowledge) and even knowledge itself (it’s too obsessed with theories and consensus-seeking). Thus, “knowledge has become a network with the characteristics — for better and for worse — of the Net.”

This is an ambitious thesis. It’s also not original. “The Postmodern Condition: A Report on Knowledge,” a famous 1979 book by the French philosopher Jean-François Lyotard, makes a similar claim about computerization. “Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as ‘knowledge statements,’” wrote Lyotard. Weinberger doesn’t mention Lyotard by name but claims that “the Internet showed us that the postmodernists were right.”

Too bad, then, that his argument is riddled with familiar postmodernist fallacies, the chief of which is his lack of discipline in using loaded terms like “knowledge.” This term means different things in philosophy and information science; the truth of a proposition matters in the former but not necessarily in the latter. Likewise, sociologists of knowledge trace the social life of facts, often by studying how and why people come to regard certain claims as “knowledge.” The truth of such claims is often irrelevant.

For epistemologists, however, to say that “S knows that p,” three conditions must be met: p must be true; S must believe that p; and S must be justified in believing that p. One can’t “know” that “Barack Obama was born in Kenya” because it’s untrue. On the other hand, to “know” that “Barack Obama was born in Hawaii,” one needs to have justification. A copy of his birth certificate would do. The hyperlink nirvana has not rid us of the justification requirement. The Internet may have altered the context in which justification is obtained — one can now link to Obama’s birth certificate — but it hasn’t changed what counts as “knowledge.”
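
Stated compactly, these three conditions read as follows; the formalization is a standard textbook rendering, not notation taken from Weinberger’s book or from this review:

\[
K_S(p) \;\Longleftrightarrow\; p \,\wedge\, B_S(p) \,\wedge\, J_S(p)
\]

where $K_S(p)$ means “S knows that p,” $B_S(p)$ means “S believes that p,” and $J_S(p)$ means “S is justified in believing that p.” The Kenya claim fails the first conjunct; the Hawaii claim needs the third.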

That scientific facts today are no longer persuasive on their own has less to do with our changing attitudes toward knowledge than with our changing attitudes — marked by suspicion of power, expertise and claims to neutrality — toward science as a socio-political enterprise. Postmodernists foresaw some of these changes, but Weinberger overstates their contribution. The sociology of scientific knowledge and science studies played a more consequential role. These two disciplines have posed valid challenges to traditional epistemology, but Weinberger is too impatient to position his argument within that debate, at times invoking claims from both camps but never stating his own theory of truth (thus, he can simultaneously claim that “truth will remain truth” and that “knowledge is becoming ... unthinkable without ... the network that enables it”). Such carelessness stems from Weinberger’s fixation on media and its history, a fixation that comes at the expense of engaging with other fields and disciplines studying the production of knowledge. He may be right that the notion of “objectivity” is dead in journalism. It is, however, very much alive in science.

Many of Weinberger’s other claims fall apart on closer examination. Does Hunch.com, a site that asks users hundreds of questions to predict what movies or books they might like, exemplify “a serious shift in our image of what knowledge looks like”? Hunch.com uses techniques of statistics, data mining and machine learning to turn correlations into recommendations. All of these, of course, are well-established disciplines predating the Internet. For Weinberger, the claim that “75% of people who liked ‘Mad Men’ also liked ‘Breaking Bad’” is revolutionary because, unlike Darwin’s theory of evolution, it is “theory-free.” However, such “theory-free knowledge” has a very long history. Think of census reports, surveys or marketing questionnaires. Yes, people fill in these forms online now. But is this somehow revolutionary?
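
To see how little theory such a statistic requires, here is a minimal Python sketch using made-up viewing data; neither the numbers nor the code come from Hunch.com or from Weinberger, and they are only meant to show how a co-occurrence claim like “75% of people who liked ‘Mad Men’ also liked ‘Breaking Bad’” gets computed:

# Hypothetical viewing data: user -> set of shows they liked.
# (Illustrative only; not real Hunch.com data.)
likes = {
    "u1": {"Mad Men", "Breaking Bad"},
    "u2": {"Mad Men", "Breaking Bad"},
    "u3": {"Mad Men", "Breaking Bad"},
    "u4": {"Mad Men"},
    "u5": {"Breaking Bad"},
}

def co_like_rate(a, b, likes):
    """Fraction of users who liked show a that also liked show b."""
    fans_of_a = [shows for shows in likes.values() if a in shows]
    if not fans_of_a:
        return 0.0
    return sum(b in shows for shows in fans_of_a) / len(fans_of_a)

# Prints 0.75: three of the four "Mad Men" fans also liked "Breaking Bad".
print(co_like_rate("Mad Men", "Breaking Bad", likes))

No hypothesis about why the two shows go together enters anywhere; the “knowledge” is just a conditional frequency read off the data, which is the point about its long, pre-Internet history.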

Weinberger does have an annoying penchant for finding novelty in things simply because they exist online. Thus, he invokes PolitiFact.com — a prominent journalistic project that fact-checks what public figures say — to argue that the Internet is also capable of countering misinformation. What an odd choice: PolitiFact.com, as it happens, is not run from a geek’s basement. It’s run by the St. Petersburg Times, a card-carrying member of the old knowledge regime. Yes, it operates online. But the claim that PolitiFact can therefore tell us something about “the Internet” is highly suspect.

In the end, it’s hard to say what this book is about. Weinberger is too incurious to interrogate the modern state of knowledge or explain which of our current attitudes toward it are driven by the Internet and which by other social dynamics. And it’s certainly not a book about technology: Weinberger distances himself from this topic, and his shallow treatment of online filters suggests it was a wise decision. Perhaps, then, this is a book about the prospect of yet another digital revolution — a revolution so vague that no one would blame Weinberger if it fails to materialize.

Evgeny Morozov, a visiting scholar at Stanford University, is the author of “The Net Delusion: The Dark Side of Internet Freedom” (PublicAffairs).