
For Americans, the 1990s are both the most sharply defined and the most fuzzily understood of modern decades. The nineties began on 11/9/1989, with the breaching of the Berlin Wall by East Germans—a symbolic repudiation of communism and a glorious American victory in the Cold War. They ended on 9/11/2001, when al-Qaeda terrorists, most of them Saudi Arabians, flew two airplanes into the Twin Towers of the World Trade Center. President George W. Bush responded by launching a global war on terror that culminated in the invasion of Iraq, which brought a historic military defeat and an even more consequential reputational one. At the start of the nineties, Americans seemed to possess unique insight into the principles on which modern economies and societies were built. At the end of the nineties, Americans were stunned to discover that the person with the best insight into their own country and its vulnerabilities was Osama bin Laden.

Something in the nineties had gone calamitously, tragically, but invisibly wrong. The United States had endured setbacks: the Los Angeles riots of 1992; various mid-decade standoffs, shoot-outs, and bombings, from Waco to Ruby Ridge to Oklahoma City; and the dot-com equities crash at century’s end. Yet there was scarcely an instant in the whole decade when the country’s strength, stability, and moral pre-eminence were questioned, at least in mainstream media outlets.

It was not as if nothing changed in the nineties—but almost all the changes seemed to make the position of the United States more secure. The country underwent the largest peacetime economic expansion in its history. The stock market boomed. Home ownership rose. The government showed more fiscal responsibility than it had in a generation, finishing the decade with annual budget surpluses. Government spending as a percentage of GDP fell to levels last seen in the 1960s. So did crime of all kinds.

Using computer networking technology devised by its military and refined by its scientists, bureaucrats, and hackers, the United States was managing the global transition to an information economy. The United States got to write the rules under which this transformation took place. That should have been a source of safety—but it turned out to be a source of peril. The Cold War victory, combined with a chance to redefine the economic relations that obtain among every human being on earth, was a temptation to Promethean excess. An exceptionally legalistic, hedonistic, and anti-traditional nation, the United States was poorly equipped to resist such a temptation. It misunderstood the victory it had won and the global reconstruction it was carrying out.

In a way, information won the Cold War. In the last two years of the Reagan administration, diplomatic channels to Moscow were wide open and Secretary of State George Shultz was developing a trusting relationship with Soviet leader Mikhail Gorbachev. One way in which Shultz wowed Gorbachev was by sharing data from analysts’ reports shown him by his friend, the former Citicorp chairman Walter Wriston. These reports mostly concerned the market for information: among other things, the effect on the finance industry of instantaneous fund transfers, the savings to be gained from replacing mined copper wires with fiber optic cables, and the declining cost of computing power. Wriston theorized that the increasing velocity of information was making longstanding ideas of Westphalian sovereignty impracticable in the West and longstanding means of party control impossible in the Eastern Bloc.

Wriston was wrong, as the example of twenty-first-century China would show. But soon after the Berlin Wall fell, some specialists were attributing that event to computing capacity. Economists had understood the Cold War this way for more than half a century. The so-called economic calculation problem had been posed by Ludwig von Mises in 1920 and elaborated by Friedrich von Hayek in the decades that followed. The two Austrian economists argued that markets were an indispensable tool for pricing (and thus allocating) goods, and that their absence would introduce fatal inefficiencies into socialism. Hayek and Mises had long been thought to hold the losing intellectual hand. Now, it seemed, they had been vindicated.

In the last days of March 1990, Vytautas Landsbergis, the anti-communist parliamentary speaker of the Lithuanian Soviet Socialist Republic, was leading police and nationalist protesters in a standoff against Soviet army troops in Vilnius. The newly elected anti-communist president of Nicaragua, Violeta Chamorro, was taking over the country’s army from the Marxist Sandinistas who had built it. And one of the most distinguished followers of Hayek, James M. Buchanan, traveled to Australia to explain before a triumphalist audience of conservatives why all this stuff was happening. Buchanan, a Chicago-trained economist, pioneer of “public choice theory,” and Nobel Prize–winner in economics in 1986, would focus on three things: information, efficiency, and values. Because exchange is “complex,” Buchanan told the Australians, state planners are too far away from the action to “fully exploit the strictly localized information that emerges in the separate but interlinked markets.” Considering what we had come to know about the information carried in market prices, Buchanan was incredulous that so many had denied the superiority of free markets for so long.

Read decades later, his speech gives the sense that the triumph of free markets was on shakier intellectual ground than anyone understood at the time. Buchanan’s assumptions about “strictly localized” information—presumably from factories, shops, and households—reflected how the problem of gathering business information had been understood between the 1930s and 1990. But the internet would begin to draw a broad commercial public roughly three years later, and once it was up and running, almost no information would remain “strictly localized”—nor could it be kept private, except through measures that were themselves costly. A new tool for centralized, comprehensive, and efficient surveillance of market transactions was on its way—and with the invention of HTTP cookies, such information might simply be requisitioned, like grain stored by Soviet peasants in the 1930s. Eventually, certain academic economists would suggest that, in enabling capitalism to triumph in practice, the personal computer had made socialism possible in principle. A new universe of economic possibilities was opening up.

And not just in the formerly communist world. Sweeping social change falls on the just and the unjust. Much as the putatively non-racist North had been altered by the Civil Rights Act of 1964 no less than the putatively racist South, so the putatively capitalist “free world” would be rocked by the lessons the socialist world was being taught about markets.

Buchanan naturally defended the market economy as more efficient. “It is now, in 1990, almost universally acknowledged that such an economy ‘works better’ than a socialized economy,” he told his Aussie listeners. “And the meaning of ‘works better’ is quite straightforward: the private-ownership, individualized economy produces a higher valued bundle of goods and services from the resource capacities available to the individuals in a politically organized community.” But here, too, Buchanan was out on a limb. In the academy, the meaning of “works better” was indeed as straightforward as he said. But in society, the meaning of “works better” was not straightforward at all. An economic system produces more than consumer products. It produces attitudes, traditions, hierarchies, and geographies. The efficiency of an economic system, like the efficiency of a grammar school curriculum or a marriage regime, might not be evident until decades or generations later.

Only at the end of his discussion did Buchanan address “values,” a word that better than any other links social behaviors to commodity prices. He addressed it in a way that was strange and a bit disturbing. He did not tell his listeners to put their noses to the grindstone, as an economist of his grandfather’s generation might have done. He said this:

The only proviso here is that the value scalar, the measure through which disparate goods and services are ultimately compared, must be that which emerges from the voluntary exchange process itself. If the value scalar is, itself, determined by the centralized socialist planners, there is, of course, no reason to think that the private ownership economy will ‘work better’ in generating more ‘value’ along this measure.

The values in a capitalist system, in other words, must be all capitalist. What sells is what’s right. If citizens try to import their traditions and sentiments into the economic system, the system will seize up and cease to work. That used to be the main argument of capitalism’s foes. Marx and Engels warned that, under capitalism, “all fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away.” That capitalism’s evangelists now insisted on this point looked like a bait-and-switch. It introduced a different argument than the one Westerners had thought they were having. Most of the momentum for embracing capitalism had arisen from specific discontents with socialism: the shoddiness of American cars once the United Auto Workers were in control; London left freezing whenever the National Union of Mineworkers wanted a raise; the censorship of Leonid Brezhnev; high income taxes; mediocrity; standardization.

Except among Ayn Rand’s readers, the twentieth-century argument about social systems almost never rested on the case for unconstrained free markets (though a generation of entrepreneurs and boosters would later begin casting the argument that way). Just as a good many idealistic socialists in the 1930s wound up muttering to themselves in the Gulag, “I didn’t ask for this . . . I just didn’t like my boss,” so a good many Americans who had voted for Ronald Reagan in the 1980s would later mutter at their twenty-first-century rideshare jobs, “I didn’t ask for this . . . I just didn’t like the long lines at the DMV.” When, today, Western Europeans refer to the trente glorieuses or the Wirtschaftswunder, the thirty years they are calling “glorious” and “miraculous” are the social-democratic ones that followed World War II—certainly not the capitalist ones that followed the European Union’s 1992 Maastricht Treaty.

Buchanan’s invocation of values also revealed the capitalist system as fragile. Capitalism is the best system only so long as non-market principles don’t contaminate it. On its own, the capitalist system lacks the resources to defend itself against illiberal principles, should any be introduced. “With no overriding principle that dictates how an economy is to be organised,” he warned, “the political structure is open to maximal exploitation by the pressures of well-organised interests which seek to exploit the powers of the state to secure differential profits.” What few Americans understood at the time was that, since the Civil Rights Act of 1964, there already were illiberal principles at work at the heart of the American governmental machinery.

The nineties are remembered as a time of dizzying change, but the momentous stuff came into view only gradually. The early years of the decade had seemed an insipid aftermath to the banquet of optimism Reagan had served up. A recession in 1991, in a country that had grown unused to recessions, brought a collapse of the public’s faith in George H. W. Bush, a prohibitive favorite for reelection just months before. The way was cleared for the young governor of Arkansas. Bill Clinton is the protagonist of the American politics of the nineties; his predecessor, president during three of the ten years of the decade, is missing from most people’s memories of the time.

In the summer of 1994, Rwandans were still pouring across the border into Zaire after the recent genocide in their country. President Clinton, having failed to pass his controversial national health plan, was lobbying Congress to pass an assault weapons ban that might serve as a centerpiece in the upcoming midterm elections. It did—and played a big role in the Democrats’ historic fifty-four-seat loss. Meanwhile, the advance of computer networks was becoming central to Americans’ understanding of their economy and society. Four writers on technology (Esther Dyson, George Gilder, George Keyworth, and Alvin Toffler) had noticed that the internet was now “huge,” with a scarcely believable 2.2 million computers connected to it. Not themselves inventors but rather internet theorists and ideologues of long standing, they decided to write what they called a “Magna Carta for the Knowledge Age.”

It was a typical product of the time. Ambitious people, tipped off that a new era was dawning, volunteered to be the Thomas Jeffersons, even the John Lockes, of the information age, on the strength of a memo dashed off one afternoon after lunch or a neologism coined at a breakfast meeting. “The central event of the 20th century is the overthrow of matter,” the authors portentously began. “In technology, economics, and the politics of nations, wealth—in the form of physical resources—has been losing value and significance. The powers of mind are everywhere ascendant over the brute force of things.”

The Gilder-Dyson manifesto may have inspired the more famous “Declaration of the Independence of Cyberspace” by the countercultural activist John Perry Barlow, which made a splash when Barlow declaimed it at the Davos World Economic Forum: “Governments of the Industrial World, you weary giants of flesh and steel,” it ran in part, “I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.” Though celebrated in its time, Barlow’s declaration is today generally cited to comic effect.

Spoiler alert: The technological changes underway in the early Clinton years did not do away with matter. There was more of it every day, in fact! China, which had had 5.5 million motor vehicles in 1990, would have three times as many in 2000. The factory hub of Shenzhen began the decade with 1.7 million people and ended with more than 7 million, though Apple would not make it the center of its manufacturing operations until the following decade. But the manifesto authors didn’t see that. They were far from the places where matter was being extracted and assembled: the Nike factories in Vietnam, the TSMC chip factory in Taiwan, not to mention the Chilean copper and lithium mines and the hitherto unreachable crannies out of which fossil fuels were being slant-drilled and hydraulically fractured. The United States had gone to war against Iraq in 1990–91 rather than accept Saddam Hussein’s contention that he had invaded Kuwait because it had slant-drilled its way into oil deposits that lay under Iraqi soil. In 1991, with the opening of the Barnett Shale, a colossal “tight gas” deposit just outside of Fort Worth, fracking got underway in earnest. It was there, in 1997 and 1998, that many of the techniques were developed that would turn the United States into a net energy exporter in the twenty-first century.

On the question of whether we were living in a material world, Madonna was apparently better informed than Esther Dyson. What was going on was not the volatilization of matter but the internationalization of the division of labor, such that the United States could profit from the production of matter without (in most cases) suffering from its proximity. Global energy-related CO2 emissions rose by 12 percent during the nineties, but in the United States the Environmental Protection Agency boasted of steep drops in the air concentration of virtually all pollutants.

The writers of the cyber–Magna Carta held the then-unanimous opinion that the internet would be a revolutionary force for openness: “As America continued to explore new frontiers,” they enthused, “from the Northwest Territory to the Oklahoma land-rush—it consistently returned to this fundamental principle of rights, reaffirming, time after time, that power resides with the people. Cyberspace is the latest American frontier.” This assertion would prove mostly wrong. Many of the cocksure policy predictions in the document were lifted from Toffler’s bestseller, The Third Wave, written during the Carter administration, a decade and a half earlier: “The reality is that a Third Wave government will be vastly smaller (perhaps by 50 percent or more) than the current one.” The authors also believed that censorship was not only being eliminated but becoming unthinkable:

For government to insist on the right to peer into every computer . . . for government to influence which political viewpoints would be carried over the airwaves . . . might have made sense in a Second Wave world. . . . [It makes] no sense at all in the Third Wave.

In fact, cyberspace was already revealing its possibilities as a place of surveillance. In a 1993 New Yorker cartoon, Peter Steiner’s dog sits at a desktop computer telling a canine friend, “On the internet, nobody knows you’re a dog.” By decade’s end, it would be the most reproduced cartoon in the history of the magazine. And yet the great promise of anonymity was about to be withdrawn. The web would feel less like outer space and more like a hive.

The credentialed Bildungsbürger who got in on the internet’s ground floor must have been flattered to read: “Cyberspace is the land of knowledge, and the exploration of that land can be a civilization’s truest, highest calling.” Can be, yes. But the internet wasn’t there yet. The high calling of culture comes from long-nurtured habits. No such habits attached to the internet. Until the twenty-first century, the internet served more to store old culture than to produce new. Perhaps that is why so much of nineties culture took the form of parodies and mashups and remakes and ironic allusions: the songs of They Might Be Giants and the movies of the Coen brothers, Quentin Tarantino, and Wes Anderson. The culture had a Promethean cast: man-made things made from man-made things.

The internet was not a tool to expand freedom. It was a tool to expand knowledge. The authors of the cyber–Magna Carta, like most Americans of the early nineties, failed to understand this distinction. They had no reason to understand it. But things were changing.

Knowledge was not just a credential. In the context of those end-of-the-Cold-War arguments about whether the state has any right to govern things about which it lacks knowledge, knowledge was a source of governing legitimacy. The ordinary person’s superior knowledge of his immediate environment was the basis for his autonomy and his sovereignty. That was the heart of James Buchanan’s argument for the superiority of capitalism to socialism.

Notice that it is a very different grounding for individual rights than the one found in the U.S. Constitution. The Constitution says that you are entitled to govern yourself because it is your right. The free-marketeers say that you are entitled to govern yourself if you can demonstrate that you understand your circumstances better than anyone else understands your circumstances. Until the nineties, this difference had few practical consequences. Almost by definition, no one else could understand your circumstances better than you did.

But in the nineties, as the MIT Media Lab began putting monitors and sensors in “smart clothing,” from eyeglasses to underwear, the theoretical possibility arose that the internet would shortly begin outstripping individuals’ knowledge of their own most intimate situations, and even their most intimate urges. That would complicate matters. A man might say after work, “I feel like a drink,” but as early as 2007, with the founding of Fitbit, a commercially available source of scientific authority could tell him he was not necessarily the best judge of that. Then there was the corporate hoarding of all the data people threw off just by looking around online. You might claim to be an upright citizen, but that’s not what your browser history tells us. By the end of the nineties it was evident that, on the internet, everyone knows you’re a dog.

The authors of the cyber–Magna Carta got pretty much nothing right, aside from deriding the metaphor then favored by Vice President Al Gore to describe the internet. He called it the “Information Superhighway,” which made the internet sound like a New Deal public-works project. A better metaphor for the internet would have been a refrigerator door onto which Republican free-market talking points could be magnetically stuck.

Documents like the cyber–Magna Carta were valuable for demonstrating how people who wanted something out of the internet—politicians and businessmen—would react to it. What the authors really wanted to know was: “Who will define the nature of cyberspace property rights, and how?” Whenever they caught their breath after a torrent of patriotic fustian, it was evident that they were seeking very specific things: regulatory approval of the merger of cable and phone companies, accelerated depreciation schedules from the IRS, a scrapping of utilities regulation. This is crucial to remember about those years in the middle of the nineties: Only 2.2 million computers were on the internet, and already the spoils were being divvied up. They were being divvied up before most Americans realized there was anything to divvy up.

That explains, in part, how Bill Clinton, not always a beloved president and for stretches of his administration a pitiable one, was able to turn the country in a new direction. Progressives blame Clinton for embracing Reaganism, conservatives for only pretending to embrace it. Both greatly underestimate him. Clinton convinced the country to endorse his project of “reinventing government”—a deeper, darker, more coherent, and original enterprise than anyone at the time imagined. It solved most of the domestic problems that had defeated presidents Carter, Reagan, and Bush Sr., though in so doing it created others. It was not a partisan project. Three very different governors elected in 1990—Democrat Lawton Chiles of Florida, Republican George Voinovich of Ohio, and liberal Republican Bill Weld of Massachusetts—had begun putting its principles into practice by the time Clinton came to Washington.

David Osborne, a journalist, and Ted Gaebler, city manager of wealthy San Rafael, California, had been gathering case studies and governing tips for about a decade and sharing them with any politician who would listen. They published their findings in a book called Reinventing Government, in time for the 1992 elections. Sometimes the authors called their system “an American perestroika,” at other times “entrepreneurial government.”

It was actually a theory of management. Gaga for classic American management literature—W. Edwards Deming, Tom Peters, Robert Waterman, and above all Peter Drucker—the authors asked why government couldn’t work with the efficiency of a business. Americans had always asked that question, but in 1992 it was in the air more than ever. The erratic Texas businessman H. Ross Perot made it the centerpiece of his independent run for president that year and won 19 percent of the vote.

Osborne and Gaebler had several explanations for government’s relatively poor performance:

Companies “can make quick decisions behind closed doors”; government is tied up in all sorts of structures of accountability.
Companies can focus on making money; government’s responsibility to “do good” makes it subject to moral absolutes.
Companies are “mission driven,” pursuing goals beyond merely keeping voters content, which is what governments must do.
Companies raise money through sales, whereas governments raise it through taxes, leaving the public with “a constant impulse to control.”

The lesson, which the authors never explicitly enunciated, leaps off the page: Government doesn’t work because it is democratically accountable to voters. Notwithstanding its anodyne title, Reinventing Government was a rough, tough, cutthroat set of case studies about what bureaucrat heroes could accomplish if they were only ruthless enough to put the voting public in its place.

The first story Osborne and Gaebler laid before readers would have shocked any New Deal–era progressive. The town of Visalia, California, had long wanted a swimming pool for its high school, but like many municipalities it had been short of tax revenues since the state’s property-tax revolt of 1978. “One hot Thursday in August 1984,” the authors began, “a parks and ­recreation employee”—elsewhere the authors describe him as “third-level”—“got a call from a friend in Los Angeles, who told him that the Olympic committee was selling its training pool.”

What followed reads like a true-crime story. The guy flies to L.A. and is offered a deal: New, the pool costs $800,000. He can have it for half that, but he’ll have to put up a $60,000 nonrefundable deposit. The parks-and-rec guy calls the school board, which agrees to bring it up for a vote in two weeks. But they don’t have two weeks. His connections in L.A. say they’ve got two colleges ready to bite. If he wants the pool he’ll have to show them the money. Does this third-level bureaucrat importune the school board again? No. He contacts an assistant city manager, who cuts him the check, and he drives it down after hours. Totally efficient. And almost totally unaccountable.

Or, let us say, accountable to a different set of rules, those of capitalism rather than democracy.

Osborne and Gaebler introduced a new vocabulary to bureaucrats around the world when they wrote: “We need better government. To be more precise, we need better governance.” Government is about rulers’ relationship to the ruled—it is constitutional accountability. Governance is about rulers’ relationship to their projects—it is business accountability. The authors took the Hayekian principle of legitimation through expertise and turned it on its head. Voters had previously asked bureaucrats: “Who are you to tax my business?” Now bureaucrats could also ask voters: “Who are you to call this waste? Have you ever surveyed a Superfund site? Have you ever managed a bond auction?”

This epistemological counterrevolution was the great innovation of the Clinton administration, and it changed American life. Between 1994 and 2000, federal expenditures fell to 17.6 percent of GDP, the lowest level since the 1960s. Whether or not the public had been consulted, it was happy. Would Americans have wanted the government to “listen” more to the public-service unions and the road pavers’ lobbies?

Clinton was accused of running a permanent campaign. While his bureaucracy whirred, the president focused on the mood of the public, which now seemed to care more about effectiveness than accountability. On a Monday in April 1998, he stood in front of a row of cops in the White House Rose Garden to announce an executive order banning the import of fifty-nine kinds of assault rifle. On Tuesday he was in Kansas City, imploring an assembled forum to help him protect Social Security. On Wednesday he visited Chicago’s inner city to urge a number of school construction projects. On Thursday he was in Kentucky to campaign against Big Tobacco.

Tobacco had long been a passion of both Bill and Hillary Clinton, who sought a legal reform to dissuade Americans, particularly young Americans, from smoking. It had been tough to get voters to back such a reform, since senators from tobacco states were obstructive and smokers themselves were opposed. But in the age of reinvented government, none of that mattered much. Now there were ways to revolutionize American life without bringing voters on board. In many communities where voters stubbornly rejected smoking regulations, local health boards introduced the regulations by decree. In 1997 and 1998, forty-six state attorneys general, led by the thirty-two Clinton-allied Democrats among them, sued the major tobacco companies and arrived at two settlement agreements that brought a payout of hundreds of billions to cash-strapped state Medicaid funds and set conditions for the companies’ marketing of their wares. The settlement was not law, but it had the effect of law.

A good many functions that government had never previously thought to administer now became its business. The tobacco settlement was an instance of what Osborne and Gaebler called anticipatory government: “prevent[ing] problems before they emerge, rather than simply offering services afterward.” They proposed government-appointed “futures commissions” to study emerging trends. No longer would the public set the government’s priorities; the government would set the public’s.

Osborne and Gaebler thought the ethical watchdogs of the early twentieth century had gone too far in fighting Tammany Hall–style political machines: “In making it difficult to steal the public’s money,” they wrote, “we made it virtually impossible to manage the public’s money.” That a modern state or national government is a money manager with an asset portfolio had been true for a long time, but it was in the nineties that mostly Democratic administrations drew two logical conclusions.

First, the public funds they invested gave them a source of paragovernmental political leverage, which could be exploited. By the turn of the twenty-first century, the progressive directors of the giant California state employees’ pension system, with its two million members, had a foreign policy—bearing down on companies that invested in Burma, for instance.

Second, governments needed expertise. The successful and the credentialed were invited into a privileged relationship with government. Not trade unions and university economics faculties but investment banks, above all Goldman Sachs, became the recruiting ground of choice for top Democratic policymakers, beginning with the arrival of Goldman co-chair Robert Rubin as treasury secretary at the start of the Mexican peso crisis in 1995. “New ‘partnerships’ blossom overnight,” marveled Osborne and Gaebler, “between business and education, between for-profits and nonprofits, between public sector and private.”

Expertise is good, but experts have their interests, and Clinton administration economic policy grew more and more congruent with the worldview of venture capitalists: Witness the North American Free Trade Agreement, which took effect in 1994, the establishment of the World Trade Organization in 1995, the Telecommunications Act of 1996, and the repeal of the New Deal–era Glass-Steagall regulation in 1999.

The investment climate produced by the Clinton administration was unprecedented. For the consumer it briefly seemed almost magical. Starbucks cafés and Borders bookstores, charming mainstays of Seattle and Ann Arbor respectively, became national chains. Borders, while it lasted, was astonishing. Its stores—more than five hundred of them at the chain’s turn-of-the-century peak, many in cities like Phoenix and Fort Lauderdale, which had never been considered literary meccas—each stocked an average of 100,000 titles. Several of them were bigger than any bookstore in the country had been before 1992. A revolution in food was going on, too—in supermarkets and high- and low-end restaurants. Of two chains that were just going national in 1995, Esquire’s food critic John Mariani marveled, “If you were able to transport any outlet of the Cheesecake Factory or the Bennigan’s chain back to the Atlanta of 1977, it would, with no exaggeration, qualify as the best restaurant in the city.”

All over the world, reinvented government invigorated the center-left parties that practiced it: Wim Kok came to power in the Netherlands in 1994, Tony Blair in the United Kingdom and Lionel Jospin in France in 1997, Gerhard Schröder in Germany in 1998. All would eventually run into problems. Blair and Schröder, in particular, would come to be vilified in their respective countries. Like Clinton, they used votes inherited from their parties’ days as defenders of the common people to install governments of experts. They ruled as if the main criterion for government was not whether it operates in the name of the people it represents but whether it carries out its projects. And the projects were increasingly dreamt up by Baby Boomer investors and financiers. Clinton himself fared much better than his European counterparts, disposing as he did of two areas in which presidents had long been given free rein: civil rights and foreign policy.

Democrats are the party of the university-educated. As university-generated high technology moved to the center of the American economy, Democrats quite naturally consolidated their position as the party of the country’s business and financial elite. But Democrats are also dependent on black voters, who are, on the whole, disproportionately dependent upon government programs. The alliance between university know-it-alls and hard-pressed minorities can be an effective one, but only so long as government spending is rising. And it was not.

Clinton was able to keep the alliance alive in an era of cuts by making adroit use of the Civil Rights Act of 1964 and the regulations, executive orders, and court-ordered expansions stemming from it. He shunted the cost of black advancement into the private sector through affirmative action and housing finance subsidies. He opened civil rights to other groups, particularly women and gays. And—the first president to do so—he made an almost religious appeal to diversity as an American calling, casting as unpatriotic any allegiance to the traditions and cultures of the majority.

During the 1990–1991 academic year, the term “political correctness” had been introduced into American life on the covers of news magazines and at the top of nightly news broadcasts. Most people had never heard the phrase, though it had been putting down roots ever since the passage of the Civil Rights Act. In its purest form it was confined to university life and the public school system, where it prescribed ethnic- and sexual-minority curricula, politicized lessons, and various kinds of censorship and speech control. There is probably no need to go into detail, since the controversies resemble today’s, except in one particular: In the early nineties, almost no one thought the politically correct side had any chance of carrying the day, let alone of becoming a state ideology with a system of censorship to protect it. The claims were too ridiculous. The country wouldn’t tolerate it.

It is true that there were new openings in several different cultural directions as the nineties began. The ethic of authenticity, which had prevailed in one way or another since the 1960s, no longer held sway. Suddenly, there was nothing wrong with using distressed furniture to make a bar in a shopping mall look old, or hammering a tin ceiling into the ground-floor restaurant of a brand-new skyscraper. This aesthetic impulse would not have been understood before the late 1980s.

Eccentric, non-conforming, and grotesque things were the most deeply mined literary vein of the decade. There were memoirs of alcoholism (Caroline Knapp’s Drinking: A Love Story, 1996), depression (Elizabeth Wurtzel’s Prozac Nation, 1994), and mental illness (Susanna Kaysen’s Girl, Interrupted, 1993). There was a fascination with edgy sex, especially homosexuality: Magazine ads had always favored young and attractive models in erotic poses, but the homoerotic photography for Diesel, Versace, and Calvin Klein ads was brazenly pornographic. So was Madonna’s 1992 coffee-table book, Sex. The nineties witnessed not merely an elevation of workaday black culture (as in the films of Spike Lee) but also an indulgence of its radical political aspect (as in the fad for Malcolm X clothing at the start of the decade—black “X” caps and T-shirts reading “By Any Means Necessary”) and an affection for its dimmer corners (as in the endless variations on Budweiser’s “Whassup!” commercials at decade’s end).

Yet it could not be said that the country was in a generous mood regarding matters of rights. Never in the nineties were Americans indulgent about progressive crusades around sex: In Clarence Thomas’s 1991 Supreme Court confirmation hearings, support for Thomas rose after his former colleague Anita Hill testified that he had harassed her. And on the day House Republicans passed two articles of impeachment against Bill Clinton in December 1998, his approval rating reached 71 percent.

Progressive race agitation was even less welcome. In 1994, with the public still deploring the Los Angeles riots of two years earlier, Richard Herrnstein and Charles Murray’s book The Bell Curve, which cast many of the problems of black criminality and underachievement as insoluble, became a succès de scandale. A few weeks later, Republicans took both houses of Congress for the first time since the Eisenhower administration, in what was described as an uprising of “angry white males.” This result was not surprising: In the first midterm election after political correctness became a national controversy, the country went to the polls and rejected it overwhelmingly. A new Congress came in with plans to set things right, to reform affirmative action, to defund offensive government-sponsored art. What was surprising was what followed: Nothing. The civil rights vision won. After decades of opposition, the nineties were the decade in which civil rights “took” and became the main driver of social evolution in the United States.

Important demographic distortions created conditions for a more open attitude toward both blacks and gays than had been possible before or would be possible after. A program of mass incarceration, launched as part of Ronald Reagan’s war on drugs, had landed the great majority of young black criminals in jail. For the first time in a generation, black neighborhoods became safe for non-blacks to enter and spend money in, and the non-incarcerated remainder had more in common with their non-black contemporaries than had seemed to be the case in previous generations. What had most bothered people about gays, as late as the 1984 Democratic National Convention in San Francisco, was the promiscuity of the anonymous “bathhouse” scene. Then the AIDS epidemic arose, and by the time effective antiretroviral therapies came on the market in 1996, hundreds of thousands of the men who had belonged to that world were dead. The survivors had been selected for fidelity and bourgeois prudence, and many had shown extraordinary courage and character in enduring the worst ordeal any group of American men had undergone since the Vietnam War. The movement for gay marriage won over Hawaii’s courts in 1996 and Vermont’s Supreme Court in 1999.

Affirmative action was something else altogether. The mystery is how it had even survived into the nineties. Americans hated it, as polls showed year in and year out. In the 1980s, when the essayist Richard Rodriguez published his memoir Hunger of Memory, his editor advised him to cut the passages about racial preferences, saying: “Nobody’s going to remember affirmative action in another twenty-five years.” That the hard power of civil rights was still at work in American life seemed like an oversight, and in a way it was. So confident in the power of free markets were the Hayekian libertarians who shaped Republican ideology in the 1980s that they forgot there was any other kind of power. They assumed that government programs, being less effective than private ones, would simply wither away. The Business Roundtable convinced Ronald Reagan not to revoke Lyndon Johnson’s Executive Order 11246, which had established affirmative action for federal contractors. And Reagan’s departure revealed that the party’s attractiveness to real conservatives—reactionaries, traditionalists, and cultural pessimists, as opposed to opportunistic businessmen—had been personal and contingent. Conservatives had never been given a functional policy role in the Republican party, not even in Reagan’s time. Though still a primarily Republican voting bloc, they began to drift. The rallying of “angry white males” to the GOP in 1994 was a last hurrah, at least until 2016. The Republicans became the 45-percent party they remain to this day.

Civil rights survived because it proved an extraordinary tool—unlike any in peacetime constitutional history—for contravening democratic decision-making. By withholding money, by suing states and businesses, the federal government can use civil rights law to coerce local authorities into changing policies; it can alter the behavior of private citizens. When Bill Clinton broadened the remit of civil rights, he didn’t have to spend money to do it. His predecessor, George H. W. Bush, had taken the first steps down this road. Bush’s Civil Rights Act of 1991 introduced punitive damages in a broad range of civil rights cases, creating major incentives to file lawsuits for race and sex discrimination. In the wake of the 1992 L.A. riots, Bush lowered standards of creditworthiness for inner-city home buyers. But it was Clinton who opened the floodgates of housing credits by threatening, on the strength of misrepresented agency data, to find lenders guilty of “redlining” black neighborhoods. He used the Carter-era Community Reinvestment Act to pressure banks politically. Black homeownership rose by 25 percent between the mid-1990s and the mid-2000s. This was the era of subprime loans, which would bring on the crash of 2008 and the ensuing global recession. The American media has never been comfortable acknowledging that minority homeownership programs were at the root of an international economic calamity. But economists (notably Atif Mian of Princeton, Amir Sufi of Chicago, and Viral Acharya of NYU) have understood it all along, and the progressive Cambridge University historian Gary Gerstle, in his recent The Rise and Fall of the Neoliberal Order, puts the Bush-Clinton subsidies squarely at the center of the 2008 crash.

Progressive attitudes on race and sex prevailed because they were enforced, disseminated, and protected (even from democratic review) by civil rights law. At a time when all certitudes were being toppled, racial progressivism, along with its sexual offshoot, was spared. It was the functional equivalent of a state religion or a national ideology. As Clinton entered office, promising to appoint a cabinet that “looks like America,” progressivism became the organizing principle of the White House. It would become the organizing principle of American foreign policy as well.

Clinton was incurious about foreign affairs. He had one guiding belief on the matter, and it was not of the sort that would induce him to educate himself in office. His belief, repeated in speech after speech until he left office in 2001, was that there was no longer a meaningful distinction between foreign and domestic policy, that these were “two sides of the same coin in a world that is growing smaller and smaller and more and more interconnected.” He meant it: The quotation comes from Clinton’s most rigorous defense of the U.S.-led NATO attack on Serbia in the spring of 1999. The speech is a historic redefinition of NATO as an offensive military alliance with a mandate to reorder the world. He delivered it not in a national address or in a congressional speech but over lunch at the convention of the American Federation of State, County and Municipal Employees (AFSCME), a public employees’ trade union.

The intermingling of near and far was not a new development. In 1991 Ted Turner had announced that CNN employees who referred to anything as “foreign” would be fined, with the proceeds sent to the United Nations. His implication was that it was racist or xenophobic to notice a difference between the United States and anywhere else. “International” was his word of choice. The Turner-Clinton view was shortsighted. You might think that erasing the difference between domestic and foreign policy would make the world safer by uniting us all in one family. It could also make the world more dangerous by casting distant and hard-to-untangle disagreements as immediate domestic threats.

The Clinton administration went about reinventing the country’s foreign policy in a Promethean spirit. Early in his first autumn in power, Clinton asked his friend and national security advisor Anthony Lake to consolidate his foreign-policy thoughts into one slogan. Lake then delivered a speech that contained the closest thing to a national security doctrine the president would produce. The speech was called “From Containment to Enlargement.” Clinton himself would lay out its main points at the UN a week later.

The fall of the Soviet Union, Lake explained, “requires us to think anew, because the world is new.” The obligation to defend America against threats never came up. What did come up was an agenda for governing a borderless world, as America sought to “increase our prosperity” (here) and “promote democracy” (there).

There are two obvious problems with the Clintonian model of a borderless world. The first is that civilizations are built out of choices. To choose is to close off options, to make rules, and to set limits. A borderless civilization is therefore a contradiction in terms. The NATO forces Clinton was leading had represented many things in the minds of the people they defended: the United States and various European countries, Western civilization, “the free world.” If there were no borders, then all those identities were out of bounds—invidious, biased, illegitimate. So for whom was the United States fighting, and on what grounds?

In the tangled account Lake gave in 1993, two contradictory notions keep knocking each other out of the frame: leadership and subversion. The United States fights because it leads the international order. Our armed forces “are part of the necessary price of security and leadership in the world.” But the United States also fights because it subverts the international order. “Democracy and market economics have always been subversive ideas to those who rule without consent,” said Lake. “These ideas remain subversive today.” (Among campus progressives and, evidently, foreign-policy intellectuals, the adjective “subversive” had by 1993 become a term of outright praise, much as “disturbing” had been for the movie critics of the 1970s.) This subversive leadership must have presented a confusing picture to foreigners.

Something called “centralized power” was the new enemy. Lake explained: “Centralized power defends itself. It not only wields tools of state power such as military force, political imprisonment and torture, but also exploits the intolerant energies of racism, ethnic prejudice, religious persecution, xenophobia and irredentism.” Aha! These happen to be the principles—or to use the more usual word, “values”—over which civil rights law gives the federal government almost unlimited power to discipline its citizens.

The world was thus made intelligible, in a Manichean sort of way. For Lake, those at home who suggested downsizing NATO after the Cold War were “neo–Know Nothings” (after the xenophobes of the 1850s). Those abroad who were not reforming their governments along American-approved lines were “backlash states.” Faced with such states, “our policy . . . must seek to isolate them diplomatically, militarily, economically, and technologically.”

Logically, of course, if there is no border between home and abroad, and if the United States is exercising “leadership” of the whole world, then there is no case in which a foreign country can go its own way without offending U.S. interests. American foreign-policy thinkers were veering away from reality. Supposedly the United States had an ideology that had liberated the world, and a military advantage so pronounced that the concept of homeland defense barely troubled the minds of its leaders. And yet its worldview was suffused with insecurity and paranoia. Said Lake:

Unless the major market democracies act together—updating international economic institutions, coordinating macroeconomic policies, and striking hard but fair bargains on the ground rules of open trade—the fierce competition of the new global economy, coupled with the end of our common purpose from the Cold War, could drive us into prolonged stagnation or even economic disaster.

Heaven forfend that another sovereign country should dissent from the American view on unlimited free trade, or stand apart from Washington’s right to “coordinate macroeconomic policies.” In Lake’s vision, the U.S. combined the military wherewithal of the most heavily armed empire in the history of mankind with the motivations of an underdog fighting to fend off “disaster”—fighting, even, for its life.

Lake condemned the ongoing conflict in Bosnia as “driven by ethnic barbarism”—though no one in the Clinton administration would use that expression when, a few months later, Hutu militias in Rwanda began hacking Tutsis and moderate Hutus to death by the hundreds of thousands. The Balkans would be the main American defense preoccupation of the nineties. In February 1994, the U.S. led NATO on the first combat mission since its founding half a century before, to enforce a Bosnian no-fly zone.

And then, on March 23, 1999, Clinton made his historic speech to the union bosses of AFSCME in which he justified a NATO attack on “Yugoslavia,” a rump statelet that consisted of not much more than—and not all of—the Serbian parts of the earlier country that had borne that name. Clinton argued that Serbian strongman Slobodan Milošević had to be forced to offer autonomy and peace to the ethnic Albanian Muslims in his country’s southern province of Kosovo, where 40,000 Serbian troops were present. Clinton had boned up on Kosovo enough to understand that World War I had started in the Balkans, though perhaps not enough to understand that the crusading ambitions of outside empires were what had started it.

Clinton laid out his casus belli simply, if not concisely or clearly. American values were at stake, he told the AFSCME attendees:

With all of our increasing diversity in America, I wanted an America that really reaffirmed the idea of community, of belonging; the idea that none of us can pursue our individual destinies as fully on our own as we can when we want our neighbors to do well, too; and that there is some concrete benefit to the idea of community that goes beyond just feeling good about living in a country where you’re not discriminated against because of some predisposition or anything else that has nothing to do with the law, and nothing to do with how your neighbors live their lives; and that what we have in common is more important than what divides us. . . . That’s what Kosovo’s about—look all over the world. People are still killing each other out of primitive urges because they think what is different about them is more important than what they have in common.

On the day after Clinton’s AFSCME appearance, the United States launched a campaign of aerial bombardment and cruise-missile attacks on Serbian troops, bringing interstate war to European soil for the first time since World War II. Serbia, ordered to surrender, did not. As the weeks wore on, the United States bombed civilian targets in Belgrade, cutting the power, leaving the city without water, striking the Chinese embassy, and killing sixteen journalists in an attack on a Serbian TV broadcaster. Then finally, on June 10, after seventy-eight days, the U.S. secured a surrender from Serbia through the intercession of an infuriated but cornered Russian president Boris Yeltsin.

The vagueness and forgettability of the humanitarian sentiments that Clinton had used to justify the war, the degree of violence and sophisticated weaponry that had been required to halt an incursion by a Cold War–era army of 40,000 soldiers, the allied misgivings about the American justification and conduct of the war, the mortification of Russia’s hitherto pro-American leadership at seeing Eastern Europe turned into a theater of invasion—all of this led certain commentators to warn that the president had gone down a dangerous new route, and that the United States might not be so lucky next time. America’s discovery of world dominance might turn out in the twenty-first century to be what Spain’s discovery of gold had been in the sixteenth—a source of destabilization and decline disguised as a windfall.

Few were inclined to listen. The country still possessed a wide margin of error in international affairs, and its prestige seemed untarnishable. Besides, summer was beginning, and there were other things going on. Three days after Serbia’s surrender, the governor of Texas, George W. Bush, announced his campaign for the presidency of the United States.

Christopher Caldwell is a contributing editor of the Claremont Review of Books.

