The Age of AI:
And Our Human Future
by Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher
John Murray, 256 pages, $30
This is a book with three authors, which is both unusual and tricky because, while reading it, you’re constantly wondering who might have written the section or sentence before you. Unsurprisingly, it is a book incapable of entering into functional relationships. You cannot settle down with it or get to know the mind that created it, so as to succumb to or fight against it. This book has an insinuating purpose that is not literary, not purposefully discursive, not even argumentative. What it advances is a rather sly, self-interested, and one-sided brief for how the most pressing issue currently facing the human race might be boxed off to the benefit of you-know-who.
The overall impression is of a kind of manifesto for an election yet to be declared. Clearly, the book aims to seize the initiative on AI so that Big Tech can monopolize and control it, because that’s why Big Tech exists. Anyone genuinely seeking to understand what is happening with AI and the related spheres of transhumanism, posthumanism, and the Technological Singularity should probably look elsewhere.
The questions concerning what is to be done with or about AI, and who will have control of the toggle switch, are about to become urgent ones. After a period of seeming technical somnolence—“AI winters,” techies call these pauses in advancement—there has lately been a burst of renewed above-ground activity, perhaps indicating that the moment of Technological Singularity (essentially, when the machines created by man become more clever than their creators) may be at hand. This moment will trigger a frenetic jockeying for position—by governments, corporations, and especially by Silicon Valley—to lay claim to control, to table and filibuster against regulation, to frame the philosophical contract that will govern this new era.
As things stand, it appears that artificial intelligence, while capable of outperforming humans in certain tasks and reckonings, still requires human supervision. Being neither sentient nor self-aware, AI cannot reflect on its own processes. It gets things wrong, mainly due to insufficient, poor, or confusing inputs, albeit less so than before. Sometimes the problems arise from human bias manifesting in the input data. It seems the AI cannot (as yet?) be taught common sense. Such teething problems are inevitable, we are told, but the authors pointedly note that “while developers are continually weeding out flaws, deployment has often preceded troubleshooting.” This tendency, they concede, is extremely risky. But also, I would interject, inevitable when things are left in the hands of amoral corporations.
For many years, the pursuit of what is called the technological posthuman has continued at the subterranean level, pushing forward without much pause for check or scruple. The discussion, such as it was, happened in-house at Silicon Valley, and largely had to do with how far things might go before anyone started to wonder why not much about what was happening was being reported above ground.
The undoubtedly determined march of AI, with or without the Singularity, will change the majority of human lives beyond all recognition, eliminating most human work, creating a form of supra-intelligence to which humans may rapidly become subject on terms lacking accountability or transparency, and essentially demoting humanity to the role of second most intelligent “species” on the planet. We have no idea where this will take us, and we have yet to begin any coherent general conversations about it.
It goes without saying that the “risks” associated with AI have nothing ultimately to do with the inert pieces of metal and plastic comprising the attendant technology, but with the people who will control it. The most important question is: Who should manage this epoch-making moment?
Big Tech already controls the world via the internet, through data harvesting, intimate surveillance, and censorship. Now it moves toward the final stage: the unity of humans and machine, but not on the terms of the human, or at least not the human race. Instead, as usual, the plan is for things to be handled by placing the well-placed few over the befogged many, in the name of progress.
The three authors of this book are insiders: Eric Schmidt is a former CEO and chairman of Google; Daniel Huttenlocher is a tech academic and Amazon board member; Henry Kissinger is Henry Kissinger. It goes without saying: All three authors are convinced globalists. The idea seems to be to lead the discussion in the required direction, raising the “democratic” and “human” concerns, but happily subjecting these to a series of controlled explosions so as to minimize the possibility of their being raised again before we are well past the finishing line.
In November, Time magazine published an article titled “Henry Kissinger’s Last Crusade: Stopping Dangerous AI,” which included interviews with Kissinger and Schmidt. It contained a quote from Schmidt that defines the central problem with this book:
I am very concerned about the misuse of all of these technologies. I did not expect the Internet to be used by governments to interfere in elections. It just never occurred to me. I was wrong. I did not expect that the Internet would be used to power the antivax movement in such a terrible way. I was wrong. I missed that. We’re not going to miss the next one. We’re going to call it ahead of time.
This may have gone down well with readers of Time, but to the unwashed and unwoke it is clear that Schmidt comes to bury governments, not corporations. An even narrower agenda is also visible, since his reference to “governments interfering in elections” is designed to invoke the Russia-collusion narrative that the Robert Mueller investigation failed to substantiate—a narrative sustained by Big Tech. Schmidt thus eloquently conveys that his concern is neither philosophical nor anthropological, but superficially ideological, which is to say money- and power-related. His reference to the “antivax movement” is even more tedium-inducing. As far as the present COVID controversy is concerned, there is, in effect, no “antivax movement”—just campaigns by concerned citizens against certain vaccines, and for very good reasons that Big Tech seeks to suppress.
Schmidt’s arguments, in short, derive entirely from the palette of woke pseudo-liberalism, which is to say the emerging tyranny now threatening the world and its inhabitants.
This book bears the same stamp, albeit more subtly imprinted. The word “disinformation,” for example, is scattered throughout the text, but nowhere does it manifest as other than a camouflaged apologia for partisan ideological censorship—for silencing those who say things Big Tech doesn’t like. There is no criticism, even implied, of Silicon Valley abuses in this connection. Nor is there any mention of Twitter’s suppression of the story of Hunter Biden’s laptop, or of that company’s high-handed suspension of the account of a sitting president of the United States.
Yet the authors also concede that “In a free society, the definition of harmful and disinformation should not be the purview of corporations alone. But if they are entrusted to a government panel or agency, that body should operate according to defined public standards and through verifiable processes in order not to be subject to exploitation by those in power.” Chance would be a fine thing. And what, by the way, does “power” mean? Governmental power only, it is clear.
The Age of AI claims to set out the questions to be faced in the coming years of the AI advance, as well as “tools to begin answering them.” What do AI-enabled innovations in health, biology, space, and quantum physics look like? What does AI-enabled war look like? When AI participates in assessing and shaping human action, how will humans change? What, then, will it mean to be human?
Good questions, urgent questions. Too urgent to be left to insiders spinning on behalf of interests already proven to be unfit to hold power. The “committee” nominated to discuss or dispose of the pressing AI issues should contain as few as possible Bilderbergers, Trilateralists, current or past board members of Google, or members of the Party of Davos.
Some scientists acknowledge, rather blithely, that the moment of Technological Singularity may well result in the obliteration of virtue, conscience, and morality, and even the final exit of the human species from the world, as human beings lose the battle to justify their existence against the claims of vastly more intelligent “beings.” Against these risks, scientists posit benefits like increased cognitive capacity and processing speed, leading to the possibility of more and more scientific discoveries, but rarely do they get to the question: to whose benefit? The outcome of such questions may depend on the emphasis placed on values, conscience, and morality in programming the AI, and will depend also on the meanings attributed to “rationality” and “intelligence,” and whether these are compatible with a moral framework. A super-intelligent entity, primed to maximize rationality in pursuit of even higher intelligence, may decide that human-centered morality is irrational, and therefore counter-productive. Inevitably, as things “progress,” the pressure will grow to remove impediments to the growth of machine intelligence, which will by definition mean that humans will be first to “hit the pine.”
The Age of AI issues intermittent calls for a “discussion” of such questions, and yet it reflects precisely the demeanor that has radically curtailed public discussion over the past two decades. It fails to deal with or even mention the selective censorship practices of Silicon Valley operators, while still implicitly assuming that such operators have some kind of prior entitlement to continue at the wheel even after the age of intelligent inanimacy has moved into top gear.
AI ultimately will either be a new beginning or a final ending. There is a view in tech circles that, since the human race faces extinction thanks to its own behavior, some kind of absorption of humanity by the machine may be the only way of maintaining an intelligent, albeit mechanical, human presence on earth. On this thesis, the biological essence of humanity might have to be sacrificed, and the species maintained in the only form by then possible: posthumanist “man.” Conversely, there is the hypothesis that the moment of Technological Singularity will bring with it a radical threat to natural selection: The machine will elevate humans according to values different from those of nature—a Superman. Where have we heard that before?
We have reached the upper stories of the Tower of Babel and most of us are coming down with acute vertigo—and the only level-headed ones remaining have rather worrying glints in their eyes.
John Waters is an Irish writer and commentator, the author of ten books, and a playwright.
Photo by Virtuo Doc via Creative Commons. Image cropped.