In his recent Why We Drive, Matthew B. Crawford acknowledges that the cars made today are safer than those of earlier decades. But automotive safety is a complicated affair. Safety regulations require devices that can reduce the costs of accidents. Yet these devices can’t buy more safety. Drivers adjust behavior to accommodate new gadgets, sometimes in ways that make their driving more dangerous. As a result, “the net safety picture is . . . messier than it appears if one confines one’s gaze to the progress of technology.”

Seat belts don’t affect our sense of danger and so don’t change how we drive. On the other hand, “the elevated, tanklike enclosure of a large SUV” makes a driver feel invulnerable. Confident our vehicle can survive anything, we scoff at mundane regulations, like maintaining a safe distance from the car in front of us. Sheer bulk has other drawbacks. The more encased we are, the less we can see. To compensate for reduced visibility, manufacturers add backup cameras and distorting convex mirrors. Increased safety in one sector produces less safety in another, which requires still more safety equipment.

More complicated devices alter the relationship between man and machine. Navigation systems are usually reliable, but their failures can be tragic. Such technology seems superhuman and so “earns our trust, based on its impeccable performance.” When we believe that “the automation knows best,” we delegate some of the skill of driving to the machine and pay less attention to the act of driving and to our surroundings. This makes our driving less safe. Warning systems have a similar effect, since we “substitute the secondary task of listening for alerts and alarms” for the primary task of managing the vehicle. We don’t need to check fluid levels because the idiot lights will tell us when something is wrong. Automation “has a kind of totalizing logic to it. At each stage, pockets of human judgment and discretion appear as bugs that need to be solved.” Technical failures are attributed to “human error,” and the solution is to turn humans into robots or let the machines take over completely.

Total automation appears to be an attempt to solve deep social problems. The open road is a paradigm of community life: “our ability to share the road together smoothly and safely is based on our capacity for mutual prediction.” Driving requires a “socially realized” form of intelligence that depends on “robust social norms that can anchor sound expectations of others’ behavior.” A society where people no longer trust each other and where cooperation is not a given still needs to coordinate individual behavior in some way. Planners are tempted to “replace trust and cooperation with machine-generated certainty.” Automation gets sucked into a feedback loop: As we delegate the basic human skills of self-government to machines, will we be able to maintain those skills? Does a regime of automated safety make it more difficult for us to take risks? Does safety forever demand still more safety?

Crawford’s book is about driving, but it’s also a parable about the dehumanizing thrust of automation, often justified by appeals to safety. As the intelligence needed to control machines erodes, so does our human capacity for spiritedness. As Crawford writes, “the opacity of the automation logic both encourages and requires a certain disposition of character in the operator, which we might call spiritlessness.” Spiritlessness works so long as everything is functioning, but spiritedness, the ability to take charge, is necessary in an emergency. When automated airplane systems fail, pilots need to be assertive enough to override them, which is possible only “if one has confidence—not only in one’s skills, but in one’s understanding of what is going on, and how to fix it.”

Automation drains these qualities. It trains us in deference instead of assertiveness; it obscures the machine’s inner workings instead of facilitating understanding. We no longer wield machines as tools but instead “feel ourselves responsible to them, afraid to be wrong in their presence, and therefore reluctant to challenge them.” Automation saps us of the thumos we need when automation fails. In a regime of automation, the spirited man comes to seem dangerous, “maladaptive,” “a bug in the system.”

Crawford paints a portrait of a dystopian world ruled by safety devices: “Ultimately, it is we who are being automated, in the sense that we are vacated of that existential involvement that distinguishes human action from mere dumb events.” He worries about the human and social effects of “delegation at scale, or rather mass absenteeism.” What happens when each of us “stands at one remove from one’s own doings, not episodically, but as a basic feature of living”? What will life be like if we no longer act, but merely monitor the machines that act on our behalf? Is such mediated doing really “doing”? And what kind of creatures will we be if we no longer do anything for ourselves?

Peter J. Leithart is president of the Theopolis Institute.