
According to an article in New Scientist, a simple formula can predict how people would want to be treated in dire medical situations as accurately as their loved ones can, researchers say. In the study, surrogate decision makers were right only 68 percent of the time when predicting what their loved one would want under hypothetical circumstances. The computer program did better, getting it right about 78 percent of the time.

The formula is based on the finding that people, asked to imagine themselves in comas, want care if they have a 1 percent chance of recovering awareness. But using such a formula would be a case of garbage in, garbage out. Diagnoses of persistent vegetative state are notoriously inaccurate. Indeed, there are unexpected awakenings, and cases of confusion between a “locked-in” state (aware but unable to communicate) and a persistent vegetative state (actually unaware).

More importantly, using such a computer model would not treat the patient as an individual. Even if the computer predicted correctly in 78 out of 100 cases, that would still leave 22 people whose wishes would be violated.

Everyone should create an advance directive. Absent that, patients and society are much better served by giving the benefit of the doubt to life over death when we do not know what the patient would want. My friend Bobby Schindler, brother of Terri Schiavo, had it right when he told New Scientist: “I believe it would be extremely irresponsible to allow machines to make decisions involving life and death.” Schiavo was in a persistent vegetative state for 15 years until she died in 2005 after doctors removed her feeding tube; her case sparked a huge debate in the US. “If a person becomes incapacitated, is not dying, and can assimilate food and water via a feeding tube, then I believe that we are morally obligated to care for the person and provide them this basic care—regardless of a computer attempting to ‘predict’ what that person’s wishes might be,” Schindler added. “Essentially, you would be allowing a machine to determine what is ethical, what is right and wrong, which no machine is able to do.”

