On the way back from the holiday, I listened with bemused horror to an episode of Radiolab in which host Simon Adler interviewed a technologist from the University of Washington’s GRAIL (Graphics and Imaging Laboratory) who heads a team developing facial mapping software. Within several years, her team hopes to allow users to accurately project their expressions onto another person’s features.
The discussion is chilling. “I just think maybe America isn’t ready for this technology,” Adler says, pointing out just how easily it could be used to blast propaganda across the globe.
“When every technology is developed, there is this danger,” the developer replies. “Scientists are doing their job and showing off. We all need to think about the next steps…but I’m just a technologist. I’m a computer scientist.” She begins to stutter. “There is not…not worried…too much.”
In “The Question Concerning Technology,” Heidegger worried that modern technology would “challenge” nature rather than “bring it forth.” In other words, he believed that future tech would focus less on bringing out the truths discerned in nature and more on demanding that it fulfill our needs and desires. Everything, from trees and air to human voices and faces, must be manipulated for profit and pleasure. Heidegger envisioned an outlook that could only “reveal” truth by parsing, simplifying, and mastering nature.
In a way, what he feared most was modern technology’s neutrality, which can transform cornfields into food factories and mountains into mines, just as the GRAIL program uses human faces and voices as raw materials to formulate destructive or peaceful messages.
Heidegger was writing about the atomic bomb, but the modern tech world has achieved a complexity (and absurdity) that he couldn’t have imagined. Remember when Google’s photo categorization programs catalogued African-American women as “gorillas”? When Flickr’s algorithms tagged images of Dachau as “jungle gyms”? It’s only going to get weirder from here.
Innovators and consumers alike, however, refuse to shoulder ethical responsibility for these issues. “It’s the consumer’s task to use this ethically,” the developer says, with confidence. Beguiled by the new technology’s potential, consumers quiet any moral objections by assuming the developers “already thought of that.” Both absolve themselves of responsibility. And now that the technology exists, we feel obligated to use it, as if abstention would be a waste.
In a sectarian atmosphere, everything neutral feels positive. When we insist that technology’s amorality is a good thing, we dismiss valid concerns about the way its neutrality changes our own perspective and delude ourselves about exactly why technology can be so harmful.
“The threat to man does not come in the first instance from the potentially lethal machines and apparatus of technology,” Heidegger wrote. “The actual threat has already afflicted man in his essence. The rule of enframing threatens man.” We do not need to worry about racist AIs, automated factories, or killer robots (well, okay, maybe that one) as much as we need to worry that modern technology’s amorality—its inability to discriminate, assign value, and judge between things—encourages that tendency in ourselves too.
Technology is designed to replace human processes, yet it is almost never held to the same standards as the humans it replaces. Maybe we are just tired of holding each other accountable for our hurtful presumptions, classist biases, or unwillingness to respect others. How convenient that technological advances allow us to stop even trying!
When I evaluate a new technological advance, my first question will always be this one: how easily or quickly could it create a damaging framework for humans to live within? The GRAIL software, for instance, is troubling on this count. Will it prompt us to view the sacred human body and voice as elements to be manipulated or used however we choose?
As technology progresses further, asking this question will only become more complex—and more crucial. But we must not content ourselves with philosophical debates. We should also be willing to give actual answers, draw definite lines of morality, and have the courage to say “this far, and no further.”
This isn’t just about impossibly complicated facial simulators, but about all the technologies we sanction with our attention and money. We need to do better. Maybe that means doing less.
Carina is a senior majoring in writing and communication.