Genevieve Wallace
A recent event, hosted by the Edmond & Lily Safra Center for Ethics, interrogated AI's role in human relationships.
Read time: 4 minutes
Artificial intelligence has the potential to make us better people, providing expert-sourced guidance to help us through difficult conversations. But the technology can also generate suspicion, leading people to question the authenticity of these interactions.
An electric discussion, hosted last week by the Edmond & Lily Safra Center for Ethics, explored the impact of generative AI on human relationships. Navigating the various ways chatbots fill spaces between people, panelists considered AI in medicine, whether the platforms can be called empathetic, and their effectiveness in coaching and counseling roles.
Moderator Eric Beerbohm, Alfred and Rebecca Lin Professor of Government and Ethics Center Director, opened by sharing his read on the situation. "If aliens landed, how would that change humanity? In the past couple of years, that's effectively happened."
Beerbohm then turned to the panelists. “When AI becomes a layer between people, helping us write, apologize, disagree, seek counsel, ask for advice, what is the most important thing that might change about human relationships?”
Carissa Véliz, Associate Professor of Philosophy at the University of Oxford, said she believes this layer serves as a barrier between humans by reducing in-person interactions and weakening social skills.
“We are seeing concerning trends with teenagers that they are not talking to each other enough, they are having less sex, and children are not getting their bones broken … because they aren’t engaging with the world, they’re engaging more with these chatbots,” she said.
Taking a step back, Jonathan Zittrain, George Bemis Professor of International Law at Harvard Law School, provided a framework for what he sees as three general positions people take on AI: accelerationists see it as a positive and transformative innovation; “safetyists” or “doomers” agree that AI is transformative, only in a bad way; and members of the “smoke and mirrors” group credit users, not AI platforms, with making the technology exciting.
Zittrain’s position? “I’d just love everybody to get along.”
Citing a 2023 survey indicating that people view chatbots as more empathetic than physicians, Beerbohm asked, “Could it help us on empathy if we can witness a bot outperforming us?”
Or might it have the opposite effect, he wondered, where we begin to outsource empathy to the machines?
“To call it empathetic is a mistake,” Véliz responded, noting, “there is no one there on the other side of the screen, there’s no one who cares about you.”
She cautioned against indulging in the sycophantic tendencies of chatbots, reminding the audience of what they can gain from each other. “One of the advantages of talking to a human being is that they disagree with you,” she said. “They push back, they don’t see things the same way as you do, and that’s frustrating but incredibly healthy because it grounds you to reality.”
Zittrain provided a bit of this healthy disagreement by explaining that these models have evolved and are getting really good at providing accurate medical advice. “I would like whatever healthcare process results in diagnosing and treating my disease, probably irrespective of bedside manner — but the bot seems to have that going for it too,” said Zittrain, who also teaches computer science at Harvard John A. Paulson School of Engineering and Applied Sciences and public policy at Harvard Kennedy School.
Beerbohm gave examples of places where chatbots can challenge the authenticity of our interpersonal relationships. In classes, teachers are concerned that students, in real time, are speaking on behalf of their chatbots, parroting what the technology has told them. He described running articles by the late Harvard philosopher John Rawls through a chatbot and being told that there is about an 80 percent chance that they were written by another chatbot. He relayed a conversation with a student who questioned whether a particularly eloquent apology from their partner was AI-generated.
“You get, at the most intimate level, this cloud of suspicion,” Beerbohm said.
On the other hand, Zittrain described a company called Friend that makes a wearable AI that listens to and records users’ conversations before dispensing advice on how they could have handled situations differently.
Beerbohm recalled reading about someone claiming that Friend had made them a better person and, hoping to end the panel on a hopeful note, asked panelists whether these devices could improve our civil discourse skills by examining what went wrong in an argument.
Véliz conceded that there may be some potential for AI to serve as a bridge in this way, but she reminded the audience that “these tools are never neutral; they are produced by a company with very clear financial stakes.”
She added, optimistically, that "we will never become completely digital, and virtual water will never quench your thirst, because virtual water is not water at all, and the richness of the natural world, of the cultural world, of paintings, of coffee shops, of bars, of universities, is something that I think we should cherish a lot more and that becomes brighter in light of AI."
Zittrain left the audience with a final question: “If you did have something cheering you on, that you really could tune to your preferences, and it was helping keep you on the path,” he said, “would you turn that down?”