Artificial intelligence-powered writing assistants that automatically complete sentences or provide “smart responses” do more than just put words into people’s mouths, new research suggests.
Maurice Jakesch, a doctoral student in information science, asked more than 1,500 participants, "Is social media good for society?" and had them write a paragraph answering the question. People who used an AI writing assistant that was biased for or against social media were twice as likely to write a paragraph agreeing with the assistant, and significantly more likely to say they held the same opinion, compared with people who wrote without AI help.
The study suggests that the biases baked into AI writing tools, whether intentional or unintentional, could have worrying repercussions for culture and politics, the researchers said.
"We're rushing to implement these AI models in all aspects of life, but we need to better understand the implications," said co-author Mor Naaman, the Jacobs Technion-Cornell Institute Professor at Cornell Tech and professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science. "Apart from increasing efficiency and creativity, there could be other consequences for individuals and also for our society: shifts in language and opinions."
While researchers have shown that large language models such as ChatGPT can create persuasive ads and political messages, this is the first study to demonstrate that the process of writing with an AI-powered tool can sway a person's opinions. Jakesch presented the research, "Co-Writing with Opinionated Language Models Affects Users' Views," in April at the 2023 CHI Conference on Human Factors in Computing Systems, where the paper received an honorable mention.
To understand how people interact with AI writing assistants, Jakesch steered a large language model to hold either positive or negative opinions of social media. Participants wrote their paragraphs, either alone or with one of the opinionated assistants, on a platform he built that mimics a social media website. The platform collects data from participants as they type, such as which AI suggestions they accept and how long they take to compose a paragraph.
People who co-wrote with the pro-social media AI assistant composed more sentences arguing that social media is good, compared with participants who had no writing assistant, as judged by independent evaluators. These participants were also more likely to profess their assistant's opinion in a follow-up survey.
The researchers investigated the possibility that people were simply accepting the AI suggestions to finish the task faster. But even participants who took several minutes to compose their paragraphs produced heavily influenced statements. The study also found that most participants did not even notice the AI was biased and did not realize they were being influenced.
"The process of co-writing doesn't really feel like I'm being persuaded," said Naaman. "It feels like I'm doing something very natural and organic: I'm expressing my own thoughts with some aid."
When they repeated the experiment with a different topic, the research team again saw that participants were swayed by the assistants. Now the team is investigating how this experience creates the shift, and how long the effects last.
Just as social media has changed the political landscape by facilitating the spread of misinformation and the formation of echo chambers, biased AI writing tools could produce a similar shift in opinions, depending on which tools users choose. For example, some organizations have announced plans to develop an alternative to ChatGPT designed to express more conservative views.
These technologies deserve more public discussion about how they could be misused and how they should be monitored and regulated, the researchers said.
"The more powerful these technologies become and the more deeply we embed them in our society," Jakesch said, "the more careful we might want to be about how we're governing the values, priorities and opinions built into them."
Advait Bhat from Microsoft Research, Daniel Buschek of the University of Bayreuth and Lior Zalmanson of Tel Aviv University contributed to the paper.
The work was supported by the National Science Foundation, the German National Academic Foundation, and the Bavarian State Ministry of Science and the Arts.
Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.