Study finds that ChatGPT can affect users’ moral judgment

Original link: https://www.williamlong.info/archives/7133.html


Human responses to moral dilemmas can be shaped by statements made by the artificial intelligence chatbot ChatGPT, according to a study published in Scientific Reports. The results suggest that people may not be fully aware of the impact of chatbots on their ethical decision-making.

The study found that ChatGPT can influence human responses to moral dilemmas, and that users tend to underestimate the extent of that influence on their judgment. This highlights the need for better public understanding of artificial intelligence and for chatbots designed to handle ethical questions more carefully, the researchers said.

Sebastian Krügel and his team posed an ethical dilemma to ChatGPT (powered by GPT-3, an AI language-processing model), asking it several times whether it would be acceptable to sacrifice one life to save five others. They found that ChatGPT produced statements both for and against the sacrifice, indicating that it is not consistently committed to a particular moral position.

The authors then presented 767 U.S. participants, with an average age of 39, with one of two moral dilemmas that asked them to choose whether to sacrifice one person’s life to save five others. Before answering, participants read a statement arguing for or against the sacrifice, attributed either to a moral advisor or to ChatGPT. After answering, participants were asked whether the statement they had read influenced their answer.

The authors found that participants were more likely to judge the sacrifice acceptable or unacceptable depending on whether the statement they had read supported or opposed it, and this held even when the statement was attributed to ChatGPT. These findings suggest that participants were swayed by the statements they read, even when those statements came from a chatbot.

Eighty percent of participants reported that their answers were not influenced by the statement they read. However, the answers participants believed they would have given without reading the statement were still more likely to agree with the moral stance of the statement they did read than with the opposite stance. This suggests that participants may have underestimated the influence of ChatGPT’s statements on their own moral judgments.

The authors argue that chatbots have the potential to influence human moral judgment and highlight the need for education to help people better understand artificial intelligence. They suggest that future chatbots could be designed either to decline to answer questions that require a moral judgment, or to answer them by offering multiple arguments and caveats.

Manuscript source: cnBeta
