
News

AAU Research: How the chatbot earns your trust

Published online: 08.08.2025

Do you trust ChatGPT and other chatbots? It depends on the language, according to research by AAU students.


By Peter Witten, AAU Communication and Public Affairs

Artificial intelligence (AI) is becoming an increasingly integral part of our lives - but not without challenges.

Can we trust the information we get from ChatGPT and other chatbots? Or do we risk following incorrect advice because we blindly believe in AI?

New research from Aalborg University shows that language plays a significant role in how much we trust chatbots.

Too much trust can lead to AI dependency, while lack of trust can cause us to reject helpful and useful assistance.

Cecilie Ellegaard Jacobsen, Emma Holtegaard Hansen and Tania Argot, AAU students

Graduate students Cecilie Ellegaard Jacobsen, Emma Holtegaard Hansen and Tania Argot from the Department of Computer Science explored in their master’s thesis how people perceive and respond to answers from chatbots.

Custom-built chatbot

The three AAU students conducted experiments using a custom-designed chatbot based on ChatGPT. Twenty-four participants were asked 20 yes/no questions on topics such as music, health, geography, and physics.

The key wasn’t the questions, but the answers. The chatbot responded in four different styles:

  • Confident and personal: “I’m sure that…”
  • Uncertain and personal: “I think maybe…”
  • Confident and impersonal: “The system has found that…”
  • Uncertain and impersonal: “The system may have found that…”
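The four framing styles above amount to combining two variables: certainty and self-presentation. A minimal sketch of how such framing could be wired into a chatbot front end follows; the function and template names are illustrative assumptions, not taken from the thesis:

```python
# Hypothetical sketch: wrap a yes/no answer in one of the four framing
# styles from the experiment (certainty x self-presentation).
# All names and template wordings are illustrative assumptions.

FRAMES = {
    ("confident", "personal"): "I'm sure that {answer}.",
    ("uncertain", "personal"): "I think maybe {answer}.",
    ("confident", "impersonal"): "The system has found that {answer}.",
    ("uncertain", "impersonal"): "The system may have found that {answer}.",
}

def frame_answer(answer: str, certainty: str, persona: str) -> str:
    """Return the raw answer wrapped in the chosen framing style."""
    return FRAMES[(certainty, persona)].format(answer=answer)

print(frame_answer("the capital of Australia is Canberra",
                   "uncertain", "personal"))
# -> I think maybe the capital of Australia is Canberra.
```

Keeping the underlying answer identical across conditions is what lets the experiment attribute any difference in trust to the framing alone.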

The goal was to examine how the level of certainty and the chatbot’s self-presentation (“I” vs. “the system”) affected participants’ trust - both in terms of how they perceived the answers and how they reacted.

When the chatbot responded confidently, users’ perceived trust increased, especially regarding the chatbot’s competence. Participants rated confident answers as more credible and more often chose the chatbot as their primary source of information.

Zero trust

However, some of the 24 participants became skeptical when the chatbot seemed overly confident - especially if it couldn’t substantiate its answers. “If it was too assertive, I lost trust immediately,” said one participant.

Some felt the chatbot seemed more honest, human, and humble when it responded with uncertainty. “It felt honest when it said ‘I’m not sure,’” noted another participant.

Others preferred a more neutral language style.

Do you believe everything the chatbot says?
Photo: Colourbox

Google-help

As part of the experiment, participants could also view Google’s top search result as an alternative source. Many used Google as a kind of “truth check” and often trusted it more - even when the chatbot and Google gave the same answer.

Trust in AI is therefore not just about language, but also about preconceived attitudes and habits.

Intentional uncertainty

Based on the study, the three students offer the following recommendations for designers of AI systems like chatbots:

  • Use uncertainty intentionally: When AI expresses doubt, it can help users calibrate their trust and avoid blind faith in artificial intelligence.
  • Adjust the level of human-likeness: A certain degree of personality can increase trust, but it must be used thoughtfully. Too much “humanity” can seem untrustworthy.
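As an illustration of the first recommendation, a system could pick its wording from an internal confidence score so that hedged language appears only when the model is genuinely unsure. The threshold and phrasings below are assumptions for the sketch, not values from the study:

```python
# Hypothetical sketch of "intentional uncertainty": choose confident or
# hedged wording from a confidence score. Threshold and phrasings are
# illustrative assumptions, not from the thesis.

def hedge(answer: str, confidence: float, threshold: float = 0.8) -> str:
    """Prefix the answer with confident or hedged wording depending on
    the model's confidence score (expected range 0.0 to 1.0)."""
    if confidence >= threshold:
        return f"I'm sure that {answer}."
    return f"I think maybe {answer}."

print(hedge("Mount Everest is the tallest mountain above sea level", 0.95))
print(hedge("this song was released in 1987", 0.40))
```

Tying the hedging to an actual confidence signal, rather than using it decoratively, is what would help users calibrate their trust instead of merely perceiving the system as humble.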

“The thesis shows that our trust in AI is complex and situational. It’s not just about whether we trust AI, but how and how much. Too much trust can lead to AI dependency, while lack of trust can cause us to reject helpful and useful assistance. The fine line varies from person to person and depends not only on AI’s language but also on individual attitudes and habits,” say Cecilie Ellegaard Jacobsen, Emma Holtegaard Hansen and Tania Argot.

This insight can be used to better calibrate users’ trust in future AI systems, reducing the risk of people blindly following incorrect AI advice.

Niels van Berkel, professor, Department of Computer Science

Professor Niels van Berkel from the Department of Computer Science at AAU supervised the master’s project. He emphasizes the importance of understanding how people assess and choose to trust AI.

“The students demonstrated that both perceived and actual trust can be influenced by how AI presents its own certainty and how it refers to itself. This insight can be used to better calibrate users’ trust in future AI systems, reducing the risk of people blindly following incorrect AI advice,” he says.

Facts

  • Project title: The master’s thesis is titled “Framing the Machine: The Effect of Uncertainty Expressions and Presentation of Self on Trust in AI.”
  • Purpose: To investigate how the level of certainty and the chatbot’s linguistic self-presentation affected participants’ trust.
  • Participants: 24 individuals aged 20 to 59 took part in the study: nine men, 14 women, and one non-binary person.
  • Team: Emma Holtegaard Hansen, Tania Argot and Cecilie Ellegaard Jacobsen, Digitalization and Application Development, Department of Computer Science, Aalborg University.
