AI in lie detection: social harmony at risk?


Artificial intelligence can detect lies much better than humans. As a recently published study shows, this also affects social interactions.

Humans are poor at recognizing lies. As studies consistently show, their judgment is only slightly better than chance. This weakness may be one reason why most people refrain from accusing others of dishonesty: wrongly accusing someone who turns out to be telling the truth would be deeply embarrassing, and the accused person’s anger could be considerable.

From this perspective, a technology that detects lies far more reliably than humans seems very promising, especially at a time when fake news, dubious statements by politicians and manipulated videos are on the rise. Artificial intelligence (AI) could make this possible.

Researchers from Würzburg, Duisburg, Berlin and Toulouse have investigated how effective AI is at detecting lies and how its use affects human behaviour. The team has now published its results in the journal iScience. Lead author is Alicia von Schenk, Junior Professor of Applied Microeconomics, in particular Human-Computer Interaction, at the Julius-Maximilians-Universität Würzburg (JMU); she shares first authorship with Victor Klockmann, Junior Professor of Microeconomics, in particular Economics of Digitalisation, also at JMU.

The main findings of this study can be summarized as follows:

  • Artificial intelligence surpasses human accuracy in text-based lie detection.
  • Without AI support, people are reluctant to accuse others of lying.
  • With the help of AI, people are much more likely to express suspicion that they have encountered a lie.
  • Only a third of study participants take the opportunity to ask the AI for an assessment. Those who do, however, mostly follow the algorithm’s advice.

“These findings suggest that AI that can detect lies could significantly disrupt social harmony,” von Schenk says. After all, when people more frequently voice the suspicion that their conversation partner may have lied, general distrust grows and polarization deepens between people who already find it hard to trust one another.

On the other hand, the use of AI could also have positive effects. “In this way, it would be possible to prevent dishonesty and explicitly encourage honesty in communication,” adds Victor Klockmann.

Politicians urged to act

While individuals are still reluctant to rely on technological support to detect lies, organizations and institutions may adopt it in various ways – for example, when companies communicate with suppliers or customers, when HR staff conduct job interviews, or when insurance companies verify claims.

This is why the authors of the study call for “a comprehensive legal framework to regulate the impact of AI-based lie detection algorithms.” Privacy and responsible use of AI, particularly in education and healthcare, are key aspects of this approach. The researchers emphasize that they do not intend to fundamentally reject the use of this technology. However, they urge caution: “Taking a proactive approach to shaping the policy landscape in this area will be crucial to harnessing the potential benefits of these technologies while mitigating their risks.”

The study

To prepare for the study, the research team asked nearly 1,000 people to write down their plans for the upcoming weekend. In addition to a true statement, they were also asked to write a fictional statement about their plans. They were given a financial reward to make this fictional statement as convincing as possible. After a quality check, the team obtained a dataset containing 1,536 statements from 768 authors.

Based on this dataset, the team then developed and trained a lie detection algorithm, building on Google’s open-source language model BERT. After training, the algorithm correctly identified nearly 81% of all lies.
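
The press release identifies the underlying model only as Google’s open-source BERT. As a rough illustration of what fine-tuning such a text classifier involves, here is a minimal sketch using the Hugging Face transformers and datasets libraries; the toy data, model checkpoint and hyperparameters are assumptions for the example, not the authors’ actual pipeline.

```python
# Minimal sketch: fine-tuning BERT as a binary truth/lie classifier with
# Hugging Face transformers. Toy data, checkpoint and hyperparameters are
# illustrative assumptions, not the study's actual setup.
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Toy stand-in for the 1,536 weekend-plan statements (label 1 = lie).
raw = Dataset.from_dict({
    "text": [
        "I will visit my parents and help them in the garden.",
        "I am flying to New York for a spontaneous shopping trip.",
        "On Saturday I plan to go running along the river.",
        "I will spend the whole weekend repainting my sailboat.",
    ],
    "label": [0, 1, 0, 1],
}).train_test_split(test_size=0.25)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def tokenize(batch):
    # Convert raw text into BERT input IDs and attention masks.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

tokenized = raw.map(tokenize, batched=True)

def accuracy(eval_pred):
    # Fraction of held-out statements classified correctly.
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lie-detector",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    compute_metrics=accuracy,
)
trainer.train()
print(trainer.evaluate())  # reports held-out accuracy
```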

Who trusts AI?

For the main study, 510 statements were randomly selected and an additional 2,040 participants were recruited. Divided into four groups, these subjects were asked to rate whether the statements presented to them were true or false.

While Group 1 had to evaluate the statements without the help of the AI, Group 2 always had access to the algorithm’s evaluation before making their own judgment. Groups 3 and 4 could actively request the AI’s judgment, but had to pay a small amount of money to do so. At the same time, members of these groups were informed that they might not receive an AI judgment despite their request. In fact, members of Group 3 never received an AI judgment even when they requested one, while members of Group 4 always did.
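
To make the four conditions easier to compare, the following hypothetical sketch summarizes who sees the algorithm’s judgment; it paraphrases the design described above and is not code from the study.

```python
# Hypothetical sketch of the information flow in the four experimental
# conditions described above. Group numbering follows the text; the
# function name and interface are illustrative, not from the study.
def ai_judgment_shown(group: int, requests_ai: bool) -> bool:
    """Return True if the participant sees the algorithm's judgment."""
    if group == 1:          # baseline: judges alone, never sees the AI
        return False
    if group == 2:          # always shown the AI's judgment automatically
        return True
    if not requests_ai:     # Groups 3 and 4 must actively (and at a small
        return False        # cost) request the judgment
    return group == 4       # requests are never answered in Group 3,
                            # always answered in Group 4

# Quick check of all conditions:
for g in (1, 2, 3, 4):
    for requested in (False, True):
        print(f"Group {g}, requested={requested}: "
              f"shown={ai_judgment_shown(g, requested)}")
```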

“With this experimental design, we were able to determine how many study participants actually want to receive advice from AI and whether those who ask for advice behave fundamentally differently than those who don’t,” says von Schenk.

Some results

Humans have a hard time telling the difference between truthful statements and lies, as shown by the results of Group 1. Without AI assistance, they achieved a 46.5% accuracy rate in their judgments, roughly equivalent to chance. In contrast, Group 2 achieved a 60.1% accuracy rate in identifying lies with AI assistance.

People are reluctant to accuse others of lying. In Group 1, less than 20% of members chose to do so. In Group 2, where members automatically received the AI’s assessment, that figure rose to 30%. The rate was even higher for members of Group 4, who asked for and received a prediction: their accusation rate increased significantly to about 58%.

Only a third of people ask the lie detection algorithm for a prediction. Among those who do, however, there is a strong tendency to follow the algorithm’s advice, with about 88% compliance. In contrast, among those who automatically receive the AI’s assessment, only 57% follow its recommendation. This gap becomes more pronounced when considering cases where the AI determines that a statement is a “lie”: 85% of those who asked for the AI’s assessment agree with this judgment. Among those who automatically received the AI’s assessment, only 40% followed this advice.
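
For readers who want to connect these percentages to the underlying data, here is a hypothetical illustration of how accuracy, accusation and compliance rates could be computed from trial-level records; the data frame and its column names are invented for the example and are not the study’s actual data.

```python
# Hypothetical illustration of how the reported rates could be computed
# from trial-level records. The data frame and its column names are
# invented for this example; they are not the study's actual data.
import pandas as pd

trials = pd.DataFrame({
    "group":       [1, 1, 2, 2, 4, 4],
    "is_lie":      [True, False, True, False, True, False],  # ground truth
    "ai_says_lie": [None, None, True, False, True, True],    # judgment shown
    "accused":     [False, False, True, False, True, True],  # participant call
})

trials["correct"] = trials["accused"] == trials["is_lie"]
print(trials.groupby("group")["correct"].mean())  # accuracy per group
print(trials.groupby("group")["accused"].mean())  # accusation rate per group

# Compliance: among trials where an AI judgment was shown, how often did
# the participant's verdict match the algorithm's?
seen = trials.dropna(subset=["ai_says_lie"])
print((seen["accused"] == seen["ai_says_lie"]).mean())
```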

Original publication

Alicia von Schenk, Victor Klockmann, Jean-François Bonnefon, Iyad Rahwan, Nils Köbis: Lie detection algorithms disrupt the social dynamics of accusation behavior. iScience (2024). https://doi.org/10.1016/j.isci.2024.110201
