The polygraph test ostensibly measures a person's breathing rate, pulse, blood pressure, and perspiration to determine whether they are lying, but the 85-year-old technology has long been discredited by scientists.
The possibility of false positives and the subjectivity involved in interpreting results greatly undermine the polygraph's utility as a lie detector; indeed, its findings are generally not admissible in US courts.
It being 2024, researchers are now asking whether artificial intelligence can help. In a new study published in the journal iScience, a team led by University of Würzburg economist Alicia von Schenk found that yes, it could. But as MIT Technology Review reports, the tool also led experimental subjects to make more accusations overall, another warning about the far-reaching dangers of replacing human intuition with algorithms.
First, the researchers asked participants to write statements about their weekend plans. If they lied about their plans without being caught, they were given a small financial reward.
The collected statements were then used to train an algorithm based on Google's large language model BERT. Scientists found that it was able to tell whether a statement was a lie with a success rate of 67 percent, a significant improvement over humans, who get it right only 50 percent of the time.
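As an illustration of the approach, a statement classifier of this kind can be built by fine-tuning BERT for binary sequence classification. The sketch below uses the Hugging Face transformers library and is only a minimal stand-in for the study's setup; the example statements, labels, and hyperparameters are placeholder assumptions, not the authors' actual data or code.

```python
# Minimal sketch: fine-tune BERT to classify statements as truth vs. lie.
# Illustrative only; the statements and labels below are placeholders.
import torch
from transformers import (
    BertForSequenceClassification,
    BertTokenizer,
    Trainer,
    TrainingArguments,
)

statements = ["I spent the weekend hiking.", "I visited my grandmother."]
labels = [0, 1]  # hypothetical labels: 0 = truthful, 1 = deceptive

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encodings = tokenizer(statements, truncation=True, padding=True)

class StatementDataset(torch.utils.data.Dataset):
    """Wraps tokenized statements and labels for the Trainer."""
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item
    def __len__(self):
        return len(self.labels)

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # two classes: truth and lie
)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lie-detector", num_train_epochs=3),
    train_dataset=StatementDataset(encodings, labels),
)
trainer.train()  # fine-tunes BERT's weights and the classification head
```

In practice a classifier like this is judged on held-out statements; the 67 percent figure above refers to accuracy on unseen data, not on the training set.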
Von Schenk and her colleagues then offered a separate group of volunteers the option of paying a small fee to use the algorithm for lie detection, or trusting their own human intuition instead.
Only a third of the volunteers were willing to use the tool, but those who did became power users.
“When you make an active choice to trust the technology, we see that people almost always follow what the AI predicts… they trust its predictions a lot,” von Schenk told MIT Tech Review.
As a result, those who did not use the tool judged, on average, that only 19 percent of the statements were false. But those who used the tool flagged 58 percent of the statements as false, a huge increase in the accusation rate.
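That jump is consistent with simple arithmetic. Here is a back-of-the-envelope sketch, under our own assumptions rather than the paper's: half the statements are lies, the classifier is 67 percent accurate on both classes, and tool users always follow its prediction.

```python
# Back-of-the-envelope estimate of the accusation rate for someone
# who always follows the AI's prediction. The base rate and the
# symmetric-accuracy assumption are ours, not the paper's.
p_lie = 0.5      # assumed share of statements that are lies
accuracy = 0.67  # classifier accuracy reported in the study

flagged_lies = p_lie * accuracy                # lies correctly flagged
flagged_truths = (1 - p_lie) * (1 - accuracy)  # truths wrongly flagged
accusation_rate = flagged_lies + flagged_truths

print(f"Expected accusation rate: {accusation_rate:.0%}")  # -> 50%
```

Under these assumptions, a tool-following user would accuse about half the time, in the neighborhood of the 58 percent observed and far above the roughly 19 percent rate of people relying on intuition alone.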
“This finding supports the truth-default theory and replicates findings commonly observed in the lie detection literature, documenting that people generally refrain from accusing others of lying,” reads the paper. “One possible reason is that they are not very good at it and want to minimize the risks that false accusations carry for themselves and for the accused. In support of this idea, people in our study were also reliably unable to discern truthful statements from false ones.”
But were these determinations accurate enough to ever be fully relied upon? Trusting an AI lie detector would require a far better success rate than that, as MIT Tech Review notes.
Despite the technology's shortcomings, such a tool could serve an important role given the proliferation of misinformation on the Internet, a trend that is being accelerated by the use of AI.
Even AI systems themselves are getting better at lying and cheating, a dystopian trend that should encourage us to strengthen our defenses.
“While older machines like the polygraph are very noisy, and other technologies that rely on physiological and behavioral cues such as eye tracking and pupil dilation measurements are complex and expensive to implement, current natural language processing algorithms can detect fake reviews, detect spam on X/Twitter, and achieve above-chance accuracy for text-based lie detection,” the paper reads.
In other words, perhaps a greater willingness to make accusations can be a good thing, with enough caveats, of course.
“Given that we have a lot of fake news and misinformation spreading, there is a benefit to these technologies,” von Schenk told MIT Tech Review. “However, you really need to test them; you need to make sure they're substantially better than humans.”
More on lies: Scientists found that AI systems are learning to lie and cheat.