Law-enforcement technology
Computer spots liars by looking at the way they talk
Computer scientists are exploring whether machines can read the visual cues in a person's behavior to detect whether that person is lying; in a study of forty videotaped conversations, an automated system the researchers developed correctly identified whether interview subjects were lying or telling the truth 82.5 percent of the time
Inspired by the work of psychologists who study the human face for clues that someone is telling a high-stakes lie, UB computer scientists are exploring whether machines also can read the visual cues that give away deceit.
Results so far are promising: In a study of forty videotaped conversations, an automated system the researchers developed correctly identified whether interview subjects were lying or telling the truth 82.5 percent of the time.
A University at Buffalo - State University of New York release quotes Ifeoma Nwogu, a researcher at UB’s Center for Unified Biometrics and Sensors (CUBS), as saying that this is a better accuracy rate than expert human interrogators typically achieve in lie-detection judgment experiments.
In published results, even experienced interrogators average closer to 65 percent, Nwogu says.
“What we wanted to understand was whether there are signal changes emitted by people when they are lying, and can machines detect them? The answer was yes, and yes,” says Nwogu.
Her colleagues on the study included CUBS scientists Nisha Bhaskaran and Venu Govindaraju, and Mark G. Frank, a UB professor of communication and behavioral scientist whose primary area of research has been facial expressions and deception.
In the past, Frank’s attempts to automate deceit detection have used systems that analyze changes in body heat or examine a slew of involuntary facial expressions.
The release notes that the automated UB system tracks a different trait — eye movement. The system employs a statistical technique to model how people moved their eyes in two distinct situations: during regular conversation and while fielding a question designed to prompt a lie.
People whose pattern of eye movements changed between the first and second scenario were assumed to be lying, while those who maintained consistent eye movement were assumed to be telling the truth.
In other words, when the critical question was asked, a strong deviation from normal eye-movement patterns suggested a lie.
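The release does not spell out the statistical technique the system uses. Purely as an illustrative sketch of the idea described above, the snippet below scores how far a subject's eye-movement statistics during the critical question drift from that subject's own conversational baseline and applies a simple threshold; the feature set, the z-score measure, and the threshold value are assumptions made for this example, not details of the UB system.

```python
# Minimal sketch, not the UB researchers' method: compare eye-movement statistics
# during a critical question against the same subject's conversational baseline.
import numpy as np

def deviation_score(baseline: np.ndarray, critical: np.ndarray) -> float:
    """Mean absolute z-score of critical-question features relative to the
    subject's baseline eye-movement statistics (features are hypothetical)."""
    mu = baseline.mean(axis=0)
    sigma = baseline.std(axis=0) + 1e-8  # guard against zero variance
    z = np.abs((critical.mean(axis=0) - mu) / sigma)
    return float(z.mean())

def classify(baseline: np.ndarray, critical: np.ndarray, threshold: float = 2.0) -> str:
    """Label the subject 'deceptive' if behavior during the critical question
    deviates strongly from their normal-conversation pattern."""
    return "deceptive" if deviation_score(baseline, critical) > threshold else "truthful"

# Example: rows are short time windows, columns are assumed eye-movement features
# (hypothetical: gaze shifts per second, blinks per second).
rng = np.random.default_rng(0)
baseline = rng.normal([1.0, 0.3], 0.1, size=(60, 2))   # regular conversation
critical = rng.normal([1.8, 0.6], 0.1, size=(10, 2))   # marked change in behavior
print(classify(baseline, critical))  # -> "deceptive" under these assumed numbers
```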
Previous experiments in which human judges coded facial movements found documentable differences in eye contact at times when subjects told a high-stakes lie. Nwogu and the other computer scientists created an automated system that could verify and improve upon information used by human coders to successfully classify liars and truth tellers.
The research team presented its results at last year’s IEEE International Conference on Automatic Face and Gesture Recognition.
Though the study’s sample size was too small for the research to be statistically significant, the findings are exciting, Nwogu says, suggesting that computers may be able to learn enough about a person’s behavior in a short time to perform a task that challenges even experienced investigators.
The study included subjects with various skin colors, filmed under different head poses and lighting conditions and with obstructions such as glasses.
The forty videos that Nwogu and her colleagues used for the study were culled from a set of 132 that Frank recorded during a previous experiment.
Nwogu says the next step in the research will be to expand the number of subjects studied and to develop automated systems using more advanced pattern-recognition models to identify liars.
She emphasizes that while automated deceit-detection systems could find use in such fields as law enforcement and security screenings, the technology is not foolproof — a very small percentage of the subjects in the study were excellent liars, maintaining their usual eye-movement patterns as they lied. Also, the nature of an interrogation and interrogators’ expertise can influence the effectiveness of the lie-detection method.
— Read more in Larry Greenemeier, “In-Your-Face: Can Computers Catch You Telling a Lie?” Scientific American (5 March 2011)