Fans of the TV show Backstrom know that the title character is a detective with health problems, and in a recent episode the police force’s doctor requires him to wear a Fitbit-type health monitor to ensure he’s getting his prescribed 10,000 steps per day — or he’s off the job. The character finds creative ways around this (like giving his tracker to a homeless man for the day), but the concept stands: he has a much harder time lying to his doctor about his health habits.
As we collect more data about what we do, it will become extremely difficult for anyone to lie without being found out. Increasingly, big data and analytics innovations can detect whether or not we are telling the truth. Data from wearable devices and smart phones is already being used by motor insurance companies to track their customers’ actual driving habits and by hospitals to monitor their patients. From past jobs and career performance to health and driving habits, even a delayed plane or train, almost everything can now be tracked, and data about most of it is stored somewhere.
So, is this the end of lying as we know it?
Companies are getting wise to the little white lies consumers tend to tell. For example, one of my insurance company clients can now monitor how we fill in application forms online. This can show them how often we re-type data into certain boxes to attempt to get a better quote when we say our car is parked in a garage rather than on the road. Big data analytics tools are now watching out for this type of behavior and will flag any potential fraud.
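To make the idea concrete, here is a minimal sketch of how such a re-typing signal could be flagged, assuming the form front end logs every edit an applicant makes to a field. The field names, the log format, and the threshold are all illustrative, not the insurer’s actual system.

```python
from collections import Counter

# Hypothetical edit log: one entry each time the applicant changes a field.
edit_log = [
    "name", "address", "parking_location", "parking_location",
    "parking_location", "annual_mileage",
]

def flag_suspicious_fields(edits, threshold=3):
    """Return fields re-typed at least `threshold` times.

    Repeated edits to risk-related fields (e.g. where the car is
    parked) can indicate quote shopping and may warrant a fraud review.
    """
    counts = Counter(edits)
    return [field for field, n in counts.items() if n >= threshold]

print(flag_suspicious_fields(edit_log))  # ['parking_location']
```

A production system would of course weight fields by how much they affect the premium, rather than treating every re-type equally.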
In a serious auto accident, police often collect the phones of the parties involved to see if anyone was talking, texting, or using an app at the time of the crash. A person may claim his hands were on the wheel and eyes on the road, but his smart phone data could say otherwise. Because of the sophisticated sensors in our smart phones (GPS trackers, accelerometers, light sensors, etc.), our gadgets already know where we are and how fast we are moving. It is only a matter of time until we use all of this data to investigate driving behavior leading up to a crash, and witness statements will become less important.
But it doesn’t even have to be on such a grand scale. Today, an HR manager can quickly check an applicant to verify past employment or graduation records and systems are being put in place to automate the verification process of information contained in job application forms. Using smart phone or smart watch data, parents can monitor their teenagers to find out where they were, how fast they were driving, and how long they stopped at any given location. They can even receive a text or email alert when a person or car leaves a predefined geographic area.
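The geographic-alert feature described above boils down to a geofence check: compare the phone’s reported position against a circle around a known point. A minimal sketch, using the standard haversine great-circle formula (the coordinates and radius below are invented for illustration):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def outside_geofence(position, center, radius_km):
    """True if `position` has left the circle around `center`."""
    return haversine_km(*position, *center) > radius_km

home = (40.7128, -74.0060)  # illustrative coordinates (New York City)
print(outside_geofence((40.7130, -74.0055), home, 5.0))  # False: still nearby
print(outside_geofence((41.8781, -87.6298), home, 5.0))  # True: far outside
```

When the check returns True, the monitoring app would send the text or email alert the article mentions.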
Social networks are also full of lies and rumors, which can spread very quickly, often with severe consequences. During the 2011 London riots, rumors spread on Facebook and Twitter that a tiger was loose in Primrose Hill or that the London Eye was on fire.
One example that illustrates the dangers of such lies was the hacking of the Associated Press news agency’s Twitter account. Hackers broke in and tweeted “Breaking: Two Explosions in the White House and Barack Obama is Injured.” The false information spread virally. Traders on Wall Street quickly started reacting to the rumors, which caused the Dow to plunge 140 points. Once the truth came out, markets recovered, but calls for lie detection systems became louder.
One project that aims to develop a system to detect such lies and rumors is Pheme. The interdisciplinary big data project is funded by the European Union, named after the Greek goddess of fame and rumors, and brings together partners from the fields of natural language processing and text mining, web science, social network analysis, and information visualization. The goal is to develop and release veracity intelligence algorithms as open source material so that we can all benefit from them. Such algorithms could then be applied in social media networks, web search, or email systems to detect rumors, lies, and misinformation.
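To give a flavor of what a veracity signal looks like at its very simplest, here is a toy sketch that scores a post by counting rumor-like versus confirmation-like cue words. This is emphatically not Pheme’s method, which combines natural language processing, network analysis, and source credibility; the cue lists and scoring are invented for illustration only.

```python
# Toy veracity scoring. Real systems such as Pheme are far more
# sophisticated; these cue-word lists are invented for illustration.
RUMOR_CUES = {"breaking", "unconfirmed", "rumor", "apparently", "reportedly"}
CONFIRM_CUES = {"confirmed", "official", "statement", "verified"}

def rumor_score(post: str) -> float:
    """Crude score in [-1, 1]: positive = rumor-like, negative = confirmed-like."""
    words = {w.strip(".,!?:").lower() for w in post.split()}
    rumors = len(words & RUMOR_CUES)
    confirms = len(words & CONFIRM_CUES)
    total = rumors + confirms
    return 0.0 if total == 0 else (rumors - confirms) / total

print(rumor_score("Breaking: explosions reported, details unconfirmed"))  # 1.0
print(rumor_score("Official statement: the report is confirmed false"))   # -1.0
```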
Another way to detect lies is through our biometric data, because lying causes changes in blood pressure, pupil dilation, and voice. Linguistic analysis can now pick up on those signals in our voice and identify possible lies. One example is computer voice stress analysis (CVSA), which flags changes in vocal pitch as possible signs of deception. Approved by a U.S. federal court ruling, this technology can now be used to monitor sex offenders as part of their post-release supervision. It will be interesting to see how long it takes for such systems to find their way into call and contact center technology to detect the lies customers might tell their service providers.
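The underlying idea can be illustrated very roughly: compare the variability of a speaker’s pitch in a given utterance against that same speaker’s relaxed baseline. Real CVSA systems analyze inaudible micro-tremors in the voice, not simple pitch statistics, and the pitch tracks and threshold below are entirely made up for the sketch.

```python
import statistics

def pitch_variability(pitches_hz):
    """Standard deviation of per-frame fundamental-frequency estimates (Hz)."""
    return statistics.stdev(pitches_hz)

def stress_elevated(baseline_hz, sample_hz, ratio=1.5):
    """True if the sample's pitch variability clearly exceeds the speaker's
    own baseline; a crude stand-in for a voice-stress flag."""
    return pitch_variability(sample_hz) > ratio * pitch_variability(baseline_hz)

# Invented frame-by-frame pitch tracks for one speaker.
baseline = [118, 120, 119, 121, 120, 119]   # calm control questions
sample = [118, 132, 112, 140, 109, 135]     # utterance under review

print(stress_elevated(baseline, sample))  # True
```

Comparing against the speaker’s own baseline, rather than a universal threshold, matters because normal pitch ranges vary widely from person to person.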
Another interesting system to detect lies is AVATAR (Automated Virtual Agent for Truth Assessments in Real-Time), developed by the National Center for Border Security and Immigration at the University of Arizona. The screening system uses artificial intelligence and non-invasive sensor technology to detect deception, and it is designed to make borders safer. It conducts an automated interview during which it analyzes document data (including visa application forms, travel history, etc.) as well as biometric data (such as pupil dilation, eye and body movements, and changes in vocal pitch) to establish a traveler’s credibility. If suspicions are raised, the traveler is referred to an officer for further investigation.
I find it fascinating that many of these algorithms are now finding their way into mobile apps. At this point, many are more for fun than serious work, but as the camera, sensor, and processing technology in our smart devices improves, we could soon have a phone that tells us whether the person we are speaking to is telling the truth.
This article was written by Bernard Marr from Forbes and was legally licensed through the NewsCred publisher network.