If you like getting free money, one highly illegal way of doing so is to falsely claim that you purchased a certain item which then got stolen, so your insurance will cover the cost of a new one. This will require you to fill out a police report, however, and you could soon be caught out by software designed to detect when such reports are nothing but a load of hokum.
Developed by computer scientists from Cardiff University and Charles III University of Madrid, the VeriPol system was "trained" on a database of police reports that had already been proven fraudulent. Using natural language processing – which is a form of artificial intelligence – it identified language patterns which frequently appeared in those reports, but not in legitimate ones.
Among other things, it was found that the bogus reports tended to include shorter statements that were centered more on what was stolen than on the incident itself. They also lacked details regarding both the attacker and the incident, and made no mention of witnesses, or of the victim contacting police immediately after the robbery occurred.
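The article doesn't spell out the exact model behind VeriPol, but the general approach it describes (turning each report's text into language features and training a classifier on statements already labelled genuine or false) can be sketched in a few lines. The snippet below is a minimal illustration of that kind of pipeline, not VeriPol's actual code; the example reports, labels, and the choice of a bag-of-words model with logistic regression are all invented for illustration.

```python
# Minimal sketch of a text classifier for report statements (illustrative only,
# not the VeriPol implementation). Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy examples: statements already labelled genuine (0) or false (1).
reports = [
    "Two men threatened me at the bus stop, pushed me down and took my wallet; I called the police right away.",
    "A tall man in a grey hoodie grabbed my bag outside the bakery; a shopkeeper saw it happen and helped me.",
    "My phone was stolen. It was a brand new iPhone.",
    "Someone took my laptop, a very expensive one, while I was out walking.",
]
labels = [0, 0, 1, 1]

# Bag-of-words features plus logistic regression: one simple way to learn which
# word patterns appear more often in statements already proven false.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reports, labels)

# Score a new statement: values near 1.0 mean "resembles the false reports seen so far".
print(model.predict_proba(["My new camera was stolen, it cost 900 euros."])[0][1])
```

In the actual study the telling signals were things like statement length, focus on the stolen goods, and missing mentions of witnesses or of contacting police; a bag-of-words model is simply the most basic stand-in for "language patterns" of that kind.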
When initially tested on a collection of over 1,000 reports from the Spanish National Police, VeriPol was found to be more than 80 percent accurate at identifying which ones had already been declared false, and which ones were legitimate.
The technology has since been the subject of a June 2017 pilot study, in which it was used by police in the Spanish cities of Murcia and Malaga. Within a one-week period, it flagged 25 robbery reports in Murcia and 39 in Malaga, all of which were deemed false after the claimants were further interrogated. By contrast, throughout the month of June in the years 2008 to 2016, the average number of false reports that were manually detected by police officers was 3.33 for Murcia and 12.14 for Malaga.
The system has now been rolled out for use by police departments across Spain.
"Our study has given us a fascinating insight into how people lie to the police, and a tool that can be used to deter people from doing so in the future," says Cardiff's Dr. Jose Camacho-Collados, co-author of a paper on the study, which was recently published in the journal Knowledge-Based Systems.
Source: Cardiff University
Here comes the bad part... after running the program and finding the reports that officers would probably perceive as suspicious, the results were handed back to the officers, who went back and interrogated further until a high percentage were rated as suspicious... and that was viewed as confirmation, not of the bias, but of the efficacy... and that led to the technology being widely distributed. Absurd.
Like it's always been with computing: garbage in, garbage out!
With AI this is an even more dangerous issue, as it can pick up on subtleties in the data that correlate with a certain desired output but in reality have other causes. AI without doubt has its uses, but we are nowhere near having a 'real AI' and definitely not one that should be making this sort of decision about human honesty etc.
Of course the AI could also be used to craft false claims that would pass both human and AI inspection. So a double-edged sword!