Good Thinking

Software uses AI to detect bogus robbery reports

VeriPol detects suspicious language patterns in reports of robberies

If you like getting free money, one highly illegal way of doing so is to falsely claim that you purchased a certain item which was then stolen, so your insurance will cover the cost of a new one. This requires you to fill out a police report, however, and you could soon be caught out by software designed to detect when such reports are nothing but a load of hokum.

Developed by computer scientists from Cardiff University and Charles III University of Madrid, the VeriPol system was "trained" on a database of police reports that had already been proven fraudulent. Using natural language processing – which is a form of artificial intelligence – it identified language patterns which frequently appeared in those reports, but not in legitimate ones.

Among other things, it was found that the bogus reports tended to be shorter and centered more on what was stolen than on the incident itself. They also lacked details about the attacker, made no mention of witnesses, and didn't state that the victim had contacted police immediately after the robbery occurred.
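The cues described above can be illustrated with a toy scoring function. This is only a minimal sketch of the general idea, not VeriPol's actual model (which uses trained natural language processing rather than hand-written rules); all keywords and thresholds here are invented for demonstration.

```python
# Toy illustration of the red-flag cues described in the article:
# short statements, no witnesses, no prompt police contact, and no
# description of the attacker. Not VeriPol's real method.

def suspicion_score(report: str) -> int:
    """Count how many red-flag cues a report exhibits (0-4)."""
    text = report.lower()
    score = 0
    if len(text.split()) < 50:
        score += 1  # unusually short statement
    if "witness" not in text:
        score += 1  # no mention of witnesses
    if not any(p in text for p in ("called the police", "reported immediately")):
        score += 1  # no mention of prompt police contact
    if not any(w in text for w in ("attacker", "man", "woman")):
        score += 1  # no description of the attacker
    return score

vague = "My phone was stolen. It was an iPhone worth 900 euros."
detailed = ("A tall man in a grey hoodie grabbed my bag on Calle Mayor "
            "around 9 pm. A witness saw him run toward the plaza and I "
            "called the police immediately afterwards. " * 4)
```

Here the vague report trips all four cues while the detailed one trips none, so `suspicion_score(vague)` is higher. A real system would learn such patterns statistically from labeled reports instead of using fixed rules.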

When initially tested on a collection of over 1,000 reports from the Spanish National Police, VeriPol was found to be more than 80 percent accurate at identifying which ones had already been declared false, and which ones were legitimate.

The technology has since been the subject of a June 2017 pilot study, in which it was used by police in the Spanish cities of Murcia and Malaga. Within a one-week period, it flagged 25 robbery reports in Murcia and 39 in Malaga, all of which were deemed false after the claimants were further interrogated. By contrast, throughout the month of June in the years 2008 to 2016, the average number of false reports that were manually detected by police officers was 3.33 for Murcia and 12.14 for Malaga.

The system has now been rolled out for use by police departments across Spain.

"Our study has given us a fascinating insight into how people lie to the police, and a tool that can be used to deter people from doing so in the future," says Cardiff's Dr. Jose Camacho-Collados, co-author of a paper on the study, which was recently published in the journal Knowledge-Based Systems.

Source: Cardiff University

This sounds like confirmation bias run wild. The program is trained on past cases that were suspected of being fraudulent, and it identifies characteristics that are present to a greater or lesser degree (not outright present or absent, as the article claims). Pause for a moment to consider that the training material isn't certain to include all fraudulent cases as such, nor is every case labeled fraudulent necessarily so. In reality, the training has likely taught the program to identify people who present their stories in a way that officers view as suspicious, regardless of actual veracity.
Here comes the bad part: after the program flagged those whom officers would probably perceive as suspicious, the results were handed back to the officers, who went back and interrogated further until a high percentage were rated as suspicious. And that was viewed as confirmation, not of the bias, but of the efficacy. And that led to the technology being widely distributed. Absurd.
Bob Stuart
Insurance is a shell game. I don't make insurance claims because of fraudulent, misleading policies. I don't call the Police because I have Aspergers, and so they assume I'm lying when I try talking under stress. After major problems, now I always call a politician or lawyer for advice before trying 911.
Brian M
Like it's always been with computing: garbage in, garbage out!
With AI this is an even more dangerous issue, as it can pick up on subtleties in the data that correlate with a certain desired output but in reality have other causes. AI without doubt has its uses, but we are nowhere near having a 'real AI' and definitely not one that should be making this sort of decision about human honesty.
Of course the AI could be used to create reports of false claims that would pass human and AI inspection. So a double edged sword!
Jean Lamb
The point of having witnesses is a valid one--in a real robbery, at least one snoopy neighbor would have noticed it (and we also act like snoopy neighbors too for our friends here). However, a wise person would look at what *legitimate* reports look like and use them for a model, just sayin'.