Are we making the mistake of blaming the algorithm because we don't like the outcome? As stated, race wasn't a factor in the algorithm, yet blacks were categorized by it as twice as likely to reoffend. We've decided this is not an acceptable outcome before bothering to ask whether the result is accurate. I won't go down the rabbit hole of crime stats, but if you sort by income, race, gender, or a host of other labels, you get different results. If the same algorithm determined that males are more likely to commit armed robbery, does that also mean it's broken?
You know, the financial industry in the US went down this same path. Federal legal doctrine dictates that even if a bank applies the same standards to all its customers, it can still be considered to be breaking the law if that standard results in fewer loans on average to a particular group. Although race is not a direct factor in mortgage qualification, banks were accused of racism because more blacks than whites were being turned down for mortgages. So banks responded by creating special programs to specifically help African Americans with low credit scores get mortgages anyway (called subprime loans).

Then the housing collapse happened, and the banks were accused of racism for those very subprime programs, on the grounds that it was discriminatory to put people with low credit scores in the position of paying off expensive, high-interest loans. In hindsight, the banks were probably right the first time and wrong to create subprime loan programs specifically for African Americans. But did anyone see a headline admitting that the anger at banks the first time around, for simply using credit scores, was probably short-sighted? I certainly didn't.

The lesson to be learned is that not liking the outcome of an algorithm isn't always a strong enough reason on its own to go in and tamper with it. This mistake helped compound the worst recession in the US since the Great Depression of the 1930s. The banks engaged in predatory behavior along the way, so I'm not saying they are blameless either. Some of the bankers knew the subprime loans would go south, because they bundled them up, had agencies rate them as AAA (a lie, but a legal one), sold them off as mortgage-backed securities, and then shorted those securities. Now that the housing crisis is over, loans to minority borrowers are down from 2007 as banks try to carry healthier balance sheets and avoid riskier (subprime) loans, so they are damned if they do and damned if they don't.
Although the government mandate for banks to achieve equal outcomes in who they lend money to probably meant well, given the number of African Americans who were hit hard in the housing crisis, I'm not sure tampering with the algorithm served to benefit anyone. The road to hell is paved with good intentions.
With big data, if you know someone's address, their name, their college, high school, and elementary school (and their job title and history, the stores they spend money at, the products they buy...), you have an awfully accurate guess at their race without ever touching any official racial information.
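To make that proxy effect concrete, here is a minimal sketch using entirely made-up synthetic data (the zip codes, groups, and the 85% concentration figure are all assumptions for illustration): a demographic label never given to the model can still be guessed from a single correlated feature.

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# All data here is synthetic and hypothetical: 10 made-up zip codes,
# each with a dominant demographic group "A" or "B".
ZIPS = [f"zip{i:02d}" for i in range(10)]
DOMINANT = {z: ("A" if i % 2 == 0 else "B") for i, z in enumerate(ZIPS)}

def sample_person():
    """Draw one (zip code, group) pair; 85% of residents of a zip
    belong to that zip's dominant group."""
    z = random.choice(ZIPS)
    if random.random() < 0.85:
        g = DOMINANT[z]
    else:
        g = "B" if DOMINANT[z] == "A" else "A"
    return z, g

train = [sample_person() for _ in range(5000)]
holdout = [sample_person() for _ in range(1000)]

# "Training": just record the majority group observed per zip code.
counts = defaultdict(Counter)
for z, g in train:
    counts[z][g] += 1
majority = {z: c.most_common(1)[0][0] for z, c in counts.items()}

# Predict the group from the zip code alone; no demographic field
# is ever used as an input.
accuracy = sum(majority[z] == g for z, g in holdout) / len(holdout)
print(f"accuracy guessing group from zip code alone: {accuracy:.2f}")
```

With one proxy feature and 85% concentration, the guess is right about 85% of the time; stacking more proxies (schools, job history, purchases), as the comment describes, only pushes that figure higher.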
The big problem is that AI algorithms are being trained on data sets that are often structurally racist. Black people get stopped by police and arrested at higher rates, even for the same behavior. Mortgage brokers assign minorities to more-expensive, harder-to-repay loan products than whites, even when they have the same credit scores. And so on. If we take flawed human judgement as the gold standard for AIs to emulate, then we're right back to the ancient Garbage In, Garbage Out.
S Michael
If the shoe fits, wear it. Crime, and who commits it, is just one aspect of human failure. Coddling criminals, creating special treatment, or favoring one kind of human over another reflects a flaw in human nature: refusing to accept in-your-face facts. Human compassion should play only a small role in the development of AI. If it is tweaked in any direction, it will become irrelevant and untrustworthy, and thus ignored.
If an AI can only sort into male/female, it will offend the other 60+ gender identifications.
While we're on the subject, let's address another inequality which is increasingly apparent: ideology suppression. Let's make sure that everything from search engines on up doesn't filter things for us without our consent. That is already happening on Google and YouTube, where liberal biases are becoming more and more apparent in search results. A few years ago, Google admitted to tailoring search output according to your previous search patterns. Now they're tailoring it to their preferences, not necessarily yours. This is resulting in more ideological bias and more segregation in society. That.Is.A.Bad.Thing.
Make AIs Neutral Now!
As a research scientist who works with the concepts of randomness and bias on a regular basis, I find the idea of AI training sets not being rigorously designed and subjected to scrutiny and peer review terrifying. Training anyone, especially AI systems, on data known to contain contemporary or historical social and institutional bias is lunacy. @Daishi: the fundamental problem with the Compas system is that the algorithm does not look at reoffending per se, but at the likelihood of being charged with reoffending. The US is still suffering the long-term social and spatial consequences of its history, and there remain strong auto-correlations between the factors used in the model and those that are claimed to be excluded.
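The charged-versus-reoffended gap can be shown with a toy simulation (every number here is invented for illustration): both groups reoffend at exactly the same true rate, but if one group's offenses are detected and charged more often, the re-arrest labels a risk model trains on make that group look roughly twice as risky.

```python
import random

random.seed(1)

# Illustrative assumption (invented numbers): both groups reoffend at
# the same true rate, but group B is policed more heavily, so its
# reoffenses are more likely to be detected and charged.
TRUE_REOFFEND_RATE = 0.30
CHARGE_PROB = {"A": 0.40, "B": 0.80}  # hypothetical detection rates

def measured_rearrest_rate(group, n=100_000):
    """Fraction of people who show up as 're-arrested' in the data --
    the label a risk model would actually be trained on."""
    rearrested = 0
    for _ in range(n):
        reoffended = random.random() < TRUE_REOFFEND_RATE
        if reoffended and random.random() < CHARGE_PROB[group]:
            rearrested += 1
    return rearrested / n

rate_a = measured_rearrest_rate("A")  # expected value 0.30 * 0.40 = 0.12
rate_b = measured_rearrest_rate("B")  # expected value 0.30 * 0.80 = 0.24
print(f"group A measured re-arrest rate: {rate_a:.2f}")
print(f"group B measured re-arrest rate: {rate_b:.2f}")
print(f"ratio: {rate_b / rate_a:.1f}")
```

The underlying behavior is identical by construction, yet the training labels show one group about twice as "risky", which is exactly the distinction between reoffending and being charged for reoffending.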
Give them a copy of Asimov's 'I, Robot', or any of his books, come to that. It is not the principles that are in question but the users, driven by ego, greed, and money, and the litigants (driven by the same) who seek to enforce and benefit from rules they made up in the first place.
Graeme S
I think the issue is much simpler: we must rethink what it is that we are discussing. If we redefined AI to mean what it actually is, like Artificial Innovation, then we could compare it to many other useful items, like a calculator or a can opener.
The hype over intelligence is misleading. To have intelligence you first must have a morality; then comes the decision to do good or bad. On what basis do we decide what is good or bad? The only litmus test for that must be someone who is truth, and He has been maligned since humanity began breathing. We have walked away from Him, filled our minds with religion, and somehow think we can invent something that we ourselves cannot, or will not, do.
Twice as likely to reoffend, or, twice as likely to be arrested?
Those are two VERY different things...