
Human rights groups call for protections against discriminatory and biased artificial intelligence

A coalition of human rights groups has released The Toronto Declaration, which offers guidelines to governments and private companies on ways to keep bias and discrimination out of machine learning algorithms

A collective of human rights groups has called for ethical frameworks to be established to guide the development of machine learning algorithms and ensure discrimination does not become embedded in new artificial intelligence technologies.

Drafted by Amnesty International and Access Now, and subsequently endorsed by the Wikimedia Foundation and Human Rights Watch, The Toronto Declaration seeks to address the risk of human rights harms that can result from the use of machine learning systems.

"The Toronto Declaration is unique in setting set out tangible and actionable standards for states and the private sectors to uphold the principles of equality and non-discrimination, under binding human rights laws," says Anna Bacciarelli, from Amnesty International.

The Declaration lays out a comprehensive framework to guide both governments and private companies in the development and implementation of artificial intelligence technologies. The action is an interesting step for human rights organizations as they begin to grapple with the social implications of certain advanced technologies being integrated into social spaces.

Some of the suggestions in the Declaration include private companies taking into account the risk of bias being introduced into a system through incomplete machine learning training data, and governments assessing potential discriminatory outcomes when acquiring and deploying these technologies.
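To make that first suggestion concrete, here is a minimal sketch (in Python, with entirely hypothetical column names and reference shares, not anything prescribed by the Declaration itself) of one simple way a team might check whether its training data under-represents a demographic group before a model is trained:

```python
# A minimal, illustrative audit of training-data representation.
# The column name ("gender") and the reference shares are hypothetical
# placeholders used only to show the idea.
import pandas as pd

def representation_report(df: pd.DataFrame, column: str, reference: dict) -> pd.DataFrame:
    """Compare each group's share of the training data with a reference
    population share and flag groups that fall well below it."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "share_in_training_data": round(share, 3),
            "reference_share": expected,
            "under_represented": share < 0.8 * expected,  # arbitrary 20% tolerance
        })
    return pd.DataFrame(rows)

# Made-up example: a face dataset that is 80 percent male.
train = pd.DataFrame({"gender": ["male"] * 800 + ["female"] * 200})
print(representation_report(train, "gender", {"male": 0.5, "female": 0.5}))
```

A check like this does not prove a model will be fair, but it surfaces exactly the kind of incomplete training data the Declaration asks companies to account for before deployment.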

Despite the largely symbolic nature of the Declaration, this is a timely move for human rights groups to make. There have undoubtedly been numerous instances of problematic bias seeping into technologies steered by machine learning algorithms.

The MIT study found extraordinarily high error rates in identifying the gender of dark-skinned female subjects
MIT / Joy Buolamwini

Earlier this year, a study from MIT and Stanford University researchers revealed that three major facial recognition systems returned significant error rates for any subject who wasn't white or male. The researchers suggested these flaws arose because the underlying machine learning algorithms were trained on datasets containing far more white male faces than faces of other genders and skin tones. The concern is clear when one realizes these facial recognition systems are already being used by law enforcement agencies and health care departments.
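For readers wondering what such error rates mean in practice, the toy sketch below shows the general idea behind a disaggregated evaluation: instead of reporting one overall accuracy figure, errors are computed per subgroup. The labels, predictions and group names are invented for illustration and are not the study's data.

```python
# A toy disaggregated evaluation: overall accuracy can look fine while one
# subgroup carries almost all of the errors, so error rates are reported
# per group. All labels, predictions and groups below are invented.
import numpy as np

def error_rate_by_group(y_true, y_pred, groups):
    """Return the classification error rate for each subgroup."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        str(g): float(np.mean(y_true[groups == g] != y_pred[groups == g]))
        for g in np.unique(groups)
    }

# Hypothetical gender-classifier output for eight faces:
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
groups = ["lighter"] * 4 + ["darker"] * 4
print(error_rate_by_group(y_true, y_pred, groups))
# {'darker': 1.0, 'lighter': 0.0} -- an exaggerated version of the kind of gap reported
```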

Perhaps more worrying was a report in 2016 revealing that a computer program used by US courts to inform judges of defendants' risk of reoffending was dubiously classifying black people as twice as likely to reoffend as white people. The system, called Correctional Offender Management Profiling for Alternative Sanctions (Compas), has been guiding judges' sentencing decisions for years, but the report starkly suggested its inbuilt racial bias was perpetuating prejudice in the criminal justice system.

The company behind Compas disagreed with the report, claiming the system bases its risk assessments on a variety of questions, none of which specifically asks about race. The report, on the other hand, looked at the data, examining the risk scores assigned to 7,000 people and following up over several years to compare those scores with whether they actually reoffended. Despite the algorithm not specifically addressing race, the bias still seemed present, with the system wrongly labeling black defendants as likely reoffenders at double the rate of white defendants.
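The dispute essentially comes down to how you measure the system. A rough sketch of the kind of comparison described in the report - how often people who did not reoffend were nonetheless flagged as high risk, broken out by group - might look like this (every record below is made up purely to show the arithmetic):

```python
# A toy version of the comparison described above: among people who did NOT
# reoffend, how often were they labeled high risk, broken out by group?
from collections import defaultdict

records = [
    # (group, labeled_high_risk, actually_reoffended) -- invented data
    ("black", True, False), ("black", True, False), ("black", False, False), ("black", True, True),
    ("white", True, False), ("white", False, False), ("white", False, False), ("white", True, True),
]

def false_positive_rate_by_group(rows):
    """False positive rate per group: wrongly flagged / all who did not reoffend."""
    flagged, total = defaultdict(int), defaultdict(int)
    for group, high_risk, reoffended in rows:
        if not reoffended:
            total[group] += 1
            if high_risk:
                flagged[group] += 1
    return {g: round(flagged[g] / total[g], 2) for g in total}

print(false_positive_rate_by_group(records))
# {'black': 0.67, 'white': 0.33} -- non-reoffenders in one group flagged at double the rate
```

A model can pass one fairness test (equal accuracy, or no explicit race input) while failing another (equal false positive rates), which is why the two sides could look at the same system and reach opposite conclusions.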

Although the Toronto Declaration is not currently legally binding, its authors argue it should be, as it is fundamentally grounded in international human rights law, an established legal framework. This is just a small first step, but a potentially important one, as more and more global systems come to be controlled by artificial intelligence built on machine learning algorithms.

Source: Access Now

12 comments
Daishi
Are we making the mistake of blaming the algorithm because we don't like the outcome? As stated, race wasn't a factor in the algorithm, but blacks were categorized by it as twice as likely to reoffend. We've determined this to be an unacceptable outcome before bothering to ask if the result is accurate. I won't go down the rabbit hole of crime stats, but if you sort by income, race, gender, or a host of other labels you get different results. If the same algorithm determined that males are more likely to commit armed robbery, does that also mean it's broken?
Daishi
You know, the financial industry in the US went down this same path. Federal legal doctrine dictates that even if a bank applies the same standards to all its customers, it can still be considered to be breaking the law if that standard results in fewer loans on average to a particular group. Though race is not a direct factor in mortgage qualification, banks were accused of racism because more blacks than whites were being turned down for mortgages. So banks responded by creating special programs to specifically help African Americans with low credit scores get mortgages anyway (called subprime loans). Then the housing collapse happened, and the banks got accused of racism for the special programs they had created to provide subprime mortgages to African Americans, with critics saying it was discriminatory to put people with low credit scores in the position of paying off expensive high-interest loans. In hindsight, the banks were probably right the first time and wrong to create subprime loan programs specifically for African Americans, but did anyone see a headline admitting that the anger at banks the first time around, for blindly using credit scores, was probably short-sighted? I certainly didn't. The lesson to be learned is that just because you don't like the outcome of an algorithm isn't always a strong enough reason on its own to go in and tamper with it. This mistake helped compound the worst downturn in the US since the Great Depression in the '30s. The banks engaged in predatory behavior along the way, so I'm not saying they are blameless either. Some of the bankers knew the subprime loans would go south, because they bundled them up, had agencies rate them as AAA (a lie, but a legal one), and sold them off as securities, which they then shorted. With the housing crisis over, loans to minority borrowers are down from 2007 as banks try to carry healthier balance sheets and avoid riskier (subprime) loans, so they are damned if they do and damned if they don't. Although the government mandate for banks to produce an equal outcome in who they lend money to probably meant well, given the number of African Americans who were hit hard in the housing crisis, I'm not sure tampering with the algorithm served to benefit anyone. The road to hell is paved with good intentions.
paul314
With big data, if you know someone's address and their name and their college, high school and elementary school (and their job title and history, and the stores they spend money at, and the products they buy...) you have an awfully accurate guess at their race without ever touching any official racial information.
The big problem is that AI algorithms are being trained on data sets that are often structurally racist. Black people get stopped by police and arrested at higher rates, even for the same behavior. Mortgage brokers assign minorities to more-expensive, harder-to-repay loan products than whites, even when they have the same credit scores. And so on. If we take flawed human judgement as the gold standard for AIs to emulate, then we're right back to the ancient Garbage In, Garbage Out.
S Michael
If the shoe fits, wear it. Crime and who commits it is just one aspect of human failure. Coddling criminals, creating special treatment, or favoring one kind of human over another is a flaw of human nature: a refusal to accept in-your-face facts. Human compassion should only play a small role in the development of AI. If it is tweaked in any direction it will become irrelevant and untrustworthy, and thus ignored.
eMacPaul
If an AI can only sort into male/female, it will offend the other 60+ gender identifications.
ljaques
While we're on the subject, let's address another inequality which is increasingly apparent: ideology suppression. Let's make sure that everything from search engines on up doesn't filter things for us without our consent. That is already happening on Google and YouTube, where their liberal biases are becoming more and more apparent in search results. A few years ago, Google admitted to tailoring search output according to your previous search patterns. Now they're tailoring it to their preferences, not necessarily yours. This is resulting in more and more ideology bias and more segregation in society. That.Is.A.Bad.Thing.
Make AIs Neutral Now!
AberashCrispin
As a research scientist working with the concepts of randomness and bias on a regular basis, I find the idea of AI training sets not being rigorously designed and subjected to scrutiny and peer review terrifying. Training anyone - especially AI systems - using data known to contain contemporary or historical social and institutional bias is lunacy. @Daishi - the fundamental problem with the Compas system is that the algorithm does not look at reoffending per se but at the likelihood of being charged with reoffending. The US is still suffering the long-term social and spatial consequences of its history, and there remain strong correlations between the factors used in the model and those that are claimed to be excluded.
judit
Give them a copy of Asimov's 'I, Robot', or any of his books come to that. It is not the principles that are in question but the users, driven by egos, greed and money, and the litigants (driven by the same) who seek to enforce and benefit from rules they made up in the first place.
Graeme S
I think the issue is much simpler: we must rethink what it is that we are discussing. If we reassigned AI to mean what it actually is, like Artificial Innovation, then we could compare it to many other useful items like a calculator or a can opener.
The hype over Intelligence is misleading. To have intelligence you first must have a morality; then comes the decision to do good or bad, and then on what basis do we arrive at good/bad? The only litmus test for that must be someone who is truth, and He has been misaligned since humanity began breathing; we have walked away from Him, filled our minds with religion, and somehow think we can invent something that we ourselves cannot or will not do.
christopher
Twice as likely to reoffend, or, twice as likely to be arrested?
Those are two VERY different things...