"The values adopted to build today's AI systems will be reflected in the decisions those systems make for a decade or more," says IBM. Whether it's rating your credit history, offering you a job or even granting your parole, organizations are increasingly turning to artificial intelligence to automate that decision making.
Yet we only recently looked at how facial recognition systems can amplify, rather than remove, human bias, and that's just one example of an area of growing concern: AI algorithms built on imperfect data and flawed assumptions, inheriting the biases and prejudices of the humans behind them.
With its newly announced AI Fairness 360 toolkit, IBM is throwing down the gauntlet. The toolkit is a sort of Swiss Army knife of algorithms, available to anyone, designed to eliminate that bias.
"This toolbox has multiple components," IBM Research AI's Kush R. Varshney tells New Atlas. "The first is how to check for the biases. The second is how to explain how those checks are happening and what it means to the user. The third aspect is the algorithms to correct the biases and to fix the models that come out of them to make them more fair."
"… human decisions have traces of bias whether that's implicit or explicit …"
But where do these biases originate? The source data that AI algorithms rely on is very often the cause of the problem. "With machine learning algorithms, they are trained using historical data and typically that data is [made up of historic] human decisions that have been made on the same task," Varshney explains. "And oftentimes the human decisions have traces of bias whether that's implicit or explicit for various reasons. So if the machine learning model is trained based on this biased training data set then it will inherit and replicate those biases."
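As a toy illustration of that inheritance, the sketch below trains an ordinary classifier on synthetic "historical decisions" that penalise one group. The group sizes, feature and penalty are invented for the example, but the pattern is the one Varshney describes: the model's approval rates track the historical gap rather than correcting it.

# Toy sketch: a model trained on biased historical decisions replicates the bias.
# All data here is synthetic; the sizes and the 0.8 penalty are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                    # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)                  # a legitimate predictor of the outcome

# "Historical" human decisions: mostly skill-driven, but group B is penalised.
hist_label = (skill - 0.8 * group + rng.normal(0.0, 0.5, n) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hist_label)
pred = model.predict(X)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: historical approval rate {hist_label[mask].mean():.2f}, "
          f"model approval rate {pred[mask].mean():.2f}")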
Yet it's not only the data that's the issue. "[There are] cases where a data science practitioner or modeler can, without knowing, do something to transform the data inappropriately or introduce bias to the modeling process in some shape or form," Aleksandra Mojsilović, IBM Fellow of AI Science, tells New Atlas. A simple example might be the seemingly innocuous act of sorting data into buckets, say by age or education, which could have knock-on implications for a decision such as a college admission.
What does the toolkit actually do? "Simply checking for bias is quite straightforward," Varshney explains. "The algorithms are where the real interest is for us so the toolbox right now contains 10 different algorithms for remediating or mitigating bias." Those algorithms broadly fall into three types. One group pre-processes the data, reweighting or transforming it so that different groups of people are treated fairly before a model ever sees it. Another intervenes in the model training itself, building algorithmic safeguards into the learning process. And the third adjusts the model's outputs after the fact, correcting for bias in the decisions the AI produces. "The fact that we have multiple [algorithms] is good for allowing users to choose what's most appropriate for them," he adds.
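One of the pre-processing algorithms the toolkit ships is reweighing, which adjusts example weights so that the protected attribute and the favorable label become statistically independent before training. Continuing the earlier sketch (same data and group definitions), it might be applied like this:

# Sketch: mitigating bias with the Reweighing pre-processing algorithm.
# Reuses `data`, `privileged` and `unprivileged` from the earlier checking sketch.
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
data_reweighted = rw.fit_transform(data)         # same records, new instance weights

before = BinaryLabelDatasetMetric(data, privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
after = BinaryLabelDatasetMetric(data_reweighted, privileged_groups=privileged,
                                 unprivileged_groups=unprivileged)

print("Mean outcome difference before:", before.mean_difference())
print("Mean outcome difference after: ", after.mean_difference())  # pushed toward 0 by the new weights

The package's in-processing and post-processing mitigations (adversarial debiasing and equalized-odds post-processing among them) follow a similar fit-then-transform-or-predict pattern, so switching between the three families is largely a matter of swapping the algorithm object.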
"… Putting it out in the open and as a call for the community to collaborate/contribute is really important for us …"
Why make the toolkit open source? "Fairness and bias are so complex," Mojsilović explains. "Even for humans, it's very difficult to define and understand, and even more difficult for practitioners to make it a part of their solutions. We felt this is one [area] where the industry should come together, and everyone should come together, because as we advance together it's going to be for the common good. Putting it out in the open and as a call for the community to collaborate/contribute is really important for us."
"This is a very active area of research in the machine learning community," Varshney adds. "And I think in order to keep things fresh, to keep the most recent developments available for everyone, I think that's one reason it should be open source. We really do want this to become the hub for practitioners to incorporate this into all the code they develop for all their applications in any industry – finance, human resources, healthcare, education, anything public sector – by making it open source people can take it and really fold it into their workflows."
It's not only the AI industry that's interested in IBM's work. "We've heard many stories from industry practitioners who come to visit our lab who would be really interested in finding out whether there was bias in their historical decisions made by humans," Mojsilović says. "It's really useful from an organizational perspective whenever there are massive amounts of data and decisions are being made on a daily basis in high volume."
"… We really do have the opportunity now to start addressing the biggest problems that the world has …"
Sometimes the algorithms have good news to report. "We were working with this group named Echoing Green, and they actually are a group that offers fellowships and other support to promising social entrepreneurs," Varshney tells New Atlas. "They receive something like 3,000 applications per year for 30 slots that they can offer. We were looking at using machine learning techniques to automate, or at least help the human decision makers to cut down on their load. Actually it turned out that Echoing Green's own processes were quite fair with respect to the variables that we looked at, and that's an example of something where we were looking for it but didn't find it, which is a nice thing."
Both Varshney and Mojsilović are optimistic about the future of AI. "We really do have the opportunity now to start addressing the biggest problems that the world has – so hunger, poverty, health and education," says Varshney. "One of those ways is the Science for Social Good initiative." The initiative, which Mojsilović and Varshney co-direct, sees IBM scientists and engineers working with NGOs to tackle societal problems. Its projects cover diverse fields, from looking to biomimicry for technological inspiration to addressing hate speech online.
"It's really looking at how you use AI and machine learning related technologies to treat problems that go beyond the revenue generating agenda," Mojsilović adds. "How do we address the problems of this world? That's a massively exciting idea."
But she warns that a greater understanding of AI is needed if we're to get the most benefit from the technology. "It's still a very new technology in the sense of how we're putting it to use. It's also very poorly understood, especially by public legislators and decision makers. We need to figure out a way to connect all these dots and work in a very multidisciplinary setting so that the progress is made in the best possible way, and so we don't limit it in a way that means we are going to restrict the usages that have this enormous potential."
You can read more about AI Fairness 360 on the IBM developer blog.
For instance, among US citizens taking the Quantitative Reasoning "math" GRE (which is almost identical to the math SAT), the average Asian man outscores more than 98.5% of Black women. Would any sane person want engineering firms declared ineligible to bid on state contracts for not having more Black women than Asian men designing bridges? Would you rather drive your family over a bridge designed by a company that was "fair", or one that hires by ability?
(Calculated from the GRE publisher ETS's data on page 7 of snapshot_test_taker_data_2015.pdf, which shows up at the top of a search for that file name: 16,469 Black women averaged 142.9 with a standard deviation of 6.3, while 7,773 Asian men averaged 156.7 with a standard deviation of 8. Around 5% of the latter group would be expected to hit the test's maximum score of 170, so their average would have been a bit higher if higher scores were possible.)
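For what it's worth, the commenter's headline figure does follow from those numbers if you assume roughly normal score distributions; a quick check:

# Quick check of the commenter's arithmetic, assuming roughly normal score distributions.
from scipy.stats import norm

z = (156.7 - 142.9) / 6.3        # Asian-male mean, measured in Black-female standard deviations
print(norm.cdf(z))               # ≈ 0.986, i.e. above roughly 98.5% of that distribution

z_ceiling = (170 - 156.7) / 8.0  # distance from the Asian-male mean to the test's 170 ceiling
print(1 - norm.cdf(z_ceiling))   # ≈ 0.05, the "~5% at the maximum score" figure quoted above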
Biases can be unfounded or real. I can only assume AI will come up with its own biases based on the data it collects, and humans will probably disagree with those biases, much as we argue about perceived human-originated biases...
Biases are much the same as statistics and their use... Statistics never lie, but liars use statistics!
Here's an excellent example: http://www.abc.net.au/news/2018-09-05/fact-check-sudanese-gangs-victoria/10187550
That is a government-compliant "Fact Check" on crime statistics, which only once touched on the "truth", and only in fine print that you practically need a maths degree to notice and understand. They correctly said that Antipodeans commit 73.5% of crime, but Sudanese only 1%, and they have pages of graphs, charts and stats to back all that up. What they do not chart is proportions. They lie by deliberate and obvious omission, presenting an unfair-to-locals racist result that fails to make clear that Sudanese residents (who make up 0.1% of the population) are in fact roughly ten times as likely to commit crime as Antipodeans.
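Laying out the arithmetic behind that claim: using the shares quoted in the comment, plus an assumed Australian-born population share of roughly 65% (a figure not given in the comment), the per-capita rates compare as follows.

# Sketch of the per-capita comparison the commenter is making.
# The 0.65 Australian-born population share is an assumption for illustration;
# the other figures are the ones quoted in the comment above.
sudanese_crime_share, sudanese_pop_share = 0.01, 0.001
local_crime_share, local_pop_share = 0.735, 0.65

sudanese_rate = sudanese_crime_share / sudanese_pop_share   # offending relative to population share
local_rate = local_crime_share / local_pop_share
print(sudanese_rate / local_rate)                           # ≈ 9, roughly the order of magnitude claimed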
So how does an AI use "fair" data in assessing a Sudanese parole application, I wonder? Does this mean that IBM is facilitating anti-Sudanese racism? Or is it being "fair" to Antipodeans? You can't have both.