Artificial intelligence can never be unbiased, but it can be responsible.
“I’ve never seen an algorithm yet that does not have bias in it,” Dr. Celeste Fralick told an audience at Mobile World Congress (MWC) LA 2019. “But society changes, and our algorithmic bias must change with it.”
Fralick, chief data scientist and principal engineer at McAfee, was part of a panel at MWC, “Can We Build Responsible AI?”, which tasked experts with answering some key questions about the well-known problem of bias in artificial intelligence – namely, can AI ever truly be free of bias, and what steps can be taken to remove or reduce bias in AI?
All the panelists agreed that bias in AI exists. “The key is recognizing that the bias exists and understanding the factors that cause it,” Bret Greenstein, SVP and global head of AI and analytics at Cognizant, said. “There are ways to counterbalance [bias]. It’s not the elimination [of bias], but the recognition of it.”
Rashmi Misra, general manager of AI platforms business development at Microsoft, noted that though biases are very difficult to identify and get rid of, it is at least a problem that can be tackled in AI. “The intention of an algorithm – where you’re allowed to understand its biases – is the advantage. How many times have we seen humans that can’t change bias? At least with AI we can effect change.”
Mark Birkin, program director of urban analytics at The Alan Turing Institute, pointed out, however, that a big part of discovering bias in AI is examining the data used to train algorithms. “It doesn’t matter how unbiased the algorithm is if the data is all over the place,” Birkin said.
“All the data we train AI on today is old,” Greenstein added.
The Data Problem
“AI systems are, by definition, trained from data collected from society, and since our society is not a perfectly equitable one, it’s inevitable that these biases make their way into AI products,” Krishna Gade, CEO of Fiddler Labs, told Design News via email. Gade, who was not a part of the MWC panel, said the problem with AI bias is that, unlike human bias, it is amplified many times over because of the nature of automation. A small bit of bias in data can have a huge ripple effect depending on how far-reaching the algorithm is.
A recent study by UC Berkeley researchers published in Science, for example, examined almost 50,000 records from an academic hospital and found that an algorithm that is commonly used to predict which patients will need extra care recommends white patients over black ones at a significant rate. As the study reads:
“…This racial bias reduces the number of Black [sic] patients identified for extra care by more than half. Bias occurs because the algorithm uses health costs as a proxy for health needs. Less money is spent on Black [sic] patients who have the same level of need, and the algorithm thus falsely concludes that Black [sic] patients are healthier than equally sick White [sic] patients.”
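The proxy mechanism the study describes is easy to demonstrate in a toy simulation. The sketch below is entirely hypothetical (the group names, spending gap, and 20% flagging threshold are invented for illustration, not taken from the study): two groups have identical health needs, but less is historically spent on group B, so a model that ranks patients by cost flags far fewer of them for extra care.

```python
import random

random.seed(0)

# Hypothetical toy data: two groups with IDENTICAL distributions of health
# need, but historically less money is spent on group B per unit of need.
patients = []
for _ in range(1000):
    need_a = random.uniform(0, 10)
    patients.append({"group": "A", "need": need_a, "cost": need_a})
    need_b = random.uniform(0, 10)
    patients.append({"group": "B", "need": need_b, "cost": need_b * 0.6})

# An algorithm that uses cost as a proxy for need flags the top 20% by cost.
patients.sort(key=lambda p: p["cost"], reverse=True)
flagged = patients[: len(patients) // 5]

# An unbiased proxy would flag roughly 50% from each group; the cost proxy
# flags almost no one from group B, despite identical need.
share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
print(f"Group B share of flagged patients: {share_b:.0%}")
```

Note that no one coded "prefer group A" anywhere; the disparity emerges entirely from the choice of cost as the training target.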
In her work at McAfee, Fralick said, she has had first-hand experience with just how dangerous bad data can be for AI. Beyond simply skewing results and decisions made by algorithms, data can be used maliciously to outright trick AI.
Research has already been done regarding what are called “adversarial examples” – tiny, deliberate alterations to input data designed to make trained algorithms misclassify it. Modifying even a few pixels of an image can cause an algorithm to misclassify it, for example. Researchers at MIT found that altering only a tiny fraction of the pixels in an image of a pig fooled an algorithm into identifying the pig as an airliner.
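The core trick behind such attacks can be sketched on a toy model. This is a minimal illustration assuming a simple linear classifier (the MIT work targeted a deep network, but the principle is the same): nudge every input feature a tiny amount in whichever direction pushes the score across the decision boundary.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy linear classifier: predict class 1 when w @ x > 0.
w = rng.normal(size=1000)   # fixed model weights
x = rng.normal(size=1000)   # an input the model classifies
orig_class = int(w @ x > 0)

# FGSM-style perturbation: move every feature a tiny step (at most 0.1)
# in the direction that pushes the score toward the opposite class.
epsilon = 0.1
direction = -1.0 if orig_class == 1 else 1.0
x_adv = x + direction * epsilon * np.sign(w)

# The per-feature change is imperceptibly small, but the thousand tiny
# nudges add up and the classification flips.
adv_class = int(w @ x_adv > 0)
print(f"original class: {orig_class}, adversarial class: {adv_class}")
```

The attack works because many tiny, individually negligible changes all point the same way relative to the model's weights – which is also why it is so hard to spot by inspecting the input.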
Fralick and her team at McAfee refer to this phenomenon as “model hacking.” “Model hacking is just a more digestible term,” she said. Fralick said McAfee researchers examined a neural network with 700 built-in features that was 99 percent accurate at identifying malware and found that an attack changing only 11 of those features caused the algorithm to classify all malware as benign. “That should scare us,” Fralick said. The silver lining, she noted, was that this was a hack done solely by researchers and wasn’t a reaction to a real-world attack. “We need to become more proactive,” she added.
Greenstein extended the example further into the domain of self-driving cars. “People have found ways to put stickers on roads or signs to throw off the algorithms that drive these cars. It works because the algorithms weren’t trained to assume people would do that – just what a good stop sign looked like.”
Toward More Transparent AI
But if AI can never be unbiased, how can we ever trust it?
“Transparency is paramount for trust,” Fiddler Labs’ Gade said. “If AI systems are not transparent, it is difficult for humans to build trust with them. How do we know why an AI system made a certain decision? And if we don’t know the ‘why’ behind it, how can we course-correct?”
[Photo caption: Aarti Borkar, VP of IBM Security, speaks to an audience at Mobile World Congress LA 2019.]
Gade, as well as the MWC panelists, advocated for a shift away from today’s approach, where AI systems are mostly black boxes. Fiddler Labs, for example, works to create “explainable AI” that provides visibility and transparency into how algorithms make the decisions they do.
“Explainability gives teams visibility into the inner workings of an AI system, which in turn allows them to fix things that are incorrect,” Gade said. “As AI penetrates our lives more deeply, there is a growing ask to make its decisions understandable by humans, and laws are being put in place to ensure this is a need-to-have rather than a nice-to-have. The end goal is to increase trust in AI systems.”
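One common, model-agnostic way to get this kind of visibility is permutation importance. The sketch below is a generic illustration, not Fiddler Labs' actual method, and the loan-approval model and its inputs are hypothetical: shuffle one input column at a time and see how much the model's accuracy drops, revealing which inputs actually drive its decisions.

```python
import random

random.seed(1)

# Hypothetical loan model: approves when income is high enough;
# zip code is taken as input but ignored.
def model(income, zip_code):
    return income > 50

# Toy dataset of (income, zip_code, true_label) rows, where the true label
# happens to match what the model predicts.
data = [(inc, z, inc > 50)
        for inc, z in [(random.randint(0, 100), random.choice(["A", "B"]))
                       for _ in range(500)]]

base = sum(model(inc, z) == label for inc, z, label in data) / len(data)

# Permutation importance: shuffle one input column, re-measure accuracy.
def perm_accuracy(col):
    vals = [row[col] for row in data]
    random.shuffle(vals)
    correct = 0
    for v, (inc, z, label) in zip(vals, data):
        pred = model(v, z) if col == 0 else model(inc, v)
        correct += pred == label
    return correct / len(data)

drops = {name: base - perm_accuracy(col)
         for col, name in [(0, "income"), (1, "zip_code")]}
print(drops)   # income matters; zip_code does not
```

A large drop for income and none for zip code tells a reviewer what the black box is actually using, which is exactly the kind of evidence needed to check a system for the biases discussed above.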
In a separate talk as part of the MWC panel, Aarti Borkar, VP of IBM Security, also emphasized the need for explainability in AI. “How do you eliminate bias in your technology? You have to ask yourself four things: Is it fair? Is it easy for a human being to understand – can you explain why the system suggested what it did? Is it secure – can someone inject something into it so that it does something bad? And is it accountable – do you have a chain of lineage explaining who made it and how?”
Borkar further explained to the MWC audience, “As developers, our intent needs to be very clear, and it needs to be explained to the people who are going to use our technology.
“To actually take that intent and idea and make it reality you need a few things: You need to have a 360-degree view of the data needed to train that system; you need the people to help you build it (not just computer scientists); and you need trust.”
For Borkar, the solution to combating AI bias is creating an ecosystem of trust and accountability around the development and deployment of AI. “Why would you have computer scientists by themselves solving problems like health checks and bank loans? You need subject matter experts. If you have a healthcare problem, make sure you have clinicians at the table,” she said.
For the panelists, even if we accept bias as an issue to be mitigated rather than eliminated, it is important that the engineers developing AI be able to answer the questions posed by Borkar. Equally important is that the users and buyers who deploy and implement these technologies ask those same questions of engineers. Simply chalking up bias as inevitable, an unconscious thing that will always occur at any scale, is unacceptable when discussing AI that can make wide-reaching decisions.
“Unconscious bias is the best excuse on the planet,” Borkar said. “That’s fine when you talk about one person, but not when a system affects hundreds, thousands, or even millions of people…We don’t have the privilege of being unconsciously biased in a world of AI.”
Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.