Critics of artificial intelligence (AI) have found their fair share of reasons to raise concerns about this technology. Facial recognition systems that don’t recognize darker-skinned faces. Hiring algorithms that bypass resumes that contain the word “women.” A chatbot taught through interactions with users to be racist and misogynistic. But these examples don’t have to define the future.
AI systems are only as good as the data fed into them, and that data can carry implicit racial, gender or ideological biases. As more examples of AI gone wrong come to light, the risks threaten to overshadow the technology’s potential benefits. And with each new example, the public’s trust in AI erodes further.
A growing number of companies are taking note and action to mitigate the risk and, in the process, finding opportunities to offer solutions.
After becoming one of the founding members of the Partnership on AI, tech giant IBM is creating methodologies that enable its clients to detect and mitigate bias within their AI applications. The company is also sharing its knowledge through the AI Fairness 360 open-source toolkit, which helps app developers examine, report and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. The toolkit includes more than 70 fairness metrics and 10 bias mitigation algorithms relevant to finance, human capital management, healthcare and education. It also contains three tutorials on detecting and mitigating age bias in credit scoring, racial bias in medical management, and gender bias in face images.
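To make the idea of a "fairness metric" concrete, here is a minimal, self-contained sketch of one metric of the kind such toolkits compute: disparate impact, the ratio of favorable-outcome rates between an unprivileged and a privileged group. The data and group labels below are purely illustrative, not drawn from any real model.

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates (1 = favorable) between two groups.

    A common rule of thumb flags ratios below 0.8 as potential bias.
    """
    def rate(group):
        members = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(members) / len(members)
    return rate(unprivileged) / rate(privileged)

# Toy loan-approval data: 1 = approved, 0 = denied (illustrative only)
outcomes = [1, 0, 1, 1, 0, 1, 1, 0, 0, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

print(disparate_impact(outcomes, groups, unprivileged="b", privileged="a"))
```

Here group "a" is approved 60 percent of the time and group "b" only 40 percent, so the ratio falls below the 0.8 threshold and the outcome distribution would merit a closer look.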
Microsoft is building a tool to identify bias in a range of different AI algorithms and to raise awareness among its employees.
“The most important thing companies can do right now is educate their workforce so that they’re aware of the myriad ways in which bias can arise and manifest itself, and create tools to make models easier to understand and bias easier to detect,” Rich Caruana, a senior Microsoft researcher, told the MIT Technology Review last year.
Google has introduced a Google Crash Course: Intro to Fairness module that teaches developers about top fairness considerations when building, evaluating and deploying machine learning models.
Jigsaw, a unit within Google's parent company Alphabet, recently announced a partnership with GLAAD to create public data sets and machine learning research resources to help make online conversations more inclusive of the LGBTQ community.
“Our mission is to help communities have great conversations at scale,” said CJ Adams, a Jigsaw product manager. “We can’t be content to let computers adopt negative biases from the abuse and harassment targeted groups face online.”
It’s not just tech firms that have been proactive on the AI front. Accenture is also getting on board through its new Applied Intelligence practice, which recently launched the “AI Fairness Tool.” The tool examines the influence of sensitive variables (age, gender, race, etc.) on other variables in a model, measuring how strongly the variables correlate with one another to determine whether they are skewing the model and its outcomes. It then helps correct the bias in the algorithm.
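The core idea is that bias can slip into a model through proxy variables even when the sensitive field itself is excluded. The sketch below is not Accenture’s actual code; it simply illustrates the underlying check with a hand-rolled Pearson correlation between a sensitive attribute and a seemingly neutral feature, using hypothetical data.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative data: gender encoded 0/1 and a "neutral" model input
gender        = [0, 0, 0, 1, 1, 1, 0, 1]
years_in_role = [2, 3, 2, 7, 8, 6, 3, 7]  # hypothetical proxy variable

r = pearson_r(gender, years_in_role)
print(f"correlation: {r:.2f}")  # a value near +/-1 flags a likely proxy
```

In this toy example the correlation comes out strongly positive, so a model trained on `years_in_role` could effectively learn gender even if the gender column were dropped, which is exactly the situation a tool like this is meant to surface.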
Accenture is currently seeking partners to test the tool, which it has so far prototyped on a credit-risk model, and is preparing for a soft launch. The tool will be part of a larger program called AI Launchpad, which will help companies create accountability frameworks and train employees in ethics, so they know what to consider and which questions to ask.
“I’m hoping that this is something we can make accessible and available and easy to use for non-tech companies – for some of our Fortune 500 clients who are looking to expand their use of AI, but they’re very aware and very concerned about unintended consequences,” says Rumman Chowdhury, Accenture’s global responsible AI lead.
Michael Li, founder and CEO of The Data Incubator, a data science training and placement firm, told the Harvard Business Review he believes that “the risks of AI can come from any aspect of business, and no single manager has the context to spot everything. Rather, in a world in which AI is permeating everything, companies need to train all their business leaders on AI’s potential and risks, so that every line of business can spot opportunities and flag concerns.”
He calls on business to offer its employees specialized AI training to understand both the possibilities and the risks. “This is not technical — executives don’t need to be hands-on practitioners — but they do need to understand data science and AI enough to manage AI products and services. Business leaders need to understand the potential for AI to transform business for the better, as well as its potential shortcomings — and dangers.”
Image credit: Gerd Altmann from Pixabay
Maggie Kohn is excited to be a contributor to Triple Pundit to illustrate how business can achieve positive change in the world while supporting long-term growth. Maggie worked for more than 20 years at the biopharma giant Merck & Co., Inc., leading corporate responsibility and social business initiatives. She currently writes, speaks and consults on corporate responsibility and social impact when she is not busy fostering kittens for her local animal shelter. Click here to learn more.