New Global Hub Helps Companies Avoid the Ethical Pitfalls of AI

By Amy Brown

Business is in fast pursuit of artificial intelligence (AI) as the next frontier in creating new products and services as well as boosting profits. And this shift reaches far beyond technology firms: companies in sectors from banking to retail to energy, security and healthcare recognize the allure of AI, with new applications emerging every day.

But are companies thinking about the ethical pitfalls around privacy, unintended bias and misuse of data that could stop their innovation in its tracks?

Recent headlines would indicate otherwise. A voice recognition platform was found to have gender bias. A crime prediction algorithm targeted black neighborhoods. And an online ad platform was more likely to show men highly paid executive jobs.

Ethical concerns around bias, fairness, safety, privacy, transparency and accountability associated with AI are growing. But with AI forecast by Gartner to generate $1.2 trillion in business value in 2018, the industry isn’t about to slow down any time soon. In the meantime, a number of concerned voices from within industry, academia and civil society are calling for a more thoughtful approach that balances innovation and responsibility.

Nordic hub offers guidance


To that end, the AI Sustainability Center (AISC) just launched in Stockholm to serve as a global multi-disciplinary hub to promote what its founders call “responsible and purpose-driven technology.”

Artificial intelligence is capable of a speed and capacity of processing far beyond that of humans, but it cannot be trusted to be fair or neutral. That has led to what AISC describes as unintended effects, such as misuse of data leading to privacy intrusion, and data and algorithm biases leading to discrimination.

“The broad use of personal data and AI systems is posing ethical risks which are difficult to predict and understand,” says AISC co-founder Elaine Weidman-Grunewald, former senior vice president and chief sustainability and public affairs officer for Ericsson. “Even with the General Data Protection Regulation [GDPR, a sweeping privacy policy in the European Union] as a starting point, the regulatory frameworks can’t keep up. We need to develop sustainable AI frameworks and strategies that can help companies keep humanity and fairness at the core of AI applications.”

Companies, she said, need to look not only at how digitalization will improve the bottom line, but also examine the sustainability gains and risks inherent in the technology. “Our mission is to help companies be fair and inclusive and act proactively as these AI systems mature,” Weidman-Grunewald told TriplePundit.

The center aims to develop operational frameworks that identify and address pitfalls and the broader ethical implications of AI, and to conduct multidisciplinary research. AISC says it wants to guide companies, tech startups, regulators and policymakers to make “human-centric decisions in the AI area.”

AISC brings together companies, academic institutions, public agencies and civil society. Partners include technology investment firm Atomico, media company Bonnier, mobile network operator Telia Company, Microsoft and the KTH Royal Institute of Technology in Sweden.

Costly ethical pitfalls


The center focuses on four specific areas of ethical pitfalls in AI: the misuse of this technology and data, leading to privacy intrusions; immature AI (with facial recognition software’s racial bias a prime example); the bias of the creator (where a programmer’s values or biases are intentionally or unintentionally embedded in an algorithm, i.e. “the white guy problem”); and data and machine bias, in which the data or machine does not reflect reality, or a preferred reality.

“We’re at a crossroads where we need to collectively choose a purpose-driven and responsible approach to AI,” AISC co-founder Anna Felländer, a digital economist and former chief economist for Swedbank, told TriplePundit.

"Short-term profits are seductive, yet the unintended pitfalls are costly—both from a financial and societal perspective," she said. "We are convinced that responsible and purpose-driven AI can be combined with profitable business models."

Developing best practices


Many companies have made missteps in their use of AI that underscore the need to address the technology’s ethical side. Algorithmic biases around gender, for example, are showing up on Google’s search platform. Amazon scrapped a secret AI recruiting tool after it showed a bias against women. Amazon was also found in a 2016 analysis by Bloomberg to have used an algorithm that made same-day delivery less likely in ZIP codes that are predominantly black.

To address these issues, a number of companies are introducing tools to detect bias or developing policies to avoid unintended bias in AI. Last fall IBM launched the Fairness 360 Kit to scan for signs of bias, recommend adjustments, and analyze how and why algorithms make decisions in real time.
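
As a rough illustration of what a bias scan like this checks, here is a minimal sketch using aif360, the open-source Python package IBM released with the toolkit. The hiring data, column names and group labels are hypothetical; a real audit would run on production data and far more records.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical hiring outcomes: 'gender' is the protected attribute (1 = privileged group)
df = pd.DataFrame({
    "gender":     [1, 1, 1, 1, 0, 0, 0, 0],
    "experience": [5, 3, 8, 1, 6, 2, 7, 4],
    "hired":      [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact is the ratio of favorable-outcome rates between groups;
# values well below 1.0 (0.8 is a common rule of thumb) flag potential bias.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

In this toy data the privileged group is hired at a 75 percent rate against 25 percent for the unprivileged group, so the disparate impact ratio of roughly 0.33 would flag the dataset for review.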

Microsoft published an extensive report, “The Future Computed,” last year and has stated the need for principles, policies and laws for the responsible use of AI.

Vahé Torossian, corporate vice president for Microsoft and president of Microsoft Western Europe, says that while the benefits of AI are “significant,” companies need to show “accountability to ensure it is used responsibly and ethically, to the benefit of all. Cooperation across industries has never been more important as this technology becomes increasingly pervasive.”

A group of companies including Google, Microsoft, Amazon, Facebook, Apple and IBM formed the Partnership on AI in 2016 to develop industry best practices to guide AI development.

But are these efforts enough? AISC’s Weidman-Grunewald thinks that what is needed is a robust risk-assessment framework to guide decision-making around AI. That will be one of the first tasks of the center, she said.

“There’s nothing really out there to help companies navigate the sustainability elements of future technology. And that’s especially concerning when you think about how fast the technology is evolving,” she told TriplePundit.

“The ‘move fast and break things’ approach to product development has resulted in too many examples of AI causing harm,” Kristofer Ågren, head of data insights for Division X for Telia Company, told TriplePundit. “We think it is imperative to work proactively with identification and mitigation of risks as we increase the use of AI in our data-driven products, such as Crowd Insights for smart cities.”

Crowd Insights uses anonymized data to help cities better understand their citizens as they seek to improve public transport and urban planning, among other things, according to Telia Company. While Telia says it puts privacy first by anonymizing and aggregating the data based on groups, not individuals, a number of studies have pointed out the privacy risks of compiling mobility data.
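
Telia has not published the internals of Crowd Insights, but the aggregate-and-suppress pattern it describes is straightforward to sketch. In the hypothetical Python example below, individual trip records are rolled up into origin-destination counts and any group smaller than a minimum size is withheld, a k-anonymity-style safeguard against re-identifying individuals; the data and the threshold are illustrative only.

```python
import pandas as pd

# Hypothetical trip records derived from mobile network events
trips = pd.DataFrame({
    "origin":      ["A", "A", "A", "B", "B", "C"],
    "destination": ["B", "B", "B", "C", "C", "A"],
    "hour":        [8, 8, 8, 9, 9, 17],
})

K_MIN = 3  # minimum group size before a flow is reported

# Aggregate individual trips into origin-destination-hour counts ...
flows = (
    trips.groupby(["origin", "destination", "hour"])
         .size()
         .reset_index(name="count")
)

# ... and suppress small groups, since a count of one or two can point
# back to specific people even without names attached.
safe_flows = flows[flows["count"] >= K_MIN]
print(safe_flows)  # only the A->B morning flow (count 3) survives
```

Suppression alone is not a complete defense, which is why the studies mentioned above still flag aggregated mobility data as a privacy risk: repeated queries over time, or joins with outside data, can narrow a group down to an individual.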

“In the end,” Ågren says, “customers and consumers will favor companies that act in their long-term best interests. Working with experts and other partners in AISC will support the development of our skills, processes and tools and allow us to continuously apply the latest knowledge.”

Image credit: GLAS-8/Flickr

Based in Florida, Amy has covered sustainability for over 25 years, including for TriplePundit, Reuters Sustainable Business and Ethical Corporation Magazine. She also writes sustainability reports and thought leadership for companies. She is the ghostwriter for Sustainability Leadership: A Swedish Approach to Transforming Your Company, Industry and the World. Connect with Amy on LinkedIn and her Substack newsletter focused on gray divorce, caregiving and other cultural topics.
