
The Next CSR Challenge: Engaging in a Dialogue About Artificial Intelligence

By Jim Pierobon

Products using artificial intelligence (AI) are creeping into our lives: in the home, online, at work, in the marketplace, in the doctor’s office. What if AI gets carried away, if it hasn't already?

Plenty of movies and books contemplate this. While those scenarios may be easy to dismiss, the consequences of what could happen are not. Companies that use AI are putting their brands at risk unless society adequately understands how it benefits from the technology.

That’s my principal take-away from reading a relatively new book, "Connect: How Companies Succeed by Engaging Radically with Society." I also spoke with two of its authors, former BP Chairman Lord John Browne and technology entrepreneur Tommy Stadlen. McKinsey & Co. partner Robin Nuttall also helped write the book.

At the risk of over-simplification, the opportunities and challenges for businesses and governments deploying AI can be boiled down to one question: Do consumers trust how it’s being deployed?

The answer to that question, Browne, Nuttall and Stadlen opine, will either propel AI forward or turn back the clock.

In "Connect," you’ll be reminded of German philosopher Georg Hegel, who said that what "history teaches us is that people and governments have never learned anything from history." Are the developers of AI listening?

So, as it defines the future, will technology companies (and which company isn’t one these days) learn from the past? Will the new Google Home, Tesla’s driverless cars, Apple’s latest iPhone, GE’s new industrial machines, IBM’s Watson and Amazon’s Echo – to name just a few – continue building trust? Or could one misstep, or a series of them, disrupt the rush to deploy and make money using AI?

While that question may never be answered, it will be “the hot button issue,” at least for the next five years, Stadlen asserts. CBS’ 60 Minutes explored the same question last Sunday.

“There is a real fear of the leverage to do harm that technology represents,” Stadlen said. “People are wary of what big business will do with technology that is incredibly complex and unknown to consumers and regulators.”

With all the advances digital applications are making, Stadlen said, the technology industry needs to do a much better job of explaining AI’s benefits to society. And to do that, corporate leaders must engage their stakeholders in radical ways that only a handful of companies have even contemplated, much less carried off successfully.

Perhaps the biggest risk posed by AI is the looming prospect of robots taking over a broad new array of manual jobs. McKinsey researchers say they found that 40 percent of today’s labor activities can be automated with currently available technologies. If that plays out, there are significant implications for businesses, elected officials and everyone who tries to influence them, Stadlen said.

Until now, society has seen the creation of enough new jobs to more than replace the jobs being eliminated by industries in decline. But the halo that, Stadlen said, hovers over many technology breakthroughs could vanish quickly unless the Silicon Valley culture enables an accessible platform, or forum, to guide civil society through the AI pitfalls that loom.

The best way to approach such a far-reaching endeavor, Stadlen said, is to create an independent advisory panel with real powers. “I would put on that panel representatives of government, unions, civil society, NGOs – a real cross-section of key stakeholders. I would arm them with all the information they need and make their work authoritative.”

Information is critical, Stadlen said, because it’s the imbalance of information that complicates, if not blocks, useful dialogues that can bridge gaps between experts and the public. The authors call for “radical openness” in the sharing of information.

“Business takes advantage of that to society’s detriment,” Stadlen says. In response, governments “slam down” burdensome regulations that turn out to be bad for business and undercut what it can do for society and the money it can make. “We need to avoid that with artificial intelligence.”

A good example in their book of an industry that is constantly under the microscope but also helps power the world is oil and gas. Browne recounts a useful instance of radical openness from 2002, when he led BP to produce natural gas from the “supergiant” Tangguh field in Bintuni Bay, off the Bird’s Head Peninsula of Papua, Indonesia.

Perhaps some companies deploying AI see the value of formally engaging with civil society. On Sept. 28, executives at Microsoft and the DeepMind unit at Google, along with founding support from Amazon, Facebook and IBM, launched the Partnership on AI nonprofit.

The partnership’s stated goals are to “study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.”

The partnership was met with some skepticism by the technology trade press. Microsoft’s Eric Horvitz, who co-chairs the partnership with DeepMind co-founder Mustafa Suleyman, told a press briefing that recent hype concerning AI’s threats, and a resulting “echo chamber of anxiety,” was among the dynamics that pushed the companies to collaborate.

Horvitz and Suleyman invite any tech company, research group or nonprofit to join the partnership, The Verge reported. They say “leadership will be shared equally” among the founding corporations and non-corporate participants.

The way Browne, Nuttall and Stadlen see it, the partnership should specify quickly how inclusive it will be and the genuine authority and real powers it will have.

“This is an excellent step in the right direction,” Stadlen said. “Technology companies have to earn the moral license to operate when it comes to complex and ethically charged domains such as AI. It's on industry to persuade society that AI will benefit ordinary people, and is safe in the hands of companies.”

“To build credibility and trust,” he continued, “you have to give independent voices real authority and genuine independence.” He called on the Partnership to “publish what the academic and civil society experts say and follow their recommendations.”

Clean energy advocate, strategic marketer and storyteller with 15+ years of supervisory experience and a proven track record of achieving strategic and program objectives for energy, utility, technology and other clients in their marketplaces and policy arenas while engaging their priority stakeholders and target audiences. I'm always on the lookout for innovative policies, people, technologies and businesses that are demonstrating how sustainability can be both healthy and profitable. Catch my blog posts at TheEnergyFix.com. I've also written for The New York Times, Houston Chronicle, The Huffington Post and TheEnergyCollective.com.
