With companies announcing new artificial intelligence (AI) products nearly every day, Axon—the largest manufacturer of police body cameras—is saying, not so fast.
Last week, the company announced it will not be commercializing face-matching products on its body cameras, at least not at this time. Axon CEO Rick Smith said the decision was based on the recommendation of Axon’s AI and Policing Technology Ethics Board, which concluded that face recognition technology is not yet reliable enough to justify its use on body-worn cameras. The board expressed particular concern regarding evidence of unequal and unreliable performance across races, ethnicities, genders and other identity groups.
Facial recognition is a category of biometric software that maps an individual's facial features mathematically and stores the data as a faceprint. The software uses deep-learning algorithms to compare a live capture or digital image to the stored faceprint in order to verify an individual's identity.
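Face matching of this kind is commonly implemented as an embedding comparison: a deep network converts each face image into a numeric vector, and two vectors are judged a match if they are sufficiently similar. As a rough illustration only (this is not Axon's implementation; the toy vectors, threshold, and function names below are invented), the final comparison step can look like this:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def matches_faceprint(live_embedding, stored_faceprint, threshold=0.8):
    """Verify identity by comparing a live capture's embedding against a
    stored faceprint, accepting only above a similarity threshold."""
    return cosine_similarity(live_embedding, stored_faceprint) >= threshold

# Toy 3-dimensional embeddings; real systems use deep networks that
# produce vectors with hundreds of dimensions.
stored = [0.9, 0.1, 0.4]
same_person = [0.85, 0.15, 0.42]
different_person = [0.1, 0.9, -0.3]
```

The choice of threshold is exactly where the accuracy and bias concerns raised by Axon's Ethics Board bite: a threshold that performs well for one demographic group may produce far more false matches for another.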
AI software in law enforcement has raised particular concerns about privacy and the risk of inherent bias.
Axon’s Ethics Board, which operates independently from the company, cautioned that face recognition technology should not be deployed until the technology performs with far greater accuracy and performs equally well across races, ethnicities, genders, and other identity groups. It said in its report to Axon leadership, “Whether face recognition on body-worn cameras can ever be ethically justifiable is an issue the Board has begun to discuss, and will take up again if and when these prerequisites are met.”
The decision by Axon to not commercialize the technology shows that business can act responsibly, using independent input to weigh the risks of AI against its potential revenue-earning benefits.
"We made the decision that just because you could deploy a certain technology does not make it right," Mike Wagers, Axon vice president of emerging markets, told NPR last week.
The company believes face-matching technology deserves further research to better understand and solve for the key issues identified in the report, including evaluating ways to de-bias algorithms. It says it will continue to evaluate the state of face recognition technologies and will keep the board informed. The company has also committed to publish a formal ethical framework later this year that will outline how ethical considerations are integrated into its product development efforts.
While Axon has paused on face-matching software, AI already plays a central role in many of Axon’s products and services, used by more than 4 million law enforcement officers around the world. Moji Solgi, Axon director of AI and machine learning, says the technology provides an effective set of tools for public safety officers to do more and have better access to crucial information and real-time safety measures.
One example where Solgi sees AI making a difference is in the Los Angeles Police Department, which uses Axon’s artificial intelligence software to categorize video footage captured by police body cameras. The software is meant to reduce the amount of time it takes officers to review, analyze and categorize body camera footage. The software does not have facial recognition technology, and it will not make decisions on police interactions, crimes or other subjective issues.
Most body camera videos are an hour long, but on average only about 15 minutes show an incident that officers need to review. Axon's software labels the chunks of footage that officers need to review and flags the rest as unnecessary to view. This will redirect officers' time and help them avoid wasting hours watching inconsequential parts of a video, an Axon spokesperson told Inc.
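The triage step described above amounts to splitting footage into review-worthy and skippable segments. As a minimal sketch (not Axon's actual product; the segment format, relevance scores, and function names are assumptions for illustration), it might look like this:

```python
def flag_segments_for_review(segments, threshold=0.5):
    """Split footage segments into review / skip lists based on a
    relevance score (e.g. from an activity classifier).
    Each segment is a (start_minute, end_minute, relevance_score) tuple."""
    review, skip = [], []
    for seg in segments:
        (review if seg[2] >= threshold else skip).append(seg)
    return review, skip

def minutes(segments):
    """Total duration of a list of segments, in minutes."""
    return sum(end - start for start, end, _ in segments)

# One hour of footage in which only two segments look incident-related.
footage = [
    (0, 10, 0.1),   # routine patrol
    (10, 20, 0.9),  # flagged incident
    (20, 55, 0.2),  # routine patrol
    (55, 60, 0.7),  # flagged incident
]
review, skip = flag_segments_for_review(footage)
```

In this toy example the officer reviews 15 minutes of the hour and skips the remaining 45, mirroring the time savings the article describes.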
Daniel Gomez, a sergeant in the LAPD, agreed. "In the past year alone, we have accumulated more than 33 years' worth of video data to analyze," Gomez told Inc. "Reducing the time it takes for our staff to review footage is a priority for us so we can invest more time and energy in the field."
The long-term goal, Axon says, is for the software to auto-populate police reports with simple, objective facts. One day, it could start a report by filling in details such as the make and color of a car involved in a robbery, freeing the officer to focus on more complex aspects of the crime.
Solgi says the company will continue to take ethical considerations seriously as it develops the software:
“We do our AI development while keeping the ethical aspects front and center to every step of the process—from data collection to model training, inference, evaluation, and interpretation of the results. We ensure that there is always a human-in-the-loop to make the final judgments whenever there might be a risk of impinging upon privacy or civil rights.”
As AI continues to take the world by storm, it is imperative for business to lead by example, examining the inherent risks and asking if the technology has evolved far enough to safeguard against potential pitfalls. Companies that fail to do this will risk eroding public and customer trust—which could be the biggest risk of all.
Axon CEO Smith agrees. “Outside ethical advisory boards are a new concept among technology companies, and we are proud to embrace it and design an ethical roadmap that we hope other companies can emulate.”
Image credit: Axon/Facebook
Maggie Kohn is excited to be a contributor to Triple Pundit to illustrate how business can achieve positive change in the world while supporting long-term growth. Maggie worked for more than 20 years at the biopharma giant Merck & Co., Inc., leading corporate responsibility and social business initiatives. She currently writes, speaks and consults on corporate responsibility and social impact when she is not busy fostering kittens for her local animal shelter.