
Companies Using AI in Hiring Will Soon Face New Regulations

By Leon Kaye
AI

Despite studies concluding that artificial intelligence (AI) often contributes to bias against women and people of color in the workplace, many companies continue to use software that incorporates AI as part of their hiring practices. On one hand, it’s understandable: Going through resumes submitted online is a slog. The same goes for scheduling a long week of video interviews. Nevertheless, concerns over AI led a dozen leading brands to announce last week that they were committed to stopping artificial intelligence and algorithmic bias from having any impact on their hiring practices.

But for local and state governments, such voluntary action may be too little, too late. New York City’s government recently passed a law that will require any company selling human resources software using AI to complete third-party audits in order to verify that such technology doesn’t show bias against any gender, ethnicity or race. In addition, companies using such software will be required to inform job applicants that they use AI as part of any hiring process, as well as disclose whether any personal information helped their HR software make any decisions.


It’s easy to dismiss New York City as a political outlier, but states such as Illinois and Maryland have also enacted similar legislation. Both states’ laws attempt to tackle the problem of using AI to analyze video interviews of job applicants. While the technologies are relatively new, the underlying problem isn’t: At a minimum, legislation considered across the U.S. seeks both transparency and the consent of prospective employees before such technology is used to assess their fit within a company.

And in a move that could have implications across the country, the U.S. Equal Employment Opportunity Commission (EEOC) launched a working group in October to ensure that the use of such hiring tools complies with the federal hiring and civil rights laws the agency is tasked with enforcing.

“This is a big deal because it could provide people some transparency when it comes to learning why they weren’t hired,” wrote Fortune’s Ellen McGirt in her most recent RaceAhead newsletter. “For people of color, knowing whether AI was used to determine if they were a good fit for the job could be revelatory.”

Image credit via Adobe Stock


Leon Kaye has written for 3p since 2010 and became executive editor in 2018. His previous work includes writing for the Guardian as well as other online and print publications. In addition, he’s worked in sales executive roles within technology and financial research companies, as well as for a public relations firm, where he consulted with one of the globe’s leading sustainability initiatives. Currently living in Central California, he’s traveled to 70-plus countries and has lived and worked in South Korea, the United Arab Emirates and Uruguay.

Leon’s an alum of Fresno State; the University of Maryland, Baltimore County; and the University of Southern California’s Marshall School of Business. He enjoys traveling abroad as well as exploring California’s Central Coast and the Sierra Nevada.

Read more stories by Leon Kaye