

The 5 Most Important Ethical Issues Facing Tech

By Matt Martin

Tech workers are building the future. With that power comes a responsibility to build a future that is freer, more just, and more prosperous than the present. Many tech workers are taking that responsibility seriously.

For example, since 2018, employees at Google, Facebook, and Amazon have publicly protested their companies' behavior on ethical grounds.

It’s essential that we understand what’s at stake when it comes to who we work for and what we build. Below are five areas within technology that represent forks in the road. Each of them holds tremendous possibility. Some are helping to usher in a better future. But all of them have the potential to hasten dystopia. Here's a brief summary of each of these areas and why they matter.

Mass surveillance

In a nutshell: Private companies, including social media sites and cellular service providers, are collecting vast troves of detailed, real-time location and communication metadata and selling or sharing it with law enforcement, immigration enforcement, and the intelligence community without informing users.
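To see why "just metadata" is so revealing, consider the sketch below: a minimal illustration built on entirely synthetic data (no real carrier logs or APIs are involved), showing that a handful of timestamped location pings is enough to infer where a person sleeps and works.

```python
# A minimal sketch (synthetic data, no real service or API) of why location
# metadata alone is revealing: a few days of timestamped pings are enough
# to guess where someone lives and works.
from collections import Counter
from datetime import datetime

# (timestamp, rounded lat/lon) pings, as a carrier or app might log them
pings = [
    ("2020-06-01 02:10", (40.71, -74.00)),  # overnight
    ("2020-06-01 10:30", (40.75, -73.99)),  # business hours
    ("2020-06-01 14:05", (40.75, -73.99)),
    ("2020-06-01 23:40", (40.71, -74.00)),
    ("2020-06-02 03:15", (40.71, -74.00)),
    ("2020-06-02 11:20", (40.75, -73.99)),
]

night, workday = Counter(), Counter()
for stamp, cell in pings:
    hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
    if hour <= 5 or hour >= 22:      # overnight pings cluster at home
        night[cell] += 1
    elif 9 <= hour <= 17:            # business-hours pings cluster at work
        workday[cell] += 1

print("likely home:", night.most_common(1)[0][0])
print("likely work:", workday.most_common(1)[0][0])
```

With weeks of data at real-world precision, the same trivial counting exposes places of worship, clinic visits, and protest attendance, which is exactly what makes the stakes below so high.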

What may be at stake: Surveillance by immigration enforcement is literally a matter of life and death. Law enforcement's use of surveillance technology to identify and track protestors and journalists threatens First Amendment rights. And tools like Amazon Ring raise the risk that police will escalate their responses to protestors to the point of violence.

Where to learn more: The Intercept and 20 Minutes into the Future are good starting points for surveillance reporting. Be sure to follow these five leaders on tethics (tech ethics); one listee, Eva Galperin, maintains an excellent Twitter feed with constant updates on surveillance. And be sure to check out this post on the pros and cons of employee surveillance.

Be aware of deepfakes

In a nutshell: In April, State Farm debuted a widely discussed TV commercial that appeared to show an ESPN analyst in 1998 making shockingly accurate predictions about the year 2020. It was a deepfake, part of a disturbing trend in media worldwide.

Deepfakes are media representations of people saying and doing things they didn't actually say or do. To make a deepfake, someone takes a photo, audio clip, or video of a person and swaps out that person's likeness for another's.
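For a sense of how the face swap works under the hood, here is a minimal sketch of the classic autoencoder approach that early deepfake tools popularized: one shared encoder, one decoder per identity. The tiny linear layers and random tensors below stand in for the convolutional networks and aligned face crops that real tools use; this is an illustration of the technique, not production code.

```python
# A minimal sketch (PyTorch, toy linear layers in place of real conv nets)
# of the classic autoencoder face swap: a shared encoder with one decoder
# per identity. After training each decoder to reconstruct its own person's
# faces, feeding person A's face through person B's decoder renders A's
# pose and expression with B's likeness.
import torch
import torch.nn as nn

D = 64 * 64 * 3  # flattened 64x64 RGB face crop
encoder   = nn.Sequential(nn.Linear(D, 256), nn.ReLU())     # shared
decoder_a = nn.Sequential(nn.Linear(256, D), nn.Sigmoid())  # person A only
decoder_b = nn.Sequential(nn.Linear(256, D), nn.Sigmoid())  # person B only

opt = torch.optim.Adam(
    [*encoder.parameters(), *decoder_a.parameters(), *decoder_b.parameters()]
)
loss_fn = nn.MSELoss()

faces_a = torch.rand(8, D)  # stand-ins for aligned face crops of A and B
faces_b = torch.rand(8, D)

for _ in range(100):  # real training runs for hours on real footage
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode A's face, decode it with B's decoder.
fake = decoder_b(encoder(faces_a[:1]))
```

The key design choice is the shared encoder: because it must represent both people, it learns pose, lighting, and expression, while each decoder learns a single person's likeness. That division of labor is what makes the swap convincing.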

What may be at stake: Detecting deepfakes is one of the most important challenges ahead of us. Examples of deepfakes include a video in which Belgium's Prime Minister Sophie Wilmès appears to link COVID-19 to climate change. In one particularly frightening example, rumors that a video of the president of a small African country was a deepfake helped instigate a failed coup. On the other hand, brands are using deepfakes for marketing and advertising to positive effect. Other positive uses include creating “voice skins” for gamers who want realistic-sounding voices that aren’t their own.

Where to learn more: This synopsis by MIT and this CSO intro both do a strong job covering how deepfakes are made and the risks they pose. The Brookings Institution offers a good summary of the potential political and social dangers of deepfakes. Further, this guide, along with additional work in Forbes, is a good primer on how advanced deepfake technology is, and on its potential to become even more sophisticated. Finally, the videos embedded in this CNN analysis can help those interested in this challenge get up to speed.

Stay vigilant on disinformation

In a nutshell: Unlike misinformation, which is false or misleading content spread without the intent to deceive, disinformation is propaganda deliberately designed to mislead or misdirect a rival. For example, a 2019 Senate Select Committee on Intelligence (SSCI) report confirmed that Russian-backed online disinformation campaigns exploited systemic racism to support Donald Trump’s candidacy in the 2016 election.

What may be at stake: When disinformation from Chinese and Russian-backed groups is distributed online, it can have real-world consequences. Between 2015 and 2017, Russian operatives posing as Americans successfully organized in-person rallies and demonstrations using Facebook. In one instance, Muslim civil rights activists counter-protested anti-Muslim Texas secessionists in Houston who waved Confederate flags and held “White Lives Matter” banners; Russian disinformation operatives had organized both rallies. Experts predict more Russian-backed disinformation in the run-up to the 2020 elections.

Where to learn more: Dan Harvey’s 20 Minutes into the Future is among the leading newsletters on this topic, and his most recent edition is a quick read on recent developments in Russian disinformation. In it, he recommends this analysis of Internet Research Agency (IRA) campaigns put together by Oxford University. The Axios Codebook newsletter is also insightful, and its June edition on Russian disinformation is an especially compelling resource. For a thorough-but-readable long read, I recommend Renée DiResta’s “The Digital Maginot Line.” For a more academic analysis, check out Stanford University’s Internet Observatory.

Be wary of addictive user experience

In a nutshell: Product managers, designers, tech marketers, and start-up founders are all trying to build tools that users can’t put down. The benefits of addictive technology are obvious for the builders. But what is the long-term impact on users?
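The mechanism most often cited behind products users can't put down is the intermittent variable reward, the same payoff schedule that makes slot machines compelling. The simulation below is a minimal sketch with made-up numbers, not any company's actual code: because each refresh pays off unpredictably, the user keeps pulling far more often than the rewards alone would justify.

```python
# A minimal sketch (synthetic simulation, not any product's real code) of
# the intermittent variable reward pattern behind habit-forming design:
# each "pull to refresh" pays off unpredictably, which conditions users
# to keep checking.
import random

random.seed(0)

def refresh_feed(hit_rate=0.3):
    """Each refresh delivers new likes or posts only some of the time."""
    return random.random() < hit_rate  # unpredictable payoff

checks, rewards = 0, 0
while rewards < 10:  # the user keeps pulling until they feel "caught up"
    checks += 1
    rewards += refresh_feed()

print(f"{checks} refreshes to collect 10 rewards")
```

The unpredictability is the point: psychologists have long observed that variable-ratio schedules produce more persistent behavior than fixed ones, which is why "maybe this time" keeps a thumb on the screen.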

What may be at stake: Habit-forming tech products aren’t bad in and of themselves. But not all habits turn out to be healthy. Multiple studies have linked social media use with anxiety and depression, although the causal relationship isn’t clear. After the fintech company Robinhood made it free, easy, and fast to trade individual stocks, some users developed an unhealthy relationship with trading. One 20-year-old user committed suicide after seeing his $730,000 negative balance.

Arguably, no app is more addictive than TikTok. TikTok’s owner, ByteDance, is a Chinese company and can be required to pass user data to the Chinese government. And to return to the disinformation section above, TikTok has little incentive to resist pressure to surface content that gives China an advantage over the US. In 2019, Senator Josh Hawley introduced ham-fisted legislation aimed at combating addictive user experiences.

Where to learn more: This Scientific American piece is a good overview of the research on social media’s impact on mental health. The Margins newsletter is a good source of information on the pros and cons of technology, and its Robinhood edition is a worthwhile read. Ben Thompson’s Stratechery newsletter is nuts-and-bolts, but it delves into useful analysis of the ethical implications of technology.

Racist AI can reflect our own biases

In a nutshell: Artificial intelligence (AI) is only as good as the data on which it’s based. Since humans still, by and large, exhibit racial biases, it makes sense that the data we produce and use to train our AI will also contain racist ideas and language. The severe underrepresentation of Black and Latino Americans in leadership at influential technology companies exacerbates the problem. Only 3.1 percent of tech workers nationwide are Black. Silicon Valley companies lag even further on diversity, as only 3 percent of their total workforce is Black. Finally, only 1 percent of tech entrepreneurs in Silicon Valley are Black.

What may be at stake: After Nextdoor moderators found themselves in hot water for deleting Black Lives Matter content, the company said it would use AI to identify racism on the platform. But racist algorithms are causing harm to Black Americans. Police departments are using facial recognition software they know misidentifies up to 97 percent of Black suspects, leading to false arrests.

The kind of modeling used in predictive policing is also inaccurate, according to researchers. And judges are using algorithms to assist with setting pre-trial bail that assign Black Americans a higher risk of recidivism based on their race. Amazon scrapped its internal recruitment AI once it came to light that it was biased against women. On the other hand, one study showed that a machine learning algorithm led to better hires and lower turnover while increasing diversity among Minneapolis schoolteachers.
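One concrete way to see the problem is a basic bias audit: compare a model's error rates across groups. The sketch below uses entirely synthetic labels and predictions (no real dataset or deployed system) to compute a false positive rate per group. In the risk-score setting, a false positive is a person wrongly flagged as high risk, so a gap between groups means one group absorbs more wrongful flags.

```python
# A minimal sketch (synthetic labels and predictions, no real dataset) of a
# basic bias audit: compare a model's false positive rate across groups.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)          # protected attribute
label = rng.integers(0, 2, size=1000)              # 1 = actually reoffended
# A model trained on skewed data tends to over-flag one group:
flag_rate = np.where(group == "A", 0.2, 0.5)
pred = (rng.random(1000) < flag_rate).astype(int)  # 1 = flagged high risk

for g in ["A", "B"]:
    negatives = (group == g) & (label == 0)        # people who did not reoffend
    fpr = pred[negatives].mean()                   # share wrongly flagged
    print(f"group {g}: false positive rate = {fpr:.2f}")
```

Audits like this are only a first step, since error-rate parity is one of several competing fairness definitions, but even this simple check would have surfaced the disparities documented in the bail and recruiting systems above.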

Where to learn more: The Partnership on AI, a nonprofit coalition committed to the responsible use of AI, is a great resource for learning more about the challenges in this space. This discussion on algorithms and a November 2019 assessment of the pitfalls of AI are both valuable as short, readable intros to the topic. Ruha Benjamin’s Race After Technology is a concise, readable, quotable book on what she calls the “New Jim Code,” a nod to Michelle Alexander’s The New Jim Crow.

Image credit: Michael Aleo/Unsplash


Matt Martin is the co-founder and CEO of Clockwise.

Read more stories by Matt Martin