
Democratizing AI: Why Public Input Matters in the Future of Technology

AI is moving fast. Innovators like The Collective Intelligence Project want to ensure everyday people have their say in how it's created and deployed.
(Image: A laptop open to an AI chatbot. Philip Oroni/Unsplash)

The world is moving toward an "AI-first" paradigm, in which artificial intelligence (AI) is becoming not only pervasive but foundational to business, technology and daily life. It is used across industries and demographics to diagnose diseases, detect fraud, build software, optimize operations and support decision-making. But it is also spreading disinformation, enabling surveillance without consent, and reinforcing inequality when built on biased datasets.

As AI quietly seeps into our daily existence, a bigger question looms: Who gets to decide how it comes into our lives? Who decides how it’s created and deployed? And do we have a say?

"Technology shouldn't happen to people. It should happen with and by people," said Divya Siddarth, founder and executive director of The Collective Intelligence Project, a tech nonprofit developing governance models for AI that involve more democratic processes. 

Having spent almost a decade in Silicon Valley and at Microsoft Research, Siddarth knows how powerful technologies are made and how little public input factors into those decisions. 

"I don't think we need democratic input on every tech decision, like the user interface of Google Sheets or the look of a MacBook desktop, but when a technology is truly transformative, the risks of not involving collective input become much more serious,” Siddarth said. 

In 2025, more than 378 million people worldwide are expected to use AI tools, and that number is projected to reach 730 million by 2030, according to the industry group Edge AI and Vision Alliance.

Studies show that while nearly 80 percent of people worldwide believe AI will significantly affect their lives in the next decade, over 85 percent feel they have no say in how these technologies evolve. Siddarth pointed to a parallel: the moment social media began to permeate our lives, reshaping communication and society, often without guardrails or consent. 

“We’ve seen major harms from social media, for example, with kids using these platforms or in countries where they ignored their impact on elections,” Siddarth said. “Many of those issues might have been avoided if the voices and experiences of everyday people had been part of the decision-making process. History has shown that listening to those voices isn’t the default, but it should be.” 

But when everyday people are not building AI and don’t fully understand how it works, how can they be involved in the process? 

“Of course, I can’t just walk up to someone on the street and ask how we should evaluate Claude 3.7 for its societal impact. That’s not realistic,” Siddarth said. “What we can do is identify the parts of the tech stack where decisions are about values, not just technical specs. Those are the points where people should have a say.”

Some of the most promising examples of collective input for technology are happening in places like Taiwan, where public feedback is shaping national AI policy despite some of the highest levels of external misinformation in the world.

“In Taiwan, we’ve helped develop policies around freedom of speech and the use of AI in election campaigns,” Siddarth said. “We gathered input from hundreds of thousands of people, asking them, ‘Where’s the line between protecting free speech and fighting misinformation?’ Through public input, we reached a point where platforms are now required to down-rank false content, share that data across platforms, and be fully transparent about how the algorithms work.”

In the latest round of its Global Dialogues initiative, which asks people around the world how they interact with and are affected by AI, The Collective Intelligence Project surveyed about 1,000 participants across 63 countries on some of the most pressing issues in AI, including chatbot-human relationships. The results show that while many respondents expressed discomfort with the idea of AI replacing intimate human relationships, a significant number made exceptions for areas like palliative care and mental health support. 

“There’s a lot of confusion around these issues, but I don’t think you need deep technical knowledge of AI to weigh in,” Siddarth said. “Before we end up in a society we don’t recognize — or don’t like — we should understand what people want. Where are we okay with this tech, and where are we not?” 

The lack of AI education among the general public should not be used as an excuse to ignore public input, Siddarth said. “For example, if your elderly parent spends hours talking to a chatbot, what would you want to know about the company that built it, or about its personality?” she said. “These are the kinds of questions people can answer without needing to be AI experts.”

There’s a strong parallel between AI governance and data privacy regulation, as both involve technologies that are reshaping our lives, mostly without our direct input.

“I was fresh out of college and very gung-ho about giving people direct control over their data: You should know what’s happening, make choices about where your data is going,” Siddarth said. “One month into the work, I realized no one wants that. Even I don’t want to spend my days thinking about where my data packets are going or who’s seeing what. What I do want is a trustworthy intermediary. Someone who has my best interest at heart. I just don’t want to be that person myself.”

Ordinary people can bring more grounded, practical concerns that are often overlooked in these debates, Siddarth said. 

“In early 2023, we ran a project with OpenAI to understand what risks people were worried about with AI,” she said. “The public conversation was portrayed as totally polarized: either AI was going to destroy us or save us. But when we talked to everyday people, what came up most wasn’t fear of doom. It was fear of overreliance. People were saying, ‘I don’t need to be involved in building this thing day-to-day. I don’t understand cybersecurity risks or things like that. But what I do know is that this is going to be built into our systems before we understand what’s going on.’” 

As conversations about AI governance become more prominent, it's also easy to fall into extremes. “I think there’s a danger when we say things like, ‘People making decisions aren’t acting in the public interest, so let’s give direct democratic control over everything,’” Siddarth said. “Or ‘Let’s have the government be involved in everything.’ In reality, what we need is a better intermediary layer.”

Making a form of democracy work for AI governance is about finding smart ways to organize and use public input. The Collective Intelligence Project is designing systems that find what Siddarth calls "surprising agreement" across divides. 

"You want to surface ideas that people with different perspectives all value, not just platitudes like, 'Kids should be happy,' but specific, actionable consensus," she said.

The push for collective input needs to come from across the board: from internal champions in government, at tech companies, in AI labs, and from independent voices working together, Siddarth said. That includes credibly neutral organizations that can work across institutions and prioritize collective input. 

As The Collective Intelligence Project expands its Global Dialogues program, Siddarth hopes to see a future where no one feels excluded from the technologies that are shaping their lives.

“I’d love to see a world where you could go to a far-flung corner of the earth and ask someone if they felt they had a say in the technology they use — or that’s used around them — and they’d say, ‘Yes, I know what’s going on, and I’m involved in it,’” she said. “That feels like success to me. It’s ambitious, but that’s what I’m aiming for.”


Abha Malpani Naismith is a writer and communications professional who helps businesses grow in Dubai. She is a strong believer in the triple bottom line and keen to make a difference. She is also a new mum, working out how to balance thriving at work with being a parent. In that endeavor, she founded the Working Mums Club, a newsletter for mums who want to build better careers and be better mums.
