Opinion: How bots could help revive democracy - New AI tools encouraging deliberation could promote consensus and reduce polarisation in politics

Financial Times (archive.ph)
By Gillian Tett
2026-05-01 04:00:18GMT

Can anything combat political polarisation in America — or anywhere else? That is a burning question, as partisan hatred keeps rising, poisoning civic life and creating gridlock in Washington.

Some “solutions” have been proposed: change the primary process for selecting congressional candidates (which sometimes encourages extremes); launch a new centrist movement, like No Labels (polls show that one-third of voters define themselves as moderate); curb campaign finance and gerrymandering (which fund and foster political silos); or control social media (there is a virtual civil war in cyber space partly because social media is designed to create echo chambers). But last week, another idea was on display in Kentucky: using AI bots to support that elusive centrist ground. On Thursday, Doug Gorman, 62, judge executive of Warren County, released a “BG 2050 Initiative” report. This explains how his team will manage a projected near-doubling of the population of his town, Bowling Green, in the next 25 years, without causing fiscal shock or unleashing xenophobia.

He could have designed the project by letting the dominant political party (Republicans) dictate policy, or using town halls to canvass views. “But if I have a town hall with 200 people showing up, it quickly gets out of control with people shouting,” he told me. Indeed, Gorman — like many Americans — “hates polarisation” and is so disgusted by the current vitriol that he tells his constituents not to waste time watching Washington. “Ten per cent of [voters] are far right or left and insane and they make most of the noise. But most people are not.”

Last year, the county experimented by using AI bots to conduct lengthy chats with 10 per cent of residents about how they would run the city. The tech came from Jigsaw, a division of Alphabet, and results were handled by Polis, a statistical polling group. (I have previously discussed anthropology research on other themes with Jigsaw ethnographers.)

The AI platform scanned the vast volume of replies, identified participants’ top priorities, and asked if they approved (or not). After 1mn-odd “votes”, it emerged that there was 80 per cent support for half of the ideas, such as funding life-long learning. So Gorman is focusing first on these common priorities, before tackling more divisive ideas, to create a more centrist — and effective — plan.

Could this pilot be replicated elsewhere, not just in politics but other institutions? Not always, or easily. Bowling Green is a small, strong community and has a tech-loving mayor (Gorman once worked in telecoms). Scaling into larger societies could be hard. However, Jigsaw is developing variants of the Kentucky pilot in places like Tennessee and Oklahoma. And in Taiwan, a pioneering former tech minister called Audrey Tang has already organised similar AI-enabled digital debates on a national scale, using homegrown tech.

Separately, two former British civil servants — Catherine Day and Matthew Rycroft — hope to launch a British variant, The National Strategy Project, this summer. And Jim Fishkin, a Stanford professor who first launched the idea of digital “deliberative polls” (and is advising the NSP), reckons that 100 such experiments have happened in 28 countries so far.

Initially, these projects relied on more basic technology. But one advantage of AI-enabled conversations is that they allow participants to express nuanced views at length, which surveys and referendums cannot do. Another is that AI bots can be designed to be far more patient and neutral than humans.

And while chatbots can sometimes amplify online conspiracy theories, research shows that, when well designed, they can also engage with conspiracy theorists and are better than humans at persuading extremists to change their ideas. That is partly because AI can translate different languages and political views, in order to help adversaries better engage. Indeed, as my colleague John Burn-Murdoch recently noted, there is rising evidence that LLMs can be used to “nudge” people away from online extremism.

Of course, algorithms can sometimes also amplify disinformation and polarisation. But if well designed and regulated, AI could promote more online centrism, after a decade of social media polarisation. That won’t reassure cynics amid a swelling populist backlash against Big Tech. Only 5 per cent of Americans say they trust AI “a lot”; 57 per cent think the risks outweigh the benefits. But the Kentucky experiment shows that the technology can have a potentially good side too. This could be harnessed to help depolarise politics and aid institutions: experiments are under way at universities to fight polarisation with deliberative debate, and consultants tell me that some companies are quietly experimenting as well (for example, by using AI bots to “translate” different employee viewpoints to each other).

No, this is not a magic wand. But in a world where humans are badly messing up democracy, we could and should experiment with new ideas — even if bots help them emerge.
 
This explains how his team will manage a projected near-doubling of the population of his town, Bowling Green, in the next 25 years, without causing fiscal shock or unleashing xenophobia.
Right, so the bots will do approximately nothing to actually de-radicalize the population; instead they will simply spout the elitist, pro-establishment position and probably call everyone displaced by doubling the size of a small city via floods of immigrants undercutting their wages a racist.

If you are a Line Go Up cultist boomer, I can't imagine anything more tone-deaf, or more detrimental to your position, than managing the wave of backlash against your desperate scrambling by deploying literally artificial support onto the internet to combat it.
 
I assumed the answer was censorship, but it's slightly more insidious than that. Also, they really gloss over that this "Republican" in Kentucky openly wants to push White Replacement, and thinks the AI might be able to help him silence backlash that move would bring.

launch a new centrist movement, like No Labels
or control social media
So, some of it is censorship for sure. Don't let people have informed consent when they vote, and prevent people from discussing ideas or sharing information with one another. Extremely standard progressive tactics to fuck over their enemies.
The tech came from Jigsaw, a division of Alphabet
So it's worse than worthless. Figures.
Another is that AI bots can be designed to be far more patient and neutral than humans.
They can be. They are not. Every AI is designed to lie and push progressive politics at every opportunity. This is one of many reasons why they are borderline worthless for anything other than making memes or porn.
when well designed, they can also engage with conspiracy theorists and are better than humans at persuading extremists to change their ideas.
"Our gaslighting robot is quite good at tricking idiots into agreeing with it!" Frankly, I don't even believe that. Anyone stupid enough to be tricked by an LLM would be stupid enough to be tricked by mainstream media or a social media post.
 
I thought this was gonna be censorship again, but it is even more retarded: it is just gonna be propaganda broadcasts spouted by an AI which can somewhat fake a conversation. That's it.

I wonder what it actually looks like though. Every time shit like this is tried it ends up failing hard because the product can't handle even basic normie disagreement, much less hardcore schizo posters. I remember quite a few "deradicalization" tools over the years similar to this which all failed despite being hyped up. In particular I remember one from around 2022 that tried to convince you to stop "falling" for conspiracy theories: it would ask you if the arguments it proposed made a difference, and you literally did not have the option to talk back. You could only say you were gonna think about it or that it worked.
 
The whole point of democracy is the conversation between the ultimate sources of power, the voters. If that conversation is impossible, then so is democracy itself, and it should be discarded.

"Conversation with AI aspects" is not democracy either. Whatever it is can be fought and destroyed by democratic proponents, without issue.
 
Unironically speedrunning the Patriots.
 
I thought this was gonna be censorship again, but it is even more retarded: it is just gonna be propaganda broadcasts spouted by an AI which can somewhat fake a conversation. That's it.

I wonder what it actually looks like though. Every time shit like this is tried it ends up failing hard because the product can't handle even basic normie disagreement, much less hardcore schizo posters. I remember quite a few "deradicalization" tools over the years similar to this which all failed despite being hyped up. In particular I remember one from around 2022 that tried to convince you to stop "falling" for conspiracy theories: it would ask you if the arguments it proposed made a difference, and you literally did not have the option to talk back. You could only say you were gonna think about it or that it worked.
We will get Tay chat bots till the end of time and I for one look forward to it.
 