After a decade and a half of algorithms dragging people toward the political fringes, AI might be nudging them back toward the centre.
Financial Times journalist John Burn-Murdoch wrote last week about the moderating impact that AI may have on public discourse. Drawing on research from the Cooperative Election Study, he showed that while social media tends to elevate fringe views, AI systems often nudge users back toward the political centre.
All the major LLMs demonstrate this trait, providing information that clusters around broadly mainstream interpretations of politics and social issues. Part of the reason is the training data. Chatbots like ChatGPT rely heavily on sources such as Wikipedia, so their default responses tend to reflect something close to the political consensus. A 2024 study found that engaging with AI reduced belief in conspiracy theories, while another suggested chatbot use could reduce scepticism around issues like climate change.
Users are noticing it, too — particularly those with right-wing views. Even Grok — branded as the ‘anti-woke’ chatbot — has drawn criticism on X for rejecting conspiratorial claims. One user complained: ‘Grok is programmed to not acknowledge the ongoing White erasure problem… Nothing has been more radicalizing than realizing the so-called “right wing AI” is guard-railed to prevent White collectivism.’
Most people aren’t probing chatbots for ideological bias, though. One of the main uses of chatbots is simply seeking information. Data from OpenAI shows that 49% of messages fall into the ‘asking’ category, suggesting users value ChatGPT most as an advisor. And the impact isn’t limited to chatbot users. Google’s AI Overviews now place synthesised answers at the top of many searches, and the company is even trialling AI-rewritten headlines, further reducing the influence of clickbait and outrage-driven content.
For brands, that shift could have mixed implications. AI environments provide fewer signals about users’ interests and biases, potentially making targeting harder. But they may also reduce the pressure for every brand to take a stance on every cultural flashpoint. Wading into the latest controversy matters less in a world where fewer people are being algorithmically pushed toward outrage in the first place.
Still, it would be premature to start planning for a future where LLMs have completely solved society’s polarisation problem.
For one thing, it’s becoming increasingly clear that AI businesses will operate with the same corporate incentives that shaped social media and the open web. ChatGPT’s recent flirtation with chatbaiting shows that these platforms are just as likely to deploy whatever tactics are necessary to maximise user engagement — whether or not they contribute to the betterment of public discourse.
Also, the arrival of AI chatbots doesn’t mean that people will stop being influenced by social media, which still offers something LLMs can’t replicate: an environment where identity, status and belonging are constantly negotiated. Those dynamics are powerful drivers of belief, and behavioural scientists like Cass Sunstein argue that, even without algorithms nudging them, groups of like-minded people deliberating an issue will tend to converge on the most extreme views. It’s just human nature.
And let’s not forget that decades of access to the entirety of human knowledge via the internet have failed to make humanity any better informed. Just because an informative, moderating source of information is available, we should not assume that people will take full advantage of it.
Still, there is good reason to hope that AI’s growing role as a first point of contact for information may exert a subtle moderating influence. And if billions of people are plugged into these tools, that could be something worth celebrating.
