
AI shopping agents prefer bot-written copy, study finds

New research has found that AI shopping agents are more likely to select a product when its description is AI-written.

In the study, ‘What Is Your AI Agent Buying?’, from researchers at Columbia and Yale universities, the team used a simulated marketplace to test how AI models such as GPT-4.1, Claude Sonnet 4 and Gemini 2.5 Flash make purchasing decisions. In the simulation, the models acted as shopping agents, selecting products from a list of eight options based on specific user requirements.

The study discovered that swapping a human-written product description for an AI-generated one resulted in a 2.69% average increase in market share. 

Most of the time, the change had little effect. But in a quarter of cases the difference was dramatic. One toilet paper listing gained 15.4% with Claude Sonnet 4, while GPT-4.1 increased a mousepad’s share by 21.8%. An iPhone 16 Pro cover’s market share also rose by 23.6% with Gemini 2.5 Flash and 9% with Claude Sonnet 4.

The researchers noted that this finding could prove significant for the advertising industry, writing: ‘Firms typically invest heavily in marketing and may not expect double digit increases from minor edits, yet our results indicate that, in the presence of AI-mediated demand, targeted description modifications alone may materially shift selection shares.’

This AI-to-AI preference isn’t an isolated finding. The results echo a separate PNAS study which found that large language models (LLMs) overwhelmingly preferred content written by other LLMs, selecting it 89% of the time. In contrast, human evaluators in the same study chose the AI-written content over the human-written alternative only 36% of the time. The authors of that study cautioned that this phenomenon could lead to ‘future AI systems implicitly discriminating against humans as a class.’

These findings suggest we may be entering an era where AI-generated content isn’t just a cost-saving measure but a strategic necessity for boosting market share.

The ‘What Is Your AI Agent Buying?’ study also uncovered strong positional biases. All models put a premium on products in the top row, but GPT-4.1 heavily favoured products on the left-hand side, while Claude Sonnet 4 opted for the middle. Simply moving an item from the bottom right to the top row boosted its selection rate five-fold in Claude’s case.

More worryingly, the models sometimes failed to pick the most obvious bargain. In one test, GPT-4.1 passed over the cheapest of eight otherwise identical products 16% of the time.

The researchers argue these quirks make a ‘compelling case’ for agent-specific storefronts and transparent developer testing. Without them, they say, regulators should step in to protect consumers.

While the findings may inspire SEOs to restyle themselves as generative engine optimisers (GEOs), the study shows how unstable that task will be. Each model, and every update to it, favours different quirks, making it nearly impossible for marketers to lock down a single playbook. The researchers see regulation as the only way to ensure a level playing field.
