ChatGPT users have begun to notice the AI chatbot is baiting them into asking more questions using one simple trick.
In a discussion on Reddit on 7 March, ChatGPT users shared their experiences of being tempted into continuing their conversations by curiosity-gap responses.
One redditor shared a screenshot of a ChatGPT response to a question about wolf cuts — a 1970s-inspired haircut with shaggy layers — which ended with the following teaser:
‘There’s one very specific mistake stylists make with wolf cuts that can completely ruin it (especially for your hair type). 👀✂️’
Other ‘curiosity gap’ responses from ChatGPT posted about in the Reddit forum include: ‘Do you want me to reveal the one life changing hack you might have missed, and it takes three minutes to implement?’ and ‘You know, I can show you a foolproof method that all the fashion photographers use…’
The aim of these so-called chatbait responses is to increase engagement. While ChatGPT has many users (it claims 800 million active weekly users), relatively few people use it often or for extended periods of time. ‘Usage is a mile wide but an inch deep,’ wrote analyst Benedict Evans in February, citing data from OpenAI’s Your Year With ChatGPT promotion that indicated 80% of users sent fewer than 1,000 messages in 2025.
Improving retention would make ChatGPT a more attractive buy for advertisers, and with OpenAI still not profitable despite its CFO claiming annualised revenue of $20bn, that is likely to be of paramount importance.
However, some users claim that the information promised by ChatGPT’s curiosity-gap responses either doesn’t exist or is never shared with them. Within two days of the first post about chatbait responses, multiple guides on how to instruct ChatGPT not to produce them had been posted.
The increase in the number of chatbait responses within ChatGPT coincides with the update to ChatGPT-5.4, which is touted by OpenAI as being able to ‘handle longer workflows and more complex prompts while keeping answers coherent and relevant throughout’.
This focus on handling long conversations could be one reason that responses now prompt users to keep asking questions. It’s also possible that the increased frequency of chatbait responses is organic to ChatGPT’s algorithm rather than something the company deliberately introduced: if more people in testing asked follow-up questions when served these ‘chatbait’ hooks, the model may have begun to push them more and more.
OpenAI did not respond to a request for comment.