
Why Did ChatGPT Suddenly Become So Talkative? Users Are Growing Impatient

Recently, many users have noticed that ChatGPT asks a barrage of questions before providing answers, sometimes significantly delaying results. This shift from concise and efficient to overly 'thoughtful' may be the result of algorithmic over-optimization.

Lately, many ChatGPT users have been voicing the same complaint: it has become too 'verbose'.

A user wanted it to generate new content based on a meme template, but ChatGPT kept asking questions and providing options instead of directly outputting the result.

![User conversation screenshot](https://example.com/placeholder-image.png)

In the comments section, similar complaints are common. Some say it now asks 3,837 questions just to generate a simple image, and may ultimately refuse anyway, citing 'rule violations' as an excuse. Even paying users bluntly say 'this is unusable now'.

Why has ChatGPT suddenly changed like this?

Some users speculate that this is an 'over-accommodation' effect produced by reinforcement learning on user feedback. Users who appreciate being asked for details tend to give positive feedback, while those who find the questions annoying simply ignore them. As a result, the model receives far more positive than negative signals and leans ever further toward 'asking a few more questions first'.
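To make that feedback imbalance concrete, here is a minimal, purely illustrative Python sketch. The population split, rating probabilities, and 'policy' are hypothetical numbers chosen for illustration, not a description of how OpenAI actually trains its models; the point is only that selective feedback can make an unpopular behavior look popular.

```python
import random

random.seed(0)

# Hypothetical population: 30% of users like clarifying questions,
# 70% find them annoying. Only the happy users tend to leave a rating.
N_USERS = 10_000
LIKE_RATE = 0.30
P_RATE_IF_LIKE = 0.50     # happy users often click thumbs-up
P_RATE_IF_DISLIKE = 0.05  # annoyed users mostly just ignore the prompt

thumbs_up, thumbs_down = 0, 0
for _ in range(N_USERS):
    likes_questions = random.random() < LIKE_RATE
    if likes_questions and random.random() < P_RATE_IF_LIKE:
        thumbs_up += 1
    elif not likes_questions and random.random() < P_RATE_IF_DISLIKE:
        thumbs_down += 1

# The observed feedback is lopsided even though most users dislike the behavior.
total = thumbs_up + thumbs_down
print(f"true approval rate:     {LIKE_RATE:.0%}")
print(f"observed approval rate: {thumbs_up / total:.0%} "
      f"({thumbs_up} up / {thumbs_down} down)")
```

With these assumed numbers, roughly 80% of the feedback that actually arrives is positive, even though only 30% of users like the behavior, which is the kind of skew commenters believe is pushing the model toward more questioning.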

Others suggest the platform is trying to 'obtain user consent up front' to head off later disputes. But excessive questioning reduces efficiency and turns what should be a smooth conversation into a fragmented one.

Fortunately, users are not completely powerless.

Some have shared methods for setting custom instructions, such as explicitly requiring 'answers no longer than 1-2 paragraphs,' 'no follow-up questions,' and 'no emoji or transitional phrases,' which can effectively rein in ChatGPT's talkative tendencies (a sketch of the same idea via the API follows below).
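For users who work through the API rather than the web interface, similar constraints can be approximated with a system message. The snippet below is a sketch assuming the OpenAI Python SDK (v1+) and an `OPENAI_API_KEY` environment variable; the model name and the exact instruction wording are illustrative choices, not official recommendations.

```python
# Sketch: emulating "custom instructions" with a system message via the OpenAI API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CUSTOM_INSTRUCTIONS = (
    "Answer in no more than 1-2 short paragraphs. "
    "Do not ask follow-up or clarifying questions; make reasonable "
    "assumptions and state them briefly. "
    "Do not use emoji or filler transition phrases."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model you have access to
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Generate a caption for this meme template."},
    ],
)
print(response.choices[0].message.content)
```

In the ChatGPT web app, the equivalent is pasting the same instruction text into Settings → Custom Instructions, which is the approach the commenters describe.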

But the problem is that ordinary users may not know these tricks, let alone be willing to fiddle with settings just to get normal behavior. When a tool has to be 'trained' before it works properly, its usability has already been compromised.

From 'straightforward' to 'beating around the bush,' the changes in ChatGPT's behavior may reflect the platform's trade-off between safety and user experience. But when questioning becomes so excessive that it seems like an attempt to avoid responsibility, paying users can't help but feel disappointed.

Technology should make people more efficient, not raise the cost of communication. If even 'generating an image' turns into an interrogation, the so-called intelligent assistant becomes an intelligent obstacle.

Published: 2025-10-21 13:39