After Claude Code Source Code Leaked, Someone Analyzed It Using OpenAI's Codex
A bug worth hundreds of thousands of dollars was fixed with three lines of code. Anthropic has officially confirmed it is investigating the issue.
The follow-up to the Claude Code source code leak is more interesting than expected.
Someone fed the leaked source code to OpenAI's Codex for analysis, and it actually pinpointed the culprit behind the runaway token consumption: the autoCompact (automatic context compression) mechanism retried indefinitely, with no upper limit, whenever it failed. A comment in the source code notes one session that racked up 3,272 consecutive failures.
The fix is surprisingly simple: add a cap, MAX_CONCURRENT_AUTOCOMPACT_FAILURES = 3, and stop retrying after three consecutive failures. Three lines of code, done.
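The patched behavior can be sketched roughly as follows. This is an illustrative reconstruction, not Anthropic's actual code: only the constant name MAX_CONCURRENT_AUTOCOMPACT_FAILURES comes from the leak, while the surrounding function and variable names are hypothetical.

```typescript
// Cap on consecutive autoCompact failures (name from the leaked source);
// everything else here is a hypothetical sketch of the retry logic.
const MAX_CONCURRENT_AUTOCOMPACT_FAILURES = 3;

let consecutiveFailures = 0;

// tryAutoCompact is a stand-in for the real context-compression call.
// It returns true on success, false when compaction failed or was skipped.
function tryAutoCompact(compress: () => boolean): boolean {
  if (consecutiveFailures >= MAX_CONCURRENT_AUTOCOMPACT_FAILURES) {
    // Previously the code would retry here without limit, burning
    // tokens on every attempt; the fix is to simply give up.
    return false;
  }
  const ok = compress();
  consecutiveFailures = ok ? 0 : consecutiveFailures + 1;
  return ok;
}
```

The key point is that the failure counter resets on success, so the cap only halts a run of consecutive failures rather than disabling compaction permanently.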
Users who applied the patch report that their usage quotas have returned to normal. Part of the earlier complaint about "hitting rate limits after just a few uses" may have been this bug silently burning tokens.
Anthropic has now responded officially: Lydia Hallie confirmed the issue on Twitter, saying the team is investigating and treating it as a top priority.
There is, admittedly, a touch of black humor in using OpenAI's tool to diagnose a bug in Claude.
Repository link: https://github.com/Rangizingo/cc-cache-fix/tree/main
Published: 2026-04-01 14:29