1 link tagged with all of: tokens + claude + ai-efficiency
Links
Boris Cherny of Anthropic outlines nine ways Claude squanders 73% of your tokens before it even processes your prompt, including base-model overhead, re-reading conversation history, and forgotten hooks. He debunks "Claude got dumber" complaints and shows how to spot and fix these token drains.