marginalia_nu 11 hours ago | parent | on: 47754179
This is methodologically flawed, as bytes only weakly correlate with tokens.

Unless you're sending identical requests, you can't expect the same token counts for any given number of bytes, or that a slightly longer (but different) message will lead to more tokens than a slightly shorter one, or vice versa.
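To illustrate the point, here's a toy greedy longest-match tokenizer (a stand-in for real BPE, which uses ~100k learned merges): a longer string can easily tokenize to fewer tokens than a shorter one, so byte counts only weakly predict token counts.

```python
def tokenize(text, vocab):
    """Toy greedy longest-match tokenizer (illustrative, not BPE)."""
    tokens = []
    i = 0
    while i < len(text):
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            # No vocabulary piece matches: fall back to a single character.
            tokens.append(text[i])
            i += 1
    return tokens

vocab = ["internationalization", "token", "count", "ing", " "]

# 20 bytes, but a single vocabulary hit -> 1 token:
common = tokenize("internationalization", vocab)
# 14 bytes of rare character sequences -> 14 one-character tokens:
rare = tokenize("zqxj vwky plmn", vocab)
```

So the shorter input here costs 14x the tokens of the longer one; with a real tokenizer the effect is less extreme but the same in kind.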

Bolwin 2 hours ago | parent | on: 47765052
> The numbers came from the same project and the same prompt across versions.

I'm pretty sure the tester checked. If the request format is the same (which it is, given it uses Anthropic's stable public API) and the prompt/messages are the same, then bytes will correlate pretty well with tokens.

marginalia_nu 1 hour ago | parent | on: 47771625
The prompt may be the same, but the project context would surely have changed. The user prompt itself is unlikely to be ~200 KB.

tencentshill 4 hours ago | parent | on: 47754179
On the free plan, I hit the limit instantly by uploading one 45 KB PDF and one prompt. Even for a free plan, I'd expect a bit more. Oh well, local models can be pushed to do what I need.

a_c 1 day ago | parent | on: 47754179
I had the same suspicion, so I made this to examine where my tokens went.

Claude Code caches a big chunk of context (all messages of the current session). While a lot of data goes over the network, in ccaudit itself 98% of the context comes from cache.

Granted, to view the actual system prompt used by Claude, one can only inspect the network requests. Otherwise the best guess is the token use in the first exchange with Claude.

https://github.com/kmcheung12/ccaudit
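For context on the caching described above: explicit prompt caching on Anthropic's Messages API is requested by marking a block with `cache_control`, and cached reads then show up as `cache_read_input_tokens` in the response's usage object. A minimal sketch of such a request body (the model ID and block contents are placeholders; Claude Code sets its own breakpoints internally):

```python
# Sketch of a Messages API request body with a prompt-caching breakpoint.
# Everything up to and including the block carrying "cache_control" is
# cached; later requests with an identical prefix read it from cache.
request = {
    "model": "<model id>",          # placeholder
    "max_tokens": 1024,
    "system": [
        {"type": "text", "text": "<big system prompt>"},
        {
            "type": "text",
            "text": "<project context>",
            "cache_control": {"type": "ephemeral"},
        },
    ],
    "messages": [{"role": "user", "content": "Hello"}],
}
```

This is why ccaudit can attribute most of the session's context to cache reads rather than fresh input tokens.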

simple10 1 day ago | parent | on: 47759054
I got kinda obsessed with observability a month ago and wired together a full stack for personal use.

https://github.com/simple10/agent-super-spy - llm proxy + http MiTM proxy + LLMetry + other goodies

https://github.com/simple10/agents-observe - fancier claude hooks dashboard

It started as a need to keep an eye on OpenClaw but is incredibly useful for really understanding any agent harness at the raw LLM request level.

F7F7F7 1 day ago | parent | on: 47754179
What is the system prompt for $1000 Alex (RIP)?

g4cg54g54 22 hours ago | parent | on: 47759060
A `git status` + "recent commits" section (which will _always_ change): https://github.com/anthropics/claude-code/issues/47107

and a missing cache mark that makes skills & project-claude.md cache-miss every time too: https://github.com/anthropics/claude-code/issues/47098

  TLDR: for now launch using `CLAUDE_CODE_DISABLE_GIT_INSTRUCTIONS=1 claude "Hello"`
  https://news.ycombinator.com/item?id=47754795
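The cache-miss mechanics above can be sketched simply: prompt caches match on an exact prefix, so any bytes that change near the top of the prompt (like `git status` output) invalidate everything after them, including otherwise-stable blocks. A toy model of that (the block names are illustrative):

```python
import hashlib

def cache_key(prefix_blocks):
    """Toy prefix-cache key: hash every block up to the breakpoint.
    One changed byte anywhere in the prefix -> a different key,
    so nothing after the change can be served from cache."""
    h = hashlib.sha256()
    for block in prefix_blocks:
        h.update(block.encode())
    return h.hexdigest()

stable = ["<system prompt>", "<skills>", "<CLAUDE.md>"]

# git status output sits before the stable blocks and changes per run:
day1 = cache_key(["git status: HEAD abc123"] + stable)
day2 = cache_key(["git status: HEAD def456"] + stable)
# day1 != day2: the stable blocks never get a cache hit.
```

Disabling the git instructions (per the TLDR above) keeps the prefix byte-identical across runs, which is what lets the cache hit.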

simianwords 17 hours ago | parent | on: 47754179
I don't buy it. The same problem was reported on Claude.ai at the same time, which means the same underlying root cause.