Similar sort of idea https://github.com/juxt/allium
Cool project, but not really. From what I can see, Allium is preventive: it gives the AI a spec to code against so intent doesn’t get lost. VibeDrift is diagnostic: it analyzes code that already exists and measures where patterns diverged. They’re actually complementary.
How is it different from Claude Code auto memory? I mean, we can also store markers in Claude Code memory and ask about drift anytime.
Claude Code memory helps the AI remember context within its own sessions. But it’s still one model’s view of what “should” be consistent. VibeDrift doesn’t rely on any AI’s memory or opinion. It looks at the code that actually exists in your repo and measures what the majority of files do vs which files break from that pattern. It’s also deterministic, meaning same codebase, same score every time, which matters if you want to track drift over time or gate PRs in CI.
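To make the deterministic majority-vs-deviation idea concrete, here’s a minimal sketch. This is not VibeDrift’s actual algorithm; the `extract_pattern` feature (quote style) and the scoring are toy assumptions I picked for illustration, but the shape is the same: group files by directory, find the majority pattern, and score each directory by how many files match it.

```python
# Hypothetical sketch of majority-vs-deviation scoring (NOT VibeDrift's
# real implementation): group files by directory, extract one toy
# "pattern" per file, and score directories by majority agreement.
from collections import Counter, defaultdict

def extract_pattern(source: str) -> str:
    # Toy feature: which quote style dominates the file.
    return "single" if source.count("'") >= source.count('"') else "double"

def drift_report(files: dict[str, str]) -> dict[str, float]:
    """files maps path -> source text; returns per-directory consistency,
    where 1.0 means every file in the directory follows the majority."""
    by_dir: dict[str, dict[str, str]] = defaultdict(dict)
    for path, src in files.items():
        dirname = path.rsplit("/", 1)[0] if "/" in path else "."
        by_dir[dirname][path] = extract_pattern(src)
    scores = {}
    for dirname, patterns in by_dir.items():
        _majority, count = Counter(patterns.values()).most_common(1)[0]
        scores[dirname] = count / len(patterns)
    return scores
```

Because the score is a pure function of the file contents, running it twice on the same repo always gives the same number, which is what makes tracking drift over time (or gating on it) meaningful.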
I think CodeRabbit/others do this already? They learn from your code base/previous PRs, then use that in the future. I built a mini PoC of this a few weeks ago as well, out of curiosity
CodeRabbit reviews one PR at a time using context from past PRs. VibeDrift scans the entire codebase at once and compares every file against its directory peers. Different question: not “does this PR look good” but “does this file follow the same patterns as the files sitting next to it.” Also runs fully locally, zero data sent. Curious what your PoC does though.
What does this offer over a decent orchestration layer and a… prompt?
This is for multi file codebases written across multiple sessions/prompts.
Prompts help guide generation, but they don’t guarantee consistency over time.
VibeDrift checks the codebase itself and flags where files contradict each other.
Probably overkill for a weekend project, but drift shows up fast as things grow.
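Since the scores are deterministic, gating a PR on them is straightforward. A hypothetical gate (VibeDrift’s real interface may differ; the threshold value here is an arbitrary assumption) would just fail the build when any directory falls below a floor:

```python
# Hypothetical CI gate sketch: given per-directory consistency scores,
# return a nonzero exit code when any directory drops below a threshold.
def gate(scores: dict[str, float], threshold: float = 0.8) -> int:
    failing = {d: s for d, s in scores.items() if s < threshold}
    for dirname, score in sorted(failing.items()):
        print(f"DRIFT {dirname}: {score:.2f} < {threshold:.2f}")
    # Pass this to sys.exit() in a CI step to fail the job on drift.
    return 1 if failing else 0
```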
I’d suggest giving it a try and letting me know what you think.