For retrieval, there is a semantic filesystem that makes it easy for LLMs to search using shell commands.
It is currently a scrappy v1, but it works better than anything I have tried.
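Concretely, retrieval is just the agent running ordinary shell tools over a tree of Markdown files. A minimal sketch, assuming memories live as dated files under a memory/ directory (the layout and filenames here are illustrative, not the shipped defaults):

```sh
grep -ril "long division" memory/   # which memory files mention the topic?
ls memory/2017/01/                  # what else was recorded that month?
cat memory/2017/01/03.md            # read the matching day's notes
```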
Curious to hear any feedback!
The hard part is usually knowing what *not* to write down. Every system I've seen eventually drowns in low-signal entries.
In terms of noise, I think it is less problematic here because not everything gets retrieved. The agent can selectively explore subsets of the tree (and you can edit the exploration policy yourself).
Since there is no context bloat, the system is quite forgiving about just writing things down.
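"Selectively exploring a subset" just means the agent narrows before it reads, so untouched branches never enter the context. A hypothetical example, using the same illustrative layout as above:

```sh
ls memory/                                # orient: which subtrees exist?
ls memory/2024/                           # descend into a single year only
grep -l "deadline" memory/2024/Q4/*.md    # then open just the matching files
```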
The bigger problem is avoiding what I call the Memento Effect. I won't spoil the movie for anyone, but Memento involves a character who cannot form new memories, so he has to take meticulous notes about everything. But if any of those notes are vague or incorrect, they still get accepted as truth when next reviewed. So your Markdown memory really needs to be pristine; you mustn't allow it to become polluted.
As I have understood it, in LLM Wiki the human is very much in the loop in what gets written. In ReadMe, human control sits mostly at the policy (prompt) level and is set once; the agent then runs fully autonomously afterwards.
After a quick skim of your project:
I have tried an embedding-based knowledge base as well, but it is tricky to make the stored embeddings match a user query. For example, "What happened?" is not at all similar to "Batman defeats Joker." You need to reformulate the query with an LLM, which is tricky given that the query is conditioned on the whole chat history. That's partly why I abandoned embedding-based methods.
But given that MCPTube already works on Gemini CLI, I could see it working natively without embeddings. Gemini can read video files natively. Worth a try?
But in the end, it doesn't really matter; it is public on GitHub, so anyone can use it.
Treat it as an MVP; I would love to hear how your agent performs!
The problem is always that when there are too many memories, the context gets overloaded and the AI starts ignoring the system prompt.
Definitely not a solved problem, and there need to be benchmarks to evaluate these solutions, though benchmarks themselves can be easily gamed and are not universally applicable.
Also, having thought about it for another 30 seconds: the "too many memories!" problem is imo the same problem as context management and compaction, and requires the same approach: more AI telling the AI what it should be thinking about. De-rank irrelevant "memories" in the context manager and don't pass them to the outer context. If a memory is de-ranked often and not used enough, it gets purged.
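A toy sketch of that purge rule, assuming a hypothetical .deranked.log to which the context manager appends one file path each time it de-ranks a memory (the threshold is made up for illustration):

```sh
# Purge memory files that have been de-ranked more than 5 times.
sort .deranked.log | uniq -c | awk '$1 > 5 { print $2 }' | xargs -r rm -v
```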
ReadMe does support loading memories mid-reasoning! It is simply an agent reading files.
Although GPT-5.4 currently likes to explore a lot upfront and only then respond. But that is model behaviour (adjustable through prompting) rather than an architectural limitation.
A removal mechanism is not (yet) implemented. But in principle, we could adjust the instructions in Update.md so that the agent does a minor "refactor" of the filesystem each day: newer abstractions can form while irrelevant material gets pruned or edited. That's the beauty of the architecture: you define how updates occur!
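For instance, a purely illustrative tweak (not the shipped wording of Update.md) might be:

```sh
# Hypothetical: extend the update policy with a daily-refactor section.
cat >> Update.md <<'EOF'
## Daily refactor
- Merge yesterday's duplicate entries into the month summary.
- Rewrite or delete entries contradicted by newer, dated evidence.
- Keep files short so exploration stays cheap.
EOF
```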
But if you do have a new memory (possibly contradicting an old one), is it really a good idea to prune/edit the old one?
If you are genuinely uncertain between choices A and B, then having both in the memory archive might be a feature. The agent gets to see contradictory evidence recorded on different dates, which communicates the indecision.
The purpose of memory pruning is not to "forget" useful or even contradictory information, but to condense it, so that the useful bits take up less context and are more immediately accessible in the situations that need them.
I address this by merging lower-level memories into more abstract ones through a temporal, hierarchical filesystem: days -> months -> quarters -> years. Each time scale distils a more "useful" context, since uncertain or contradictory information does not survive the climb up the abstraction ladder.
For example, a day-level memory might be: "The user learned how to divide 314 by 5 with long division on Jan 3rd 2017."
A year-level memory might be: "The user progressed significantly in mathematics during elementary school."
From the LLM's perspective, year-level memories are easier to access because they require fewer "cd" commands; it only dives down into lower levels when necessary.
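A minimal sketch of that hierarchy on disk (all paths and contents are hypothetical):

```sh
mkdir -p memory/2017/Q1/01
echo "Jan 3: learned long division (314 / 5)."             > memory/2017/Q1/01/03.md
echo "Q1: steady long-division practice."                  > memory/2017/Q1/summary.md
echo "2017: progressed significantly in elementary maths." > memory/2017/summary.md
cat memory/2017/summary.md     # cheap: the year view sits near the root
cat memory/2017/Q1/01/03.md    # detail costs extra hops down the tree
```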
You do need clever naming for the filesystem and a good exploration policy in AGENTS.md (not trivial!).
The benchmark is definitely the core bottleneck. I don't know of any good benchmark for this; it's probably an open research question in itself.
I guess the markdown approach really does have an advantage over the others.
PS: Something I built on markdown: https://voiden.md/
Good question. Since it is just an LLM reading files, latency depends entirely on how fast the model can call tools, i.e. on its tokens/s.
I haven't done a formal benchmark, but from the vibes it feels like a few seconds per query with GPT-5.4-high.
There is an implicit "caching" mechanism, so the more you use it, the smoother it will feel.