A removal mechanism is not (yet) implemented. But in principle, we could adjust the instructions in Update.md so that the agent does a minor "refactor" of the filesystem each day: newer abstractions can form while irrelevant material gets pruned or edited. That's the beauty of the architecture: you define how the update occurs!
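To make the idea concrete, here is a minimal sketch of what the candidate-selection step of such a daily refactor could look like. Everything here is hypothetical: the `year/month/day.md` layout, the `daily_refactor` name, and the rule "any day file from a closed-out month is ready to be merged upward" are illustrative assumptions, not the actual implementation (the merging itself would be done by the agent following Update.md).

```python
from pathlib import Path
from datetime import date

def daily_refactor(memory_root: Path, today: date) -> list[Path]:
    """Select day-level memory files that belong to past months, so the
    agent can merge them into a higher-level summary and then prune them.
    Assumes a hypothetical memory_root/<year>/<month>/<day>.md layout."""
    stale = []
    for day_file in memory_root.glob("*/*/*.md"):
        year, month = day_file.parts[-3], day_file.parts[-2]
        # Only months that are fully in the past are candidates for merging.
        if (int(year), int(month)) < (today.year, today.month):
            stale.append(day_file)
    return sorted(stale)
```

A real pass would hand these files to the LLM for summarization before deleting them; keeping selection and deletion separate makes the refactor rule easy to change in Update.md.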
But if you do have a new memory that possibly contradicts an old one, is it really a good idea to prune/edit the old one?
If you are genuinely uncertain between choices A and B, then having both of them exist in the memory archive might be a feature. The agent gets the chance to see contradictory evidence from different dates, which communicates that the question is unresolved.
The purpose of memory pruning is not to "forget" useful or even contradictory information, but to condense it so that the useful bits take up less context and are more immediately accessible in the situations that need them.
I address this by merging lower-level memories into more abstract ones via a temporal hierarchical filesystem: days -> months -> quarters -> years. Each time scale distills a more "useful" context, since uncertain or contradictory information tends not to survive as it moves up in abstraction.
For example, a day-level memory might be: "The user learned how to divide 314 by 5 with long division on Jan 3rd, 2017."
A year-level memory might be: "The user progressed significantly in mathematics during elementary school."
From the perspective of the LLM, year-level memories are easier to access because they require fewer `cd` commands; it only dives down into lower levels when necessary.