Autonomous LLM agents are not ready for business. Full stop.
Curious, why do you think that?
Prompt injection attacks and hallucinations are still unsolved problems.
That's why you don't let agents act unsupervised. I built atmita.com with an approval layer where every action gets reviewed before it executes. Doesn't solve hallucinations, but it contains them.
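The approval-layer pattern described here can be sketched in a few lines. This is a hypothetical illustration of the general idea, not atmita.com's actual implementation; the names (`Action`, `ApprovalGate`, `propose`, `approve`, `reject`) are made up for the example.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Action:
    """An action the agent wants to take, e.g. send an email."""
    name: str
    args: dict

@dataclass
class ApprovalGate:
    """Holds every agent-proposed action until a human reviews it.

    Nothing executes on proposal; the executor runs only on explicit
    approval, so a hallucinated or injected action is contained, not run.
    """
    executor: Callable[[Action], Any]
    pending: dict = field(default_factory=dict)
    _next_id: int = 0

    def propose(self, action: Action) -> int:
        ticket = self._next_id
        self._next_id += 1
        self.pending[ticket] = action
        return ticket  # ticket id shown to the human reviewer

    def approve(self, ticket: int) -> Any:
        action = self.pending.pop(ticket)
        return self.executor(action)  # executes only after approval

    def reject(self, ticket: int) -> None:
        self.pending.pop(ticket)  # discarded, never executed

# Usage: the agent proposes two actions, a human approves one.
executed = []
gate = ApprovalGate(executor=lambda a: executed.append(a.name))
t1 = gate.propose(Action("send_email", {"to": "x@example.com"}))
t2 = gate.propose(Action("delete_records", {"table": "users"}))
gate.approve(t1)
gate.reject(t2)
# executed == ["send_email"]; the risky delete never ran
```

The key property is that the agent can only ever enqueue; execution is a separate, human-triggered step.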