"Never" is a long time. And humans unaware of intent or context can also make unethical decisions, even if we assume an absolute and eternal ethical framework.
Asimov's robot stories (with their magical three/four laws) had examples of situations where bad things happened even while the rules were being followed "ethically". And in the Black Mirror episode Men Against Fire, it was humans who made unethical decisions based on a fake context (and reality is much worse than fiction, as we've seen in recent months).
Setting absolutes aside, I would say that today's LLMs lack context, critical thinking, and much more, which makes them unethical and unsafe. But some future system that could also be labeled AI might have some of those problems mitigated, perhaps making better/safer decisions than humans in general.