You can't say on the one hand that something can never be ethical/safe, and then on the other hand say that being ethical/safe depends on context/intent. Those two statements contradict each other.
Either AI can be safe and ethical given the right context and intent, which contradicts the title; or it can't be safe/ethical regardless of intent/context, in which case the title is correct but the reasoning is not.
There is no consistent way to interpret the remainder of the article with such a glaring and obvious inconsistency.
When Anthropic et al. say that their AI is ethical and safe, they are saying so in absolute terms, just as the title does. A single instance of unethical or unsafe behavior is enough to prove that it is neither.
No one would say a knife or a gun is safe, because we're all aware of the harm either could cause and of the care and diligence their use requires. The term "ethical" doesn't apply in this analogy, because an inanimate object cannot act, but an LLM can.
There is no contradiction here.
> It doesn’t make those frameworks worthless. It makes them incomplete by design—and it means, again, that AI will never be entirely ethical or safe.
Lots of people in this thread are reading the headline and missing that the author makes the same comparisons: "Most people don’t provide their context. They never have—not to search engines, not to librarians, not to hardware store clerks."
The article isn't saying "AI will never be ethical and safe, and it is unique in that way"; it is saying "and so it is similar to these other things." If anything, it is critiquing AI companies' claims that they can make AI both useful and totally safe.