I think they're arguing against Anthropic et al. claiming their models are "ethical" and "safe". The point is that a model can't be ethical or safe absolutely, in all circumstances, because even seemingly benign information can be used to cause harm. Making a genuinely ethical and safe choice about whether to provide information would require knowing the user's intent.
When Anthropic et al. say their AI is ethical and safe, they're speaking in absolute terms, same as the title. A single instance of unethical or unsafe behavior is enough to show that it's neither.
No one would call a knife or a gun safe; we're all aware of the harm either could cause, which is why they demand care and diligence in use. The term "ethical" doesn't carry over in this analogy, though, because an inanimate object cannot act, while an LLM can.