The article opens with such a glaring contradiction that it's very hard to interpret the rest of it correctly.

You can't say that something can never be ethical/safe on the one hand, and then on the other hand say that being ethical/safe depends on context/intent. Those two statements contradict each other.

Either AI can be safe and ethical in the right context with the appropriate intent, which contradicts the title; or it can't be safe/ethical regardless of intent/context, in which case the title is correct but the reasoning is incorrect.

There is no consistent way to interpret the remainder of the article with such a glaring and obvious inconsistency.

16bitvoid | 9 hours ago
I think they're arguing against Anthropic et al. claiming their models are "ethical" and "safe". The point is that a model can't be ethical or safe absolutely, in all circumstances, because even seemingly benign information can be used to cause harm; making a genuinely ethical and safe choice about whether to provide information would require knowing the user's intent.

When Anthropic et al. say that their AI is ethical and safe, they are saying so in absolute terms, same as the title. Just one instance of unethical or unsafe behavior is enough to prove that it's not ethical or safe.

No one would say a knife or a gun is safe; we're all aware of the harm either could cause, and so they require care and diligence in use. The term "ethical" doesn't apply in this analogy because an inanimate object cannot act, but an LLM can.

evnp | 9 hours ago
The point is that safety depends on context and intent being known - with unknown context or intent, dangerous situations will appear _some_ of the time, thus the system as a whole can "never" be fully safe.

There is no contradiction here.

chaos_emergent | 9 hours ago
Yeah, I hate the title because it verges on clickbait: it suggests the author is asserting that AI has a moral stance in the first place, versus AI being morally neutral and driven by its wielder.
fwip | 9 hours ago
I think that without reading the final line, you might get the wrong impression.

> It doesn’t make those frameworks worthless. It makes them incomplete by design—and it means, again, that AI will never be entirely ethical or safe.

Lots of people in this thread are reading the headline and making the same comparisons that the author does - "Most people don’t provide their context. They never have—not to search engines, not to librarians, not to hardware store clerks."

The article isn't saying "AI will never be ethical and safe, and it is unique in that way," it is saying "and so it is similar to these other things." If anything, it is critiquing the claims made by corporate AI that they can successfully make AI both useful and totally safe.