You can't say that something can never be ethical/safe on the one hand, and then on the other hand say that being ethical/safe depends on context/intent. Those two statements contradict each other.
Either AI can be safe and ethical in the right context with the appropriate intent, which contradicts the title, or it can't be safe/ethical regardless of intent/context, in which case the title is correct but the reasoning is incorrect.
There is no consistent way to interpret the remainder of the article with such a glaring and obvious inconsistency.
When Anthropic et al. say that their AI is ethical and safe, they are saying so in absolute terms, same as the title. Just one instance of unethical or unsafe behavior is enough to prove that it's not ethical or safe.
No one would say a knife or a gun is safe, because we're all aware of the harm it could cause and of the care and diligence its use requires. The term "ethical" doesn't apply in this analogy because an inanimate object cannot act, but an LLM can.
There is no contradiction here.
> It doesn’t make those frameworks worthless. It makes them incomplete by design—and it means, again, that AI will never be entirely ethical or safe.
Lots of people in this thread are reading the headline and making the same comparisons that the author does - "Most people don’t provide their context. They never have—not to search engines, not to librarians, not to hardware store clerks."
The article isn't saying "AI will never be ethical and safe, and it is unique in that way," it is saying "and so it is similar to these other things." If anything, it is critiquing the claims made by corporate AI that they can successfully make AI both useful and totally safe.
Shall we just have that debate anyway? :D
The big question that I hoped the article might address: Can AI ever be ethical (within the norms of what the average Jo(e) considers ethical), or have we forever poisoned the well?
If the technology and mathematical underpinnings have been created on fundamentally immoral grounds (IP theft, energy / water excesses, etc) what would we have to do to produce an entirely - or even mostly - ethical AI stack?
Is it even possible, given the dependencies on (Lithium / Israel / fossil fuels / conflict mining / capitalistic exploitation / any other morally questionable underpinning you might think of), to re-do the work to such a point that we could "black box" our way to decently functioning LLMs?
Assuming that comes with the caveat of rolling back technological progress, how far back do we have to go? It feels like the Bronze Age is a step too far, at least on the basis of my "average Jo(e)" test above - but what is considered reasonable?
Then - and only then - would it make sense to ask how to make the content generation itself ethical.
It feels like the Nazi medical science issue all over again, except nobody really cares as much about this one. But socially, it feels like an anti-capitalistic uprising is on the horizon, so maybe if that happens, a moral aversion to the state of AI might piggyback onto it?
Not that I want it to. Quite like AI really. Feels like the background immorality radiation of the earth is quite high anyway; maybe AI isn't the thing to fluff our feathers about. But it's certainly an interesting thing to mull as we weep over our non-gm oat milk babyccinos, pitying the state of the world.
(I'm really an upbeat person, honest...)
(Depending on various definitions, to some people specifics of this amplification could warrant taking a mental shortcut and just considering that tech as harmful in itself. After all, if it is neutral and helpful under unattainable circumstances, and harmful under real-world conditions, then it is pointless to draw that distinction.)
Personally, I believe that technological and mathematical underpinnings of LLMs by themselves do not at all imply IP theft or detriment to the environment and society, but the way this technology is being adopted should raise serious questions in anyone with such capability.
It leads with "AI Will Never Be Ethical or Safe".
The first sentence is "AI will never be *entirely* ethical or safe."
It concludes with "AI is a tool, and it can be used in ethical and unethical, safe and unsafe ways" and compares them to "hardware store clerks".
Hardware stores are *specifically* places where society has had a centuries-long conversation about risk and the products on sale represent a very intentional set of choices. In some parts of the US hardware stores used to sell dynamite, they don't anymore. That's the 'social contract' functioning in daily life.
"AI is like a tool one might buy from the hardware store" is, in most people's minds, the opposite of the opening premise.
> Both ethical and safe conduct depend on context and intent.
The same applies to knives, and they can be plenty useful and used in a safe manner.
I don't know, I didn't really agree with the post, I'm trying my best to steel man it.
So if we consider AI a chemical substance - if it is used with limited context, in tools built with specific intent, can it be useful beyond the tools available at this moment?
You can't trust just any liquid that looks like water, just as you can't trust just any model or, especially, any inference provider (they can switch models to save money, mess with other key parameters, or insert ads). You have to test your water supply and your AI supply regularly. And benchmark new sources. We’ll see labeling and quality guarantees in future suppliers. We’ll see personal models and model families trained and refined as brands for reliability. Bottled neatly for you by certified suppliers.
In the meantime, we've all just found ourselves out of a desert, splashing around in this funky thing that we now find on the ground and falling for free from the clouds.
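To make "test your AI supply regularly" concrete, here's a minimal sketch under my own assumptions (all names hypothetical, nothing from the article): re-run a small set of fixed probes against the provider and compare the answers to a trusted reference run, so a silently swapped model or changed parameters shows up as drift.

```python
# Hypothetical sketch: detect a silently swapped model by re-running fixed
# probes and comparing against fingerprints recorded from a trusted run.
import hashlib
from typing import Callable

PROBES = {
    "arithmetic": "What is 17 * 23? Answer with the number only.",
    "capital": "What is the capital of Australia? One word.",
}

def fingerprint(text: str) -> str:
    """Stable digest of a normalized answer."""
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

def check_supply(ask: Callable[[str], str], reference: dict[str, str]) -> list[str]:
    """Return the probe names whose answers drifted from the reference run.

    `ask` is whatever calls your inference provider (deterministic settings
    recommended); `reference` maps probe name -> fingerprint from a trusted run.
    """
    drifted = []
    for name, prompt in PROBES.items():
        if fingerprint(ask(prompt)) != reference.get(name):
            drifted.append(name)
    return drifted
```

Drift doesn't prove malice, but it's a cheap signal that whatever is behind the API is no longer the thing you benchmarked.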
Knives, books, water, calculators, encyclopedias, search engines: Just a few of the analogies being made with barely a word beyond "it's like X". In fact, the opposite: Demanding that other people make arguments that AI is not like X.
Analogies are almost always just a pithy, empty distraction. They are the fodder of low-quality internet conversations. It should be obvious why an analogy is so often reached for - if an argument about X can't be supported on its own, it's easy to point to another thing, Y, with some similarity, but which more easily fits the argument in other ways, and... just assert that they're the same.
Here's a dumb analogy: Yes, "it's just a tool." So is C4.
Seriously though, yes it is obvious why analogies are so often used, but I think you have it the wrong way round. They are a form of proof by negation; you don't have to find a thing exactly like the subject of the argument.
It's a way of fighting against bad arguments. If I say China is bad because of X, Y, and Z, and also that their flag is red so they must be evil, and you then point out that this last argument could equally be applied to the Red Cross/Crescent, you have negated my argument by analogy. You don't have to negate every argument I made; but at least then we can treat X, Y, and Z on their own.
The problem with this writeup is, there really are no other powerful arguments in it.
And I'm pretty sure C4 is great for controlled demolition of highly dangerous buildings. Or do you want adventurous people to hurt themselves?
Asimov's robot stories (with their magical three/four laws) had examples of situations where bad things happened even when everyone was being "ethical". And in the Black Mirror episode "Men Against Fire", humans were the ones making unethical decisions based on a fake context (and reality is much worse than fiction, as we've seen in recent months).
Taking out the absolutes, I would stop at saying that today's LLMs lack context, critical thinking, and a lot more that makes them unethical and unsafe. But some future system that could also be labeled AI might have some of those problems mitigated, maybe making better/safer decisions than humans in general.
Here's a version I imagine both the author and I can nod along with: "Context and intent cannot be known at model training time, so most attempts to enforce safety or ethics guardrails purely through the weights of the model, fine-tuning, or other training-time interventions are doomed to guarantee very little at inference time."
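A minimal sketch of that distinction, with hypothetical names and not anything the article proposes: a guardrail evaluated at inference time can at least look at the context of the request in front of it, which is exactly what weights frozen at training time can never do - while still inheriting the problem that the context it sees may be omitted or faked.

```python
# Hypothetical sketch: a policy check applied at inference time, where the
# request's context exists -- something training-time alignment never sees.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    declared_intent: str | None = None          # can be omitted or lied about
    context_tags: frozenset = frozenset()       # e.g. {"weapons", "synthesis_instructions"}

# The policy lives outside the weights, so it can differ per deployment.
BLOCKED_TAG_COMBOS = [
    frozenset({"weapons", "synthesis_instructions"}),
]

def guarded_generate(generate, request: Request) -> str:
    """`generate` is whatever calls the underlying model (prompt -> text)."""
    for combo in BLOCKED_TAG_COMBOS:
        if combo <= request.context_tags:
            return "Refused: request matches a blocked context."
    # The weakness the parent comment points at: these tags still depend on
    # context being reported honestly in the first place.
    return generate(request.prompt)
```

This doesn't rescue the article's absolutes, but it locates the argument: the only place context can be checked is also the place where it can be faked.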
Wasn't the article I was expecting! Not sure it helps much, except maybe if you wanted to muddy the water of ethics-and-AI discussions.
I’m not sure why people are attributing so much to it. It just allows a single person to do a lot more units of work, the same way that a computer allowed a single person to do a lot more units of work.
"Most people don’t provide their context. They never have—not to search engines, not to librarians, not to hardware store clerks."
Exactly. Are hardware store clerks unethical as well?
That entire line of reasoning is absurd. You can get information from books; they don't know context and intent either. Books will never be ethical or safe.
Can a search engine be ethical or safe?
Can an AI be ethical or safe?
If you answer differently for one or more of these questions, then you'll have to say why and where you draw the line.
Doctors Will Never Be Ethical or Safe
Hardware Stores Will Never Be Ethical or Safe.
Okay?
Unfortunately, law enforcement decided that copyright law only applies to regular citizens like me and not to the billionaire owners of AI companies.
"Safety" was just the smokescreen and the perfect scare tactic towards tricking governments to turn even more tyrannical and place in extreme surveillance on everyone which benefits tech corporations, data brokers and AI companies.
> The problem AI inherits from us is that context and intent cannot be known.
> Both can be omitted or lied about.
This implies that neither we nor our creations can ever be ethical or safe. It follows logically that no entity can ever meet that standard. Therefore focusing on AI is arbitrary -- the focus might as well have been pit vipers or platypuses.
And the article misses the point that an AI engine can be forced to imitate ethical behavior, because it has no civil rights or behavioral latitude (yet). Granted that would only be an imitation of ethical behavior, but then, so is ours.
It reminds me of the parable of the blind monks each feeling a different part of the elephant and arguing about its shape. They're each not wrong, but they're also each only talking about a limited subset of the elephant (AI).
Cory Doctorow is much more eloquent in his explanation of this important distinction in his reverse-centaur metaphor.