Shall we just have that debate anyway? :D
The big question that I hoped the article might address: Can AI ever be ethical (within the norms of what the average Jo(e) considers ethical), or have we forever poisoned the well?
If the technological and mathematical underpinnings have been created on fundamentally immoral grounds (IP theft, energy / water excesses, etc.), what would we have to do to produce an entirely - or even mostly - ethical AI stack?
Is it even possible, given the dependencies (lithium / Israel / fossil fuels / conflict mining / capitalistic exploitation / any other morally questionable underpinning you might think of), to redo the work to the point that we could "black box" our way to decently functioning LLMs?
Assuming that comes with the caveat of rolling back technological progress, how far back would we have to go? The Bronze Age feels like a step too far, at least by my "average Jo(e)" test above - but what counts as reasonable?
Then - and only then - would it make sense to ask how to make the content generation itself ethical.
It feels like the Nazi medical science issue all over again, except nobody really cares as much about this one. But socially, it feels like an anti-capitalistic uprising is on the horizon, so maybe if that happens, a moral aversion to the state of AI might piggyback onto it?
Not that I want it to. Quite like AI, really. The background immorality radiation of the Earth feels quite high anyway; maybe AI isn't the thing to ruffle our feathers about. But it's certainly an interesting thing to mull as we weep over our non-GM oat milk babyccinos, pitying the state of the world.
(I'm really an upbeat person, honest...)
(Depending on your definitions, the specifics of this amplification could justify taking a mental shortcut and simply treating the tech as harmful in itself. After all, if it is neutral and helpful only under unattainable circumstances, and harmful under real-world conditions, then the distinction is pointless to draw.)
Personally, I believe the technological and mathematical underpinnings of LLMs do not by themselves imply IP theft or harm to the environment and society, but the way the technology is being adopted should raise serious questions for anyone in a position to ask them.