Too little too late. OpenAI's shit was nearly worthless for cybersec for what, a year already?

ChatGPT 5.x just tries to deny everything remotely cybersecurity-related - to the point that it would at times rather deny vulnerabilities exist than go poke at them. Unless you get real creative with prompting and basically jailbreak it. And it was this bad BEFORE they started messing around with 5.4 access specifically.

And that was ChatGPT 5.4. A model that, by all metrics and all vibes, doesn't even have a decisive advantage over Opus 4.6 - which just does whatever the fuck you want out of the box.

What I'm afraid of most is that Anthropic is going to snort whatever it is that OpenAI is high on, and lock down Mythos the way OpenAI is locking down everything.

jruz · 2 hours ago | parent
That’s the whole point of this variant of the model, it won’t have those guardrails.
ACCount37 · 2 hours ago | parent
Yes. But "perform a humiliation ritual of KYC to access the actual model instead of the nerfed version of it that's so neurotic about cybersec you have to sink 400 tokens into getting it to a usable baseline" does not inspire any confidence at all.
alephnerd · 1 hour ago | parent
> OpenAI's shit was nearly worthless for cybersec for what, a year already

Most AI-for-cybersecurity companies use a mixture of models, depending on iteration and testing, including OpenAI's.