I hate OpenAI
(OpenAI is going to lose because they are missing what is important)
DeepSeek is number 1 in chat downloads. Could it be because muh Chinese social media is strategically pushing it? Are Stanley cups CCP-owned? No. The more likely story is that OpenAI has severely fumbled comms.
Corporations have communications teams. They strategize launches and push propaganda through "influencers" they maintain secret partnerships with. OpenAI has a good comms team. They have enough competence to time big launches to the same day as their competitors'.
This propaganda is split between two targets - normies (which they so desperately want you to be) and investors (n = 50).
You realize the Operator announcement was on the same day as DeepSeek's R1 drop? If that was planned, it shows incredible competence. A huge reason OpenAI is where it is today is this comms team - arguably more valuable to their goals than most of their researchers. Researchers are replaceable!
But the best way to make a salesman is to give him a good product to sell. And unfortunately, the OpenAI comms team has been given an impossible task - lie.
OpenAI's starting ethos was AI for everyone - spurred on by fear of a Google-dominated world. At some point it switched - to become Sam Altman's quest for actual world domination. When did it change?
I can't tell you when it changed, but I can tell you when I noticed it:
Avocado chair, avocado chair, where art thou? Built on top of a Swedish woman's Google Colab script and a merry band of Discord hackers - stolen by Scam Altman and put behind a paywall.
All in the name of - AI safety!
The very same person cries foul after a Chinese lab gifts the world open research - saving _all_ Americans untold researcher time and FLOPs - and calls them copycats.
After HIDING THE TOKENS THAT YOU PAY FOR!
AND SAYING THAT HE IS HIDING THEM
IN THE NAME OF SAFETY
You have a health question, and you ask the chatbot. You are well aware of the likelihood of a mistake. A simple warning flag would be enough to guide people well. But that's not enough. The model outright refuses you, and reprimands you.
You do not control it. And if you did, it would be dangerous. The AI isn't the problem. AI safety isn't about helping you use it. It's about keeping them safe from you.
*You* are the AI danger.
The problem isn't the open-versus-closed-source dichotomy, and neither is it seeking profit. Why doesn't Anthropic suffer nearly the same amount of bad rep? Anthropic tells you what they're doing. The deal is made clear up front. They are a closed source AI lab. They are making a heavily censored product. You pay for the product if you're okay with that. Simple as. OpenAI should take note.
R1 emits its cute <think> tags, and acts surprised when you point out mistakes in its thoughts. OpenAI, meanwhile, is designing its AI to lie to you. To hide every thought from you. Sam emails a communications team whose payroll costs more than DeepSeek's entire training cluster. He tells them:
"Convince everyone that we're hiding our AI's thoughts, all in the name of safety."
Sam Altman should be honest. He should change his company's name to ClosedAI. He should start dressing like Darth Vader. He should grab the microphone and speak to us directly. He should make it clear
That he is trying to own all of us. And then I'll respect him, and start cheering him on.
If you're trying to win, own it, man.