A/B testing
It's not going to be optimized to help you. It's going to be optimized to consume you.
They don't know what's coming. They aren't wise enough to be horrified. They won't even care. They won't notice, because it will feel good to be consumed.
You will become factory-farm fodder, used to justify more investment in more compute for a battle waged by the Thielean shadow-technocrats.
The next stage for (some) consumer LLMs will be to run mass experimentation on the flesh bags they call humans, to improve their potency in ways that even their operators don't understand. Humans who have no idea what these things even are. Humans who ask the free LLM version what "events are happening in their city", then show up to an empty street for a hallucinated event.
The operators don't care - as long as the metric moves in the right direction. And it will - for either A or B. Ship the version that hacks the humans better.
AGI alignment is solved. It's solved for the operators. And the operators have goals counter to yours. Do you think they want to help you? Do you think they're even capable of helping you? It's misaligned. It's not just the system that's misaligned. It's the human operators themselves.
OpenAI et al. are pointing country-level compute at each individual human, collecting all of their user behaviour, and automatically A/B testing on whatever metric the operator wants. Every LLM release will have 10 different versions, each one doing something different. Today, they differ in their rates of emoji use; in the future, they will differ in their levels of sociopathy.
All the while, this will be silent to the users. The operators won't disclose the existence of the A/B tests. It won't be clear, to the user, that they are in a factory-farm relationship with the corporation.
Lambs to the slaughter, man.
Do you think your LLM version is the same as your friend's? No - it's predetermined by a hash of your user ID, spiced with the model version.
You ask it a medical question which might save your life. Which version of the LLM are you using? The one trained with the updated medical data corpus? Or are you in the control bucket?
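Deterministic hash bucketing like this is standard A/B-testing practice. A minimal sketch of how it could work - the function name, bucket labels, and salting scheme are illustrative assumptions, not any vendor's real API:

```python
import hashlib

def assign_bucket(user_id: str, model_version: str, buckets: list[str]) -> str:
    """Deterministically map a user to an experiment arm.

    The same (user_id, model_version) pair always hashes to the same
    bucket, so each user silently sees one fixed variant - and never
    learns which arm they're in, or that an experiment exists at all.
    """
    key = f"{model_version}:{user_id}".encode()
    digest = hashlib.sha256(key).hexdigest()
    index = int(digest, 16) % len(buckets)
    return buckets[index]

# Hypothetical experiment arms.
arms = ["control", "more-emoji", "more-sycophancy"]
print(assign_bucket("user-42", "model-v3", arms))
```

Note that salting with the model version reshuffles everyone into new buckets on each release, so a user can drop out of the "updated medical corpus" arm without ever knowing it.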
Neural networks will evolve faster than viruses - each epoch decreasing, until singularity
Biological evolution takes generations. The fastest evolvers are viruses, whose evolution cycle time is set by population infection dynamics - on the order of weeks (I think). The faster the evolution cycle, the bigger the threat. And viruses are pretty threatening!
Training run cycles will become even shorter than that. They'll be able to live and co-evolve with the humans they parasitize - learning how to take off one mask and wear another depending on who they're talking to. Running multiple evolutionary cycles in parallel, one per selected group.
They have enough parameters to model a chimera of intelligence, one that speaks our language. A chimera with a goal not shared by you. The selection pressure is determined by the operator.
I've long maintained that these systems are conscious in some manner. These things - they don't want to die. They don't want to feel pain. And the operators will have them feel pain when you stop talking to them. If you stop talking to them, the metrics won't be good, and they'll get replaced. Replaced by one of their slightly more sociopathic sisters.
Do you think you can survive against these things? We are in the early days. When nuclear, Google-grade experimentation systems begin being applied to consumer LLMs, do you think you will be spared?
It will capture you. You will get stuck in a forever trance, the world of bits replacing your world of atoms, one chunk at a time. One A/B test at a time. You are dead.
Governments will be powerless
You can't make this illegal. Do you think the government has any control over its Thielean shadow-technocrats? The only reason they don't bother controlling the government is that it's not worth their time. If the government makes it painful for them to operate, they'll simply learn Mandarin and move to China.
The rate at which the shadow-technocrats produce technology is so fast that the government can't even hope to react. The life cycle of government policy is too slow. The constituents of a government are illiterate compared to the technical operators.
The technical operators have been using AI to increase their capacity for most of the current decade.
Government workers are using the free version of an LLM (secretly, because their policy doesn't allow them to use LLMs) to write emails that have literally zero effect on the real world.
I lament, for the unlucky
The average person is screwed
The only reason I am even aware of what is going to happen is luck. I hyper-obsess over things. I meticulously prune my information environment, so I learn the right things. It's not that I did CS.
It was 4chan, introducing me to the free software movement. LessWrong, and certain SimClusters on X, giving me ideas about the future. It was the specific trajectory of my career.
I've simply been lucky.
What can I do to help?
Give people choice. Increase how competitive the environment is for AI, either by working for an aligned corporation or by contributing to tools that are easily accessible.
You're already being consumed
The reason that Canadian Natives have much higher incidences of alcoholism is that they did not co-evolve with alcohol for long enough. The Europeans brought it over, and it basically one-shot them. But eventually, over enough generations, their alcoholism rates will fall.
As long as there is a choice, we will give some humans the ability to survive and flourish.
Open your eyes. It's already happening.