I don’t think the general idea is wrong. And it’s easy to generalize (to, for instance, engineering of new viruses)
Lootboxes, clickbait, sexualization, sugar, drugs, etc. are superstimuli. They sit at maxima, which means you can't really compete with them or create healthier alternatives that do better.
Since AIs optimize, they're likely to discover these dangerous maxima. And if there is one defense against Moloch, it's a lack of information: atomic weapons are only dangerous because we know how to make them, and lootboxes only harm gaming because these strategies of exploitation are known.
It's likely that AIs can find drugs which feel so good that people will destroy themselves just for a second dose: something far more addictive than anything that currently exists. Outside of drugs, too, AIs can find extremely effective strategies with terrible consequences, and both AIs and humans gravitate toward the most effective strategies, even when everyone loses in the process.
We have fought against dishonesty and deception for thousands of years, warned against alcohol, gambling and hedonism, and used strict social norms to guard against their dangers. Now we're discovering much worse things while simultaneously relaxing those norms, leading to degeneracy and weak-willed people who can't resist dangerous temptations (and, as we will soon see, religious people had a point about the dangers of indulgence).
You convince me of the outcome, but not of the comparative capacity:
Drug addictiveness has an upper limit: the percentage of people who become addicted after taking it once, and the percentage who successfully quit. These cap at 100% and 0% respectively, and fentanyl probably isn't far off those caps.
Even without AI, opioids more addictive than fentanyl will probably be discovered at some point. How much additional capacity for creating addictiveness does AI really add?