The reason why nobody in this community has successfully named a ‘pivotal weak act’ where you do something weak enough with an AGI to be passively safe, but powerful enough to prevent any other AGI from destroying the world a year later—and yet also we can’t just go do that right now and need to wait on AI—is that nothing like that exists.
Only a sith deals in absolutes.
There’s always unlocking cognitive resources through meaning-making and highly specific collaborative network distribution.
I’m not talking about “improving public epistemology” on Twitter with “scientifically literate arguments.” That’s not how people work. Human bias cannot be reasoned away with factual education. It takes something more akin to a religious experience. Fighting fire with fire, as they say. We’re very predictable, so it’s probably not as hard as it sounds. For an AGI, this might be as simple as flicking a couple of untraceable and blameless memetic dominoes. People probably wouldn’t even notice it happening. Each one would be precisely manipulated into thinking it was their idea.
Maybe it’s already happening. Spooky. Or maybe one of the 1,000,000,000:1 lethally dangerous misaligned counterparts is. Spookier. Wait, isn’t that what we were already doing to ourselves? Spookiest.
Anyway, my point is that you don’t hear about things like this from your community because your community systemically self-isolates and reinforces the problem by democratizing its own prejudices. Your community even borks its own rules to cite decades-obsolete IQ rationalizations on welcome posts, alienating challenging ideas and getting out of googling it. Imagine if someone relied on 20-year-old AI alignment publications to invalidate you. I bet a lot of them already do. I bet you know exactly what Cassandra syndrome feels like.
Don’t feel too bad; each one of us is a product of our environment by default. We’re just human, but it’s up to us to leave the forest. (Or maybe it’s silent AGI manipulation, who knows?)
The real question is: what are you going to do now that someone has kicked a systemic problem out from under the rug? The future of humanity is at stake here.
It’s going to get weird. It has to.