Very thoughtful reply, thanks. To each paragraph:
If all governments and tech leaders could be effectively convinced that x-risk is not the only possible outcome, but s-risk as well, then things would AT LEAST start changing. The biggest problem is that I think 99% of people are badly deluded about this, leaders and masses alike, and the biggest factor may well be self-delusion to preserve sanity and happiness. Is it a monumental task? Sure. But it's far more feasible than on-time alignment, especially given transformers and the scaling hypothesis. Also, things needn't be done in one big step. If we could somehow restrict the production and availability of compute, that might well buy us decades (given that the scaling hypothesis is what drives the high probabilities on short timelines). Decades that could be used to implement other measures. It would be a gradual step.
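To illustrate why compute restrictions could plausibly buy decades (a back-of-envelope sketch in Python; the growth rates and the "dangerous" compute threshold below are purely illustrative assumptions, not forecasts): under the scaling hypothesis the binding input is training compute, so capping how fast it can grow stretches timelines roughly in proportion to the ratio of the log growth rates.

```python
import math

# Illustrative back-of-envelope: how long until frontier training compute
# grows by some large factor, under different annual growth rates.
# All numbers are assumptions for illustration only, not measured trends.

TARGET_MULTIPLIER = 1e6  # hypothetical gap between today's compute and "dangerous" compute

def years_to_target(annual_growth: float, target: float = TARGET_MULTIPLIER) -> float:
    """Years needed for compute to grow by `target` at a fixed yearly growth factor."""
    return math.log(target) / math.log(annual_growth)

unrestricted = years_to_target(4.0)   # assume ~4x per year if scaling continues unchecked
restricted   = years_to_target(1.3)   # assume regulation caps growth to ~1.3x per year

print(f"Unrestricted: ~{unrestricted:.0f} years")
print(f"Restricted:   ~{restricted:.0f} years")
print(f"Delay bought: ~{restricted - unrestricted:.0f} years")
```

With those placeholder numbers the cap buys roughly four extra decades; the exact figure obviously depends entirely on the assumed rates.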
If you claim to be mildly skeptical of s-risk and then give three quite plausible s-risk outcomes, you've just contradicted yourself. Most people worry only about x-risk because no one is talking about s-risk (social bias, in other words) and because it helps them preserve their sanity. But we must start recognizing that physical suffering is far worse than any anxiety about it. If that realization spreads, we might have a chance to do something.
You're right, a self-rejuvenating 80-year-old might not feel tired of life; it wasn't the best example. But maybe an 800-year-old would. I can't see any path to sane immortality without partial memory deletion, and with it partial personality deletion, so you'd still die in a way. Anyway, this is all highly speculative, and there's no way of knowing which side is right. That's why I vehemently agree when you say:
“the only reason to be a luddite is fear that opening Pandora’s box will bring about something worse than our current vale of tears”
Precisely. Would you spin a wheel of fortune with a 50% chance of heaven and a 50% chance of hell? Or even at 70-30, or 90-10? In my opinion, only someone living in a rare modern oasis of comfort, far removed from the worst possibilities on the hedonic scale, would ever take that chance.
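To make the asymmetry concrete (a minimal expected-value sketch; the utility magnitudes are illustrative assumptions, not anything from the discussion above):

```latex
% Expected value of the wheel with win probability p:
\[
  \mathrm{EV}(p) = p\,U_{\mathrm{heaven}} + (1-p)\,U_{\mathrm{hell}}
\]
% With illustrative magnitudes U_heaven = +1 and U_hell = -1000:
\[
  \mathrm{EV}(0.9) = 0.9 \cdot 1 + 0.1 \cdot (-1000) = -99.1
\]
% Break-even requires p > 1000/1001 \approx 0.999, far beyond 90-10 odds.
```

In other words, if hell is orders of magnitude worse than heaven is good, the gamble only breaks even at odds far beyond 90-10.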
Agreed that suffering is not the purpose of the world. It's merely a tool for survival, but a very outdated and cruel tool in the context of modern human life, and the more advanced the technology, the more easily it can be hijacked (e.g., for torture). That's how I see technology getting out of control and bringing about hell on Earth.
That's what makes me a luddite, at least up to a point.
I hope so. Still, it's a very grim prospect, so we should do everything to avoid it, i.e., do everything not to develop AGI and nanotech before they are provably safe. The s-risk outcome is just too grim, and its probability doesn't seem small.
I think it's way too late to stop. Now that the world knows what transformers can do, how are you going to stop it worldwide? Shut down every data center you can? Have trusted cyber-regulators overseeing every program that runs in every remaining data center?
I always promote https://metaethical.ai by @june-ku as the best concrete proposal we have. Understand it and promote it and you might be doing some good. :-)
Basically, we need a lot more Eliezers. We need a lot more AI-risk advocates who tell it like it is, who make us shit our pants, who won't soften their message to appear reasonable, and who are actually realistic about timelines and risk. As long as most popular advocates stick with the approach of “don't panic, don't be afraid, don't worry, it's doable, if only we stay positive and believe in ourselves,” there is no hope. As long as people keep lying to themselves to avoid panic, there is no hope. Panic can be treated in many ways, in the most extreme cases even with benzodiazepines. Disaster, once it settles in, has no treatment, and it's a lot worse.
It would take a vast proportion of the world to shit their pants and form international organizations for regulation. But as long as you can restrict global production of and access to supercomputers, you can gain a few decades, and those decades will allow more measures to be tried.
Formalizing ethics seems like a bad approach. We need concrete priorities, not values, and value learning is dangerous. In any case, as with most other alignment approaches, you'd need centuries for it. What's the probability you'll get there in one or two decades? I'd say less than 1%. Whereas my approach buys time, time that can be used to try a multitude of approaches, yours included.