Moreover, even if these things don’t work that way and we get a slow takeoff, that doesn’t necessarily save humanity. It just means it will take a little longer for AI to become the dominant form of intelligence on the planet. That still sets a deadline for adequately solving alignment.
If a slow takeoff is all that’s possible, doesn’t that open up other options for saving humanity besides solving alignment?
I imagine far more humans will agree p(doom) is high if they see AI isn’t aligned and is growing into the dominant form of intelligence that holds power. In a slow takeoff, people should be able to recognize this is happening and pursue non-alignment-based solutions (like bombing compute infrastructure).