Here’s an event that would change my p(doom) substantially:
Someone comes up with an alignment method that looks like it would apply to superintelligent entities. They get extra points for trying it and finding that it works, and extra points for society coming up with a way to enforce that only entities that follow the method will be created.
So far none of the proposed alignment methods seem to stand up to a superintelligent AI that doesn’t want to obey them. They don’t even stand up to a few minutes of merely human thought. But it’s not obviously impossible, and lots of smart people are working on it.
In the non-doom case, I think one of the following will be the reason:
—Civilization ceases to progress, probably because of a disaster.
—The governments of the world ban AI progress.
—Superhuman AI turns out to be much harder than it looks, and not economically viable.
—The happy circumstance described at the top: a working, enforceable alignment method, giving us the marvelous benefits of superintelligence without the omnicidal drawbacks.