This is in fact Eliezer’s view. An aligned AGI is a very small target to hit in the space of possible minds. If we get it wrong, we all die, so we only get one go at it. At present we do not even know how to do it at all, never mind get it right on the first try.
Yeah, I have to say many of my opinions on AI come from Eliezer. I have read much of his work and compared it to that of the other experts I have read and talked with, and I have to say, he seems to understand the problem very well.
I agree, an aligned AGI seems like a very small island in the sea of possibility. If we had multiple tries at getting it right (for example, with the AGI in a perfectly secure simulation), I think we would have a chance. But with effectively only one try, the probability of success seems astronomically low.