I got the impression from "do the impossible" that Eliezer was going for definitely safe AI, and that "might be safe" was not good enough.
The hypothesis here is that if you are unsure whether AGI is safe, it's not, and when you are sure it is, it's still probably not. Therefore, to have any chance of success, you have to be sure that you understand how that success is achieved. This is a question of human bias, not of the actual probability of success. See also: Possibility, Antiprediction.
I really didn't get that impression… Why worry about whether the AI will separate humanity if you think it might fail anyway? Surely you should spend more time making sure it doesn't fail...
I also thought that ad-hoc approaches bring insight, but after learning more I changed my mind.