My position is that yes, technology can kill us all. But a lack of technology can also get us all killed.
The ability to create intelligence is one we will need in the long run. I don’t think ultra-safe self-improving AI is a coherent concept, but I understand my thinking might be wrong. My problem is: how can we move on from following that path if it is a dead end?
Here it is again: there is no requirement that FAI be “ultra safe”, and that it is unacceptable otherwise. That is a strawman. The requirement is that there be any chance at all that the outcome is good (preferably a greater chance). Then there is a separate conjecture: that to have any chance at all, AI needs to be deeply understood.
If you think that being careful is unnecessary, that an ad-hoc approach is ready to be used to positive effect, you are not disputing the need for Friendliness in AGI. You are disputing the conjecture that Friendliness requires care. This is not a normative question; it is a factual question.
The normative question is whether to think about the consequences of your actions, which is largely decided against, or rather dismissed as trivial, by far too many people who think they are working on AGI.
I got the impression from “do the impossible” that Eliezer was going for definitely safe AI, and that “might be safe” was not good enough. Edit: Oh, and the sequence on fun theory suggested that scenarios where humanity just survived were not good enough either.
I think we are so far away from having the right intellectual framework for creating AI, or even for thinking about its likely impact on the future, that the ad-hoc approach might be valuable for pushing us in the right direction or for telling us what the important structure in the human brain is going to look like.
I got the impression from “do the impossible” that Eliezer was going for definitely safe AI, and that “might be safe” was not good enough.
The hypothesis here is that if you are unsure whether an AGI is safe, it’s not, and when you are sure it is, it’s still probably not. Therefore, to have any chance of success, you have to be sure that you understand how success is achieved. This is a question of human bias, not of the actual probability of success. See also: Possibility, Antiprediction.
The hypothesis here is that if you are unsure whether an AGI is safe, it’s not, and when you are sure it is, it’s still probably not.
I really didn’t get that impression… Why worry about whether the AI will separate humanity if you think it might fail anyway? Surely you’d spend more time making sure it doesn’t fail...
Huh. Looking back, it actually seems that I already wrote up the complete reply to this post, and it is “Raised in Technophilia” (Sep 17 ’08).
the ad-hoc approach might be valuable for pushing us in the right direction
I also thought that ad-hoc work brings insight, but after learning more I changed my mind.