(EDIT: See below.) I’m afraid that I am now confused. I’m not clear on what you mean by “these traits”, so I don’t know what you think I am being confident about. You seem to think I’m arguing that AIs will converge on a safe design, and I don’t remember saying anything remotely resembling that.
EDIT: I think I figured it out on the second or third attempt. I’m not 100% committed to the proposition that if we make an AI and know how we did so, we can definitely make sure it’s fun and friendly, as opposed to fundamentally uncontrollable and unknowable. However it seems virtually certain to me that we will figure out a significant amount about designing AIs to do what we want in the process of developing them. People who subscribe to various “FOOM” theories about AI coming out of nowhere will probably disagree with this, as is their right, but I don’t find any of those theories plausible.
I also hope I didn’t give the impression that I thought it was meaningfully possible to create a God-like AI without understanding how to make AI. It’s conceivable, in that such a creation story is not a logical contradiction like a square circle or a colourless green dream sleeping furiously, but that is all. I think it is actually staggeringly unlikely that we will make an AI without either knowing how to make an AI, or knowing how to upload people who can then make an AI and tell us how they did it.
However it seems virtually certain to me that we will figure out a significant amount about designing AIs to do what we want in the process of developing them.
Significant is not the same as sufficient. How low do you think the probability of negative AI outcomes is, and what are your reasons for being confident in that estimate?