But if you’re going to bother visualizing the future, it does seem to help to visualize more than one way it could go, instead of concentrating all your strength into one prediction.
So I try not to ask myself “What will happen?” but rather “Is this possibility allowed to happen, or is it prohibited?”
I thought that you were changing your position; instead, you have used this opening to lead back into concentrating all your strength into one prediction.
I think this characterizes a good portion of the recent debate: Some people (me, for instance) keep saying “Outcomes other than FOOM are possible”, and you keep saying, “No, FOOM is possible.” Maybe you mean to address Robin specifically, and I don’t recall any acknowledgement from Robin that FOOM is >5% probable. But in the context of all the posts from other people, it looks as if you keep making arguments for “FOOM is possible” and implying that they prove “FOOM is inevitable”.
A second aspect is that some people (again, e.g., me) keep saying, “The escalation leading up to the first genius-level AI might be on a human time-scale,” and you keep saying, “The escalation must eventually be much faster than human time-scale.” The context makes it look as if this is a disagreement, and as if you are presenting arguments that AIs will eventually self-improve their way out of the human timescale and claiming that those arguments prove FOOM.