“In IEEE Spectrum’s sad little attempt at Singularity coverage, one bright spot is Paul Wallich’s “Who’s Who In The Singularity”,...”
Brightness here being a relative quality… I am labeled green, meaning “true believer, thinks it will happen within 30 years.” Yet I am quoted (correctly) as saying “I would… assign less than a 50% probability to superintelligence being developed by 2033.” (I also don’t endorse “once the singularity comes near, we will all be kicking ourselves for not having brought it about sooner”, even though they attribute this to me as my “central argument”.)
Re Oracle AI, I’m not sure how much disagreement there actually is between Eliezer and me. My position has not been that Oracle AI is definitely the way to go. Rather, my position is something like “this seems to have at least something going for it; I have not yet been convinced by the arguments I’ve heard against it; it deserves some further consideration”. (The basic rationale is this: While I agree that a utility function that is maximized by providing maximally correct and informative answers to our questions is clearly unworkable (since this could lead the SI to transform all of Earth into more computational hardware so as to better calculate the answer), it might turn out to be substantially easier to specify the constraints needed to avoid such catastrophic side-effects of an Oracle AI than it is to solve the Friendliness problem in its general form—I’m not at all sure it is easier, but I haven’t yet been persuaded it is not.)
Re the disagreement between Robin and Eliezer on the singularity: They’ve discussed this many times, both here and on other mailing lists, but the discussion always seems to end prematurely. I think this would make for a great disagreement case study—the topic is important, both are disagreement-savvy, both know and respect one another, and both have some subject-matter expertise. I would like them to try, just once, to get to the bottom of the issue and continue the discussion until they either cease disagreeing or at least agree exactly on what they disagree about, why, and how each person justifies the persistent disagreement.