“In IEEE Spectrum’s sad little attempt at Singularity coverage, one bright spot is Paul Wallich’s “Who’s Who In The Singularity”,...”
Brightness here being a relative quality… I am labeled green, meaning “true believer, thinks it will happen within 30 years.” Yet I am quoted (correctly) as saying “I would… assign less than a 50% probability to superintelligence being developed by 2033.” (I also don’t endorse “once the singularity comes near, we will all be kicking ourselves for not having brought it about sooner”, even though they attribute this to me as my “central argument”.)
Re Oracle AI, I’m not sure how much disagreement there is between Eliezer and me. My position has not been that Oracle AI is definitely the way to go. Rather, my position is something like “this seems to have at least something going for it; I have not yet been convinced by the arguments I’ve heard against it; it deserves some further consideration”. (The basic rationale is this: while I agree that a utility function that is maximized by providing maximally correct and informative answers to our questions is clearly unworkable (since it could lead the SI to transform all of Earth into more computational hardware so as to better calculate the answer), it might turn out to be substantially easier to specify the constraints needed to avoid such catastrophic side-effects of an Oracle AI than it is to solve the Friendliness problem in its general form. I’m not at all sure it is easier, but I haven’t yet been persuaded it is not.)
Re the disagreement between Robin and Eliezer on the singularity: they’ve discussed this many times, both here and on other mailing lists, but the discussion always seems to end prematurely. I think it would make for a great disagreement case study: the topic is important, both are disagreement-savvy, both know and respect one another, and both have some subject-matter expertise. I would like to see them try, just once, to get to the bottom of the issue, and to continue the discussion until they either cease disagreeing or at least agree exactly on what they disagree about, why they disagree, and how each of them justifies the persistent disagreement.
You are looking at the wreckage of an abandoned book project. We got bogged down & other priorities came up. Instead of writing the book, we decided to just publish a working outline and call it a day.
The result is not particularly optimized for tech executives or policymakers — it’s not really optimized for anybody, unfortunately.
The propositions all *aspire* to being true, although some of them may not be particularly relevant or applicable in certain scenarios. Still, there could be value in working out sensible things to say to cover quite a wide range of scenarios, partly because we don’t know which scenario will happen (and there is disagreement over the probabilities), but partly also because this wider structure — including the parts that don’t directly pertain to the scenario that actually plays out — might form a useful intellectual scaffolding, which could slightly constrain and inform people’s thinking about the more modal scenarios.
I think it’s unclear how well reasoning by analogy works in this area. Or rather: I guess it works poorly, but reasoning deductively from first principles (at SL4, or SL15, or whatever) might be equally or even more error-prone. So I’ve got some patience for both approaches, hoping the combo has a better chance of avoiding fatal error than either the softheaded or the hardheaded approach has on its own.