I would ordinarily vote down a post that restated things that most people on LW should already know, but… LW is curiously devoid of discussion on this issue, whether criticism of CEV or proposals of alternatives. And LP’s post hits all the key points very efficiently.
If LW has a single cultural blind spot, it is that LWers claim to be Bayesians, yet routinely analyze potential futures as if the single “most-likely” scenario, hypothesis, or approach accepted as dogma on LessWrong (fast takeoff, Friendly AI, many-worlds, CEV, etc.) had probability 1.
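To make that concrete, here is a minimal sketch in Python. The scenarios, probabilities, and payoffs are made up purely for illustration; the point is only that conditioning on the single most likely scenario (treating it as certain) gives a different answer than marginalizing over the whole distribution.

```python
# Illustrative only: invented scenarios, probabilities, and values.
# Compares a properly marginalized estimate with one that silently
# treats the modal scenario as if it had probability 1.

scenarios = {
    # name: (probability, value of some quantity under that scenario)
    "fast takeoff": (0.30, 100.0),
    "slow takeoff": (0.45,  10.0),
    "no takeoff":   (0.25,   1.0),
}

# Bayesian estimate: expectation over all scenarios.
marginal = sum(p * v for p, v in scenarios.values())

# The "blind spot" estimate: act as if the most likely scenario is certain.
modal_name, (modal_p, modal_v) = max(scenarios.items(), key=lambda kv: kv[1][0])

print(f"marginalized estimate: {marginal:.2f}")
print(f"'{modal_name}' treated as P=1: {modal_v:.2f} (actual P = {modal_p})")
```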
“Devoid”?
http://lesswrong.com/lw/30q/amoral_approaches_to_morality/
http://lesswrong.com/lw/1pi/strong_moral_realism_metaethics_and/
http://lesswrong.com/lw/1oj/complexity_of_value_complexity_of_outcome/
http://lesswrong.com/lw/2b7/hacking_the_cev_for_fun_and_profit/
http://lesswrong.com/r/discussion/lw/44i/another_argument_against_eliezers_metaethics/
http://lesswrong.com/lw/3fn/two_questions_about_cev_that_worry_me/
Not to mention various comments elsewhere...
Eliezer has stated that he will not give his probability for the successful creation of Friendly AI, presumably because people would get confused about why working desperately towards it is the rational thing to do despite a low probability.
As for CEV ‘having a probability of 1’, that doesn’t even make sense. But an awful lot of people have said that CEV as described in Eliezer’s document would be undesirable even assuming the undeveloped parts were developed into more than hand-wavy verbal references.
I dunno, I perceive a lot of criticism of CEV here—if I recall correctly there have been multiple top-level posts expressing skepticism of it. And doesn’t Robin Hanson (among others) disagree with the hard takeoff scenario?
That’s true. (Although notice that not one of those posts has ever gotten the green button.)
CEV does not fit well into my second paragraph, since it is not a prerequisite for anything else, and therefore not a point of dependency in an analysis.
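For contrast, here is what a “point of dependency” means in that kind of analysis, as a minimal sketch with made-up numbers: when a conclusion requires a chain of assumptions, its probability is bounded by the product of theirs, which treating each link as certain hides.

```python
# Illustrative only: invented probabilities for a chain of assumptions,
# treated as independent for simplicity. A conclusion that requires
# every link cannot be more probable than the product of the links.

chain = {
    "fast takeoff happens":              0.5,
    "Friendly AI is feasible":           0.5,
    "it is built before unfriendly AI":  0.5,
}

joint = 1.0
for assumption, p in chain.items():
    joint *= p

print("each link treated as certain: P = 1.0")
print(f"actual upper bound on the conclusion: P <= {joint}")  # 0.125
```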