If LW has a single cultural blind spot, it is that LWers claim to be Bayesians, yet routinely analyze potential futures as if the single "most likely" scenario, hypothesis, or approach accepted as dogma on LessWrong (fast takeoff, Friendly AI, many-worlds, CEV, etc.) had probability 1.
Eliezer has stated that he will not give his probability for the successful creation of Friendly AI, presumably because people would get confused about why working desperately towards it is the rational thing to do despite a low probability.

As for CEV 'having a probability of 1', that doesn't even make sense. But an awful lot of people have said that CEV as described in Eliezer's document would be undesirable even assuming the undeveloped parts were made into more than hand-wavy verbal references.