I agree with many of Bill’s comments. I too am more concerned about a “1% machine” than a technical accident that destroys civilization. A “1% machine” seems a lot more likely. This is largely a “values” issue. Some would argue that a “0%” outcome (human extinction) would be exceptionally bad, whereas a “1%” outcome would still count as “OK”, and would thus be acceptable according to the “maxipok” principle.
Just to clarify—by “1% machine” do you mean a machine which serves (the most powerful) 1% of humanity?
There’s definitely a values issue as to how undesirable such an outcome would be compared to human extinction. I think there’s also substantial disagreement between Bill & Luke about the relative probabilities of those outcomes though.
(As we’ve seen from the Hanson/Yudkowsky foom debate, drilling down to find the root cause of that kind of disagreement is really hard).
Yes, that’s right.