Robin, I see a fair amount of evidence that winner-take-all competition is becoming more common as information becomes more important than physical resources.
Whether a movie star cooperates with or helps subjugate the people in central Africa seems to be largely an accidental byproduct of whatever superstitions happen to be popular among movie stars.
Why doesn’t this cause you to share more of Eliezer’s concerns? What probability would you give to humans being part of the winning coalition? You might have a good argument for putting it around 60 to 80 percent, but a 20 percent chance of the universe being tiled by smiley faces seems important enough to worry about.
Eliezer,
This is a good explanation of how easy it would be to overlook risks.
But it doesn’t look like an attempt to evaluate the best possible version of an Oracle AI.
How hard have you tried to get a clear and complete description of how Nick Bostrom imagines an Oracle AI would be designed? Enough to produce a serious Disagreement Case Study?
Would the Oracle AI he imagines use English for its questions and answers, or would it use a language as precise as computer software?
Would he restrict the kinds of questions that can be posed to the Oracle AI?
I can imagine a spectrum of possibilities that range from an ordinary software verification tool to the version of Oracle AI that you’ve been talking about here.
I see lots of trade-offs here that increase some risks at the expense of others, and no obvious way of comparing those risks.