I didn’t follow everything, but does this attempt to address self-fulfilling prophecies? Assume the oracle has a good track record and releases its information publicly. If I ask it “What are the chances Russia and the US will engage in nuclear war in the next 6 months?”, answers of “0.001” and “0.8” are probably both accurate.
The self-fulfillment issue is addressed by the v_E term: the AI acts as if its predictions were never read.