Thinking this over a bit more, it seems that Predictors being in feedback loops with each other is already the situation today. Each of us has a Predictor in our own brain that we use to make decisions, right? As I mentioned above, we can break a Predictor’s self-feedback loop by conditionalizing its predictions on our decisions, but each Predictor still needs to predict other Predictors, which are in turn trying to predict it.
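To make that concrete, here’s a minimal sketch (the predict() stand-in and its payoffs are entirely made up) of what conditionalizing on our decisions looks like: rather than asking for one prediction that we’d then react to, we ask for one prediction per option, each assuming we’ve already committed to it.

```python
# A toy sketch (all names and payoffs invented) of conditioning a Predictor's
# output on our own decision: we never ask "what will happen?", only
# "what will happen if we commit to doing X?", so the prediction can't
# loop back through our reaction to it.

def predict(world, decision):
    """Stand-in for the Predictor: predicted outcome given that we commit to `decision`."""
    return {"retaliate": -100, "negotiate": 10, "do nothing": 0}[decision]

def decide(world, options):
    # One conditional query per option; pick the decision with the best predicted outcome.
    return max(options, key=lambda d: predict(world, d))

print(decide(world=None, options=["retaliate", "negotiate", "do nothing"]))
# -> negotiate
```

That breaks the loop through ourselves, but it does nothing about the loop through everyone else’s Predictors.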
Is there reason to think that with more powerful Artificial Predictors, the situation would be worse than today?
We do indeed have billions of seriously flawed predictors walking around today, and feedback loops between them are not a negligible problem. Going back to that example, we nearly managed to start WW3 all by ourselves without waiting for artificially intelligent assistance. And it’s easy to come up with half a dozen contemporary examples of entire populations thinking “what we’re doing to them may be bad, but not as bad as what they’d do to us if we let up”.
It’s entirely possible that the answer to the Fermi Paradox is that there’s a devastatingly bad massively multiplayer Mutually Assured Destruction situation waiting along the path of technological development, one in which even a dumb natural predictor can reason “I predict that a few of them are thinking about defecting, in which case I should think about defecting first, but once they realize that they’ll really want to defect, and oh damn I’d better hit that red button right now!” And the next thing you know all the slow biowarfare researchers are killed off by a tailored virus that leaves the fastest researchers alone (to pick an exaggerated trope out of a hat). Artificial Predictors would make such things worse by speeding up the inevitable.
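To see how little intelligence that spiral requires, here’s a toy iteration (all numbers invented) in which symmetric predictors each answer the others’ estimated chance of defecting with a slightly more hair-trigger estimate of their own:

```python
# Toy numbers, purely illustrative: each predictor estimates the others'
# chance of striking first and responds with a slightly hair-trigger policy
# of its own. Iterating the mutual prediction is the feedback loop; it drives
# an initially negligible probability of defection up to certainty.

def my_defection_prob(others_prob):
    # "If they're even a little likely to defect, I'm a bit more likely to."
    return min(1.0, 0.05 + 1.2 * others_prob)

p = 0.01  # everyone starts out almost certainly cooperating
for step in range(12):
    p = my_defection_prob(p)  # every agent reasons the same way about the rest
    print(f"round {step}: P(defect) = {p:.3f}")
# The shared estimate snowballs past any red-button threshold and pins at 1.0.
```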
Even if a situation like that isn’t inevitable with only natural intelligences, Oracle AIs might make one inevitable by reducing the barrier to entry for predictions. When it takes more than a decade of dedicated work to become a natural expert on something, people don’t want to put in that investment to become an expert on evil. If becoming an expert on evil merely requires building an automated Question-Answerer for the purpose of asking it good questions, then succumbing to temptation and asking it an evil question too, proliferation of any technology with evil applications might become much harder to stop. Research and development that is presently guided by market forces, government decisions, and moral considerations would instead proceed in the order of “which new technologies can the computer figure out first”.
And a Predictor asked to predict “What will we do based on your prediction?” is effectively a lobotomized Question-Answerer: we can’t phrase questions for it directly, so we’re stuck with whatever implicit questions (almost certainly including “which new technologies can computers figure out first”) are inherent in that feedback loop.
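To spell out what “implicit question” means here, a minimal sketch (the reaction table is a made-up toy): the only answers such a Predictor can give are the ones that stay true after we hear them, i.e. fixed points of our reaction, and nobody ever chose those as the questions worth asking.

```python
# A minimal sketch of the implicit question: asked "what will we do based on
# your prediction?", the Predictor can only report an answer that survives
# being heard -- a fixed point of our reaction to it. `our_reaction` is a
# made-up toy; the point is that the loop, not us, picks the question.

def our_reaction(prediction):
    """Toy stand-in for what we'd actually do after hearing `prediction`."""
    return {
        "we hold back": "we race ahead",   # reassured by the prediction, we push on anyway
        "we race ahead": "we race ahead",  # told we'll race, we race
    }[prediction]

def self_consistent_predictions(candidates):
    # The only predictions the Predictor can truthfully give are the ones
    # that our reaction to hearing them confirms.
    return [p for p in candidates if our_reaction(p) == p]

print(self_consistent_predictions(["we hold back", "we race ahead"]))
# -> ['we race ahead']
```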