In particular, imagine an AI elsewhere in the universe that is aware of this experiment. Such an AI may create many new embeddings of these observations into the universe, with the goal of “hijacking” the resulting observation process and controlling its output [...] an AI motivated to do so could run many simulations of you up to the current moment and then modify their future experiences arbitrarily, thereby controlling your (inductive) expectations.
How could such an AI exist? One possibility is interference from an alternative Everett branch in which a singularity went badly.
I will have to tweak the questions I am asking academics about risks from AI. Instead of asking, “do you think risks from AI are to be taken seriously?”, I should ask, “do you think that possible interference from alternative Everett branches in which a singularity went badly could enable an unfriendly AI to take over the universe?”. I might increase the odds of being taken seriously that way... but I’ll have to think the matter over first.