JFK was not assassinated: prior probability zero events

A lot of my work involves tweaking the utility or probability of an agent to make it believe—or act as if it believed—impossible or almost impossible events. But we have to be careful about this; an agent that believes the impossible may not be so different from one that doesn’t.
Consider for instance an agent that assigns a prior probability of zero to JFK ever having been assassinated. No matter what evidence you present to it, it will go on disbelieving the “non-zero gunmen theory”.
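To see why no evidence can ever help, here is a minimal sketch (the likelihood numbers are my own, purely illustrative) of Bayes’ rule with a zero prior: whatever the likelihood ratio, multiplying by a prior of exactly zero gives a posterior of exactly zero.

```python
# A minimal sketch of why a prior of exactly zero can never be updated away.
def bayes_update(prior, lik_if_true, lik_if_false):
    """Posterior P(H|E) for a binary hypothesis H after seeing evidence E."""
    p_e = lik_if_true * prior + lik_if_false * (1 - prior)
    return lik_if_true * prior / p_e

p = 0.0  # prior probability that JFK was assassinated
for _ in range(1000):                # a thousand pieces of evidence, each
    p = bayes_update(p, 0.99, 0.01)  # 99:1 in favour of the assassination
print(p)                             # still exactly 0.0
```

Only the relative weights of the agent’s non-assassination hypotheses can ever move; the hypothesis itself stays frozen at zero.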
Initially, the agent will behave very unusually. If it was in charge of JFK’s security in Dallas before the shooting, it would have sent all the secret service agents home, because no assassination could happen. Immediately after the assassination, it would have disbelieved everything. The films would have been faked or misinterpreted; the witnesses, deluded; the dead body of the president, that of a twin or an actor. It would have had huge problems with the aftermath, trying to reject all the evidence of death and seeing a vast conspiracy to hide the truth of JFK’s non-death, one that must include the many other conspiracy theories as false flags, since they all agree with the (to it, impossible) claim that the president was actually assassinated.
But as time went on, the agent’s behaviour would start to become more and more normal. It would realise the conspiracy was incredibly thorough in its faking of the evidence. All avenues it pursued to expose them would come to naught. It would stop expecting people to come forward and confess the joke; it would stop expecting to find radical new evidence overturning the accepted narrative. After a while, it would start to expect the next new piece of evidence to be in favour of the assassination idea, because if a conspiracy has been faking things this well so far, it should continue to do so in the future. Though the agent cannot change its view of the assassination itself, its expectations for observations converge towards the norm.
If it does a really thorough investigation, it might stop believing in a conspiracy at all. At some point, a miracle will start to seem more likely than a perfect but undetectable conspiracy. It is very unlikely that Lee Harvey Oswald shot at JFK, missed, and the president’s head simultaneously exploded from unrelated natural causes; but after a while, such a miraculous explanation will start to become more likely than anything else the agent can consider. This explanation opens up the possibility of miracles in general; but again, if the agent is very thorough, it will fail to find evidence of other miracles, and will probably settle on “an unrepeatable miracle caused JFK’s death in a way that is physically undetectable”.
But then note that such an agent will have a probability distribution over future events that is almost indistinguishable from that of a normal agent that just believes the standard story of JFK being assassinated. The zero prior has been negated, not in theory but in practice.
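A toy model of this convergence (the hypotheses and all numbers here are my own illustrative assumptions): give a “denier” agent a zero prior on the assassination, split between a conspiracy that might slip up on any given piece of evidence and a miracle that reproduces the assassination world exactly. As slip-up-free evidence accumulates, the denier’s posterior flows to the miracle hypothesis, and its prediction for the next observation converges to the normal agent’s.

```python
# Toy model (all numbers are illustrative assumptions): hypotheses are
# A = "JFK was assassinated", C = "a conspiracy is faking the evidence",
# M = "an undetectable miracle mimics an assassination".
likelihood = {   # P(next piece of evidence is assassination-consistent | h)
    "A": 0.99,   # the real world occasionally throws up noisy evidence
    "C": 0.90,   # a conspiracy might slip up on any given piece of evidence
    "M": 0.99,   # the miracle world looks exactly like the assassination world
}

def predictive(belief):
    """P(next observation is assassination-consistent) under current beliefs."""
    return sum(p * likelihood[h] for h, p in belief.items())

def update(belief):
    """Bayes update after one more assassination-consistent observation."""
    post = {h: p * likelihood[h] for h, p in belief.items()}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

normal = {"A": 0.999, "C": 0.0005, "M": 0.0005}
denier = {"A": 0.0, "C": 0.99, "M": 0.01}  # prior zero on the assassination

for step in range(201):
    if step % 50 == 0:
        print(f"step {step}: normal {predictive(normal):.4f}, "
              f"denier {predictive(denier):.4f}")
    normal, denier = update(normal), update(denier)
```

By around step 100 the denier’s posterior is almost entirely on the miracle hypothesis, and its predictive probability matches the normal agent’s to two decimal places, while its probability for the assassination itself remains exactly zero throughout.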
How to do proper probability manipulation
This section is still somewhat of a work in progress.
So the agent believes one false fact about the world, but its expectations are otherwise normal. This can be both desirable and undesirable. The undesirable side is if we try to control the agent forever by giving it a false fact.
To see the desirable side, ask why we would want an agent to believe impossible things in the first place. Well, one example was an Oracle design where the Oracle didn’t believe its output message would ever be read. Here we wanted the Oracle to believe the message wouldn’t be read, but not to believe anything else too weird about the world.
In terms of causality, if X designates the message being read at time t, and B and A are events before and after t respectively, we want P(B|X)≈P(B) (probabilities about current facts in the world shouldn’t change much), while P(A|X)≠P(A) is fine and often expected (the future should be different depending on whether the message is read or not).
In the JFK example, the agent eventually concluded “a miracle happened”. I’ll call this miracle a scrambling point. It’s a kind of breakdown in causality: the futures of two different pasts are merged into one. The two pasts are “JFK was assassinated” and “JFK wasn’t assassinated”, and their common scrambled future is “everything appears as if JFK was assassinated”. The non-assassination belief has shifted the past but not the future.
For the Oracle, we want the reverse: the non-reading belief should shift the future but not the past. However, unlike the JFK assassination, here we can try to build the scrambling point ourselves. That’s why I always talk about messages going down noisy wires, or specific quantum events, or chaotic processes. If the message passes through a truly stochastic event (it doesn’t matter whether there is true randomness or just consequences the agent can’t figure out), we can get what we want, as the sketch below illustrates.
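Here is a minimal sketch of a constructed scrambling point (a toy causal model of my own, not an actual Oracle design): the reading event X is determined entirely by an independent noise event on the wire, so conditioning on non-reading leaves the past fact B untouched while shifting the future fact A, which is exactly the P(B|X)≈P(B), P(A|X)≠P(A) pattern from above.

```python
from itertools import product

# Toy causal model (my own construction): B is a fact about the past,
# E is a noise event on the wire (independent of B), the message is read
# (X = 1) only if the noise doesn't erase it, and the future fact A
# depends strongly on whether the message was read.
P_B = {0: 0.7, 1: 0.3}  # past fact, settled before time t
P_E = {0: 0.5, 1: 0.5}  # erasure event: the "scrambling point"

def p_A(a, x):
    """P(A = a | X = x): the future differs sharply with reading."""
    p1 = 0.9 if x == 1 else 0.1
    return p1 if a == 1 else 1.0 - p1

# Joint distribution over worlds (b, e, a); X is determined by E alone.
joint = {(b, e, a): P_B[b] * P_E[e] * p_A(a, 1 - e)
         for b, e, a in product((0, 1), repeat=3)}

def prob(pred):
    return sum(p for w, p in joint.items() if pred(*w))

def cond(pred, given):
    return prob(lambda *w: pred(*w) and given(*w)) / prob(given)

not_read = lambda b, e, a: e == 1                # X = 0
print(prob(lambda b, e, a: b == 1),
      cond(lambda b, e, a: b == 1, not_read))    # 0.3 vs 0.3: past unshifted
print(prob(lambda b, e, a: a == 1),
      cond(lambda b, e, a: a == 1, not_read))    # 0.5 vs 0.1: future shifted
```

The construction stands or falls on that independence: if the agent could trace the noise back to the past, conditioning on non-reading would shift B as well.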
The Oracle idea will go wrong if the Oracle concludes that non-reading must imply something different about the past (maybe it can see through the chaos in ways we thought it couldn’t), just as the JFK assassination denier will continue to be crazy if it can’t find a route to reach “everything appears as if JFK was assassinated”.
But there is a break in the symmetry: the JFK assassination denier will eventually reach that point as long as the world is complex and stochastic enough, while the Oracle requires that the future probabilities be the same in all (realistic) past universes.
Now, once the Oracle’s message has actually been read, the Oracle will find itself in the same situation as the other agent: believing an impossible thing. For Oracles, we can simply reset them. Other agents might have to behave more like the JFK assassination disbeliever, though if we’re careful, we can quantify things more precisely, as I attempted to do here.