That isn’t what you need to show. You need to show that the semantics have no ontological implications, that they say nothing about the territory.
Actually, what I need to show is that the semantics say nothing extra about the territory that is meaningful. My argument is that the predictions are a canonical representation of the belief, so it’s fine if the semantics say things about the territory that the predictions can’t, as long as everything they say that does not affect the predictions is meaningless. At least, meaningless in the territory.
The semantics of the theory of gravity say that the force which pulls objects together over long ranges, based on their masses, is called “gravity”. If you call that force “travigy” instead, it makes no difference to the predictions. That’s because the name of the force is a property of the map, not the territory: if it were meaningful in the territory, it would have had an impact on the predictions.
And I claim that the “center of the universe” is similar—it has no meaning in the territory. The universe has no “center”—you can think of “center of mass” or “center of bounding volume” of a group of objects, but there is no single point you can naturally call “the center”. There can be good or bad choices for the center, but not right or wrong choices—the center is a property of the map, not the territory.
If it mattered at all in the territory, it would somehow have affected the predictions.
Petrov’s choice was not about dismissing warnings; it was about picking which side to err on. Wrongly alerting his superiors could start a nuclear war, and wrongly not alerting them would put his country at a disadvantage in the nuclear war that had just started. I’m not saying he ran all the numbers, used Bayes’s law to figure out the probability that an actual nuclear attack was underway, assigned utilities to all four cases, and performed the final decision-theory calculation; but his reasoning did take into account the possibility of error in both directions. Though… it does seem like his intuition gave much more weight to the utilities than to the probabilities.
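To make the shape of that tradeoff concrete, here is a toy expected-utility sketch in Python. Every probability and utility in it is a hypothetical number picked purely for illustration, not an estimate of what Petrov actually believed; the point is only that an asymmetry in the utilities can dominate the decision even when the probability of a real attack is far from negligible.

```python
# Toy expected-utility sketch of Petrov's dilemma.
# All numbers are made up for illustration only.

p_real_attack = 0.3  # assumed probability that the warning reflects a real launch

# Utilities (arbitrary units, higher is better) for the four outcomes:
u = {
    ("report", True):  -1_000,    # report a real attack: retaliation in a war already underway
    ("report", False): -100_000,  # report a false alarm: risk starting a nuclear war from nothing
    ("ignore", True):  -10_000,   # ignore a real attack: country disadvantaged in the war
    ("ignore", False): 0,         # ignore a false alarm: nothing happens
}

def expected_utility(action: str) -> float:
    """Expected utility of an action, averaging over whether the attack is real."""
    return (p_real_attack * u[(action, True)]
            + (1 - p_real_attack) * u[(action, False)])

for action in ("report", "ignore"):
    print(action, expected_utility(action))
# With these made-up numbers, "ignore" wins by a wide margin even though a real
# attack gets a 30% probability: the utilities are doing most of the work.
```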
So, if we take that rule for deciding what to do with an AGI, it won’t be just “ignore everything the instruments are saying” but “weigh the dangers of UFAI against the missed opportunities from not releasing it”.
Which means the UFAI only needs to convince such a gatekeeper that releasing it is the only way to prevent a catastrophe, without having to convince the gatekeeper that the probability of that catastrophe is high or that the probability of the AI being unfriendly is low.
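A toy version of that argument, again with numbers that are entirely made up: suppose the gatekeeper follows the weighing rule, assigns a high probability to the AI being unfriendly and only a modest probability to the catastrophe it describes, but accepts the AI’s claims that the catastrophe would be even worse than releasing an unfriendly AI and that release is the only way to prevent it. Under those assumed inputs the rule already says “release”:

```python
# Toy model of the gatekeeper's "weigh the dangers" rule.
# All probabilities and utilities are made-up illustrative numbers.

p_unfriendly = 0.9    # gatekeeper thinks the AI is probably unfriendly
p_catastrophe = 0.2   # gatekeeper thinks the claimed catastrophe is unlikely

U_UFAI_RELEASED = -1_000_000   # outcome: an unfriendly AI is let loose
U_CATASTROPHE = -10_000_000    # outcome: the catastrophe the AI says only it can stop
U_OK = 0                       # outcome: nothing bad happens

def eu_release() -> float:
    # If released, disaster happens iff the AI is unfriendly
    # (assume a friendly AI would also avert the catastrophe).
    return p_unfriendly * U_UFAI_RELEASED + (1 - p_unfriendly) * U_OK

def eu_keep_boxed() -> float:
    # If kept boxed, disaster happens iff the catastrophe is real and,
    # as the AI claims, releasing it was the only way to prevent it.
    return p_catastrophe * U_CATASTROPHE + (1 - p_catastrophe) * U_OK

print("release:   ", eu_release())     # -900,000
print("keep boxed:", eu_keep_boxed())  # -2,000,000
# The AI never had to argue that the catastrophe is probable or that it is
# probably friendly; it only had to make the boxed branch look worse.
```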