Well, that message only works if it actually produces an UFAI within the required timespan, and if the other Oracle would have its message not read. There are problems, but the probability is not too high, initially (though this depends on the number of significant figures in its message).
Why does it need to produce an UFAI, and why does it matter whether there is another oracle whose message may or may not be read? The argument is that if there is a Convincing Argument that would make us reward all oracles giving it, it is incentivized to produce it. (Rewarding the oracle means running the oracle’s predictor source code again to find out what it predicted, then telling the oracle that’s what the world looks like.)
Well, that message only works if it actually produces an UFAI within the required timespan, and if the other Oracle would have its message not read. There are problems, but the probability is not too high, initially (though this depends on the number of significant figures in its message).
Why does it need to produce an UFAI, and why does it matter whether there is another oracle whose message may or may not be read? The argument is that if there is a Convincing Argument that would make us reward all oracles giving it, it is incentivized to produce it. (Rewarding the oracle means running the oracle’s predictor source code again to find out what it predicted, then telling the oracle that’s what the world looks like.)