This is the old “choose a number between 1 and a googolplex with uniform probability” argument. Given the prior information, even though the probability of any particular number coming up is very low, it is not surprising that *some* number came up. Indeed:
P(any specific outcome | ¬sim) = very low
P(involved in AI | sim) = high
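The argument above can be sketched as a toy Bayesian update. All the numbers here are illustrative assumptions, not figures from the original discussion; the point is only that a specific outcome can be astronomically unlikely while *some* outcome is certain, and that a large likelihood ratio can dominate a small prior.

```python
from fractions import Fraction

# A huge space of equally likely outcomes (stand-in for the googolplex example).
N = 10**100

# Under a uniform prior, any *specific* number is astronomically unlikely...
p_specific = Fraction(1, N)
# ...but the probability that *some* number comes up is exactly 1.
p_some = N * p_specific
assert p_some == 1

# Toy likelihood comparison (hypothetical numbers, chosen for illustration):
p_obs_given_not_sim = 1e-6   # P(this observation | ¬sim): very low
p_obs_given_sim = 0.5        # P(involved in AI | sim): high
prior_sim = 0.01             # hypothetical prior on being in a simulation

# Bayes' rule: posterior = P(obs|sim)·P(sim) / P(obs)
posterior_sim = (p_obs_given_sim * prior_sim) / (
    p_obs_given_sim * prior_sim + p_obs_given_not_sim * (1 - prior_sim)
)
print(round(posterior_sim, 4))
```

Even with a 1% prior, the mismatch between the two likelihoods pushes the posterior close to 1; the conclusion is only as good as the assumed numbers, which is exactly where the disagreement lies.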
This is the part I find less convincing. I see no reason to expend such an effort simulating entire, inefficient minds in order to investigate anything.