So maybe the error here is that humans can’t really hold thousands of hypotheses in their head. For example, if you contrast the simulation argument against “known physics is all there is”, you can seemingly falsify the “known physics” view because certain features of the universe appear impossible under known physics, or have no apparent underlying reason, while the simulation argument can explain them. (The speed of light is explainable if the universe is made of discrete simulation cells that must finish computing by a deadline, and certain quantum entanglement effects could happen if the universe can write to the same memory address in one step.)
But there are thousands of other explanations that likely fit the same data, and the simulation argument itself isn’t falsifiable. It’s just the one available “at hand” to a tech worker who has worked on related software.
EDIT: I think I misunderstood. Just to confirm, did you mean this removes the point of bothering with a base rate, or did you mean it helps explain why people are ending up at preposterously far distances from even a relatively generous base rate estimate?
I have placed many forecasts on things where I am incapable of holding all the possible outcomes in my head. In fact, that is extremely common across a variety of domains. In replication markets, for example, I have little comprehension of the indefinite number of theories that could in principle be advanced about what is being tested in a paper. That doesn’t stop me from having opinions about some ostensible result shown to me in a paper, and I’ll still do better than a random dart-throwing chimp at that.
Yes. That’s what I meant: if you only compare hypotheses A and B when there is a very large number of hypotheses that fit all known data, you may become unreasonably confident in B if A is false.
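To make that concrete, here’s a minimal sketch with made-up likelihood numbers (purely illustrative) of how restricting the comparison to A and B inflates confidence in B, compared with updating over the full set of hypotheses that fit the data:

```python
# Minimal sketch with made-up numbers: only comparing A vs. B inflates
# confidence in B when many unconsidered hypotheses fit the data equally well.

# Likelihood of the observed data under each hypothesis (illustrative values):
# the data strongly disfavours A, and is equally consistent with B and with
# 999 hypotheses nobody bothered to articulate.
likelihoods = {"A": 0.01, "B": 1.0}
likelihoods.update({f"H{i}": 1.0 for i in range(999)})

def posterior_of_B(considered):
    """Posterior probability of B under a uniform prior over `considered`."""
    total = sum(likelihoods[h] for h in considered)
    return likelihoods["B"] / total

print(posterior_of_B(["A", "B"]))          # ~0.99: "A is ruled out, so B"
print(posterior_of_B(list(likelihoods)))   # ~0.001: B is one of many that fit
```

Under a uniform prior, ruling out A pushes B to roughly 99% in the two-hypothesis comparison, but B sits near 0.1% once the other data-fitting hypotheses are counted.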