As a Bayesian, you can reason about the probability of hypotheses you can’t test. For an example in this context, you could check Bostrom’s paper on this.
How do you know when your reasoning is faulty (as all human reasoning is) without experimental feedback? In the absence of such feedback, it is indeed about how it makes you feel.
Examples of ways to help know whether one’s reasoning is sound in the absence of experimental feedback include:
Talking about it with others
Looking for opposing arguments, and then looking for counterarguments to any counterarguments you come up with
Converting one’s reasoning into a syllogism and checking it for logical fallacies, or having someone else check it
Asking oneself whether one wants to believe a given conclusion for other reasons (i.e., checking for motivated reasoning)
Reading a lot about avoiding biased reasoning, and forming a habit of automatically applying that to your own thoughts
In the absence of [experimental feedback], it is indeed about how it makes you feel.
My low-credence belief that the simulation hypothesis is probable generally doesn’t make me feel anything. On the occasions it does, the feelings have not been good ones.
Furthermore, some important beliefs among many rationalists relate to forecasting about AI (e.g., ASI timelines, beliefs about existential risks, or thoughts about anthropic shadow effects). If your intent is to argue that it is futile to reason in the absence of the ability to test hypotheses, you may want to make a more robust argument and engage with the rationalist literature on the subject first. (E.g., this is discussed in When Science Can’t Help; it may be discussed in other posts in the sequences too, but I can’t refer you to them because I don’t remember sequence post titles, sorry. It looks like there are also relevant posts under the “Practice and Philosophy of Science” tag.)
These are all good ideas, but they are not a substitute for testing.
If your intent is to argue that it is futile to reason in the absence of the ability to test hypotheses
The goal of reasoning is to eventually connect with experiment, i.e., to make accurate predictions. I have read the sequences (and helped name the book), but I strongly disagree with a lot of the posts. Specifically, When Science Can’t Help is very misleading. Sorry you got misled. The simulation hypothesis is not a hypothesis; it’s a speculation with no way to connect to the real world, whatever it might be. I am not saying it’s wrong; it’s not even wrong. Focus on something that can be helpful to you and ignore this rubbish.
Tragically, you can’t connect to the real world using experiment alone. The whole problem is that the same experimental results can be predicted by different underlying realities. The goal of reasoning, as opposed to empirical observation, is to correspond to reality.
I read this. It seemed very oriented toward an audience under the impression that “unfalsifiable claims are not real science,” and it seemed to do a good job of explaining its objections to that audience.
It seems to try to differentiate hypotheses which are (a) hard to test, but meaningful and possibly true, from ones which (b) don’t really say anything at all and are compatible with however reality might truly be.
I think that the simulation hypothesis is of the former type.
First, it is saying something definite about the nature of reality, which differentiates it from ideas that aren’t saying anything. And I agree that this quality is important.
The second point (being testable in principle, even if difficult or impossible for now) is less important to me; it seems to me that even if there were no way to empirically test an idea even in principle, we could still attach some probability to its being true (just as we can in situations where an idea isn’t testable yet, but may be with future technology).
Still, I’ll note that we could in principle observe evidence which increases or decreases our probability of being simulated. For example, we could (in principle) discover a glitch in reality, which would increase the probability, or (again in principle) observe something really obvious, like the sky tearing open to reveal text reading “you’re in a simulation” (not to imply there would be no other probable explanations for that, were it to happen).
Similarly, there are ways empirical evidence could decrease our credence in the hypothesis, though they’re a little more complex to imagine. E.g., maybe we search very hard for such glitches and don’t find any, and this decreases (at least slightly) the probability that we’re in a simulation not meant to prevent discovery of this fact; or, for a stronger decrease, maybe an ASI solves physics and finds some compelling reason to believe we’re in the base universe.
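The update logic in the two paragraphs above can be sketched with Bayes’ rule. This is a minimal illustration, not an argument about real credences: every number below is invented purely to show the direction of the updates (a glitch raises the posterior; a careful null search lowers it slightly).

```python
# Illustrative Bayesian updates on the simulation hypothesis H.
# All probabilities here are made-up placeholders for illustration only.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.10  # hypothetical prior credence in being simulated

# Evidence that raises credence: a "glitch" we deem far more likely to be
# observed if we are simulated than if we are not.
after_glitch = posterior(prior, p_e_given_h=0.05, p_e_given_not_h=0.001)

# Evidence that lowers credence: a hard search turns up no glitches, an
# outcome slightly more expected if we are NOT in a (discoverable) simulation.
after_null_search = posterior(prior, p_e_given_h=0.90, p_e_given_not_h=0.99)

print(round(after_glitch, 3))       # 0.847 — well above the prior
print(round(after_null_search, 3))  # 0.092 — slightly below the prior
```

The point of the sketch is only that both directions of update are coherent: even a hard-to-test hypothesis can gain or lose probability from observations, so long as the observations are more expected under one possibility than the other.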
I appreciate your engaging with me. I understand it’s probably frustrating to have a minority view and see people constantly say the opposite thing. If you have other writings or arguments as to why “When Science Can’t Help” is misleading, I’d still be open to reading them.
Yes, you absolutely need both. I don’t think anyone argues that point?
I’d be interested in reading about why you think it’s misleading. (Feel free to link relevant writing, if any exists.)
This is probably as far toward pure Bayesianism as is reasonable to go:
https://arxiv.org/abs/1801.05016