You can forecast all you want, but there is no “correctness” to that forecast. A probability needs some informational path from event to experience in order to have meaning. That path does include telling the forecast to someone who observed the outcome (even indirectly) and seeing whether they laugh at you; it does not include making up a number and then forgetting about it forever.
So, to help me understand your position, how do you feel by comparison when someone like Bostrom says there’s, for example, maybe a 50% chance we’re in a simulation? (More egregiously, when Elon says there’s a one-in-a-billion chance we’re not in a simulation!)
I think they are both perfectly reasonable statements about the models they prefer to imagine. They’re using probability terminology to convey how much they like the model, not as any prediction: there’s no experience that will differ depending on whether it’s true or false.
The probability of being in a simulation doesn’t make sense without clarifying what that means, for the same reasons probability doesn’t in Sleeping Beauty. In the decision-relevant sense, you need to ask what you’d care about affecting, since your decisions affect both the real and the simulated instances.
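To make the Sleeping Beauty comparison concrete, here is a minimal Monte Carlo sketch (my own toy setup, purely for illustration): counting per experiment and counting per awakening give different answers for the same coin, so which number you want depends on what you’re counting or acting on.

```python
import random

def sleeping_beauty(n_experiments=100_000):
    """Heads: one awakening; tails: two awakenings with no memory between them."""
    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_experiments):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2
        heads_experiments += heads
        heads_awakenings += 1 if heads else 0
        total_awakenings += awakenings
    # Per-experiment question: in what fraction of runs did the coin land heads?
    per_experiment = heads_experiments / n_experiments    # converges to ~1/2
    # Per-awakening question: in what fraction of awakenings is the coin heads?
    per_awakening = heads_awakenings / total_awakenings    # converges to ~1/3
    return per_experiment, per_awakening

print(sleeping_beauty())  # roughly (0.5, 0.333)
```

Neither number is wrong; they answer different questions, which is the sense in which asking for “the probability” is underspecified until you say what you’d care about affecting.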
Virtually all forecasting carries some risk that the prediction resolves “ambiguous”, and that risk reduces its informativeness. While I can’t say exactly what does or does not count as us being “in a simulation”, there’s also no particular reason I can’t put a probability on it. In the vast semantic cloud of possible interpretations, most of which is not visible to me, I have some nonzero information about what isn’t a simulation, and I know a simulation-promoter has shifted probability away from those other things. E.g. I know they are saying it’s not just WYSIWYG. It’s not much, but it’s also nonzero.
I have also placed many predictions on things that I will never see the resolution of, even when they are well defined: things that could not possibly affect anything to do with me.
I would wholeheartedly endorse an economic argument that such predictions are of too little tangible value to us. I do not endorse the idea that you fundamentally can’t have a probability attached. In fact, it’s remarkably difficult for that to be entirely true once actual numbers are used and extremely small amounts of information or confidence are allowed to count.
While I can’t say what exactly does or does not count as us being “in a simulation”, there’s also no particular reason I can’t put a probability on it.
Well, I cited Sleeping Beauty as a particular illustration of why you’d put different probabilities on something depending on what you require, and that requirement must be more specific than “a probability”. This is not a situation where you “can’t have a probability attached”; it illustrates that asking for “a probability” is occasionally not a specific enough question to be meaningful.
I would agree that models are generally useful even when it’s unclear what they are saying, as ML demonstrates. But in such cases, interpreting them as hypotheses that assign probabilities to events can be misleading, especially when there is no way of extracting those probabilities from the models, or no clear way of formulating the events we’d be interested in. Instead, you have an error function, you found models that have low error on the dataset, and those models make things go better than models with greater error. That doesn’t always have to be coerced into the language of probability.
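As a toy sketch of what I mean (the data and candidate models here are made up purely for illustration): model selection can be driven entirely by an error function, with no step where a probability of any event is defined or extracted.

```python
# Toy model selection by error alone; no probabilities are ever involved.
data = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8)]  # (x, y) pairs

def mse(model, data):
    """Mean squared error of a model, where a model is just a function x -> prediction."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

candidate_a = lambda x: 2.0 * x + 1.0   # roughly tracks the data
candidate_b = lambda x: 0.5 * x + 3.0   # a worse fit

# "Better" means nothing more than "lower error on this dataset".
best = min((candidate_a, candidate_b), key=lambda m: mse(m, data))
print(mse(candidate_a, data), mse(candidate_b, data), best is candidate_a)
```

Nothing in that comparison forces a probabilistic reading; you could impose one (e.g. a noise model behind the squared error), but the selection works without it.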