One needs to define the probability spaces, and it’s appropriate to check whether those probability spaces are relevant to anything. It’s no use discussing “probability” on the level of a word or surrounding intuitions.
Oh, I agree. If Adam Elga had initially been careful with his reasoning and noticed that saying “centered possible worlds” doesn’t let you treat non-elementary outcomes as elementary ones, and that you would basically need to recreate the whole mathematical apparatus of probability theory from scratch to lawfully do what he did, the whole field of anthropics wouldn’t have gotten so much traction or accrued this much absurdity and bizarreness. But now it seems a little bit too late for that.
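To make the “non-elementary outcomes” point concrete, here is a minimal sketch in the spirit of the code samples from the previous post, assuming the standard Sleeping Beauty setup from Elga’s paper (Heads: one awakening, Tails: two); the names are purely illustrative:

```python
import random

def run_sleeping_beauty():
    """One run of the experiment: Heads -> one awakening, Tails -> two."""
    if random.random() < 0.5:
        return [("Heads", "Monday")]
    return [("Tails", "Monday"), ("Tails", "Tuesday")]

n = 100_000
tails_runs = 0
coexist = 0
for _ in range(n):
    awakenings = run_sleeping_beauty()
    if awakenings[0][0] == "Tails":
        tails_runs += 1
        if ("Tails", "Monday") in awakenings and ("Tails", "Tuesday") in awakenings:
            coexist += 1

# "Tails&Monday" and "Tails&Tuesday" happen together in every Tails run,
# so they are not mutually exclusive and cannot be elementary outcomes
# of the same sample space.
print(coexist == tails_runs)  # True
```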
The main problem is that people use probability spaces that are not relevant to the problems under discussion. Such valid but not sound reasoning is everywhere in anthropics. I’m not sure how to address this problem within the strict formalism of mathematics, without the medium of words and surrounding intuition. Math can show us that a model is incoherent, but not that a coherent model is inapplicable to the situation at hand.
Indeed, the short version, the core idea of this whole anthropic sequence, is basically: “Stop using mathematical models that are not applicable to the problems you are talking about.” But it seems that people genuinely do not see why their models wouldn’t be applicable, and are more likely to believe that participating in an anthropic experiment gives you weird psychic powers. So I’m trying to carefully address these issues one at a time, using words, building high-level intuitions and exploring failure modes.
One way of formulating relevance is by treating probabilities as details of how a machine that is an agent makes decisions internally, and asking what probability assignments lead to what kinds of outcomes. Alternatively, we can set up a prediction market with some scoring rule.
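One hedged way to cash out that proposal as code: score repeated probability reports on a fair coin with a quadratic (Brier) scoring rule and see which assignment does best in the long run. The particular rule and numbers here are my own illustration, not something fixed by the proposal itself:

```python
import random

def brier_score(p, outcome):
    """Quadratic scoring rule: penalty (p - outcome)^2, lower is better."""
    return (p - outcome) ** 2

random.seed(0)
n = 100_000
flips = [random.random() < 0.5 for _ in range(n)]  # fair coin: Heads = 1

for p in (0.3, 0.5, 0.7):
    avg = sum(brier_score(p, flip) for flip in flips) / n
    print(f"report p={p}: average penalty {avg:.4f}")
# The report p=0.5, matching the causal process, minimizes the average penalty.
```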
I find attempts to ground probability theory in decision making quite backwards, as if we were trying to explain Boolean algebra with computers. Granted, these are the applications of the corresponding fields, but we can still meaningfully talk about a mathematical model even when we do not have an application for it, as long as we do not insist that it a priori has to be relevant to a particular problem, of course. Decision theory is the next step, a superstructure on top of probability theory. Different decision makers may be interested in different probabilities, but we can still meaningfully talk about the probability of a fair coin landing Heads even without any utility functions attached to the outcomes. Add a utility function or a scoring rule and you get an extra variable entangled in the mix, which makes the matter even harder to talk about.
Saying that an assumption of some manner of sampling is “wrong” requires explaining what it means to be sampled vs. not sampled vs. sampled in a different way, and I don’t see what that could possibly mean outside of some external process that performs the sampling and keeps score, for example for the purpose of assigning rewards in a prediction market.
I agree that there is still some ambiguity (what is randomness?), but I think it should be generally understandable what I mean here. I’m talking about the causal process that determines the outcomes of the experiment. If this causal process uses random sampling, that is, picks a random element from a set instead of always having a fixed element it was always going to pick, then it makes sense to update on the corresponding evidence. In terms of markets and keeping score, we can talk about the correct per-experiment probability estimate based on the Law of Large Numbers, the way I did in the previous post with Python code samples repeating the experiment multiple times.
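A minimal sketch of that distinction, in the same style as the previous post’s samples (the set and element names here are my own illustration): one process genuinely samples a random element, the other always returns a fixed one, and only in the first case does the observation “the picked element is me” track the long-run frequency of anything.

```python
import random

PEOPLE = ["me", "alice", "bob", "carol"]

def sampling_process():
    """Causal process that actually samples: picks a random element."""
    return random.choice(PEOPLE)

def fixed_process():
    """Causal process without sampling: the same element was always going to be picked."""
    return "me"

n = 100_000
# By the Law of Large Numbers, per-experiment frequencies converge to the
# probability that the causal process actually implements.
freq_sampled = sum(sampling_process() == "me" for _ in range(n)) / n
freq_fixed = sum(fixed_process() == "me" for _ in range(n)) / n

print(freq_sampled)  # ~0.25: being picked is evidence, updating makes sense
print(freq_fixed)    # 1.0: being picked was guaranteed, nothing to update on
```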