If that wasn’t the event that entered into the Bayesian calculation, what was?
The Bayesian calculation only needs to use the event “Tuesday exists” which is non-indexical (though you’re right—it is entailed by “today is Tuesday”).
The problem with indexical events is that our prior is a distribution over possible worlds, and there doesn’t seem to be any non-arbitrary way of deriving a distribution over centered worlds from a distribution over uncentered ones. (E.g., are all people equally likely, regardless of lifespan, brain power, state of wakefulness, etc.? What if people are copied and the copies diverge from one another? Where does the first ‘observer’ appear in the tree of life? Etc.)
The Bayesian calculation only needs to use the event “Tuesday exists”
I can’t follow this. If “Tuesday exists” isn’t indexical, then it’s exactly as true on Monday as it is on Tuesday, and furthermore as true everywhere and for everyone as it is for anyone.
there doesn’t seem to be any non-arbitrary way of deriving a distribution over centered worlds from a distribution over uncentered ones.
Indeed, unless you work within the confines of a finite toy model. But why go in that direction? What non-arbitrary reason is there not to start with centered worlds and try to derive a distribution over uncentered ones? In fact, isn’t that the direction scientific method works in?
I can’t follow this. If “Tuesday exists” isn’t indexical, then it’s exactly as true on Monday as it is on Tuesday, and furthermore as true everywhere and for everyone as it is for anyone.
Well, in my toy model of the Doomsday Argument, there’s only a 1⁄2 chance that Tuesday exists, and the only way that a person can know that Tuesday exists is to be alive on Tuesday. Do you still think there’s a problem?
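For concreteness, here is a minimal sketch of how that could work, under my own assumptions about the toy model’s structure (the comment only states the 1⁄2 chance): a fair coin decides whether the world continues past Monday, and an agent who finds herself alive on Tuesday conditions the uncentered world-distribution on the non-indexical event “Tuesday exists”.

```python
# Hypothetical sketch of the toy Doomsday model: a fair coin decides
# whether the world reaches Tuesday. World names and structure are
# my own assumptions, not the original author's.

# Uncentered possible worlds and their prior probabilities.
priors = {
    "ends_monday": 0.5,      # Tuesday never exists
    "reaches_tuesday": 0.5,  # Tuesday exists
}

def tuesday_exists(world):
    """The non-indexical event: true in worlds where Tuesday occurs."""
    return world == "reaches_tuesday"

# Prior probability of the event "Tuesday exists".
p_prior = sum(p for w, p in priors.items() if tuesday_exists(w))

# An agent alive on Tuesday learns "Tuesday exists" with certainty;
# conditioning the distribution over worlds on that event:
posterior = {w: (p / p_prior if tuesday_exists(w) else 0.0)
             for w, p in priors.items()}

print(p_prior)                       # 0.5
print(posterior["reaches_tuesday"])  # 1.0
```

Note that the update never mentions what day it is “for” the agent; being alive on Tuesday is merely how she comes to learn the uncentered event.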
Indeed, unless you work within the confines of a finite toy model.
Even in toy models like Sleeping Beauty we have to somehow choose between SSA and SIA (which are precisely two rival methods for deriving centered distributions from uncentered ones).
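To make the rivalry concrete, here is a sketch of the two rules applied to the standard Sleeping Beauty setup (the standard result: SSA gives the “halfer” answer, SIA the “thirder”):

```python
# Sleeping Beauty: a fair coin is tossed; heads -> one awakening
# (Monday), tails -> two awakenings (Monday and Tuesday). SSA and
# SIA are rival rules for turning the uncentered prior over coin
# outcomes into a credence over centered awakenings.

worlds = {"heads": (0.5, 1), "tails": (0.5, 2)}  # (prior, awakenings)

# SSA: keep each world's prior, split it evenly among that world's
# awakenings (credence is concentrated within one's own world).
ssa = {w: [p / n] * n for w, (p, n) in worlds.items()}
p_heads_ssa = sum(ssa["heads"])            # = 0.5

# SIA: weight each world by its number of awakenings, then
# renormalize (more observers -> more likely to be among them).
total = sum(p * n for p, n in worlds.values())
sia = {w: p * n / total for w, (p, n) in worlds.items()}
p_heads_sia = sia["heads"]                 # = 1/3
```

Both rules are perfectly well-defined here; the point is that nothing in the uncentered prior itself tells you which of the two to use.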
What non-arbitrary reason is there not to start with centered worlds and try to derive a distribution over uncentered ones? In fact, isn’t that the direction scientific method works in?
That’s a very good, philosophically deep question! Like many LessWrongers, I’m what David Chalmers would call a “Type-A materialist”, which means that I deny the existence of “subjective facts” that aren’t in some way reducible to objective facts.
Therefore, I think that centered worlds can be regarded in one of two ways: (i) as nonsense, or (ii) as just a peculiar kind of uncentered world: a “centered world” really just means an uncentered world that happens to contain an ontologically basic, causally inert ‘pointer’ towards some being, together with an ontologically basic, causally inert catalogue of its “mental facts”. However, because a “center” is causally inert, we can never acquire any evidence that the world has a “center”.
(I’d like to say more but really this needs a lot more thought and I can see I’m already starting to ramble...)
I’m what David Chalmers would call a “Type-A materialist”, which means that I deny the existence of “subjective facts” that aren’t in some way reducible to objective facts.
The concerns Chalmers wrote about focused on the nature of phenomenal experience, and on the traditional dichotomy between subjective and objective in human experience. That distinction draws a dividing line way off to the side of what I’m interested in. My main concern isn’t with ineffable consciousness; it’s with cognitive processing of information, where information is defined as that which distinguishes possibilities, reduces uncertainty, and can have behavioral consequences.

Consequences for what or whom? Situated epistemic agents, which I take to be ubiquitous constituents of the world around us, and not just sentient life-forms like ourselves. Situated agents that process information don’t need to be very high on the computational hierarchy in order to interact with the world as it is, use representations of the world as they take it to be, and entertain possibilities about how well their representations conform to what they are intended to represent. The old 128MB 286 I had in the corner, too underpowered to run even a current version of Linux, was powerful enough to implement an instantiation of a situated Bayesian agent. I’m completely fine with stipulating that it had about as much phenomenal or subjective experience as a chunk of pavement.

But I think there are useful distinctions totally missed by Chalmers’ division (which I’m sure he’s aware of, but wasn’t concerned with in the paper you cite): distinctions between what you might call objective facts and what you might call “subjective facts”, if by the latter you include essentially indexical and contextual information, such as de se and de dicto information, as well as de re propositions.
Therefore, I think that centered worlds can be regarded in one of two ways: (i) as nonsense, or (ii) as just a peculiar kind of uncentered world: a “centered world” really just means an uncentered world that happens to contain an ontologically basic, causally inert ‘pointer’ towards some being, together with an ontologically basic, causally inert catalogue of its “mental facts”. However, because a “center” is causally inert, we can never acquire any evidence that the world has a “center”.
(On Lewis’s account, centered worlds are generalizations of uncentered ones, which are contained in them as special cases.) From the point of view of a situated agent, centered worlds are epistemologically prior, about as patently obvious as the existence of “True”, “False”, and “Don’t Know”; the uncentered worlds are secondary: synthesized, hypothesized, and inferred. The process of converting limited indexical information into objective, universally valid knowledge is where all the interesting stuff happens. That’s what the very idea of “calibration” is about. Whether they (centered worlds or the other kind) are ontologically prior is something it’s too soon for me to tell, but I feel uncomfortable prejudging the issue on such strict criteria without a more detailed exploration of the territory outside the walled garden of God’s Own Library of Eternal Verity. In other words, with respect to that wall, I don’t see warrant flowing from inside out; I see it flowing from outside in. I suppose that puts me in danger of being an idealist, but I’m trying to be a good empiricist.