I’m what David Chalmers would call a “Type-A materialist”, which means that I deny the existence of “subjective facts” that aren’t in some way reducible to objective facts.
The concerns Chalmers wrote about focused on the nature of phenomenal experience, and the traditional dichotomy between subjective and objective in human experience. That distinction draws a dividing line way off to the side of what I’m interested in. My main concern isn’t with ineffable consciousness; it’s with cognitive processing of information, information defined as that which distinguishes possibilities, reduces uncertainty, and can have behavioral consequences. Consequences for what/whom? Situated epistemic agents, which I take to be ubiquitous constituents of the world around us, and not just sentient life-forms like ourselves. Situated agents that process information don’t need to be very high on the computational hierarchy in order to interact with the world as it is, use representations of the world as they take it to be, and entertain possibilities about how well their representations conform to what they are intended to represent. The old 128MB 286 I had in the corner, too underpowered to run even a current version of Linux, was powerful enough to implement an instantiation of a situated Bayesian agent. I’m completely fine with stipulating that it had about as much phenomenal or subjective experience as a chunk of pavement. But I think there are useful distinctions, totally missed by Chalmers’ division (which I’m sure he’s aware of, but not concerned with in the paper you cite), between what you might call objective facts and what you might call “subjective facts”, if by the latter you include essentially indexical and contextual information, such as de se and de dicto information, as well as de re propositions.
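To make that concrete, here is a minimal sketch of what I have in mind by a situated Bayesian agent. The hypotheses, sensor model and numbers are illustrative assumptions of mine, not anything from Chalmers or the exchange above; the point is only that “information” in the sense just given is whatever shifts the agent’s distribution and thereby its behavior.

```python
# A minimal sketch of a "situated Bayesian agent". The hypotheses, sensor
# model, and numbers are illustrative assumptions, not taken from the text.

class SituatedBayesianAgent:
    def __init__(self, prior, likelihood):
        # prior: dict hypothesis -> probability
        # likelihood: dict (hypothesis, observation) -> P(observation | hypothesis)
        self.belief = dict(prior)
        self.likelihood = likelihood

    def observe(self, observation):
        # Information "reduces uncertainty": reweight each hypothesis by how
        # well it predicts the observation, then renormalize.
        for h in self.belief:
            self.belief[h] *= self.likelihood[(h, observation)]
        total = sum(self.belief.values())
        self.belief = {h: p / total for h, p in self.belief.items()}

    def act(self):
        # "Behavioral consequences": act on the most probable hypothesis.
        return max(self.belief, key=self.belief.get)


# A toy world with two states and one noisy sensor.
agent = SituatedBayesianAgent(
    prior={"door_open": 0.5, "door_closed": 0.5},
    likelihood={
        ("door_open", "draft"): 0.8, ("door_open", "no_draft"): 0.2,
        ("door_closed", "draft"): 0.1, ("door_closed", "no_draft"): 0.9,
    },
)
agent.observe("draft")
print(agent.belief, agent.act())  # belief shifts toward "door_open"
```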
Therefore, I think that centered worlds can be regarded in one of two ways: (i) as nonsense, or (ii) as just a peculiar kind of uncentered world. A “centered world” really just means an “uncentered world that happens to contain an ontologically basic, causally inert ‘pointer’ towards some being and an ontologically basic, causally inert catalogue of its ‘mental facts’”. However, because a “center” is causally inert, we can never acquire any evidence that the world has a “center”.
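To spell out that reading, here is a toy encoding of my own (purely illustrative, not Lewis’s formalism): a “centered world” as an ordinary world-description plus a pointer to some being and a catalogue of its “mental facts”. The dynamics read only the uncentered part, which is the sense in which the center is causally inert and undetectable from the inside.

```python
from dataclasses import dataclass

# Toy encoding of the definition above (my own construction, not Lewis's
# formalism): a centered world = an uncentered world, plus a causally inert
# "pointer" to one being and a catalogue of its "mental facts".

@dataclass(frozen=True)
class UncenteredWorld:
    facts: frozenset                          # the objective facts

@dataclass(frozen=True)
class CenteredWorld:
    world: UncenteredWorld
    center: str                               # pointer to some being
    mental_facts: frozenset = frozenset()     # its catalogued "mental facts"

def evolve(w: UncenteredWorld) -> UncenteredWorld:
    # Whatever the dynamics are, they read only the objective facts; the
    # center never enters, so nothing observable depends on where it sits.
    return UncenteredWorld(facts=w.facts | {"one tick later"})

w = UncenteredWorld(frozenset({"coin on table"}))
alice_centered = CenteredWorld(w, center="Alice")
bob_centered = CenteredWorld(w, center="Bob")
# Same uncentered future regardless of where the "center" is placed:
assert evolve(alice_centered.world) == evolve(bob_centered.world)
```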
(On Lewis’s account, centered worlds are generalizations of uncentered ones, which are contained in them as special cases.) From the point of view of a situated agent, centered worlds are epistemologically prior, about as patently obvious as the existence of “True”, “False” and “Don’t Know”, and the uncentered worlds are secondary: synthesized, hypothesized and inferred. The process of converting limited indexical information into objective, universally valid knowledge is where all the interesting stuff happens. That’s what the very idea of “calibration” is about. As to whether they (centered worlds or the other kind) are ontologically prior, it’s just too soon for me to tell, but I feel uncomfortable prejudging the issue on such strict criteria without a more detailed exploration of the territory outside the walled garden of God’s Own Library of Eternal Verity. In other words, with respect to that wall, I don’t see warrant flowing from inside out; I see it flowing from outside in. I suppose that’s in danger of making me an idealist, but I’m trying to be a good empiricist.
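And by “calibration” I mean something you could check mechanically. A minimal sketch, with made-up data: take an agent’s stated confidences, which are paradigmatically centered quantities, and score them against how often the corresponding claims actually turned out true. That comparison is one small bridge from indexical credence to an objective fact about the world.

```python
from collections import defaultdict

# Minimal calibration check (made-up data): compare an agent's stated
# confidences with the observed frequency of the claims being true.

def calibration_table(predictions, n_bins=5):
    # predictions: list of (stated_probability, actually_true) pairs
    bins = defaultdict(list)
    for p, outcome in predictions:
        bins[min(int(p * n_bins), n_bins - 1)].append(outcome)
    return {
        (b / n_bins, (b + 1) / n_bins): sum(outcomes) / len(outcomes)
        for b, outcomes in sorted(bins.items())
    }

history = [(0.9, True), (0.9, True), (0.9, False), (0.3, False),
           (0.3, False), (0.3, True), (0.7, True), (0.7, True)]
for (lo, hi), freq in calibration_table(history).items():
    print(f"claimed {lo:.1f}-{hi:.1f}: true {freq:.0%} of the time")
```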
I think the temptation is very strong to notice the distinction between the elemental nature of raw sensory inputs and the cognitive significance they bear. And this is so, and is useful to do, precisely to the extent that the cognitive significance varies with context and background knowledge (light levels, perspective, etc.), because those serve as dynamically updated calibrations of cognitive significance. But these calibrations become transparent with use, so that we see, hear and feel vividly and directly in three dimensions because we have learned that that is the cognitive significance of what we see, hear, feel and navigate through. Subjective experience comes cooked and raw in the same dish. It then takes an analytic effort of abstraction, a painter’s eye, to notice that it takes an elliptical shape on a focal plane to induce the visual experience of a round coin on a tabletop. Thus ambiguities, ambivalences and confusions abound about what constitutes the contents of subjective experience.
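The painter’s-eye observation can even be made quantitative. Under a simple orthographic (distant-viewer) approximation, which is my own back-of-the-envelope assumption here rather than anything in the discussion above, a round coin tilted by an angle θ from face-on projects to an ellipse whose minor-to-major axis ratio is cos θ:

```python
import math

# Back-of-the-envelope: apparent aspect ratio of a round coin seen at a tilt,
# under an orthographic (distant-viewer) approximation.
def apparent_aspect_ratio(tilt_degrees):
    return math.cos(math.radians(tilt_degrees))

for tilt in (0, 30, 60, 85):
    print(f"tilt {tilt:2d} deg: ellipse aspect ratio = {apparent_aspect_ratio(tilt):.2f}")
# A coin lying flat and seen from a shallow angle arrives as a thin ellipse
# on the focal plane, yet is experienced as a round coin on a tabletop.
```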
I’m reminded of an experiment I read about quite some time ago, in a very old Scientific American I think, in which (IIRC) psychology subjects were fitted with goggles containing prisms that flipped their visual fields upside down. They wore them for upwards of a month during all waking hours. When they first put them on, they could barely walk at all without collapsing in a heap because of the severe navigational difficulties. After some time, the visuomotor circuits in their brains adapted, and some were even able to re-learn how to ride a bike with the goggles on. After they could navigate their world more or less normally, they were asked whether at any time their visual field had ever “flipped over” so that things started looking “right side up” again. No, there was no change; things looked the same as when they first put the goggles on. So then things still looked “upside down”? After a while, the subjects started insisting that the question made no sense, and they didn’t know how to answer it. Nothing changed about their visual fields, they just got used to it and could successfully navigate in it; the effect became transparent.
(Until they took the goggles off after the experiment ended. And then they were again seriously disoriented for a time, though they recovered quickly.)