You seem to be making a mistake in treating bridge rules/hypotheses as necessary—perhaps to set up a later article?
I, like Cai, tend to frame my hypotheses in terms of a world-out-there model combined with bridging rules to my actual sense experience; but this is merely an optimisation strategy to take advantage of all my brain’s dedicated hardware for modelling specific world components, preprocessing of senses, etc. The bridging rules certainly aren’t logically required. In practice there is an infinite family of equivalent models over my mental experience which would be totally indistinguishable, regardless of how I choose to “format” that idea mentally. My choice of mental model format is purely an efficiency consideration, not a claim about either my senses or the phenomena behind their behaviour. I’m just better at tic-tac-toe than JAM.
To see this, let’s say Cai uses Python internally to describe zir hypotheses A and B in their entirety. Clearly, ze can write either program with or without bridging rules and still have it yield identical predictions in all possible circumstances. Cai’s true hypothesis is the behaviour of the Python program as a whole, regardless of how ze actually structures it internally. Both hypotheses could be written purely in terms of predictions about how Cai’s senses will change, thereby eliminating the “type error” issue. And if Cai is as heavily optimised for a particular structure of hypothesis as humans are, Cai can just use that structure—but for performance reasons, not because Cai has some magical way of knowing at what level of abstraction zir existence is implemented. Alternatively, Cai might use a particular hypothesis structure because of the programmer’s arbitrary decision when writing zir. But the way the hypothesis is structured mentally isn’t a claim about how the universe works. The “hard problem of consciousness” is a problem about human intuitions, not a math problem.
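To make the equivalence concrete, here is a minimal sketch (the alternating-light toy hypothesis and the function names are my own illustration, not anything from the article): the same predictive hypothesis written once as a world model plus a bridge rule, and once purely over sense experience.

```python
# Toy sketch (my own illustration, not code from the article): the same
# hypothesis about Cai's next sense datum, written two ways.

LIGHT, DARK = "light", "dark"

def predict_with_bridge(sense_history):
    """Version 1: a world-out-there model plus a bridge rule."""
    # World model: a hidden counter that has ticked once per sense datum so far.
    counter = len(sense_history)
    # Bridge rule: Cai experiences LIGHT on even ticks and DARK on odd ticks.
    return LIGHT if counter % 2 == 0 else DARK

def predict_senses_only(sense_history):
    """Version 2: the same hypothesis phrased purely over sense experience."""
    # "My senses alternate, starting with LIGHT"; no world model, no bridge rule.
    return LIGHT if len(sense_history) % 2 == 0 else DARK

# No possible sense history distinguishes the two programs, so as predictive
# hypotheses they are identical; only their internal packaging differs.
for n in range(10):
    history = [LIGHT] * n  # any history of length n gives the same comparison
    assert predict_with_bridge(history) == predict_senses_only(history)
```

Any difference between the two versions is in how the computation is packaged, not in what it predicts; and the packaging is exactly the sort of thing I would choose on efficiency grounds rather than as a claim about the world.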