If consciousness is not functional, then Solomonoff induction will not predict it for other people even if you assert it for yourself. This is because “asserting it for yourself” doesn’t have a functional impact on yourself, so there’s no need to integrate it into the model of the world—it can just be a variable set to True a priori.
As I said, if you use induction to try to predict your more fine-grained personal experience, then the natural consequence (if the external world exists) is that you get a model of the external world plus some bridging laws that say how you experience it. You are certainly allowed to try to generalize these bridging laws to other humans' brains, but you are not forced to; it doesn't come out as an automatic part of the model.
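To make the "world model plus bridging laws" picture concrete, here is a minimal sketch in Python. Real Solomonoff induction is uncomputable; this just enumerates a tiny, made-up hypothesis space of (world-program, bridging-law) pairs and weights each by 2^-length, so every name and length in it is illustrative rather than anyone's actual proposal.

```python
# Toy stand-in for Solomonoff induction over (world-program, bridging-law)
# pairs. The world-program generates physical states; the bridging law maps
# a physical state to a predicted experience. Integer "lengths" stand in
# for description lengths in bits.

world_programs = {
    "physics": (lambda t: t % 2, 10),  # (state at time t, length)
}
bridging_laws = {
    "identity": (lambda s: s, 3),      # experience = physical state
    "inverted": (lambda s: 1 - s, 4),  # experience = flipped state
}

# The only input the induction gets: my own experience stream.
observed_experience = [0, 1, 0, 1]

def posterior_weight(world, bridge):
    """Prior weight 2^-(total length) if every prediction matches, else 0."""
    w_fn, w_len = world
    b_fn, b_len = bridge
    if all(b_fn(w_fn(t)) == e for t, e in enumerate(observed_experience)):
        return 2 ** -(w_len + b_len)
    return 0.0

for w_name, world in world_programs.items():
    for b_name, bridge in bridging_laws.items():
        print(f"{w_name} + {b_name}: {posterior_weight(world, bridge)}")
```

Note that the bridging law is only ever applied to my own trajectory; nothing in the scoring forces it to say anything about other subsystems of the world-program, which is the sense in which generalizing it to other brains is optional.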
> If consciousness is not functional, then Solomonoff induction will not predict it for other people even if you assert it for yourself.
Agreed. But who’s saying that consciousness isn’t functional? “Functionalism” and “functional” as you’re using it are similar-sounding words, but they mean two different things. “Functionalism” is about locating consciousness on an abstracted vs. fundamental level. “Functional” is about consciousness being causally active vs. passive.[1] You can be a consciousness realist and think consciousness is functional without being a functionalist.
You can also phrase the “is consciousness functional” issue as the existence or non-existence of bridging laws (if consciousness is functional, then there are no bridging laws). This actually also means that Solomonoff Induction privileges consciousness being functional, all else equal (which circles back to your original point, though of course you can assert that consciousness being functional is logically incoherent, in which case it doesn’t matter that the description is shorter).
I would frame this as dual-aspect monism [≈ consciousness is functional] vs. epiphenomenalism [≈ consciousness is not functional], to have a different-sounding word. There are many other labels people use for either of the two positions, especially the first; these are just the ones I think are clearest.
> You can also phrase the “is consciousness functional” issue as the existence or non-existence of bridging laws (if consciousness is functional, then there are no bridging laws). This actually also means that Solomonoff Induction privileges consciousness being functional, all else equal.
Just imagine using your own subjective experience as the input to Solomonoff induction. If you have subjective experience that’s not connected by bridging laws to the physical world, Solomonoff induction is happy to try to predict its patterns anyhow.
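In the toy setup from my sketch above (reusing its observed_experience), SI is indifferent to whether a hypothesis routes through a physical world at all; a bare pattern-generator for the experience stream, with an illustrative length I've made up, competes on description length alone:

```python
# A hypothesis that predicts the experience stream directly, with no
# world-program and no bridging law attached.
direct = (lambda t: t % 2, 12)  # (pattern generator, illustrative length)

d_fn, d_len = direct
if all(d_fn(t) == e for t, e in enumerate(observed_experience)):
    print(f"direct pattern: {2 ** -d_len}")  # competes purely on length
```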
Solomonoff induction only privileges consciousness being functional if you actually mean schmonsciousness.
You’re using ‘bridging law’ differently from how I was, so let me rephrase.
To explain subjective experience, you need bridging-laws-as-you-define-them. But it could be that consciousness is functional and the bridging laws are implicit in the description of the universe, rather than explicit. Put differently, the bridging laws follow as a logical consequence of how the remaining universe is defined, rather than being an additional degree of freedom.[1]
In that case, since bridging laws do not add to the length of the program,[2] Solomonoff Induction will favor a universe in which they’re the same for everyone, since this is what happens by default (you’d have a hard time imagining that bridging laws follow by logical necessity but are different for different people). In fact, there’s a sense in which the program SI finds is the same as the program SI would find for an illusionist universe; the difference is just whether you think this program implies the existence of implicit bridging laws. But in neither case is there an explicit set of bridging laws that adds to the length of the program.
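To spell out the length accounting (writing $K_w$ for the description length of the bare world-program and $K_b$ for that of an explicit set of bridging laws; this is my shorthand, not standard notation):

$$
\frac{2^{-K_w}}{2^{-(K_w + K_b)}} = 2^{K_b},
$$

so a universe with implicit bridging laws beats the same universe with explicit ones by a factor of $2^{K_b}$ in the prior. And if explicit bridging laws could differ per person, $n$ people would cost roughly $K_{b_1} + \dots + K_{b_n}$ extra bits, while the implicit case pays nothing extra either way.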
[1] Most of Eliezer’s anti-zombie sequence, especially Zombies Redacted, can be viewed as an argument for bridging laws being implicit rather than explicit. He phrases this as “consciousness happens within physics” in that post.
[2] Also arguable, but something I feel very strongly about; I have an unpublished post where I argue at length that (and why) logical implications shouldn’t increase program length in Solomonoff Induction.