I’m on board with being realist about your own consciousness. Personal experience and all that. But there’s an epistemological problem with generalizing—how are you supposed to have learned that other humans have first-person experience, or indeed that the notion of first-person experience generalizes outside of your own head?
In Solomonoff induction, the mathematical formalization of Occam’s razor, it’s perfectly legitimate to start by assuming your own phenomenal experience (and then look for hypotheses that would produce that, such as the external world plus some bridging laws). But there’s no a priori reason those bridging laws have to apply to other humans. It’s not that they’re assumed to be zombies, there just isn’t a truth of the matter that needs to be answered.
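For reference, one standard formulation of that prior: a hypothesis is a program p for a universal machine U, and the prior probability of an observation sequence x is

$$M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)},$$

where the sum ranges over programs whose output begins with x and $\ell(p)$ is the length of p in bits. “The external world plus some bridging laws” is then just one candidate program p, weighted by how compactly it can be written.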
To solve this problem, let me introduce you to schmonsciousness, the property that you infer other people have based on their behavior and anatomy. You’re conscious; they’re schmonscious. These two properties might end up being more or less the same, but who knows.
Where before one might say that conscious people are moral patients, now you don’t have to make the assumption that the people you care about are conscious, and you can just say that schmonscious people are moral patients.
Schmonsciousness is very obviously a functional property, because it’s something you have to infer about other people (you can infer it about yourself based on your behavior as well, I suppose). But if consciousness is different from schmonsciousness, you still don’t have to be a functionalist about consciousness.
> In Solomonoff induction, the mathematical formalization of Occam’s razor, it’s perfectly legitimate to start by assuming your own phenomenal experience (and then look for hypotheses that would produce that, such as the external world plus some bridging laws). But there’s no a priori reason those bridging laws have to apply to other humans.
You can reason that a universe in which you are conscious and everyone else is not is more complex than a universe in which everyone is equally conscious, therefore Solomonoff Induction privileges consciousness for everyone.
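As a toy rendering of the bit-counting intuition behind this (all costs below are made up; only the relative structure matters):

```python
# Toy description-length comparison for the two hypotheses.

PHYSICS_BITS = 10_000  # hypothetical cost of encoding the physical laws
BRIDGE_BITS = 500      # hypothetical cost of encoding the bridging laws
INDEX_BITS = 33        # ~log2(8 billion): cost of singling out one human

# "Everyone is equally conscious": apply the bridging laws uniformly.
everyone_conscious = PHYSICS_BITS + BRIDGE_BITS

# "Only I am conscious": same laws, plus extra bits naming one brain.
only_me_conscious = PHYSICS_BITS + BRIDGE_BITS + INDEX_BITS

# Under the Solomonoff prior 2**(-length), prior odds favor the shorter,
# uniform hypothesis by a factor of 2**INDEX_BITS.
print(2 ** (only_me_conscious - everyone_conscious))  # 8589934592, ~8.6e9 to 1
```

Note that this already assumes the bridging laws get encoded at all, which is exactly what the reply below disputes.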
If consciousness is not functional, then Solomonoff induction will not predict it for other people even if you assert it for yourself. This is because “asserting it for yourself” doesn’t have a functional impact on yourself, so there’s no need to integrate it into the model of the world—it can just be a variable set to True a priori.
As I said, if you use induction to try to predict your more fine-grained personal experience, then the natural consequence (if the external world exists) is that you get a model of the external world plus some bridging laws that say how you experience it. You are certainly allowed to try to generalize these bridging laws to other humans’ brains, but you are not forced to; it doesn’t come out as an automatic part of the model.
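A minimal sketch of that hypothesis shape, assuming a toy integer-valued world (both the dynamics and the bridging law here are invented placeholders):

```python
# Hypothesis: a model of the external world plus a bridging law that maps
# world states onto the experience stream being predicted.

def world_step(state: int) -> int:
    # Placeholder dynamics for the external world (a simple LCG).
    return (state * 1103515245 + 12345) % 2**31

def bridge(state: int) -> int:
    # Placeholder bridging law: which aspect of the world state "you"
    # experience. It refers only to your own vantage point; extending it
    # to other brains is a further modeling choice, not an automatic one.
    return state & 1

def predicted_experiences(state: int, n_steps: int) -> list[int]:
    # Run the world forward and read off the predicted experience stream.
    out = []
    for _ in range(n_steps):
        state = world_step(state)
        out.append(bridge(state))
    return out

print(predicted_experiences(42, 10))
```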
> If consciousness is not functional, then Solomonoff induction will not predict it for other people even if you assert it for yourself.
Agreed. But who’s saying that consciousness isn’t functional? “Functionalism” and “functional” as you’re using them are similar-sounding words, but they mean two different things. “Functionalism” is about locating consciousness on an abstracted vs. fundamental level. “Functional” is about consciousness being causally active vs. passive.[1] You can be a consciousness realist and think consciousness is functional, yet not be a functionalist.
You can also phrase the “is consciousness functional” issue as the existence or non-existence of bridging laws (if consciousness is functional, then there are no bridging laws). Which actually also means that Solomonoff Induction privileges consciousness being functional, all else equal (which circles back to your original point, though of course you can assert that consciousness being functional is logically incoherent and then it doesn’t matter if the description is shorter).
[1] I would frame this as dual-aspect monism [≈ consciousness is functional] vs. epiphenomenalism [≈ consciousness is not functional], to have different-sounding words. There are many other labels people use for either position, especially the first; these are just the ones I think are clearest.
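Spelled out as a description-length comparison (my own rendering of the claim above, with T and B as shorthand): if consciousness is functional, the world-program T already accounts for experience; if not, explicit bridging laws B must be appended, so

$$\ell(T) < \ell(T) + \ell(B) \quad\Longrightarrow\quad 2^{-\ell(T)} > 2^{-\ell(T) - \ell(B)},$$

and the bridging-law-free hypothesis gets strictly more prior weight, all else equal.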
> You can also phrase the “is consciousness functional” issue as the existence or non-existence of bridging laws (if consciousness is functional, then there are no bridging laws). Which actually also means that Solomonoff Induction privileges consciousness being functional, all else equal.
Just imagine using your own subjective experience as the input to Solomonoff induction. If you have subjective experience that’s not connected by bridging laws to the physical world, Solomonoff induction is happy to try to predict its patterns anyhow.
Solomonoff induction only privileges consciousness being functional if you actually mean schmonsciousness.
You’re using ‘bridging law’ differently from how I was, so let me rephrase.
To explain subjective experience, you need bridging-laws-as-you-define-them. But it could be that consciousness is functional and the bridging laws are implicit in the description of the universe, rather than explicit. Put differently, the bridging laws follow as a logical consequence of how the rest of the universe is defined, rather than being an additional degree of freedom.[1]
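One way to picture the explicit/implicit distinction (a toy sketch under invented stand-in rules, not anything from the exchange):

```python
def physics(state: int) -> int:
    # Stand-in for the universe's dynamics.
    return (state + 1) % 100

# Explicit bridging laws: a separate table pairing physical states with
# experiences. Its entries are an additional degree of freedom: they
# lengthen the program, and nothing forces them to cover every brain.
EXPLICIT_BRIDGE = {0: "red", 1: "green", 2: "blue"}  # arbitrary extra data

def experience_explicit(state: int):
    return EXPLICIT_BRIDGE.get(state)

# Implicit bridging laws: experience is read off from the physical
# description itself, by a rule that follows from how the universe is
# already defined. No extra table, hence no extra program length, and the
# same rule automatically applies to every brain the physics contains.
def experience_implicit(state: int) -> str:
    return f"what-state-{physics(state)}-is-like"
```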
In that case, since bridging laws do not add to the length of the program,[2] Solomonoff Induction will favor a universe in which they’re the same for everyone, since this is what happens by default (you’d have a hard time imagining bridging laws that follow by logical necessity yet differ from person to person). In fact, there’s a sense in which the program SI finds is the same as the program it would find for an illusionist universe; the difference is just whether you think this program implies the existence of implicit bridging laws. But in neither case is there an explicit set of bridging laws that adds to the length of the program.
[1] Most of Eliezer’s anti-zombie sequence, especially Zombies Redacted, can be viewed as an argument for bridging laws being implicit rather than explicit. He phrases this as “consciousness happens within physics” in that post.
[2] This is also arguable, but it’s something I feel very strongly about; I have an unpublished post where I argue at length that (and why) logical implications shouldn’t increase program length in Solomonoff Induction.
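For what it’s worth, the standard algorithmic-information fact behind that claim: if the bridging laws B are computable from the theory T by some fixed procedure, then

$$K(T, B) \le K(T) + O(1),$$

i.e. producing a logical consequence alongside the theory costs only a constant-size interpreter, not additional bits per consequence, so implicit bridging laws are free in the sense the argument needs.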