Both (modeling stuff about others by reusing circuits for modeling stuff about yourself, without having experience; and having experience without modeling others as similar to yourself) are possible, and the reason I think the suggested experiment would provide indirect evidence is related to the evolutionary role I consider qualia to possibly play. It wouldn't be extremely strong evidence and certainly wouldn't be proof, but it'd be enough evidence for me to stop eating fish that display these things.
The studies about optimistic/pessimistic behaviour tell us nothing about whether these animals experience optimism/pessimism: that behaviour is an adaptation an RL algorithm would implement without needing circuits that also experience these things, unless you can provide a story for why circuitry for experience is beneficial, or a natural side effect of something beneficial.
One of the points of the post is that any evidence we can have, except for what we have about humans, would be indirect, and people call things evidence for confused reasons. Pain-related behaviour is something you'd see in neural networks trained with RL, simply because it's good to avoid pain, so you need a good explanation for how exactly such behaviour can be evidence for qualia.
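To make that concrete, here is a minimal sketch (a toy gridworld with tabular Q-learning; both the environment and the numbers are hypothetical illustrations, not anything from the post) of how pain-avoidance behaviour falls out of reward maximization alone:

```python
# Toy sketch (hypothetical gridworld, tabular Q-learning): "pain-avoidance"
# behaviour emerging purely from reward maximization. Nothing here implements
# anything like experience; "pain" is just a negative number in the reward.
import random

N_STATES = 5                 # positions 0..4
PAIN_STATE, GOAL_STATE = 0, 4
START_STATE = 2
ACTIONS = (-1, +1)           # step left or right

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    if nxt == GOAL_STATE:
        return nxt, 1.0, True      # small positive reward, episode ends
    if nxt == PAIN_STATE:
        return nxt, -5.0, True     # "pain": a penalty, episode ends
    return nxt, 0.0, False

for _ in range(2000):                          # train for a few thousand episodes
    state, done = START_STATE, False
    while not done:
        if random.random() < epsilon:          # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# The greedy policy now moves away from the penalized ("painful") cell:
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```

The learned policy steers away from the penalized cell, yet there is nothing in this sketch one could plausibly point to as circuitry that experiences anything.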
(Copied from the EA Forum for the benefit of LessWrongers following the discussion here)
Definitely agree that empathy and other social feelings provide indirect evidence for self-awareness (ie, “modeling stuff about yourself” in your brain) in a way that optimism/pessimism or pain-avoidance doesn’t. (Although wouldn’t a sophisticated-enough RL circuit, interacting with other RL circuits in some kind of virtual evolutionary landscape, also develop social emotions like loyalty, empathy, etc? Even tiny mammals like mice/rats display sophisticated social behaviors...)
I tend to assume that some kind of panpsychism is true, so you don't need extra "circuitry for experience" in order to turn visual-information-processing into an experience of vision. What would such extra circuitry even do, if not the visual information processing itself? (Seems like maybe you are a believer in what Daniel Dennett calls the "fallacy of the second transduction"?) Consequently, I think it's likely that even simple "RL algorithms" might have a very limited, very shallow, non-self-aware kind of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"! But of course it would not have any awareness of itself as being a thing-that-sees, nor would those isolated experiences of vision be necessarily tied together into a coherent visual field, etc.
So, I tend to think that fish and other primitive creatures probably have “qualia”, including something like a subjective experience of suffering, but that they probably lack any sophisticated self-awareness / self-model, so it’s kind of just “suffering happening nowhere” or “an experience of suffering not connected to anything else”—the fish doesn’t know it’s a fish, doesn’t know that it’s suffering, etc, the fish is just generating some simple qualia that don’t really refer to anything or tie into a larger system. Whether you call such a disconnected & shallow experience “real qualia” or “real suffering” is a question of definitions.
I think this personal view of mine is fairly similar to Eliezer’s from the Sequences: there are no “zombies” (among humans or animals), there is no “second transduction” from neuron activity into a mythical medium-of-consciousness (no “extra circuitry for experience” needed), rather the information-processing itself somehow directly produces (or is equivalent to, or etc) the qualia. So, animals and even simpler systems probably have qualia in some sense. But since animals aren’t self-aware (and/or have less self-awareness than humans), their qualia don’t matter (and/or matter less than humans’ qualia).
...Anyways, I think our core disagreement is that you seem to be equating “has a self-model” with “has qualia”, versus I think maybe qualia can and do exist even in very simple systems that lack a self-model. But I still think that having a self-model is morally important (atomic units of “suffering” that are just floating in some kind of void, unconnected to a complex experience of selfhood, seem of questionable moral relevance to me), so we end up having similar opinions about how it’s probably fine to eat fish.
I guess what I am objecting to is that you are acting like these philosophical problems of qualia / consciousness / etc are solved and other people are making an obvious mistake. I agree that I see a lot of people being confused and making mistakes, but I don’t think the problems are solved!
I appreciate this comment.

Qualia (IMO) certainly is "information processing": there are inputs and outputs, and it is a part of a larger information-processing thing, the brain. What I'm saying is that there's information processing happening outside of the qualia circuits, and some of the results of that outside processing are inputs to our qualia.
> I think it's likely that even simple "RL algorithms" might have a very limited, very shallow, non-self-aware kind of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"
Well, how do you know that visual information processing produces qualia? You can match the algorithms implemented by other humans' brains to the algorithms implemented by your own brain, because all of you talk about subjective experience; but how do you, from inside your neural circuitry, infer that a similar thing happens in neurons that just process visual information?
You know you have subjective experience, self-evidently. You can match the computation run by the neural circuitry of your brain to the computation run by the neural circuitry of other humans: since they talk about subjective experience, you can expect this to be caused by similar computation. This is valid. Thinking that visual information processing is part of what makes qualia (i.e., that there's no way to replace a bunch of your neurons with something that outputs the same stuff without first seeing and processing something, such that you'll experience seeing as before) is something you can make theories about, but it is not a valid inference: you don't have a way of matching the computation of qualia to the whole of your brain.
And how can you match it to matrix multiplications that don't talk about qualia, had no evolutionary reasons for experience, etc.? Do you think an untrained or a small convolutional neural network experiences images to some extent, or only a large, trained one? Where does that expectation come from?
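For concreteness, here is a minimal sketch of the kind of "matrix multiplications" in question (assuming PyTorch; the architecture is an arbitrary illustration, not something from the discussion):

```python
# A small, untrained convolutional network mapping an image tensor to class
# scores: mechanically, a stack of convolutions and matrix multiplications.
import torch
import torch.nn as nn

small_cnn = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),   # 3-channel image -> 8 feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),                           # 10 class scores
)

image = torch.rand(1, 3, 32, 32)                 # a random 32x32 "image"
with torch.no_grad():
    scores = small_cnn(image)                    # visual information processing, untrained
print(scores.shape)                              # torch.Size([1, 10])
```

Trained or untrained, the forward pass is the same kind of arithmetic; the question above is what, if anything, about it would license attributing "experiences of vision" to it.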
I’m not saying that qualia is solved. We don’t yet know how to build it, and we can’t yet scan brains and say which circuits implement it. But some people seem more confused than warranted, and they spend resources less effectively than they could’ve.
And I'm not equating qualia to a self-model. Qualia is just the experience of information. It doesn't require a self-model, though on Earth, so far, I expect the two to have been correlated.
If there’s suffering and experience of extreme pain, in my opinion, it matters even if there isn’t reflectivity.
> You know you have subjective experience, self-evidently. You can match the computation run by the neural circuitry of your brain to the computation run by the neural circuitry of other humans: since they talk about subjective experience, you can expect this to be caused by similar computation.
Similarity is subjective. There is no fundamental reason that the ethical threshold must be at the level of similarity between humans rather than at the level of similarity between humans and shrimps.