While I agree with most of what your model of Eliezer said, I do not feel less confused about how Eliezer arrives at the conclusion that most animals are not conscious. Granted, I may be, and probably actually am, lacking an important insight in the matter, but then it is this insight that would allow me to become less confused, and I wish Eliezer shared it.
When I think about a thought process that could arrive at such a conclusion, I imagine something like this. Consciousness is not fundamental, but it feels like it is. That’s why we intuitively apply concepts such as quantity to consciousness, thinking about more or less conscious creatures as being more or less filled with consciousness-fluid, the way we previously thought about phlogiston or caloric fluid. But this intuition is confused and leads us astray. Consciousness is the result of a specific cognitive algorithm. This algorithm can either be executed or not. There are good reasons to assume that such an algorithm would be developed by evolution only among highly social animals, since those conditions create the need to model other creatures modelling yourself.
And I see an obvious problem with this line of thought. Reversed confusion isn’t insight. Our confused intuition which leads us to quantifying consciousness may be wrong, but it isn’t necessarily wrong. If anything, the idea that consciousness isn’t quantifiable is also originally based on the idea of consciousness being fundamental. Think about the ancient Hebrews who claimed that animals didn’t have souls. There are lots of bad reasons to think that farm animals are ethically irrelevant; indeed, it would be super convenient, considering how tasty their meat is. That doesn’t automatically mean that they are ethically relevant, it just hints at the possibility.
We can think about hearing, or vision, or the sense of smell. They are not fundamental. They are the result of specific algorithms executed by our brain. Yet we can quantify them. Quantifying them actually makes a lot of sense, considering that evolution works incrementally. Why can’t it be the same for consciousness?
I don’t think the thought process that allows one to arrive at (my model of) Eliezer’s model looks very much like your 2nd paragraph. Rather, I think it looks like writing down a whole big list of stuff people say about consciousness, and then doing a bunch of introspection in the vicinity, and then listing out a bunch of hypothesized things the cognitive algorithm is doing, and then looking at that algorithm and asking why it is “obviously not conscious”, and so on and so forth, all while being very careful not to shove the entire problem under the rug in any particular step (by being like “and then there’s a sensor inside the mind, which is the part that has feelings about the image of the world that’s painted inside the head” or whatever).
Assuming one has had success at this exercise, they may feel much better-equipped to answer questions like “is (the appropriate rescuing of) consciousness more like a gradient quantity or more like a binary property?” or “are chickens similarly-conscious in the rescued sense?”. But their confidence wouldn’t be coming from abstract arguments like “because it is an algorithm, it can either be executed or not” or “there are good reasons to assume it would be developed by evolution only among social animals”; their confidence would be coming from saying “look, look at the particular algorithm, look at things X, Y, and Z that it needs to do in particular, there are other highly-probable consequences of a mind being able to do X, Y, and Z, and we definitively observe those consequences in humans, and observe their absence in chickens.”
You might well disbelieve that Eliezer has such insight into cognitive algorithms, or believe he made a mistake when he did his exercise! But hopefully this sheds some light on (what I believe is) the nature of his confidence.