I don’t think the thought process that allows one to arrive at (my model of) Eliezer’s model looks very much like your 2nd paragraph. Rather, I think it looks like writing down a whole big list of stuff people say about consciousness, and then doing a bunch of introspection in the vicinity, and then listing out a bunch of hypothesized things the cognitive algorithm is doing, and then looking at that algorithm and asking why it is “obviously not conscious”, and so on and so forth, all while being very careful not to shove the entire problem under the rug in any particular step (by being like “and then there’s a sensor inside the mind, which is the part that has feelings about the image of the world that’s painted inside the head” or whatever).
Assuming one has had success at this exercise, they may feel much better-equipped to answer questions like “is (the appropriate rescuing of) consciousness more like a gradient quantity or more like a binary property?” or “are chickens similarly-conscious in the rescued sense?”. But their confidence wouldn’t be coming from abstract arguments like “because it is an algorithm, it can either be executed or not” or “there are good reasons to assume it would be developed by evolution only among social animals”; their confidence would be coming from saying “look, look at the particular algorithm, look at things X, Y, and Z that it needs to do in particular, there are other highly-probable consequences of a mind being able to do X, Y, and Z, and we definitively observe those consequences in humans, and observe their absence in chickens.”
You might well disbelieve that Eliezer has such insight into cognitive algorithms, or believe he made a mistake when he did his exercise! But hopefully this sheds some light on (what I believe is) the nature of his confidence.