I’d love to read an EY-writeup of his model of consciousness, but I don’t see Eliezer invoking ‘I have a secret model of intelligence’ in this particular comment. I don’t feel like I have a gears-level understanding of what consciousness is, but in response to ‘qualia must be convergently instrumental because they probably involve one or more of (Jessica’s list)’, these strike me as perfectly good rejoinders even if I assume that neither I nor anyone else in the conversation has a model of consciousness:
Positing that qualia involve those things doesn’t get rid of the confusion re qualia.
Positing that qualia involve only simple mechanisms that solve simple problems (hence more likely to be convergently instrumental) is a predictable bias of early wrong guesses about the nature of qualia, because the simple ideas are likely to come to mind first, and will seem more appealing when less of our map (with the attendant messiness and convolutedness of reality) is filled in.
E.g., maybe humans have qualia because of something specific about how we evolved to model other minds. In that case, I wouldn’t start with a strong prior that qualia are convergently instrumental (even among mind designs developed under selection pressure to understand humans). Because there are lots of idiosyncratic things about how humans do other-mind-modeling and reflection (e.g., the tendency to feel sad yourself when you think about a sad person) that are unlikely to be mirrored in superintelligent AI.
Eliezer is clearly implying that he has a ‘secret model of qualia’ in another comment:
I am just plain skeptical that there is a real values difference that would survive their learning what I know about how minds and qualia work. I of course fully expect that these people will loudly proclaim that I could not possibly know anything they don’t, despite their own confusion about these matters that they lack the skill to reflect on as confusion, and for them to exchange some wise smiles about those silly people who think that people disagree because of mistakes rather than values differences.
Regarding the rejoinders, although I agree Jessica’s comment doesn’t give us convincing proof that qualia are instrumentally convergent, I think it does give us reason to assign non-negligible probability to that being the case, absent convincing counterarguments. Like, just intuitively: we have e.g. feelings of pleasure and pain, and we also have evolved drives leading us to avoid or seek certain things, and it sure feels like those feelings of pleasure/pain are key components of the avoidance/seeking system. Yes, this could be defeated by a convincing theory of consciousness, but none has been offered, so I think it’s rational to continue assigning a reasonably high probability to qualia being convergent. Generally speaking, this point seems like a huge gap in the “AI has likely expected value 0” argument, so it would be great if Eliezer could write up his thoughts here.
Eliezer has said tons of times that he has a model of qualia he hasn’t written up. That’s why I said:
I’d love to read an EY-writeup of his model of consciousness, but I don’t see Eliezer invoking ‘I have a secret model of intelligence’ in this particular comment.
The model is real, but I found it weird to reply to that specific comment asking for it, because I don’t think the arguments in that comment rely at all on having a reductive model of qualia.
I think it does give us reason to assign non-negligible probability to that being the case, absent convincing counterarguments.
I started writing a reply to this, but then I realized I’m confused about what Eliezer meant by “Not sure there’s anybody there to see it. Definitely nobody there to be happy about it or appreciate it. I don’t consider that particularly worthwhile.”
He’s written a decent amount about ensuring AI is nonsentient as a research goal, so I guess he’s mapping “sentience” onto “anybody there to see it” (which he thinks is at least plausible for random AGIs, but not a big source of value on its own), and mapping “anybody there to be happy about it or appreciate it” onto human emotions (which he thinks are definitely not going to spontaneously emerge in random AGIs).
I agree that it’s not so-unlikely-as-to-be-negligible that a random AGI might have positively morally valenced (relative to human values) reactions to a lot of the things it computes, even if the positively-morally-valenced thingies aren’t “pleasure”, “curiosity”, etc. in a human sense.
Though I think the reason I believe that doesn’t route through your or Jessica’s arguments; it’s just a simple ‘humans have property X, and I don’t understand what X is or why it showed up in humans, so it’s hard to reach extreme confidence that it won’t show up in AGIs’.