Eliezer has said tons of times that he has a model of qualia he hasn’t written up. That’s why I said:
I’d love to read an EY-writeup of his model of consciousness, but I don’t see Eliezer invoking ‘I have a secret model of intelligence’ in this particular comment.
The model is real, but I found it weird to reply to that specific comment asking for it, because I don’t think the arguments in that comment rely at all on having a reductive model of qualia.
I think it does give us reason to assign non-negligible probability to that being the case, absent convincing counterarguments.
I started writing a reply to this, but then I realized I’m confused about what Eliezer meant by “Not sure there’s anybody there to see it. Definitely nobody there to be happy about it or appreciate it. I don’t consider that particularly worthwhile.”
He’s written a decent amount about ensuring AI is nonsentient as a research goal, so I guess he’s mapping “sentience” onto “anybody there to see it” (which he thinks is at least plausible for random AGIs, but not a big source of value on its own), and mapping “anybody there to be happy about it or appreciate it” onto human emotions (which he thinks are definitely not going to spontaneously emerge in random AGIs).
I agree that it’s not so-unlikely-as-to-be-negligible that a random AGI might have positively morally valenced (relative to human values) reactions to a lot of the things it computes, even if the positively-morally-valenced thingies aren’t “pleasure”, “curiosity”, etc. in a human sense.
Though I think the reason I believe that doesn’t route through your or Jessica’s arguments; it’s just a simple ‘humans have property X, and I don’t understand what X is or why it showed up in humans, so it’s hard to reach extreme confidence that it won’t show up in AGIs’.