I continue to strongly believe that your previous post is methodologically dubious and does not provide an adequate set of explanations of what humans believe in when they say “consciousness.” I think the results you obtained from your surveys are ~ entirely noise: they were generated by forcing people who lack the expertise necessary to have a gears-level model of consciousness (i.e., literally all people in existence now or in the past) to talk about consciousness as though they had one, while denying them the ability to express themselves using the language that best represents their intuitions.
Normally, I wouldn’t harp on that too much here given the passage of time (water under the bridge and all that), but literally this entire post is based on a framework I believe gets things totally backwards. Moreover, I was very (negatively) surprised to see respected users on this site apparently believing your previous post was “outstanding” and “very legible evidence” in favor of your thesis.
I dearly hope this general structure does not become part of the LW zeitgeist for thinking about an issue as important as this.
Yeah, this theory definitely needs far better methodologies for testing it. I wouldn’t be surprised if at least part of the answer/solution to the Hard Problem of Consciousness turns out to be that, for political/moral reasons, we have unnecessarily conflated various properties that occur in various humans under the word “consciousness,” and since AIs don’t automatically share all of those human properties, we should create new concepts for AIs. But the methodology is still bad.
But yes, this post at the very least relies on a theory that hasn’t been tested, and while I suspect it’s at least partially correct, the evidence in the conflationary alliances post is basically 0 evidence for the proposition.
Fwiw I was too much of a coward/too conflict-averse to say anything myself, but I agree with this critique. (As I’ve said in my post, I think half of all people are talking about mostly the same thing when they say ‘consciousness’, which imo is fully compatible with the set of responses Andrew listed in his post, given how they were collected.)
That said, while the methodology isn’t sound, I wouldn’t be surprised if there was in fact a real conflationary alliance around the term, since it’s used in contexts where deciding whether someone (like an upload) is conscious has pretty big moral and political ramifications, so there are pressures for the word to be politicized rather than truth-tracking.