You present an excellently written and interesting case here. I agree with the point that self-modelling systems can think in certain ways that are unique to them and that chickens cannot.
One reason I identify consciousness with having qualia is that Eliezer specifically does so in the Twitter thread. The other is that “qualia” is generally less ambiguous than terms like consciousness, self-awareness, and sentience. The disadvantage is that the concept of qualia is very difficult (and beyond my ability) to explain to people who don’t already know what it means. I accept this tradeoff because I find that I personally get much more out of discussions about qualia specifically than about any of the related words. Perhaps I’m not taking seriously enough the idea that illusionism will explain why I feel like I’m conscious rather than why I am conscious.
I also agree that most other existing mainstream views are somewhat poor, but to me this isn’t particularly strong positive evidence for Eliezer’s view. Models of consciousness at Eliezer’s level of detail are hard to come up with, so there may be many other excellent ones that haven’t been found yet. And Eliezer hasn’t, to my knowledge, done anything that rules out other arguments at the same level of detail as his own.
Basically, I think the reason the best argument we see is Eliezer’s is less “this is the only computational argument that could be made for consciousness” and more “computational arguments for consciousness are really difficult, and this is the first one anyone has found.”
Yudkowsky specifically using the term is a good reason. Thanks for pointing that out, and now I feel a little silly for asking. He says, “I mean qualia, yes.” You can’t get more blunt than that.
While I agree that qualia is less ambiguous than other terms, I am still not sure it is sufficiently unambiguous. I don’t know what you mean by the term, for instance. Generally, though, I would say that I think consciousness exists, but that qualia do not exist.
I think illusionism does offer an account of consciousness; it’s just that consciousness turns out not to be what some people thought it was. Personally, I don’t have, and apparently never have had, qualia intuitions, and thus never struggled to accept Dennett’s views. This might be unusual, but the only view I ever recall holding on the matter was something like Dennett’s. His views immediately resonated with me and I adopted them the moment I heard them, with something like a “wow, this is obviously how it is!” response, and bewilderment that anyone could think otherwise.
I’m glad we agree most alternatives are poor. I do happen to agree that this isn’t especially good evidence against the plausibility of some compelling alternative to illusionism emerging. I definitely think that’s a very real possibility. But I do not think it is going to come out of the intuition-mongering methodology many philosophers rely on. I also agree that this is probably due to the difficulty of coming up with alternative models. Seems like we’re largely in agreement here, in that case.