Assuming Yudkowsky’s position is quite similar to Nate’s, which it sounds like it is given what both have written, I’d recommend reading this debate Yud was in to get a better understanding of this model[1]. Follow up on the posts Yud, Luke and Rob mention if you’d care to know more. Personally, I’m closer to Luke’s position on the topic. He gives a clear and thorough exposition here.
Also, I anticipate that if Nate does have a fully fleshed-out model, he’d be reluctant to share it. I think Yud said he didn’t wish to give too many specifics, as he was worried trolls might implement a maximally suffering entity. And, you know, 4chan exists. Plenty of people there would be inclined to do such a thing to signal disbelief or simply to upset others.
I think this kind of model would fall under the illusionism school of thought. “Consciousness is an illusion” is the motto. I parse it as “the concept you have of consciousness is an illusion, a persistent part of your map that doesn’t match the territory. Just as you may be convinced that the two tables in Shepard’s illusion are different shapes, even after rotating one and matching it onto the other, so too may you be convinced that you have this property known as consciousness.” That doesn’t mean the territory has nothing like consciousness in it, just that it doesn’t have the exact form you believed it to have. You can understand on a deliberative level how the shapes are the same, and understand the process that generates the illusion, whilst still experiencing the illusion. EDIT: The same goes for your intuition that “consciousness has to be more than an algorithm”, or “more than matter”, and so on.
Luke M and I are illusionists, but I don’t think Eliezer or Nate are illusionists.
Huh. I’m a bit surprised. I guess I assumed they were, since a lot of the stuff I’ve read by Eliezer seems heavily influenced by Dennett, and he’s also a physicalist. His approach also seems to be “explain our claims about consciousness”. Plus there’s all the stuff about self-reflection, how an algorithm feels from the inside, etc. I guess I was just bucketing that stuff together with (weak) illusionism. After writing that out, I can see how those points don’t imply illusionism. Does Eliezer think we can save the phenomena of consciousness, and hence that calling it an illusion is a mistake? Or is there something else going on there?
I think Dennett’s argumentation about the hard problem of consciousness has usually been terrible, and I don’t see him as an important forerunner of illusionism, though he’s an example of someone who soldiered on for anti-realism about phenomenal consciousness through long stretches of time when the arguments were lacking.
I think I remember Eliezer saying somewhere that he also wasn’t impressed with Dennett’s takes on the hard problem, but I forget where?
His approach also seems to be “explain our claims about consciousness”.

There’s some similarity between heterophenomenology and the way Eliezer/Nate talk about consciousness, though I guess I think of Eliezer/Nate’s “let’s find a theory that makes sense of our claims about consciousness” as more “here’s a necessary feature of any account of consciousness, and a plausibly fruitful way to get insight into a lot of what’s going on”, not as an argument for otherwise ignoring all introspective data. Heterophenomenology IMO was always a somewhat silly and confused idea, because it proposes that we a priori reject introspective evidence without giving a clear argument for why.
(Or, worse, it’s arguing something orthogonal to whether we should care about introspective evidence, while winking and nudging that there’s something vaguely unrespectable about the introspective-evidence question.)
There are good arguments for being skeptical of introspection here, but “that doesn’t sound like it’s in the literary genre of science” should not be an argument that Bayesians find very compelling.
Yeah. I’d already read the Yudkowsky piece. I hadn’t read the Muehlhauser one though!