Shouldn’t the arrow from “self-model” to “explicit planning” be dashed and labelled “inaccurate” in the case of blindsight, like it is for anosognosia? From my understanding of your article, both are opposite cases of an inaccurate self-model.
Otherwise, interesting, but I’m waiting for the next article, since the main question (to me), “That’s true of lots of animals though. What makes humans more conscious?”, wasn’t really answered (but take your time!).
Dashed/“inaccurate” would be better than solid, but it’s not really the connection that’s the locus of the inaccuracy.
If the self-model instead contained a diagram inside it, we could see that the connection from self-model to planning is working fine; it’s the diagram inside the self-model that’s wrong.
Yeah, pretty much this.
The Self-Model is accurately reporting that it doesn’t see anything, because it’s not getting visual input, but it’s failing to be an accurate self-model, because other parts can still see and vision can still be acted on in a limited way.
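For what it’s worth, here’s a toy sketch (Python, purely illustrative and not from the article; all names are made up for the example) of that distinction: the self-model’s report to planning is wired up correctly and faithfully reflects its inner diagram, but the diagram itself is wrong, while another pathway can still act on vision in a limited way.

```python
from dataclasses import dataclass

@dataclass
class InnerDiagram:
    # What the self-model *believes* about the rest of the system.
    has_visual_input: bool

@dataclass
class SelfModel:
    diagram: InnerDiagram

    def report_to_planning(self) -> str:
        # The connection to explicit planning works fine: the report
        # faithfully reflects the inner diagram...
        return "I can see" if self.diagram.has_visual_input else "I see nothing"

def visual_pathway_guess(stimulus_side: str) -> str:
    # ...but other parts can still use vision (blindsight): a forced guess
    # tracks the stimulus even though the self-model denies seeing it.
    return stimulus_side

blindsight = SelfModel(InnerDiagram(has_visual_input=False))
print(blindsight.report_to_planning())   # "I see nothing" -- accurate report of an inaccurate diagram
print(visual_pathway_guess("left"))      # "left" -- vision still usable for action
```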