It’s not exactly inconceivable to me, so much as unlikely. What that situation would still imply is that a perfect picture of what all the mere atoms in a locality were doing, including the atoms of speech, would have a readily discoverable flaw: a massive inability to predict what would happen from some point after photons hit the eyes until some point before the speech emanated. There would then have to be a different kind of matter as a component of every mind with qualia: at least every human mind, or nearly every one.
you may be an eliminative materialist
I may be what people call that, but if you (or I) learn that such a label fits me only after you (or I) have learned my opinions on all the categories eliminative materialists opine on, then you (or I) haven’t learned anything about me from the label. And if I try to figure out whether others would call me that, I won’t be able to taboo “eliminative materialist” for that inquiry.
The way I picture it is that we might obtain a detailed picture of the brain, but all we would find is that the brain has a parallel structure interpreted in a serial way, as Dennett describes. We would discover, in hindsight, that we already had a roughly accurate idea in 2011, with some gaps and flaws but no major missing piece, of internal narratives and of why humans write philosophy papers, as Dennett laid out in Consciousness Explained.
Eliminative materialists might then claim victory. However, I argue in my article that if a mind possesses sufficiently detailed information about a brain’s structure, then he actually experiences whatever qualia that brain produces, because he runs exactly the same computations (and whatever inaccuracy there is in his information about that brain shows up as the same degree of inaccuracy in how closely his qualia match that brain’s).
So, to the extent that a mind can accurately predict another mind’s precise behaviours, he experiences that brain’s qualia, because he is essentially running that brain himself. It follows that there is no uncertainty about qualia over and above uncertainty about the physical configuration of the Universe, which is precisely what Eliezer and Chalmers were arguing about.
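To make the step from exact prediction to running the same computation concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical (the transition rule is an arbitrary stand-in for whatever the real neural dynamics are); the point is only that a predictor which reproduces a deterministic system's precise state trajectory does so by applying the same transition function to the same states, so its trace and the system's trace are one and the same computation.

```python
# A minimal toy sketch, not anyone's actual model of a brain: treat the
# "brain" as a deterministic state-transition function. A predictor that
# reproduces its behaviour exactly, step for step, has no shortcut; it must
# apply the same transition function to the same states, i.e. run the same
# computation itself.

def brain_step(state, stimulus):
    # Arbitrary stand-in transition rule for whatever the real dynamics are.
    return (state * 31 + len(stimulus) + 7) % (2 ** 32)

def run(initial_state, stimuli):
    # Used both for the brain's own history and for the prediction: the
    # trace of intermediate states is identical because the computation is.
    state, trace = initial_state, []
    for stimulus in stimuli:
        state = brain_step(state, stimulus)
        trace.append(state)
    return trace

stimuli = ["photons hit eyes", "deliberation", "speech emanates"]
brain_trace = run(0, stimuli)      # what the brain actually does
predicted_trace = run(0, stimuli)  # the predictor re-running the same steps
assert predicted_trace == brain_trace
```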