There are maybe three things going on here. In the original discussions surrounding the Sorites paradox (and Robin Hanson’s mangled worlds), it was proposed that there is no need to have a fully objective and non-arbitrary concept of self (or of world). This makes vagueness into a principle: it’s not just that the concept is underdetermined, it’s asserted that there is no need to make it fully exact.
The discussion with Psychohistorian proceeds in a different direction. Psychohistorian hasn’t taken a stand in favor of vagueness. I was able to ask my question because no-one has an exact answer, Psychohistorian included, but Psychohistorian at least didn’t say “we don’t need an exact answer”—and so didn’t “vague out”.
Fair point about missing the context on my part, and I should have done better, since I rip on others when they do the same—just ask Z M Davis!
Still, if this is what’s going on here—if you think rejection of your ontology forces you into one of two unpalatable positions, one represented by Robin_Hanson, and the other by Psychohistorian—then this rock-and-a-hard-place problem of identity should have been in your main post to show what the problem is, and I can’t infer that issue from reading it.
The ontological assumptions are made primarily so I don’t have to disbelieve in the existence of time, color, or myself.
Again, nothing in the standard LW handling requires you to disbelieve in any of those things, at the subjective level; it’s just that they are claimed to arise from more fundamental phenomena.
They’re not made so as to expedite biophysical progress, though they might do so if they’re on the right track.
Then I’m lost: normally, the reason to propose e.g. a completely new ontology is to eliminate a confusion from the beginning, thereby enhancing your ability to achieve useful insights. But your position is: buy into my ontology, even though it’s completely independent of your ability to find out how consciousness works. That’s even worse than a fake explanation!
Just like an information theoretic analysis of a program brings us no closer to getting actual labels for the program’s referents.
Colors are phenomena, not labels. It’s the names of colors which are labels, for contingent collections of individual shades of color. There is no such thing as objective “redness” per se, but there are individual shades of color which may or may not be classified as red. It’s the instances of color which are the ontological problem; the way we group them is not.
I think you’re misunderstanding the Drescher analogy I described. The gensyms don’t map to our terms for color, or classifications for color; they map to our phenomenal experience of color. That is, the distinctiveness of experiencing red, as differentiated from other aspects of your consciousness, is like the distinctiveness of several generated symbols within a program.
The program is able to distinguish between gensyms, but the comparison of their labels across different program instances is not meaningful. If that’s not a problem in need of a solution, neither should qualia be, since qualia can be viewed as the phenomenon of being able to distinguish between different data structures, as seen from the inside.
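The gensym point can be made concrete with a short sketch. This is a hypothetical illustration (the `Gensym` class and its labels are my own invention, not Drescher's code): the program can distinguish its generated symbols from the inside, but the particular labels they carry are arbitrary, so comparing them across program instances is meaningless.

```python
class Gensym:
    """A freshly generated symbol, distinguishable only by identity."""
    _counter = 0

    def __init__(self):
        Gensym._counter += 1
        # The label is arbitrary: another run could assign these
        # labels in a different order, with no change in behavior.
        self.label = f"g{Gensym._counter}"

    def __repr__(self):
        return f"<gensym {self.label}>"


# Two distinct "experiences": the program treats them differently,
# the way color data and sound data are kept distinct...
red_token = Gensym()
sound_token = Gensym()

assert red_token is not sound_token       # distinguishable from inside
assert red_token.label != sound_token.label
# ...but nothing about the label "g1" means anything outside this
# process; asking "is your g1 the same as my g1?" is ill-posed.
```

On this analogy, asking whether your red is the same as my red is like asking whether two programs' gensyms "match": each process can only compare tokens internally.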
(To put it another way, your experience of color has to be different enough that you don’t treat color data as sound data.)
I emphasize that Drescher has not “closed the book” on the issue; there’s still work to be done. But you can see how qualia can be approached within the reductionist ontology espoused here.