If you are a superscientist, there is nothing you can learn from running a programme that you cannot get from examining the code.
This seems trivially false. See also the incomputability of pure Solomonoff induction.
Likewise, I see no reason to expect that a mathematical process could look at a symbolic description of itself and recognize it with intuitive certainty. We have some reason to think the opposite. So why expect to recognize “qualia” from their descriptions?
As orthonormal points out at length, we know that humans have unconscious processing of the sort you might expect from this line of reasoning. We can explain how this would likely give rise to confusion about Mary’s Room.
The implicit assumption I inferred from the claim made it:
“If you are a superscientist, there is nothing you can learn from running a programme [for some given non-infinite time] that you cannot get from examining the code [for a commensurate period of subjective time, including allowance for some computational overhead in those special cases where abstract analysis of the program provides no compression over just emulating it].”
That makes it trivially true. The trivially false seems to apply only when the ‘run the program’ alternative gets to do infinite computation but the ‘be a superscientist and examine the program’ alternative doesn’t.
My thoughts exactly.
The trivially false seems to apply only when the ‘run the program’ alternative gets to do infinite computation
‘If the program you are looking at stops in less than T seconds, go into an infinite loop. Otherwise, stop.’ In order to avoid a contradiction the examiner program can’t reach a decision in less than T seconds (minus any time added by those instructions). Running a program for at most T seconds can trivially give you more info if you can’t wait any longer. I don’t know how much this matters in practice, but the “infinite” part at least seems wrong.
And again, the fact that the problem involves self-knowledge seems very relevant to this layman. (typo fixed)
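A minimal sketch of the construction just described, with everything named here (the callable examiner_has_decided, the time budget T) being illustrative rather than anything from the thread:

    import time

    T = 5.0  # arbitrary time budget for the illustration, in seconds

    def adversarial_target(examiner_has_decided):
        # examiner_has_decided is a hypothetical stand-in for "the program
        # looking at this code has stopped", i.e. the examiner has announced
        # a verdict about this very function within its examination time.
        start = time.monotonic()
        while time.monotonic() - start < T:
            if examiner_has_decided():
                # The examiner reached a decision in under T seconds,
                # so go into an infinite loop.
                while True:
                    time.sleep(1)
            time.sleep(0.01)
        # The examiner took at least T seconds: halt.
        return

Whatever the examiner announces inside the T-second window, this target’s behaviour turns on that announcement itself, which is the self-referential wrinkle the comment points at.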
I don’t see anything particularly troubling for a superscientist in the above.
More info than what? Are you assuming that inspection is equivalent to one programme cycle, or something?
More info than inspecting the code for at most T seconds. Finite examination time seems like a reasonable assumption.
I get the impression you’re reading more than I’m saying. If you want to get into the original topic we should probably forget the OP and discuss orthonormal’s mini-sequence.
More info than who or what inspecting the code? We are talking about superscientists here.
I no longer have any clue what we’re talking about. Are superscientists computable? Do they seem likely to die in less than the lifespan of our (visible) universe? If not, why do we care about them?
The point is that you can’t say a person of unknown intelligence inspecting code for T seconds will necessarily conclude less than a computer of unknown power running the code for T seconds. You are comparing two unknowns.
So why expect to recognize “qualia” from their descriptions?
Why expect an inability to figure out some things about your internal state to put on a technicolor display? Blind spots don’t look like anything. Not even perceivable gaps in the visual field.
What.
(Internal state seems a little misleading. At the risk of getting away from the real discussion again, Peano arithmetic is looking at a coded representation of itself when it fails to see certain facts about its proofs. But it needs some such symbols in order to have any self-awareness at all. And there exists a limit to what any arithmetical system or Turing machine can learn by this method. Oh, and the process that fills my blind spots puts on colorful displays all the time.)
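One standard way to make the limit precise (textbook Gödel material, not a claim made in this thread): let Con(PA) be the sentence, written in PA’s own coding of its proofs, asserting that no proof of 0 = 1 exists. Gödel’s second incompleteness theorem then says that, assuming PA is consistent,

    $\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})$

so PA can represent its own proof machinery symbolically and still be unable to verify that particular fact about it.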
There is no evidence that PA is self-aware.
So your blind spot is filled in by other blind spots?