As far as Mary’s Room goes, you might similarly argue that you could have all of the data belonging to Pixar’s next movie, which you haven’t seen yet, without having any knowledge of what it looks like or what it’s about. Or that you can’t understand a program without compiling it & running it.
I’m not entirely sure how much credibility I lend to that. There are some very abstract things (fairly simple, yes) which I can intuit without prior experience, and there are many complicated things which I can predict due to a great deal of prior experience (e.g. landscapes described in novels).
But I mostly raised it as another interesting problem with a proposed [partial] solution.
> As far as Mary’s Room goes, you might similarly argue that you could have all of the data belonging to Pixar’s next movie, which you haven’t seen yet, without having any knowledge of what it looks like or what it’s about.
I don’t see how you could fail to deduce what it is about, given Mary’s superscientific powers.
> Or that you can’t understand a program without compiling it & running it.
Ordinary mortals can, in simple cases, and Mary presumably can in any case.
> Or that you can’t understand a program without compiling it & running it.
You’re not a superscientist. Can I recommend reading the linked material?
It’s possible I already had & that you’re misunderstanding what my examples are about: the difference between the physical/digital/abstract structure underlying something & the actual experience it produces (e.g. qualia for perceptions of physical things, or pictures for geometric definitions, etc).
I maintain that the difference between code & a running program (or at least our experience of a running program) is almost exactly analogous to the difference between physical matter & our perception of it. The underlying structure is digital, not physical, and has physical means of delivery to our senses, but the major differences end there.
How about telling me whether you actually had?
> I maintain that the difference between code & a running program (or at least our experience of a running program) is almost exactly analogous to the difference between physical matter & our perception of it. The underlying structure is digital, not physical, and has physical means of delivery to our senses, but the major differences end there.
I don’t see where you are going with that. If you are a superscientist, there is nothing you can learn from running a programme that you cannot get from examining the code. But M’s R proposes that there is something you can get from seeing a colour yourself. The analogy doesn’t seem to be there. Unless you disagree with the intended conclusion of M’s R.
Likewise, I see no reason to expect that a mathematical process could look at a symbolic description of itself and recognize it with intuitive certainty. We have some reason to think the opposite. So why expect to recognize “qualia” from their descriptions?
As orthonormal points out at length, we know that humans have unconscious processing of the sort you might expect from this line of reasoning. We can explain how this would likely give rise to confusion about Mary’s Room.
> If you are a superscientist, there is nothing you can learn from running a programme that you cannot get from examining the code.
This seems trivially false. See also the incomputability of pure Solomonoff induction.
The implicit assumption I inferred from the claim made it:
> If you are a superscientist, there is nothing you can learn from running a programme [for some given non-infinite time] that you cannot get from examining the code [for a commensurate period of subjective time, including allowance for some computational overhead in those special cases where abstract analysis of the program provides no compression over just emulating it].
That makes it trivially true. The trivially false seems to apply only when the ‘run the program’ alternative gets to do infinite computation but the ‘be a superscientist and examine the program’ alternative doesn’t.
My thoughts exactly.
> The trivially false seems to apply only when the ‘run the program’ alternative gets to do infinite computation
‘If the program you are looking at stops in less than T seconds, go into an infinite loop. Otherwise, stop.’ In order to avoid a contradiction the examiner program can’t reach a decision in less than T seconds (minus any time added by those instructions). Running a program for at most T seconds can trivially give you more info if you can’t wait any longer. I don’t know how much this matters in practice, but the “infinite” part at least seems wrong.
And again, the fact that the problem involves self-knowledge seems very relevant to this layman. (typo fixed)
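Concretely, the construction might be sketched like this (a minimal Python sketch; `make_adversary`, `examiner`, and the optimistic examiner are hypothetical names for illustration, not anything from the linked material):

```python
# Sketch of the diagonalization above: build a program that asks the
# examiner for a verdict about itself, then does the opposite.

def make_adversary(examiner, T):
    """Return a program that falsifies `examiner`'s verdict about itself.

    `examiner(program, T)` is assumed to inspect `program`'s code
    (without running it) and return True iff it predicts `program`
    halts within T seconds.
    """
    def adversary():
        if examiner(adversary, T):    # verdict: "halts within T seconds"
            while True:               # ...so loop forever instead
                pass
        else:                         # verdict: "runs past T seconds"
            return "halted at once"   # ...so halt immediately
    return adversary

# Example: an examiner that always predicts "halts within T".
optimistic = lambda program, T: True
program = make_adversary(optimistic, T=5)
# Calling program() would loop forever, contradicting the verdict;
# merely running it for T seconds would reveal that much.
```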
I don’t see anything particularly troubling for a superscientist in the above.

More info than what? Are you assuming that inspection is equivalent to one programme cycle, or something?
More info than inspecting the code for at most T seconds. Finite examination time seems like a reasonable assumption.
I get the impression you’re reading more than I’m saying. If you want to get into the original topic we should probably forget the OP and discuss orthonormal’s mini-sequence.
More info than who or what inspecting the code? We are talking about superscientists here.
I no longer have any clue what we’re talking about. Are superscientists computable? Do they seem likely to die in less than the lifespan of our (visible) universe? If not, why do we care about them?
The point is that you can’t say a person of unknown intelligence inspecting code for T seconds will necessarily learn less than a computer of unknown power running the code for T seconds. You are comparing two unknowns.
> So why expect to recognize “qualia” from their descriptions?
Why expect an inability to figure out some things about your internal state to put on a technicolor display? Blind spots don’t look like anything. Not even perceivable gaps in the visual field.
> Why expect an inability to figure out some things about your internal state to put on a technicolor display?
What.
(Internal state seems a little misleading. At the risk of getting away from the real discussion again, Peano arithmetic is looking at a coded representation of itself when it fails to see certain facts about its proofs. But it needs some such symbols in order to have any self-awareness at all. And there exists a limit to what any arithmetical system or Turing machine can learn by this method. Oh, and the process that fills my blind spots puts on colorful displays all the time.)
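For reference, the limit alluded to above can be put as Gödel’s second incompleteness theorem:

$$\text{if } \mathrm{PA} \text{ is consistent, then } \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}),$$

that is, PA cannot prove its own consistency, even though Con(PA) is expressible in PA’s own language via exactly the coding of proofs just mentioned.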
There is no evidence that PA is self-aware.

So your blind spot is filled in by other blind spots?
> If you are a superscientist, there is nothing you can learn from running a programme that you cannot get from examining the code.
If you believe this, then you must similarly think that Mary will learn nothing about the qualia associated with colors if she already understands everything about the physics underlying them.
In case I haven’t driven the point home with enough clarity (for example, I did read the link the first time you posted it), I am claiming that there is something to experiencing the program/novel/world inasmuch as there is something to experiencing colors in the world. Whether that something is a subset of the code/words/physics or something additional is the whole point of the problem of qualia.
And no, I don’t have a clear idea what a satisfying answer might look like.
> If you believe this, then you must similarly think that Mary will learn nothing about the qualia associated with colors if she already understands everything about the physics underlying them.
That doesn’t follow. Figuring out the behaviour of a programme is just an exercise in logical deduction. It can be done by non-superscientists in easy cases, so it is just an extension of the same idea that a superscientist can handle difficult cases. However, there is no “easy case” of deducing a perceived quality from objective information.
Beyond that, if all you are saying is that the problem of colours is part of a larger problem of qualia, which itself is part of a larger issue of experience, I can answer with a wholehearted “maybe”. That might make colour seem less exceptional and therefore less annihilation-worthy, but I otherwise don’t see where you are going.
I’m not just talking about behavior. Experiencing a program involves subjective qualities, like whether Counter-Strike is more fun than Day of Defeat, which maybe can’t be learned just from reading the code.
It’s possible the analogy is actually flawed, and one is contained in its underlying components while the other is not, but I don’t understand how they differ if they do, or why they should.
It’s just another cool problem about colors.