I’m not sure what the problem of distinguishing colours analytically is supposed to relate to. The classic modern argument, Mary’s Room, attempts to demonstrate that the subjective sensation of colour is a problem for materialism, because one can conceivably know everything about the neuroscience of colour perception without knowing anything about how colours look. That could sort of be re-expressed by saying Mary can’t analytically deduce colour sensations from the information she has. And it is sort of true that once you have a certain amount of experiential knowledge of colour space, you could guess the nature of colours you haven’t personally seen. But that isn’t very relevant to M’s R, because she is stipulated as not having seen any colours. So, overall, I don’t see what you are getting at.
You can also know all relevant facts about physics but still not “know” how to ride a bicycle. “Knowing” what red looks like (or being able to imagine redness) requires your brain to have the ability to produce a certain neural pattern, i.e. execute a certain neural “program”. You can’t learn how to imagine red the same way you learn facts like 2+2=4 for the same reason you can’t learn how to ride a bike by learning physics. It’s a different type of “knowledge”, not sure if we should even call it that.
Edit (further explanation): To learn how to ride a bike you need to practice doing it, which implements a “neural program” that allows you to do it (via e.g. “muscle memory” and whatnot). Same for producing a redness sensation (imagining red), a.k.a. “knowing what red looks like”.
Maybe. But, if true, that doesn’t mean that red is know-how. It means that something like know-how is necessary to get knowledge-by-acquaintance with Red. So it still doesn’t show that Red is know-how in itself. (What does it enable you to do?)
Talking about “red in itself” is a bit like talking about “the-number-1 in itself”. What does it mean? We can talk about the “redness sensation” that a person experiences, or “the experience of red”. From an anatomical point of view, experiencing red(ness) is a process that occurs in the brain. When you’re looking at something red (or imagining redness), certain neural pathways are constantly firing. No brain activity → no redness experience.
Let’s compare this to factual knowledge. How are facts stored in the brain? From what we understand about the brain, they’re likely encoded in neuronal/synaptic connections. You could in principle extract them by analyzing the brain. And where is the (knowledge of) red(ness) stored in the brain? Well, there is no ‘redness’ stored in the brain; what is stored (again in synaptic connections) is a set of instructions that activates the color-pathways of the visual cortex and produces the experience of red. See how the ‘knowledge of color’ is not quite like factual knowledge, but rather looks like an ability?
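To put that distinction in programming terms (a loose analogy of my own, not a claim about how neurons actually work): a fact is like stored data you can read out by inspection, while “knowing what red looks like” is more like a stored procedure that only yields anything while it is being executed.

```python
# Declarative knowledge: stored data, retrievable by simple lookup.
facts = {"2+2": 4, "capital_of_france": "Paris"}

# 'Knowing red': a procedure that must actually run to produce
# anything (a stand-in for activating the colour pathways).
def experience_red():
    return "redness"  # exists only as the output of running this

print(facts["2+2"])      # a fact can simply be looked up
print(experience_red())  # this 'knowledge' is the ability to run the procedure
```

The analogy is crude, but it captures why extracting the stored connections gives you the recipe for the experience rather than the experience itself.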
An ability to do what?
You argue as if involving neuronal activation is sufficient evidence that something is an ability. But inabilities are as neuronal as abilities. If someone becomes incapably drunk, that is as much a matter of neuronal activity as anything else. But in common-sense terms, it is a loss of ability, not the acquisition of an ability.
In any case, there are plenty of other objections to the Ability Hypothesis.
Riding a bike and seeing red both involve the brain performing I/O, i.e., interacting with the outside world, whereas learning that 2+2=4 can be done without such interaction.
One might imagine so, but I expect there are no examples of it ever happening.
There are plenty of examples of less basic a priori truths being figured out once the basics are in place.
Mary’s room is an interesting one. I think there’s a valid rebuttal to it, though it takes quite a bit of explanation, so hold onto your hats, ladies and gentlemen, and if you’re not interested then feel free to ignore. I should stress that this is an argument of my own formulation, although it is informed by my readings of a bunch of other philosophers, and that therefore it is entirely possible that people who share my conclusions might disagree with my premises or form of argument. I’m not trying very hard to convince anyone with this post, just putting the argument out there for your inspection. <-- (EDIT: left the word “not” out of this sentence the first time. Whoops!)
The hard-materialist, anti-qualian, functionalist argument is that sensation ≡ brain state. That is, “for one’s brain to be in the brain-state which is produced when red light hits one’s retina is to experience redness”. Once you’ve experienced redness a few times, it is possible to intentionally assume that “red” brain-state, so it is possible to remember what it is like to see red without actually having to be exposed to red light. We call this “knowing what red is like”.
Mary, unfortunately, has grown up in a colour-free environment, so she has never experienced the brain-state that is “seeing red”, and even if her brain had drifted through that state accidentally, she wouldn’t have known that what she was experiencing was redness. She can’t find her way to the state of redness because she has never been there before. When she starts researching in an attempt to figure out what it is like to see red, her descriptive knowledge of the state will increase—she’ll know which sets of neurons are involved, the order and frequency of their firings, etc—but of course this won’t be much help in actually attaining a red brain-state. Hearing that Paris is at 48.8742° N, 2.3470° E doesn’t help you get there unless you know where you are right now.
Mary’s next step might be to investigate the patterns that instantiate sensations with which she is familiar. She might learn about how the smell of cinnamon is instantiated in the brain, or the feeling of heat, etc, etc, and then attempt to “locate” the sensation of red by analogy to these sensations. If you know where you are relative to Brisbane, and you know where Brisbane is relative to Paris, then you can figure out where you are relative to Paris.
This workaround would be effective if she were trying to find her way to a physical place, because on Earth you only need 3 dimensions to specify any given location, and it’s the same 3 dimensions every time. Unfortunately, the brain is more complicated. There are some patterns of neural behaviour which are only active in the perception of colour, so while analogy to the other senses might allow Mary to zero in a little closer to knowing what red is like, it wouldn’t be nearly enough to solve her problem.
Luckily, Mary is a scientist, and where scientists can’t walk they generally invent a way to fly. Mary knows which neurons are activated when people see red, and she knows the manner of their activation. She can scan her head and point to the region of her brain that red light would stimulate. So why does she need red light? Synesthetes regularly report colour experiences being induced by apparently non-coloured stimuli, and epileptics often experience phantom colours before fits. Ramachandran and Hubbard even offer a report of a colour-blind synesthete who experiences what he calls “Martian colours”—colours which he has never experienced in the real world and which therefore appear alien to him (PRSL, 2001). So, Mary CAN know red; she just has to induce the brain state associated with redness in herself. Maybe she uses transcranial electrostimulation. Maybe she has to resort to wireheading (http://wiki.lesswrong.com/wiki/Wireheading). Maybe all she needs to do is watch a real-time brain scan while she meditates, so she can learn to guide herself into the right state the same way that people who already “know” red get to it. Point is, if Mary is at all dedicated, she’s going to end up understanding red.
Of course, some qualians might argue that this misses the point—if Mary induces an experience of redness then she’s still exposing herself to the quale of red, whether or not there was any red light involved, so Mary hasn’t come to her knowledge solely by physical means. I think that skirts dangerously close to begging the question, though. As I’ve mentioned above, the functionalist view of colour holds that to know what it is like to see red is just “to know how to bring about the brain-state associated with redness in oneself”. It seems unfair to say that Mary has to possess that knowledge but never use it in order for functionalists to be proved right—you might as well request that she know what an elephant looks like without ever picturing one in her mind. Regardless, the Mary’s Room thought experiment presupposes that Mary can’t experience the quale of red in her colourless environment. If qualians want to argue that inducing the brain state of red exposes Mary to the quale of red, then the thought experiment doesn’t do what it was supposed to, and therefore can’t prove what it was designed to prove.
Anyway, I’d say that was my two cents but looking at how much I’ve typed it’s probably more like fifteen dollars...
It’s just another cool problem about colors.
As far as Mary’s Room goes, you might similarly argue that you could have all of the data belonging to Pixar’s next movie, which you haven’t seen yet, without having any knowledge of what it looks like or what it’s about. Or that you can’t understand a program without compiling it & running it.
I’m not entirely sure how much credibility I lend to that. There are some very abstract things (fairly simple, yes) which I can intuit without prior experience, and there are many complicated things which I can predict due to a great deal of prior experience (eg landscapes described in novels).
But I mostly raised it as another interesting problem with a proposed [partial] solution.
I don’t see how you could fail to be able to deduce what it is about, given Mary’s superscientific powers.
Ordinary mortals can, in simple cases, and Mary presumably can in any case.
You’re not a superscientist. Can I recommend reading the linked material?
It’s possible I already had & that you’re misunderstanding what my examples are about: the difference between the physical/digital/abstract structure underlying something & the actual experience it produces (eg qualia for perceptions of physical things, or pictures for geometric definitions, etc).
I maintain that the difference between code & a running program (or at least our experience of a running program) is almost exactly analogous to the difference between physical matter & our perception of it. The underlying structure is digital, not physical, and has physical means of delivery to our senses, but the major differences end there.
How about telling me whether you actually had?
I don’t see where you are going with that. If you are a superscientist, there is nothing you can learn from running a programme that you cannot get from examining the code. But M’s R proposes that there is something you can get from seeing a colour yourself. The analogy doesn’t seem to be there. Unless you disagree with the intended conclusion of M’s R.
This seems trivially false. See also the incomputability of pure Solomonoff induction.
Likewise, I see no reason to expect that a mathematical process could look at a symbolic description of itself and recognize it with intuitive certainty. We have some reason to think the opposite. So why expect to recognize “qualia” from their descriptions?
As orthonormal points out at length, we know that humans have unconscious processing of the sort you might expect from this line of reasoning. We can explain how this would likely give rise to confusion about Mary’s Room.
The implicit assumption I inferred from the claim made it: “If you are a superscientist, there is nothing you can learn from running a programme [for some given non-infinite time] that you cannot get from examining the code [for a commensurate period of subjective time, including allowance for some computational overhead in those special cases where abstract analysis of the program provides no compression over just emulating it].”
That makes it trivially true. ‘Trivially false’ seems to apply only when the ‘run the program’ alternative gets to do infinite computation but the ‘be a superscientist and examine the program’ alternative doesn’t.
My thoughts exactly.
‘If the program you are looking at stops in less than T seconds, go into an infinite loop. Otherwise, stop.’ In order to avoid a contradiction the examiner program can’t reach a decision in less than T seconds (minus any time added by those instructions). Running a program for at most T seconds can trivially give you more info if you can’t wait any longer. I don’t know how much this matters in practice, but the “infinite” part at least seems wrong.
And again, the fact that the problem involves self-knowledge seems very relevant to this layman. (typo fixed)
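The construction above is easy to sketch in code (ignoring the T-second bookkeeping; `analyzer` is my stand-in for whatever hypothetical code-examining procedure the superscientist uses):

```python
def make_adversary(analyzer):
    """Build a program that consults the given halt-predictor about
    itself, then does the opposite of whatever it predicted
    (True = 'it halts', False = 'it loops forever')."""
    def adversary():
        if analyzer(adversary):   # predicted to halt...
            while True:           # ...so loop forever instead
                pass
        return "halted"           # predicted to loop, so halt immediately
    return adversary

# Any concrete analyzer is refuted by the program built from it.
# E.g. one that always predicts "loops forever":
prog = make_adversary(lambda p: False)
print(prog())  # prints "halted", contradicting the prediction
```

An analyzer that always predicts “halts” fares no better: the program built from it loops forever. That is the sense in which the examiner can’t beat the clock on programs constructed against it.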
I don’t see anything particularly troubling for a superscientist in the above.
More info than what? Are you assuming that inspection is equivalent to one programme cycle, or something?
More info than inspecting the code for at most T seconds. Finite examination time seems like a reasonable assumption.
I get the impression you’re reading more than I’m saying. If you want to get into the original topic we should probably forget the OP and discuss orthonormal’s mini-sequence.
More info than who or what inspecting the code? We are talking about superscientists here.
I no longer have any clue what we’re talking about. Are superscientists computable? Do they seem likely to die in less than the lifespan of our (visible) universe? If not, why do we care about them?
The point is that you can’t say a person of unknown intelligence inspecting code for T seconds will necessarily learn less than a computer of unknown power running the code for T seconds. You are comparing two unknowns.
Why expect an inability to figure out some things about your internal state to put on a technicolor display? Blind spots don’t look like anything, not even like perceivable gaps in the visual field.
What.
(Internal state seems a little misleading. At the risk of getting away from the real discussion again, Peano arithmetic is looking at a coded representation of itself when it fails to see certain facts about its proofs. But it needs some such symbols in order to have any self-awareness at all. And there exists a limit to what any arithmetical system or Turing machine can learn by this method. Oh, and the process that fills my blind spots puts on colorful displays all the time.)
There is no evidence that PA is self-aware.
So your blind spot is filled in by other blind spots?
If you believe this, then you must similarly think that Mary will learn nothing about the qualia associated with colors if she already understands everything about the physics underlying them.
In case I haven’t driven the point home with enough clarity (for example, I did read the link the first time you posted it), I am claiming that there is something to experiencing the program/novel/world inasmuch as there is something to experiencing colors in the world. Whether that something is a subset of the code/words/physics or something additional is the whole point of the problem of qualia.
And no, I don’t have a clear idea what a satisfying answer might look like.
That doesn’t follow. Figuring out the behaviour of a programme is just an exercise in logical deduction. It can be done by non-superscientists in easy cases, so it is just an extension of the same idea that a superscientist can handle difficult cases. However, there is no “easy case” of deducing a perceived quality from objective information.
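To be concrete, this is the sort of “easy case” I mean, where examining the code suffices and no execution is needed (a toy example of my own):

```python
def f(n):
    # Behaviour deducible by pure inspection: f(n) = 2n + 1.
    return n * 2 + 1

# A non-superscientist can conclude, without running anything,
# that f(3) must be 7; actually running it merely confirms this.
assert f(3) == 7
```

The superscientist just extends this kind of deduction to arbitrarily hard cases. There is no analogous easy case for deducing what red looks like.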
Beyond that, if all you are saying is that the problem of colours is part of a larger problem of qualia, which itself is part of a larger issue of experience, I can answer with a wholehearted “maybe”. That might make colour seem less exceptional and therefore less annihilation-worthy, but I otherwise don’t see where you are going.
I’m not just talking about behavior. The kinds of things involved in experiencing a program involve subjective qualities, like whether Counter-Strike is more fun than Day of Defeat, which maybe can’t be learned just from reading the code.
It’s possible the analogy is actually flawed, and one is contained in its underlying components while the other is not, but I don’t understand how they differ if they do, or why they should.