What does it mean to benefit a person, apart from benefits to the individual cells in that person’s body? I don’t think it’s unreasonable to think of society as having emergent goals, and fulfilling those goals would benefit it.
I don’t think it’s unreasonable to think of society as having emergent goals, and fulfilling those goals would benefit it.
I actually do think it is unreasonable to take any but the physical stance toward society; the predictive power of taking the intentional stance (or the design stance, for that matter) is just less.
But! We might assume, for the sake of argument, that we can think of society as having emergent goals, goals that do not benefit its members (or do not benefit a majority of its members, or something). In that case, however, my question is:
Why should I care?
Society’s emergent goals can go take a flying leap, as can evolution’s goals, the goals of my genes, the goals of the human species, and any other goals of any other entity that is not me or the people I care about.
Hmm, I’ll have to look into the predictive power thing, and the tradeoff between predictive power and efficiency. I figured viewing society as an organism would drastically improve computational efficiency over trying to reason about and then aggregate individual people’s preferences, so that any drop in predictive power might be worth it. But I’m not sure I’ve seen evidence in either direction; I just assumed it based on analogy and priors.
As for why you should care, I don’t think you should, necessarily, if you don’t already. But I think for a lot of people, serving some kind of emergent structure or higher ideal is an important source of existential fulfillment.
Sorry, when I said “predictive power”, I was actually assuming normalization for efficiency. That is, my claim is that the total predictive capacity you get for your available computational resources is greatest by taking the physical stance in this case.
What does it mean to benefit a person, apart from benefits to the individual cells in that person’s body?
It is almost trivial to imagine such a thing. For example, my body may be destroyed utterly in the process of transferring my mind, unharmed, into a new, artificial body, better in every way than my old one. This would be great for me (assuming the new body suited my wants and needs), but bad for the cells making up my existing body.
The core idea here is that I am not my body. I am currently instantiated in my body, but that’s not the same thing. I care about my current instantiation only to the degree that doing so is necessary for me to survive and prosper.
Ah. I’m not sure I agree with you on the nature of the self. What evidence do you have that your mind could be instantiated in a different medium and still lead to the same subjective experience? (Or is subjective experience irrelevant to your definition of self?)
I mean, I don’t necessarily disagree with this kind of dualism; it seems possible, even given what I know about embodied cognition. I just am not sure how it could be tested scientifically.
What evidence do you have that your mind could be instantiated in a different medium and still lead to the same subjective experience? (Or is subjective experience irrelevant to your definition of self?)
No direct evidence, just the totality of what we currently know about the mind (i.e. cognitive science). Subjective experience is not irrelevant, though I am still confused about its nature. I don’t, however, have any reason to believe that it’s tied to any particular instantiation.
dualism
I don’t think my view can properly be characterized as dualism. I don’t posit any sort of nonmaterial properties of mind, for instance, nor that the mind itself is some nonmaterial substance. Computationalism merely says, essentially, that “the mind is what the brain does”, and that other physical substrates can perform the same computation.
embodied cognition
Everything that I know about the idea of embodied cognition leads me to conclude that it is a brand of mysticism. I’ve never heard a cogent argument for why embodiment can’t be simulated on some suitable level.
Hmm, I can see arguments for and against calling computationalism a form of dualism. I don’t think it matters much, so I’ll accept your claim that it’s not.
As for embodied cognition, most of what I know about it comes from reading Lawrence Shapiro’s book Embodied Cognition. I was much less impressed with the field after reading that book, but I do think the general idea is important: that it’s a mistake to think of the mind and body as separate things, and that in order to study cognition we have to take the body into consideration.
I agree that embodiment could be simulated. But I don’t like to make assumptions about how subjective experience works, and for all I know, it arises from some substrates of cognition but not others. Since I think of my subjective experience as an essential part of my self, this seems important.
in order to study cognition we have to take the body into consideration.
I agree. I don’t think embodiment is irrelevant; my own field (human-computer interaction) takes embodiment quite seriously — it’s an absolutely integral factor in natural user interface design, for example.
I just don’t think embodiment is in any way magic, the way that the embodied cognition people seem to think and imply. If you can simulate a human and their environment on any level you like, then embodiment stops being an issue. It seems like we don’t actually disagree on this.
I don’t like to make assumptions about how subjective experience works, and for all I know, it arises from some substrates of cognition but not others.
This is certainly not impossible, but it’s not clear to me why you couldn’t then simulate the substrate on a sufficiently low level as to capture whatever aspect of the substrate is responsible for enabling subjective experience. After all, we could in principle simulate the entire universe down to quantum configuration distributions, right?
If you wanted to make a weaker claim based on computational tractability, then that would of course be another thing.
Since I think of my subjective experience as an essential part of my self, this seems important.
I concur with this. To the extent that I have any kind of handle on what subjective experience even is, it does seem quite important.
P.S.
Hmm, I can see arguments for and against calling computationalism a form of dualism. I don’t think it matters much, so I’ll accept your claim that it’s not.
Yeah, this is probably a question of preferred terminology and I am not inclined to argue about it too much; I just wanted to clarify my actual views.
I don’t think embodiment is irrelevant. [...] If you can simulate a human and their environment on any level you like, then embodiment stops being an issue.
Sure, you can (and have to) take the body with you into the simulation — but then the (biological) rules governing the body still apply. You may have more control over them, though.