What does it mean to benefit a person, apart from benefits to the individual cells in that person’s body? I don’t think it’s unreasonable to think of society as having emergent goals, and fulfilling those goals would benefit it.
I don’t think it’s unreasonable to think of society as having emergent goals, and fulfilling those goals would benefit it.
I actually do think it is unreasonable to take any but the physical stance toward society; the predictive power of taking the intentional stance (or the design stance, for that matter) is just less.
But! We might assume, for the sake of argument, that we can think of society as having emergent goals, goals that do not benefit its members (or do not benefit a majority of its members, or something). In that case, however, my question is:
Why should I care?
Society’s emergent goals can go take a flying leap, as can evolution’s goals, the goals of my genes, the goals of the human species, and any other goals of any other entity that is not me or the people I care about.
Hmm, I’ll have to look into the predictive power thing, and the tradeoff between predictive power and efficiency. I figured viewing society as an organism would drastically improve computational efficiency over trying to reason about and then aggregate individual people’s preferences, so that any drop in predictive power might be worth it. But I’m not sure I’ve seen evidence in either direction; I just assumed it based on analogy and priors.
As for why you should care, I don’t think you should, necessarily, if you don’t already. But I think for a lot of people, serving some kind of emergent structure or higher ideal is an important source of existential fulfillment.
Sorry, when I said “predictive power”, I was actually assuming normalization for efficiency. That is, my claim is that the total predictive capacity you get for your available computational resources is greatest by taking the physical stance in this case.
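For concreteness, one way to cash out “predictive power normalized for efficiency” is to score each stance by the accuracy it buys per unit of computation spent. The sketch below is only illustrative: the scoring rule (accuracy divided by cost) and every number in it are assumptions made up for the example, and the disagreement above is precisely over which stance would come out ahead with realistic values.

    # Illustrative only: compare modelling "stances" by the predictive
    # accuracy they buy per unit of compute. The accuracy and cost numbers
    # are placeholders, not measurements.

    stances = {
        # stance: (predictive_accuracy, compute_cost in arbitrary units)
        "intentional (society as an organism)": (0.60, 1.0),
        "physical (model the individual members)": (0.85, 5.0),
    }

    def value_per_compute(accuracy, cost):
        """Predictive capacity bought per unit of computation spent."""
        return accuracy / cost

    for name, (accuracy, cost) in stances.items():
        print(f"{name}: {value_per_compute(accuracy, cost):.2f} accuracy per unit of compute")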
What does it mean to benefit a person, apart from benefits to the individual cells in that person’s body?
It is almost trivial to imagine such a thing. For example, my body may be destroyed utterly in the process of transferring my mind, unharmed, into a new, artificial body, better in every way than my old one. This would be great for me (assuming the new body suited my wants and needs), but bad for the cells making up my existing body.
The core idea here is that I am not my body. I am currently instantiated in my body, but that’s not the same thing. I care about my current instantiation only to the degree that doing so is necessary for me to survive and prosper.
Ah. I’m not sure I agree with you on the nature of the self. What evidence do you have that your mind could be instantiated in a different medium and still lead to the same subjective experience? (Or is subjective experience irrelevant to your definition of self?)
I mean, I don’t necessarily disagree with this kind of dualism; it seems possible, even given what I know about embodied cognition. I’m just not sure how it could be tested scientifically.
What evidence do you have that your mind could be instantiated in a different medium and still lead to the same subjective experience? (Or is subjective experience irrelevant to your definition of self?)
No direct evidence, just the totality of what we currently know about the mind (i.e. cognitive science). Subjective experience is not irrelevant, though I am still confused about its nature. I don’t, however, have any reason to believe that it’s tied to any particular instantiation.
dualism
I don’t think my view can properly be characterized as dualism. I don’t posit any sort of nonmaterial properties of mind, for instance, nor that the mind itself is some nonmaterial substance. Computationalism merely says, essentially, that “the mind is what the brain does”, and that other physical substrates can perform the same computation.
embodied cognition
Everything that I know about the idea of embodied cognition leads me to conclude that it is a brand of mysticism. I’ve never heard a cogent argument for why embodiment can’t be simulated on some suitable level.
Hmm, I can see arguments for and against calling computationalism a form of dualism. I don’t think it matters much, so I’ll accept your claim that it’s not.
As for embodied cognition, most of what I know about it comes from reading Lawrence Shapiro’s book Embodied Cognition. I was much less impressed with the field after reading that book, but I do think the general idea is important: that it’s a mistake to think of the mind and body as separate things, and that in order to study cognition we have to take the body into consideration.
I agree that embodiment could be simulated. But I don’t like to make assumptions about how subjective experience works, and for all I know, it arises from some substrates of cognition but not others. Since I think of my subjective experience as an essential part of my self, this seems important.
in order to study cognition we have to take the body into consideration.
I agree. I don’t think embodiment is irrelevant; my own field (human-computer interaction) takes embodiment quite seriously — it’s an absolutely integral factor in natural user interface design, for example.
I just don’t think embodiment is in any way magic, the way that the embodied cognition people seem to think and imply. If you can simulate a human and their environment on any level you like, then embodiment stops being an issue. It seems like we don’t actually disagree on this.
I don’t like to make assumptions about how subjective experience works, and for all I know, it arises from some substrates of cognition but not others.
This is certainly not impossible, but it’s not clear to me why you couldn’t then simulate the substrate at a sufficiently low level to capture whatever aspect of the substrate is responsible for enabling cognition. After all, we could in principle simulate the entire universe down to quantum configuration distributions, right?
If you wanted to make a weaker claim based on computational tractability, then that would of course be another thing.
Since I think of my subjective experience as an essential part of my self, this seems important.
I concur with this. To the extent that I have any kind of a handle on what subjective experience even is, it does seem quite important.
P.S.
Hmm, I can see arguments for and against calling computationalism a form of dualism. I don’t think it matters much, so I’ll accept your claim that it’s not.
Yeah, this is probably a question of preferred terminology and I am not inclined to argue about it too much; I just wanted to clarify my actual views.
I don’t think embodiment is irrelevant. [...] If you can simulate a human and their environment on any level you like, then embodiment stops being an issue.
Sure you can (and have to) take the body with you into the simulation—but then the (biological) rules governing the body still apply. You may have more control over them though.
Ok. So, my death benefits me insofar as I care about other people (altruism) and my death benefits those people.
Obvious next question: how does my death benefit the people I care about?
Roughly delineated, I care about my family and friends immensely; casual acquaintances / colleagues / peers / etc. a good amount; and other people roughly to the degree that they share my culture (variously defined).
Thus, for death to benefit me via altruism, it would have to benefit my family and friends a lot (to offset the great loss they would feel at my death); and/or benefit casual acquaintances / colleagues / peers a pretty large amount (to compensate for the discount factor of how much I care about them); and/or benefit various other people in the world an almost astronomical total amount (ditto).
Does my death in fact do this? I can imagine certain specific scenarios where one or more of these things is the case, such as if my death serves to save my family, or save the world from destruction. However, death in common circumstances does not seem to fit the aforementioned criteria for me to judge it a net positive.
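To make the weighing above concrete, here is a toy version of that calculation. Every number in it is an invented placeholder, not an estimate anyone is defending; only the structure matters: care-weighted effects on others versus the cost to me.

    # Toy model of the weighing described above. All weights and benefit
    # figures are invented placeholders; only the structure is the point:
    # net value = sum over groups of (how much I care) * (effect on them),
    # minus the cost to me.

    care_weights = {
        "family_and_friends": 1.0,              # cared about immensely
        "acquaintances_colleagues_peers": 0.1,  # a good amount, but discounted
        "others_sharing_my_culture": 0.001,     # small per-person weight
    }

    # Hypothetical net effect of my death on each group (positive = benefit).
    # Family and friends also bear the loss itself, hence the negative number.
    effects_of_my_death = {
        "family_and_friends": -50.0,
        "acquaintances_colleagues_peers": 1.0,
        "others_sharing_my_culture": 10.0,
    }

    cost_to_me = 1000.0  # my death would be very bad for me, personally

    net_value = sum(
        care_weights[group] * effects_of_my_death[group]
        for group in care_weights
    ) - cost_to_me

    print(net_value)  # strongly negative under anything like these numbers

On this toy accounting, death only comes out as a net positive in the special scenarios mentioned above, where the benefit terms become enormous (saving my family, saving the world).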
If you have children who in turn plan on having children, then you might be inclined to die eventually in order to secure more resources for your progeny. Similarly if you have close friends with children.
Of course, I’d prefer to incentivize having fewer children and make it harder to have children as a means to control population. If death were necessary, I’d recommend a lottery (assuming there are insufficient volunteers).
how does my death benefit the people I care about?
Directly? It doesn’t. It only benefits society at large and even then only in so far as it conveys a fitness advantage. Same as sexual reproduction does. The obvious difference is that sexual mechanisms show up as motivations at the conscious level, while aging mechanisms stop at the biological level. But denying its operation will not work, just as an AI couldn’t deny the operation of its hardware (at least not without incurring comparable disadvantages).
It only benefits society at large and even then only in so far as it conveys a fitness advantage.
Once again, what does it mean for something to benefit society, apart from any benefit to individuals in that society?
If something doesn’t benefit any of the people I care about directly, at all, then how can it benefit society, which is made up of those people?
It only benefits society at large and even then only in so far as it conveys a fitness advantage. Same as sexual reproduction does. The obvious difference is that sexual mechanisms show up as motivations at the conscious level, while aging mechanisms stop at the biological level. But denying its operation will not work, just as an AI couldn’t deny the operation of its hardware (at least not without incurring comparable disadvantages).
Are you familiar with the concept of the selfish gene?
What does it mean for something to be “beneficial to a society”, apart from any benefits it has to the individuals in that society?
My death would be very bad for me, personally. How does the “benefit to society” of my death cash out as benefit to me?
It doesn’t.
Except in the same way as altruism does.