When is a fetus or baby capable of feeling pain (that has moral disvalue)? What about (non-human) animals?
At some point between being a blob of cells and being a fully formed baby popping out of Mommy, with the disvalue on a sliding scale. Where is the crisis?
For non-human animals, I disapprove of torturing animals for kicks, but am fine with using animals for industrial purposes, including food and medical testing. No crisis here either.
In life, I don’t kill people, and I also don’t alleviate a great deal of death and suffering that I might. I eat meat, I wear leather, and I support abortion rights. No crisis. And I don’t see a lot of other people in crisis over such things either.
No crisis. And I don’t see a lot of other people in crisis over such things either.
I explained in the post why “ontological crisis” is a problem that people mostly don’t have to deal with right away, but will have to eventually, in the paragraph that starts with “To fully confront the ontological crisis that we face”. Do you have any substantive disagreements with my post, or just object to the term “crisis” as being inappropriate for something that isn’t a present emergency for most people? If it’s the latter, I chose it for historical reasons, namely because Peter de Blanc already used it to name a similar class of problems in AIs.
In the paragraph you refer to:

Nevertheless, this approach hardly seems capable of being extended to work in a future where many people may have nontraditional mind architectures, or have a zillion copies of themselves running on all kinds of strange substrates, or be merged into amorphous group minds with no clear boundaries between individuals.
Maybe we have no substantive disagreement. If your point is that a million-copy superintelligence will have moral issues, arising from ontologies we don’t currently have, then I agree. Me, I think it’s kind of cheeky to be prescribing solutions for the million-copy superintelligence—I think he’s smarter than I am, doesn’t need my help much, and may not ever exist anyway. I’m not here to rain on that parade, but I’m not interested in joining it either.
However, you seemed to be using present tense for the crisis, and I just don’t see one now. Real people now don’t have big complicated ontological problems lacking clear solutions. That was my point.
The abortion example was appropriate, as that is one issue where currently many people have a problem, but their problem is usually just essentialism, and there is a cure for it—just knock it off.
I find discussions of metaethics interesting, particularly in terms of the conceptual confusion involved. It seemed that you were getting at such issues, but I couldn’t locate a concrete and currently relevant issue of that type from your post. So I directly asked for the concretes applicable now. You gave a couple. I don’t find either particularly problematic.
The abortion example was appropriate, as that is one issue where currently many people have a problem, but their problem is usually just essentialism, and there is a cure for it—just knock it off.

I don’t see how this addresses the problem.

Are you suggesting that AI will avoid cognitive dissonance by using compartmentalization like humans do?
It’s easy to decide that the moral significance of a fetus changes gradually from conception to birth; it takes a bit more thought to quantify the significance. Abstractly, at what stage of development is the suffering of 100 fetuses commensurate with the suffering of a newborn? 1 month of gestation? 4? 9? More concretely, if you’re pregnant, you’ll have to decide not only whether the phenomenological point of view of your unborn child should be taken into account in your decisionmaking, but you’ll have to decide in what way and to what degree it should be taken into account.
It’s not clear whether your disapproval of animal torture is for consequentialist or virtue-ethics reasons, or whether it is a moral judgment at all; but in either case there are plenty of everyday cases of borderline animal exploitation. (Dogfighting? Inhumane flensing?) And maybe you have a very specific policy about which practices you support and which you don’t. But why that policy?
Perhaps “crisis” is too dramatic in its connotations, but you should at least give some thought to the many moral decisions you make every day, and decide whether, on reflection, you endorse the choices you’re making.
Abstractly, at what stage of development is the suffering of 100 fetuses commensurate with the suffering of a newborn? 1 month of gestation? 4? 9?
Concretely, how many people that you know have faced a situation where that calculation is relevant?
More concretely, if you’re pregnant, you’ll have to decide not only whether the phenomenological point of view of your unborn child should be taken into account in your decisionmaking, but you’ll have to decide in what way and to what degree it should be taken into account.
I don’t know how much you’ll have to decide on how you decide. You’ll decide what to do based on your valuations—I don’t think the valuations themselves involve a lot of deciding. I don’t decide that ice cream is yummy; I taste it and it is.
And yes, I think it’s a good policy to review your decisions and actions to see if you’re endorsing the choices you’re making. But that’s not primarily an issue of suspect ontologies, but of just paying attention to your choices.
I don’t know how much you’ll have to decide on how you decide. You’ll decide what to do based on your valuations—I don’t think the valuations themselves involve a lot of deciding. I don’t decide that ice cream is yummy; I taste it and it is.
I don’t know what you mean by this, but maybe there’s no point in further discussion because we seem to agree that one should reflect on one’s moral decisions.
When is a fetus or baby capable of feeling pain (that has moral disvalue)? What about (non-human) animals?
When it is similar enough to me that I can feel empathy for its mental state. If torture vs. dust specks, babyeaters, and paperclip maximizers have taught me anything, it’s that I value the utility of other agents according to their similarity to me. I value the utility of agents that are indistinguishable from me most highly, and that of very dissimilar agents (or inanimate matter) the least.
When I think about quantifying that similarity, I think of how many experiences and thoughts I can meaningfully share with the agent. The utility of an agent that can think and feel and act the way that I can carries much more weight in my utility function; an agent that actually does think, experience, and act like I do has even more weight.

If I compare the set of my actions, thoughts, and experiences, ME, to the set of the agent’s actions, thoughts, and experiences, AGENT, I think U(me) + (|AGENT ∩ ME| / |ME|) * U(agent) is a reasonable starting point. Comparing actions would probably be done by comparing the outcome of my decision theory to that of the agent. It might even be possible to directly compare the function U_me(world) to U_agent(world)[agent_self=me_self] (the utility function of the agent with the agent’s representation of itself replaced with a representation of me); the more similar the resulting functions, the more empathy I would have for the agent.

I would also want to include a factor based on my estimate of the agent’s future utility function. For example, a fetus will likely have a utility function much closer to mine in 20 years, so I would have more empathy for the future state of a fetus than for the future state of a babyeater.
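To make the weighting above concrete, here is a minimal sketch in Python of the U(me) + (|AGENT ∩ ME| / |ME|) * U(agent) idea. Everything beyond that formula is an invented placeholder: the function names (empathy_weight, combined_utility), the trait sets, and the utility numbers are illustrative assumptions, not a real model of minds.

```python
# Illustrative sketch only: trait sets stand in for "actions, thoughts, and
# experiences"; the weighting is the one proposed in the comment above.

def empathy_weight(me, agent):
    """Fraction of my traits the agent shares: |AGENT ∩ ME| / |ME|."""
    if not me:
        return 0.0
    return len(agent & me) / len(me)

def combined_utility(u_me, u_agent, me, agent):
    """U(me) + (|AGENT ∩ ME| / |ME|) * U(agent)."""
    return u_me + empathy_weight(me, agent) * u_agent

# Toy example: a human-like agent shares most traits, a paperclip maximizer few.
me = {"feels pain", "plans ahead", "values friends", "enjoys ice cream"}
human = {"feels pain", "plans ahead", "values friends", "enjoys tea"}
clippy = {"plans ahead"}

print(combined_utility(10.0, 10.0, me, human))   # 10 + 0.75 * 10 = 17.5
print(combined_utility(10.0, 10.0, me, clippy))  # 10 + 0.25 * 10 = 12.5
```

The future-similarity factor mentioned above could be bolted on by estimating the overlap at some later time and blending it with the current weight, though that estimate would be doing most of the work.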