I wonder if there is any right answer. We are talking about moral intuitions, feelings, which come from some presumably algorithmic part of our brain which it seems can be influenced, at least somewhat changed, by our rational minds. When we ask the question of which is worse, 3^^^3 dust specks or 50 years of torture, are we asking what our feeling would be when confronted with a real world in which somehow we could see this happening? As if we could ever see 3^^^3 of anything? Because that seems to be the closest I can get to defining right and wrong.
Yes, the brain is a great tool for running hypotheticals to predict how the real thing might go. So we are running hypotheticals of 3^^^3 dust specks and hypotheticals of 50 years of torture, and we are trying to see how we feel about them?
I think it is not that hard to “break” the built-in system for determining how one feels morally about various events. That system was designed for real-world circumstances and honed by evolution to produce behavior that would cause us to cooperate in ways that ensured our survival and enhanced our productivity and reproduction. When we get to corner cases that the original moral systems clearly could not have been designed for, because NOTHING like them ever happened in our evolution, what can we say about our moral intuition? Even if it made sense to say there is one right answer as to how our moral intuition “should” work in such a corner case, why would we put any stock in it?
I think in real life, the economic forces of a gigantic civilization of humans would happily torture a small number of individuals for 50 years if it resulted in the avoidance of 3^^^3 dust specks. We build bridges and buildings where part of the cost is a fairly predictable loss of human life from construction accidents, a predictable portion of it painful and a predictable portion crippling. We drive and fly and expose ourselves to carcinogens. We expose others to carcinogens.
In some important sense, when I drive a car that costs $10,000 instead of one that costs $8000, when I could have sent the extra $2000 to Africa to relieve famine, I am trading some number of African starvations for leather seats and a nice stereo.
What is the intuition I am supposed to be pumping with dust specks? I think it might be that the way you feel morally about things that are way beyond the edges of the environment in which your moral intuitions evolved is not meaningfully either moral or immoral.
GiveWell does not think you can save one African from starving with $2000. You might be able to save one child from dying of malaria via insecticide-treated mosquito bednets. But this of course will not be the optimal use of $2000 even on conventional targets of altruism; well-targeted science research should beat that (where did mosquito nets come from?).
… are we asking what our feeling would be when confronted with a real world in which somehow we could see this happening? As if we could ever see 3^^^3 of anything? Because that seems to be the closest I can get to defining right and wrong.
As a simple improvement, you could approximate the (normatively) “correct answer” as the decision reached as a result of spending a billion years developing moral theory, formal decision-making algorithms and social institutions supporting the research process, in the hypothetical where nothing goes wrong during this project. (Then, whatever you should actually decide in the hypothetical where you are confronted with the problem and won’t spend a billion years might be formalized as an attempt to approximate that (explicitly inaccessible) approximation, and would itself be expected to be a worse approximation.)
As a simple improvement, you could approximate the (normatively) “correct answer” as the decision reached as a result of spending a billion years developing moral theory, formal decision-making algorithms and social institutions supporting the research process, in the hypothetical where nothing goes wrong during this project.
Presumably this requires that a billion years of developing theory, with nothing going wrong during the development, would converge on a single answer. That is, a million identical starting points like our world would need to end up after that billion years within a very small volume of morality-space, in order for this process to be meaningful.
Considering that the 4 billion years (or so) spent by evolution on this project have produced quite divergent moralities, not only across different species but with quite a bit of variation even within our own species, that convergence seems exceedingly unlikely to me. Do you have any reason to think that there is “an answer” to what would happen after a billion years of good progress?
Suppose another billion years of evolution and natural selection. If the human species splits into multiple species, or social intelligences arise among other species, one would (or at least I would, in the absence of a reason not to) expect each species to have a different morality. Further, since it has been evolution that got us to the morality we have, presumably another billion years of evolution would have to be considered as a “nothing going wrong” method. Perhaps some of these evolved systems will address the 3^^^3 dust specks vs torture question better than our current system does, but is there any reason to believe that all of them will get the same answer? Or I should say, any rational reason, since faith in a higher morality of which we are a pale echo would constitute a reason, but not a rational one.
That is, a million identical starting points like our world would need to end up after that billion years within a very small volume of morality-space, in order for this process to be meaningful.
(The hypothetical I posited doesn’t start with our world, but with an artificial isolated/abstract “research project” whose only goal is to answer that one question.) In any case, determinism is not particularly important as long as the expected quality across the various possible outcomes is high. For example, for a goal of producing a good movie to be meaningful, it is not necessary to demand that only very similar movies can be produced.
Considering that the 4 billion years (or so) spent by evolution
Evolution is irrelevant, not analogous to intelligent minds purposefully designing things.
presumably another billion years of evolution would have to be considered as a “nothing going wrong” method
No; see “value drift” and fragility of value.