Yes, in which case their evaluation doesn’t correspond to any first-person evaluations other than their own (because solar fusion likely doesn’t have any of that), whereas my evaluation reflects all the first-person perspectives out there. I’m being altruistic, they aren’t. Sure, they might not care about that, and indeed, if the creators themselves aren’t capable of suffering, they might not even realize they’re being a**holes, but otherwise they’d obviously be total jerks in a very objective sense—for whatever that’s worth.
What if they have first-person perspectives which are objectively comparable to us in the same way that we are comparable to solar fusion?
What are the necessary and sufficient conditions to be “total jerks” in any objective sense?
Then, if I understand the question correctly, the creators would be partially altruistic, which we’d mistake for non-altruism because we don’t understand that solar fusion can suffer.
“Not taking other-regarding reasons for action seriously” makes you a total jerk. “Others” are beings with a first-person perspective, the only type of entities for which things can go well or badly in a sense that is more than merely metaphorical. You could say that it is “bad for a rock” if the rock is split into parts, but there isn’t anything there to mind the splitting, so at best you’re saying that you find it bad when rocks are split.
The above view fits into LW-metaethics the following way: no matter their “terminal values”, everyone can try to answer which action-guiding set of principles best reflects what is good or bad for others. So once you specify what the goalpost of ethics in this sense is, everyone can play the game. Some agents will, however, state that they don’t care about ethics so defined, which implies that their “terminal values” don’t include altruism (or at least that they think they don’t, which may sometimes happen when people are too quick to declare things their “terminal values”; it’s kind of a self-fulfilling prophecy if you think about it).
Would it be immoral to fully simulate a single human with brain cancer if there was an expected return of saving more than one actual human with brain cancer? What if the expectation was of saving fewer than one actual human? (Say, a one-in-X chance of saving fewer than X patients.) What if there was no chance of saving an actual patient at all as a result of the simulation? Assume that simulating the human and the cancer well enough requires, among other things, that the simulated human say that he is self-aware.
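The three scenarios in the question differ only in the expected number of actual patients saved by running the simulation. A minimal sketch of that arithmetic (the function name and the specific numbers are illustrative assumptions, not from the original):

```python
def expected_patients_saved(p_success: float, patients_if_success: float) -> float:
    """Expected number of actual patients saved by running the simulation."""
    return p_success * patients_if_success

X = 10

# Scenario 1: expected return exceeds one patient (e.g. certainty of saving 2).
assert expected_patients_saved(1.0, 2) > 1

# Scenario 2: a one-in-X chance of saving fewer than X patients
# (here X - 1), so the expectation falls below one life saved.
assert expected_patients_saved(1 / X, X - 1) < 1

# Scenario 3: no chance of saving any actual patient.
assert expected_patients_saved(0.0, X) == 0
```

The question is whether the moral status of the simulation flips as this expectation crosses one, zero, or neither.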
I’ve never quite understood, in cases like this, how “fully simulate a single human with brain cancer” and “create a single human with brain cancer” are supposed to differ from one another. Because boy do my intuitions about the situation change when I change the verb.