Would it be immoral to fully simulate a single human with brain cancer if there were an expected return of saving more than one actual human with brain cancer? What if the expectation were of saving less than one actual human (say, a one-in-X chance of saving fewer than X patients)? What if there were no chance of saving an actual patient at all as a result of the simulation? Assume that simulating the human and the cancer well enough requires, among other things, that the simulated human simulate saying that he is self-aware.
I’ve never quite understood, in cases like this, how “fully simulate a single human with brain cancer” and “create a single human with brain cancer” are supposed to differ from one another. Because boy do my intuitions about the situation change when I change the verb.