Highly positive outcomes are often assumed to be more particular and complex than highly negative ones. Another common assumption is that the magnitude of the utility of a maximally good life is lower than the magnitude of the utility of a maximally bad life. Is there a life good enough that you would take a bet with a 50% chance of that life and a 50% chance of the worst life of torture?
Given human brains as they are now, I agree that highly positive outcomes are more complex, that the magnitude of the utility of a maximally good life is lower than that of a maximally bad life, and that there is no life good enough that I'd take a 50% chance of torture.
But would this hold for minds in general (say, a random mind, or one not too different from a human's)?
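For concreteness, the bet can be read as an expected-utility comparison (a sketch, assuming a neutral baseline set to zero, which the question above leaves implicit): you would accept the 50/50 gamble only if

\[
\tfrac{1}{2}\,U_{\text{best}} + \tfrac{1}{2}\,U_{\text{worst}} \;\ge\; U_{\text{neutral}} = 0
\quad\Longleftrightarrow\quad
U_{\text{best}} \;\ge\; \lvert U_{\text{worst}} \rvert ,
\]

so declining every such bet amounts to claiming that no attainable life has positive utility as large in magnitude as the negative utility of the worst torture.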