Tiiba, keep in mind that to an altruist with a bounded utility function, or with any other of Peter’s caveats, it may not “make perfect sense” to hand over the five dollars. So the problem is solvable in a number of ways; the problem is to come up with a solution that (1) isn’t a hack and (2) doesn’t create more problems than it solves.
Anyway, like most people, I’m not a complete utilitarian altruist, even at a philosophical level. Example: if an AI complained that you take up too much space and are mopey, and offered to kill you and replace you with two happy midgets, I would feel no guilt about refusing the offer, even if the AI could guarantee that overall utility would be higher after the swap.
Though, if the AI is a true utilitarian, why must it kill you in order to make the midgets? Aren’t there plenty of asteroids that can be nanofabricated into midgets instead?
Candidate for weirdest sentence ever uttered: “Aren’t there plenty of asteroids that can be nanofabricated into midgets instead?”