Does this type of reasoning mean it is a good idea to simulate lots of alien civilizations (across lots of different worlds), to see what utility functions emerge, and how frequently each type emerges?
It seems like detailed simulation is quite a sensible strategy anyway, if we’re utility trading (detailed enough to create conscious beings). We could plausibly assume that each utility function f(i) assigns positive utility to the aliens of type (i) existing in a world, as long as their welfare in that world exceeds some acceptable threshold. (For instance, if we imagine worlds with or without humans, we tend to prefer the ones with humans, unless they are being horribly tortured, etc.) So by simulating alien species (i), and checking that they generally prefer to exist (rather than trying to commit suicide), we are likely doing them a favour according to f(i); and since our TDT decision is linked to theirs, we can assume we are increasing the number of worlds humans exist in too.
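The thresholded-existence idea could be put as a toy model. Everything concrete here (the threshold value, the welfare numbers, the penalty for existing below threshold) is an illustrative assumption of mine, not anything from the comment:

```python
# Toy sketch: each species i has a utility function f_i that assigns positive
# utility to a world in which species i exists, provided its welfare there
# exceeds some acceptable threshold. All numbers below are made up.

WELFARE_THRESHOLD = 0.0  # assumed cutoff for an "acceptable" existence

def f(i, world):
    """Utility species i's function assigns to `world` (toy version).

    `world` maps species name -> that species' welfare in the world;
    a species absent from the map does not exist there.
    """
    if i not in world:
        return 0.0  # non-existence: neutral baseline
    welfare = world[i]
    if welfare > WELFARE_THRESHOLD:
        return welfare          # existing above threshold is a benefit
    return welfare - 1.0        # below threshold, existence is a harm

# Worlds with humans flourishing, absent, or suffering:
world_with_humans    = {"human": 0.8}
world_without_humans = {}
world_of_torture     = {"human": -0.9}

print(f("human", world_with_humans))     # existence preferred here...
print(f("human", world_of_torture))      # ...but not here
```

On this toy model, a world with a flourishing species scores above the empty world, and a world of torture scores below it — matching the "prefer worlds with humans, unless horribly tortured" intuition in the comment.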
I’m intrigued by the idea that TDT leads to a converged “average utility” function, across all possible worlds with TDT civilizations...
An interesting question. Some thoughts here: