I think that your reasoning here is substantially confused. FDT can handle reasoning about many versions of yourself, some of which might be duplicated, just fine. If your utility function is such that U(α|ϕ⟩+β|ψ⟩) = α²U(|ϕ⟩) + β²U(|ψ⟩) where ⟨ϕ|ψ⟩ = 0 (and you don't intrinsically value looking at quantum randomness generators), then you won't make any decisions based on one.
If you would prefer the universe to be in (1/√2)(|A⟩+|B⟩) rather than in a logical bet between A and B (i.e. you get A if the 3^^^3-th digit of π is even, else B), then flipping a quantum coin makes sense.
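To spell that out (a minimal sketch of the same point, in my notation): under the Born-weighted utility assumption, a fair quantum coin yields exactly the expected utility of a 50/50 logical bet, so it only matters if you prefer the superposition itself.

```latex
% Born-weighted utility over orthogonal branches (the assumption above):
\[
  U\bigl(\alpha|\phi\rangle + \beta|\psi\rangle\bigr)
    = \alpha^{2}\,U(|\phi\rangle) + \beta^{2}\,U(|\psi\rangle),
  \qquad \langle\phi|\psi\rangle = 0 .
\]
% A fair quantum coin that does A in one branch and B in the other leaves the world in
\[
  \tfrac{1}{\sqrt{2}}\bigl(|A\rangle + |B\rangle\bigr)
  \quad\Longrightarrow\quad
  U = \tfrac{1}{2}\,U(|A\rangle) + \tfrac{1}{2}\,U(|B\rangle),
\]
% which equals the expected utility of the 50/50 logical bet, so the coin changes nothing
% unless you intrinsically prefer the superposition to the bet.
```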
I don’t think that randomized behavior is best described as a new decision theory, as opposed to an existing decision theory with odd preferences. I don’t think we actually should randomize.
I also think that quantum randomness already has a lot of power over reality. There is already a very wide spread of worlds, so your attempts to spread it wider won't help.
If you would prefer the universe to be in …
If I were to make Evan's argument, that's the point I'd try to make.
My own intuition supporting Evan's line of argument comes from the investing world: it's much better to run a lot of uncorrelated positive-EV strategies than a few really good ones, since the former reduces your volatility and drawdown, even at the expense of EV measured in USD.
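As a rough illustration of that intuition (a toy simulation with made-up return numbers, not a claim about any particular strategy):

```python
import numpy as np

# Toy model: N uncorrelated strategies, each with the same positive expected
# return and volatility (numbers are made up for illustration). Equal-weighting
# them keeps the mean return roughly the same while the portfolio's volatility
# shrinks roughly like 1/sqrt(N).

rng = np.random.default_rng(0)
n_periods = 10_000

def portfolio_stats(n_strategies, mean=0.05, vol=0.20):
    # Each column is one strategy's per-period return, drawn independently.
    returns = rng.normal(mean, vol, size=(n_periods, n_strategies))
    portfolio = returns.mean(axis=1)  # equal-weight portfolio return
    return portfolio.mean(), portfolio.std()

for n in (1, 4, 16, 64):
    m, s = portfolio_stats(n)
    print(f"{n:3d} strategies: mean return = {m:.4f}, volatility = {s:.4f}")
```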
I'm sorry, but I am not familiar with your notation. I am just interested in the idea: when an agent Amir is fundamentally uncertain about the ethical systems by which he evaluates his actions, is it better if all of his immediate child worlds make the same decision? Or should he hedge against his moral uncertainty, ensure his immediate child worlds choose courses of action that optimize for irreconcilable moral frameworks, and increase the probability that in a subset of his child worlds, his actions realize value?
It seems that in a growing market (worlds splitting at an exponential rate), it pays in the long term to diversify your portfolio (optimize locally for irreconcilable moral frameworks).
I agree that QM already creates a wide spread of worlds, but I don't think that means it's safe to put all of one's eggs in one basket when one suspects that one's moral system might be fundamentally wrong.
If you think that there is a 51% chance that A is the correct morality, and a 49% chance that B is, with no more information available, which of the following is best?
1. Optimize A only.
2. Flip a quantum coin: optimize A in one universe, B in another.
3. Optimize for a mixture of A and B within the same universe, i.e. act like you had utility U = 0.51A + 0.49B. (I would do this one.)
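To make the trade-off between options 1 and 3 concrete (a toy sketch with made-up numbers: a resource budget R that can be spent on either morality, valued either linearly or concavely):

```python
import numpy as np

# Toy sketch: credence 0.51 in morality A, 0.49 in B, acting on the mixture
# U = 0.51 * value(resources spent on A) + 0.49 * value(resources spent on B).
# Whether "all-in on A" or an even split wins depends on whether value is
# linear or concave in resources (all numbers are made up for illustration).

R = 100.0  # hypothetical resource budget

def mixture_utility(r_A, r_B, value):
    return 0.51 * value(r_A) + 0.49 * value(r_B)

for name, value in [("linear", lambda r: r), ("concave (sqrt)", np.sqrt)]:
    all_in = mixture_utility(R, 0.0, value)
    split = mixture_utility(R / 2, R / 2, value)
    print(f"{name:>14}: all-in on A = {all_in:6.2f}, 50/50 split = {split:6.2f}")
```

With linear value, going all-in on A wins; with a concave value function, splitting wins, and that curvature is exactly what the paperclips/staples example below turns on.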
If A and B are local objects (e.g. paperclips, staples), then flipping a quantum coin makes sense if you have a concave utility per object in both of them. If your utility is log(#paperclips across the multiverse) + log(#staples across the multiverse), and you are the only potential source of staples or paperclips in the entire quantum multiverse, then the quantum-coin and classical-mix approaches are equally good (assuming the resource-to-paperclip conversion rate is uniform).
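A quick numerical check of that equivalence (my own toy numbers, assuming branch weights count toward the multiverse-wide totals):

```python
import numpy as np

# Toy check: utility = log(#paperclips across multiverse) + log(#staples across
# multiverse), a resource budget R converting 1:1 into either object, and no
# other sources of paperclips or staples anywhere in the multiverse
# (all of these are assumptions made for illustration).

R = 100.0

def U(paperclips, staples):
    return np.log(paperclips) + np.log(staples)

# Classical mix: every branch makes R/2 of each object.
u_classical_mix = U(R / 2, R / 2)

# Quantum coin: half the measure makes only paperclips, half only staples,
# so the measure-weighted multiverse totals are again R/2 of each.
u_quantum_coin = U(0.5 * R, 0.5 * R)

# All-in on paperclips: essentially no staples anywhere, so the log term blows up.
u_all_in = U(R, 1e-12)  # tiny epsilon stands in for "essentially zero staples"

print(f"classical mix       : {u_classical_mix:.3f}")
print(f"quantum coin        : {u_quantum_coin:.3f}")
print(f"all-in on paperclips: {u_all_in:.3f}")
```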
However, the assumption that the multiverse contains no other paperclips is probably false. Such an AI will run simulations to see which is rarer in the multiverse, and then make only that.
The talk about avoiding risk rather than maximizing expected utility, and about how your utility function is nonlinear, suggests this is a hackish attempt to avoid bad outcomes more strongly.
While this isn't a bad attempt at decision theory, I wouldn't want to turn on an ASI that was programmed with it. You are getting into mathematically well-specified, novel failure modes. Keep up the good work.
I really appreciate this comment, and my idea might well come down to trying to avoid risk rather than maximize expected utility. However, I still think there is something net positive about diversification. I wrote a better version of my post here: https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ and if you could spare the time, I would love your feedback.