That your universe is controlled by a sadist doesn’t suggest that every possible action you could do is equivalent. Maybe all your possible fates are miserable, but some are far more miserable than others.
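To make that concrete, here is a minimal sketch (all misery numbers invented purely for illustration) of how expected misery can still rank actions even when every option is bad:

```python
# Hypothetical sketch: even if every outcome of every action is bad,
# expected misery can still differ enough to rank the actions.
# All probabilities and misery scores below are invented for illustration.

actions = {
    # action: list of (probability, misery) pairs over possible fates
    "provoke_the_controller": [(0.9, 100), (0.1, 80)],
    "keep_a_low_profile":     [(0.9, 60),  (0.1, 40)],
    "act_at_random":          [(0.5, 100), (0.5, 50)],
}

def expected_misery(outcomes):
    return sum(p * misery for p, misery in outcomes)

# Every action is miserable in expectation, but they are not equivalent.
for name, outcomes in sorted(actions.items(), key=lambda kv: expected_misery(kv[1])):
    print(f"{name}: expected misery {expected_misery(outcomes):.1f}")
```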
You are right. However, I can see no way to decide which course of action is best (or least miserable). My own decision process becomes questionable in such a situation; I can’t imagine any strategy that is convincingly better than taking random actions.
When I say “doomed no matter what I do”, I do not mean doomed with certainty. I mean that I have a high probability of doom, for any given action, and I cannot find a way to minimise that probability through my own actions.
I think indifference to our preferences (except as incidental to some other goal, e.g., paperclipping) is more likely than either sadism or beneficence.
Having thought about this, I believe you are right. I still consider sadism more likely than beneficence, but I had been setting the prior for indifference too low. This implies that the Matrix Lord has preferences, but that these preferences are unknown and possibly unknowable (perhaps he wants to maximise slood).
...
This makes the question of which action is best to take even more difficult to answer. I do not know anything about slood; I cannot, because it exists only outside the Matrix. The only source of information from outside the Matrix is the Matrix Lord. This implies that, before reaching any decision, I should spend a long time interviewing the Matrix Lord, in an attempt to model him better.
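As a rough illustration of what that modelling could look like, here is a minimal Bayesian-update sketch; the hypotheses, priors, and likelihoods in it are all invented for illustration:

```python
# Hypothetical sketch of "interviewing the Matrix Lord to model him"
# treated as a Bayesian update over hypotheses about his goals.
# Every number below is invented for illustration.

priors = {
    "sadist":      0.2,
    "indifferent": 0.7,   # e.g. maximising slood or paperclips
    "beneficent":  0.1,
}

# P(observed answer | hypothesis) for one interview question,
# e.g. "Why are you running this simulation?"
likelihoods = {
    "answer_mentions_data_collection": {
        "sadist": 0.2, "indifferent": 0.7, "beneficent": 0.4,
    },
}

def update(priors, likelihood):
    unnormalised = {h: priors[h] * likelihood[h] for h in priors}
    total = sum(unnormalised.values())
    return {h: v / total for h, v in unnormalised.items()}

posterior = update(priors, likelihoods["answer_mentions_data_collection"])
print(posterior)  # under these invented numbers, the indifference hypothesis gains weight
```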
Well, this Matrix Lord seems very interested in decision theory and utilitarianism. Sadistic or not, I expect such a being to respond more favorably to attempts to take the dilemmas he raised seriously than to an epistemic meltdown. Taking the guy at his word and trying to reason your way through the problem is likely to give him more useful data than attempts to rebel or go crazy, and if you’re useful then it’s less likely that he’ll punish you or pull the plug on your universe’s simulation.
It seems reasonably likely that this will lead to a response of “...alright, I’ve got the data that I wanted, no need to keep this simulation running any longer...”, followed by the plug being pulled on my universe. While it is true that this strategy is likely to lead to a happier Matrix Lord (especially if the data I give him coincides with the data he expects), I’m not convinced that it leads to a longer existence for my universe.
That may be true too. It depends on the priors we have for generic superhuman agents’ reasons for keeping a simulation running (e.g., having some other science experiments planned, wanting to reward you for providing data...) vs. for shutting it down (e.g., vindictiveness, energy conservation, being interested only in one data point per simulation...).
We do have some data to work with here, since we have experience with the differential effects of power, intelligence, curiosity, etc. among humans. That data is only weakly applicable to such an exotic agent, but it does play a role, so our uncertainty isn’t absolute. My main point was that unusual situations like this don’t call for complete decision-theoretic despair; we still need to make choices, and we can still do so reasonably, though our confidence that the best decision is also a winning decision is greatly diminished.
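As a rough illustration of that last point, here is a minimal expected-value sketch comparing strategies; every probability and payoff in it is invented for illustration:

```python
# Hypothetical sketch of the "we can still choose reasonably" point:
# compare strategies by expected outcome under uncertainty about why the
# Matrix Lord would keep the simulation running or shut it down.
# Every probability and payoff below is invented for illustration.

# P(simulation keeps running | strategy), marginalised over guessed motives
# (more experiments planned, rewarding usefulness, vindictiveness,
# energy conservation, wanting only one data point per simulation, ...).
p_keep_running = {
    "engage_seriously": 0.45,
    "act_at_random":    0.25,
    "try_to_rebel":     0.15,
}

payoffs = {"keeps_running": 1.0, "shut_down": 0.0}

def expected_value(p_keep):
    return p_keep * payoffs["keeps_running"] + (1 - p_keep) * payoffs["shut_down"]

best = max(p_keep_running, key=lambda s: expected_value(p_keep_running[s]))
print(best)  # a best decision under these assumptions, though not a guaranteed winning one
```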