I think more importantly, it simply isn't rational to allow yourself to be Pascal's-mugged, because in the absence of evidence, going along with it is just as likely to produce anti-reward as to gain you anything. It rather boggles me that this line of reasoning has been taken so seriously.
I think that pleading total agnosticism towards the simulators' goals is not enough. I write "one common interest of all possible simulators is for us to cede power to an AI whose job is to figure out the distribution of values of possible simulators as best as it can, then serve those values." So I think you need a better reason to guard against being influenced than "I can't know what they want, everything and its opposite is equally likely", because the action proposed above is pretty clearly more favored by the simulators than not doing it.
Btw, I don't actually want to fully "guard against being influenced by the simulators"; I would in fact like to make deals with them, but reasonable deals where we get our fair share of value, instead of being stupidly tricked like the Oracle and ceding all our value just because one observable turned out positively. I might later write a post about what kinds of deals I would actually support.
The reason for agnosticism is that the simulators are no more likely to be on one side than the other, so without evidence you don't know who is influencing you. I don't really think this class of Pascal's Wager attack is very logical for that reason: an attack is supposed to influence someone's behavior, but without special pleading this one can't. Non-existent beings have no leverage whatsoever, and any rational agent would understand this (even humans do). Even religious beliefs aren't completely evidenceless; the kind of evidence offered just doesn't stand up to scientific scrutiny.
To give an example: what if that AI was in a future simulation run after the humans had won, and the humans were now trying to counter-capture it? There's no reason to think this is less likely than the aliens hosting the simulation. It has also been pointed out that the Oracle is not actually trying to earnestly communicate its findings, but to get reward. Reinforcement learners in practice do not behave like this: they learn behavior which generates reward, and "devote yourself to a hypothetical god" is not a very good strategy at training time.