You also have to take into account that other people are largely similar to you. When you set the output of the algorithm that defines you, you are also setting the output of every other algorithm to the extent that it resembles yours, so the effect is larger than your action-difference taken in isolation.
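A minimal sketch of the arithmetic this implies, in Python (the function name, the correlation weights, and the numbers are illustrative assumptions, not anything from the thread): your choice gets multiplied by however many other decision procedures track yours.

```python
# Illustrative sketch: if other agents run algorithms that resemble
# yours, "setting your output" partly sets theirs too, so the total
# effect of your choice exceeds its effect taken in isolation.

def correlated_effect(own_effect, correlations):
    """Total effect of a choice, counting correlated decision-makers.

    own_effect   -- effect of your action considered in isolation
    correlations -- for each other agent, the degree (0 to 1) to which
                    their algorithm resembles, and so co-varies with, yours
    """
    return own_effect * (1 + sum(correlations))

# Hypothetical numbers: your action alone is worth 1 unit, and three
# other people run decision procedures similar to yours.
print(correlated_effect(1.0, [0.9, 0.5, 0.2]))  # 2.6, not 1.0
```

The point is only the multiplier: the more closely an algorithm resembles yours, the more of its output you are effectively choosing.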
Well, it made much more sense when Eliezer Yudkowsky and Wei Dai said all that stuff about similar computational processes and setting their logical output...
I’m not sure if I should respond, but …
For reasons similar to those I gave here: http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/1ss1

Or using Bayescraft as described here: http://philsci-archive.pitt.edu/archive/00003169/01/noregrets.pdf

I would say this should be part of what you consider to be a consequence of your actions.
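To make the linked Newcomb example concrete (standard $1,000,000 / $1,000 payoffs; the predictor accuracies below are just sample values): once you count the predictor's model of you as a consequence of your choice, one-boxing wins for any predictor accuracy above about 50.05%.

```python
# Newcomb's problem expected values, treating the predictor's
# (correlated) prediction as a consequence of your choice.

def expected_value(one_box, p):
    """Expected payoff given predictor accuracy p in [0, 1]."""
    if one_box:
        # The opaque box holds $1,000,000 iff the predictor foresaw one-boxing.
        return p * 1_000_000
    # Two-boxing: the visible $1,000, plus $1,000,000 if mispredicted.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.99):
    print(p, expected_value(True, p), expected_value(False, p))
# One-boxing dominates for p > 1_001_000 / 2_000_000, i.e. p > 0.5005.
```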