The penalty for impact is supposed to be defined with respect to the AI’s current beliefs. Perhaps shuttling around electrons has large effects on the world, but if you look at some particular assertion X and examine P(X | electron shuffle 1) vs. P(X | electron shuffle 2), where P is the AI’s beliefs, you will not generally see a large difference. (This is stated in Stuart’s post, but perhaps not clearly enough.)
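To make that concrete, here is a minimal sketch of what such a belief-relative penalty could look like, assuming a single assertion X and the absolute difference as the divergence (the function name and numbers are mine, purely for illustration, not anything from Stuart’s post):

```python
# Illustrative sketch: measure the "impact" of two low-level actions by how
# much they shift the agent's own credence in a macroscopic assertion X,
# rather than by their raw physical effects.
def impact_penalty(p_x_given_a1: float, p_x_given_a2: float) -> float:
    """Belief-relative penalty: |P(X | a1) - P(X | a2)| under the AI's own P."""
    return abs(p_x_given_a1 - p_x_given_a2)

# Two micro-level "electron shuffles" the agent cannot distinguish at the
# level of macroscopic assertions yield a negligible penalty.
print(impact_penalty(0.30001, 0.30002))  # ~1e-5: effectively no impact
```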
I’m aware of the issues arising from defining value with this sort of reference to “the AI’s beliefs.” I can see why you would object to that, though I think it is unclear whether it is fatal (at minimum, it restricts the range of applicability, perhaps to the point of unhelpfulness).
Also, I don’t quite buy your overall argument about the butterfly effect in general. For many chaotic systems, if you have a lot of randomness going in, you get out an appropriate equilibrium distribution, which then isn’t disturbed by changing some inputs arising from the AI’s electron shuffling (indeed, because of the chaos it isn’t even disturbed by quite large changes). So even if you talk about the real probability distributions over outcomes for a system of quantum measurements, the objection doesn’t seem to go through. What I do right now doesn’t significantly affect the distribution over outcomes when I flip a coin tomorrow, for example, even if I’m omniscient.
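As a toy illustration of that equilibrium point (my own sketch, not anything from the thread): a chaotic logistic map with a little injected noise. Perturbing the initial condition, the analogue of the electron shuffle, leaves the long-run distribution over a coarse outcome essentially unchanged.

```python
import random

def coarse_outcome_frequency(x0: float, steps: int = 100_000, seed: int = 0) -> float:
    """Fraction of steps on which a noisy chaotic system lands above 0.5."""
    rng = random.Random(seed)
    x = x0
    above = 0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)  # chaotic logistic map
        x = min(max(x + rng.uniform(-1e-3, 1e-3), 0.0), 1.0)  # small injected noise
        above += x > 0.5  # coarse, coin-flip-like outcome
    return above / steps

# Nudging the initial condition does not move the distribution over outcomes.
print(coarse_outcome_frequency(0.123456789))  # ~0.5
print(coarse_outcome_frequency(0.123456790))  # also ~0.5
```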
See wedifrid’s reply and my comment.