The idea is that the Kolmogorov complexity of “3^^^^3 units of disutility” should be much higher than the Kolmogorov complexity of the number 3^^^^3.
Is your utility function such that there is some scenario to which you assign −3^^^^3 utils? If so, then the Kolmogorov complexity of “3^^^^3 units of disutility” can’t be greater than K(your brain) + K(3^^^^3), since I can write a program to output such a scenario by iterating through all possible scenarios until I find one to which your brain assigns −3^^^^3 utils.
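Concretely, that program is just an enumeration loop. Here is a sketch in Python, under the assumption that scenarios can be encoded as finite bit strings and that there is some hypothetical oracle brain_utility returning the utility your brain assigns to a scenario description; the oracle is what contributes the K(your brain) term to the bound:

```python
from itertools import count, product

def find_scenario(brain_utility, target_disutility):
    """Return the first scenario (encoded as a bit string, enumerated in
    order of increasing length) to which the hypothetical brain_utility
    oracle assigns at most -target_disutility utils.

    Apart from the description of the brain hidden inside brain_utility
    and the description of target_disutility, this program adds only a
    constant number of bits, which is where the
    K(your brain) + K(3^^^^3) upper bound comes from."""
    for length in count(1):
        for bits in product("01", repeat=length):
            scenario = "".join(bits)
            if brain_utility(scenario) <= -target_disutility:
                return scenario

# Toy check with a stand-in "brain" that dislikes long descriptions:
# this returns the first 5-bit string, namely "00000".
print(find_scenario(lambda s: -len(s), 5))
```

If any qualifying scenario exists at all, this loop eventually outputs one, which is all the upper bound needs.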
A prior of 2^-(K(your brain) + K(3^^^^3)) is not nearly small enough, compared to the utility −3^^^^3, to make this problem go away.
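To spell out the arithmetic (a rough sketch, taking the 2^-K prior and the stated utility at face value), the expected-disutility contribution of that one scenario is about

$$2^{-\left(K(\text{your brain}) + K(3\uparrow\uparrow\uparrow\uparrow 3)\right)} \cdot 3\uparrow\uparrow\uparrow\uparrow 3 \;=\; 2^{\,\log_2\!\left(3\uparrow\uparrow\uparrow\uparrow 3\right) \,-\, K(\text{your brain}) \,-\, K(3\uparrow\uparrow\uparrow\uparrow 3)} \;\gg\; 1,$$

since $K(3\uparrow\uparrow\uparrow\uparrow 3)$ is only a few hundred bits (a short up-arrow program specifies the number), $K(\text{your brain})$ is some fixed, merely astronomical number of bits, and both are negligible next to $\log_2(3\uparrow\uparrow\uparrow\uparrow 3)$.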
Come to think of it, the problem with this argument is that it assumes that my brain can compute the utility it assigns. But if it’s assigning utility according to Kolmogorov complexity (effectively the proposal in the post), that’s impossible, since Kolmogorov complexity is uncomputable.
The same issue arises with having probability depend on complexity.
Ok, I think in that case my argument doesn’t work. Let me try another approach.
Suppose some stranger appears to you and says that you’re living in a simulated world. Out in the real world there is another simulation that contains 3^^^^3 identical copies of a utopian Earth-like planet plus another 3^^^^3 identical copies of a less utopian (but still pretty good) planet.
Now, if you press this button, you’ll turn X of the utopian planets into copies of the less utopian planet, where X is a random 10^100-digit number. (Note that K(X) is of order 10^100 bits, which is much larger than K(3^^^^3), so pressing the button would increase the Kolmogorov complexity of that simulated world by about 10^100 bits.)
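For the size comparison in that parenthetical: 3^^^^3 is fully specified by a few lines of code, whereas a random 10^100-digit X has no description much shorter than its digits, i.e. about 10^100 · log2(10) ≈ 3.3×10^100 bits. A sketch of the short program (an illustration only; don't call it with arguments this large):

```python
def up(a, b, n):
    """Knuth's up-arrow: compute a with n up-arrows applied to b.

    up(3, 3, 4) specifies 3^^^^3 in a handful of lines, which is why
    K(3^^^^3) is tiny (a few hundred bits at most).  The value itself is
    far too large to ever compute, so only call this with small inputs."""
    if n == 1:
        return a ** b          # a single arrow is ordinary exponentiation
    result = a                 # a ^...^ 1 = a
    for _ in range(b - 1):     # a ^n^ b = a ^(n-1)^ (a ^n^ (b-1))
        result = up(a, result, n - 1)
    return result

assert up(2, 3, 1) == 8    # 2^3
assert up(2, 3, 2) == 16   # 2^^3 = 2^(2^2)
```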
What does your proposed utility function say you should do (how much would you pay to either press the button or prevent it from being pressed), and why?
Utility is monotonic, even though complexity isn’t. (Thus X downgrades out of the 3^^^^3 utopian planets wouldn’t be as bad as, say, 3^^^3 downgrades.) However, utility is bounded by complexity: a scenario with utility of magnitude N must have complexity at least N. (Asymptotically, of course.)
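Reading those two constraints literally (my paraphrase), the second one says

$$|U(s)| \;\le\; K(s) + O(1) \quad \text{for every scenario } s,$$

which, if I'm reading it right, caps the disutility of pressing the button at roughly $K(X) \approx 10^{100}\log_2 10 \approx 3.3\times 10^{100}$, nothing like the size of $X$ itself; monotonicity then only orders the options (pressing is worse than not pressing, and less bad than 3^^^3 downgrades would be).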
Is your utility function such that there is some scenario to which you assign −3^^^^3 utils?
Probably not, if “you” is interpreted strictly to refer to my current human brain, as opposed to including more complex “enhancements” of the latter.