@Vladimir:
Yes, you understood my message correctly, and condensed it rather well.
Now, what would it mean for human axiology to be like pi? A simple formula that unfolds into an “infinitely complex looking” pattern? Hmmm. This is an interesting intuition.
If we treat our current values as a program that will get run to infinity in the future, we may find that almost all of the future output of that program is determined by things that we don't really think of as significant; for example, very small differences in the hormone levels in our brains when we first ask our wish-granting machine for wishes.
I would count only those features of the future that are robust to very small perturbations in our psychological state as truly the result of our preferences. On the other hand, features of the future that are entirely insensitive to our minds (those that would obtain no matter what we wanted) are also not the result of our preferences.
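One way to picture the robustness criterion, as a toy sketch only: treat the "value program" as an iterated function of an initial value vector, perturb that vector by something hormone-sized, and keep just the output features that survive. The logistic-style map, the perturbation size, and the tolerance below are made-up stand-ins for illustration, not a proposal for how values actually unfold.

```python
import random

def unfold(values, steps=100):
    # Toy stand-in for "running our current values to infinity":
    # iterate a chaotic logistic-style map on the initial value vector.
    x = list(values)
    for _ in range(steps):
        x = [3.9 * v * (1.0 - v) for v in x]
    return x

def robust_features(values, trials=200, eps=1e-6, tol=1e-2):
    # Mark which output coordinates stay roughly fixed under tiny
    # perturbations of the starting state: the candidate "truly ours" features.
    base = unfold(values)
    stable = [True] * len(base)
    for _ in range(trials):
        perturbed = [v + random.uniform(-eps, eps) for v in values]
        out = unfold(perturbed)
        for i, (b, o) in enumerate(zip(base, out)):
            if abs(b - o) > tol:
                stable[i] = False
    return stable

print(robust_features([0.2, 0.5, 0.8]))  # with a chaotic map, typically all False
```

With a chaotic stand-in essentially nothing survives the test, which is just the hormone-level worry restated; and note the sketch only checks the first half of the criterion, since a feature that ignores the initial state entirely would pass it yet, by the second half, still shouldn't count.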
And still there is the question of what exactly this continued optimization would consist of. The 100th digit of pi makes almost no difference to its value as a number. Perhaps the hundredth day after the singularity will make almost no difference to what our lives are like, by some suitable metric. Maybe it really will look like calculating the digits of pi: pointless after about digit number 10.
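To put numbers on the analogy: the nth decimal digit of pi can change its value by at most 9 * 10^-n, so on the plain "value as a number" metric the contribution is already negligible by around digit ten. A quick illustration (ordinary Python arithmetic, nothing specific to the argument):

```python
# Upper bound on how much the nth decimal digit of pi can move its value.
for n in (1, 2, 10, 100):
    print(f"digit {n:>3}: at most {9 * 10.0 ** -n:.1e}")
```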
Satisfying both the robustness criterion and this non-convergence criterion seems hard.