If you assign some amount of utility to making muffins, and then choose not to make muffins, then you are either failing to optimize for utility, or assign some larger amount of utility to something which is mutually exclusive with making muffins.
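That dichotomy can be sketched in a few lines of Python. The option names and utility values here are entirely made up for illustration; the point is only that an agent which actually maximizes must pick the highest-utility option, so declining muffins implies something mutually exclusive scored higher.

```python
def optimal_choice(utilities):
    """Return the option with the largest assigned utility."""
    return max(utilities, key=utilities.get)

# Hypothetical utility assignments (illustrative numbers only).
utilities = {"make muffins": 3, "finish the essay": 5}

choice = optimal_choice(utilities)
# choice is "finish the essay": not a failure to optimize, just a
# larger utility assigned to something incompatible with muffins.
```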
What you have just described is correctly predicting your future actions, not deciding those actions in advance in order to blunt the long-term harm of something that would otherwise be a short-term benefit. I predict that I would benefit greatly from buying a cloak, but I would be harmed more in the long term by having to keep and maintain a cloak that I rarely used. Without hacking myself, I would rarely use a cloak, and that is why I haven't purchased one.
I thought I saw someone who had managed to hack their own future preferences for a purpose similar to my own, and was trying to confirm whether that was the case and, if so, gather a data point from which to evaluate the expected results of imitation. It appears I instead found someone who made the same decision based on a simple evaluation of the expected results, without a prior attempt to alter those expected results.
And while my opinions regarding cloaks are literally true, my final goal isn't to decide whether to buy a cloak, but to practice, in a low-risk environment like garb, the skills required for high-risk behavior with multiple complicating factors.