My personal intuition is that what is ‘rational’ depends exclusively on your objective function at the time you make the choice.
I may value $10, and I may value not eating bugs a lot; if you offer me $30 to eat a cricket, plus a pill that removes my sense of disgust and makes me care exclusively about money, I wouldn't take that deal, because until I take it I still want to avoid eating bugs. That I would no longer regret the decision once I'd taken the pill is not that interesting. If, on the other hand, I merely dislike eating bugs but don't actually value not eating bugs, then I would happily take your offer. But these aren't two different arguments about what is 'rational'; I see them as entirely different setups.
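To make the point concrete, here is a minimal sketch (with made-up numbers and utility functions, purely for illustration): the same deal gets rejected or accepted depending on which objective function you evaluate it under, and my claim is that the one in force at decision time is the one that matters.

```python
# Hypothetical utilities, just to illustrate the point.
def current_utility(money, ate_bug):
    # Before the pill: money matters somewhat, avoiding bugs matters a lot.
    return money - (100 if ate_bug else 0)

def post_pill_utility(money, ate_bug):
    # After the pill: disgust is gone, only money counts.
    return money

deal = {"money": 30, "ate_bug": True}
status_quo = {"money": 0, "ate_bug": False}

def accepts(utility):
    return utility(**deal) > utility(**status_quo)

print(accepts(current_utility))    # False: judged by who I am now, I refuse.
print(accepts(post_pill_utility))  # True: judged by who the pill makes me, I'd accept.
```

The lack of future regret corresponds to the second evaluation, but the choice is made under the first.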
We don't need to go as far as positing angels, honestly; heroin is amazing (I assume). I accept that were I to try it, I would be supremely glad I did, and yet I am perfectly comfortable not trying it.
I think the more interesting question (which I personally wrestle with and have not found an answer to) is what to do when you don't know your true objective function, or when it fluctuates significantly over time, or when your actions/brain/conscious experience are best modeled by multiple agents with different objective functions, each of which has more control at different times.