It sounds like you’re thinking of the “true utility function’s” preferences as a serious attempt to model the future consequences of present actions, including their effect on future brain-states.
I don’t think that’s always how the brain works, even if you can tell a nice story that way.
I think that’s usually not how the brain works, but I also think that I’m less than totally antirational. That is, it’s possible to construct a “true utility function” that would dictate to me a life I would genuinely enjoy living.
That statement has a large inferential distance from what most people know, so I should actually hurry up and write the damn LW entry explaining it.
I think you could probably construct several mutually contradictory utility functions which would dictate lives you enjoy living. I think it’s even possible that you could construct several which you’d perceive as optimal, within the bounds of your imagination and knowledge.
I don’t think we yet have the tools to figure out which one actually is optimal. And I’m pretty sure the perceived-optimal ones aren’t a subset of the enjoyable ones; we see plenty of people convincing themselves that they can’t do better than their crappy lives.
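Here’s a minimal sketch of that set-theoretic point, with purely hypothetical lives and made-up scores (none of these names or numbers come from the discussion; they just illustrate how two contradictory utility functions can each dictate an enjoyable life, while a life perceived as optimal needn’t be enjoyable):

```python
# Hypothetical lives and made-up scores, purely for illustration.
lives = ["artist", "engineer", "status_quo"]

# How much the agent would actually enjoy each life.
enjoyment = {"artist": 0.8, "engineer": 0.7, "status_quo": 0.3}
enjoyed = {life for life, e in enjoyment.items() if e >= 0.6}

# Two mutually contradictory candidate "true utility functions":
# u1 ranks artist above engineer, u2 ranks engineer above artist.
u1 = {"artist": 1.0, "engineer": 0.6, "status_quo": 0.2}
u2 = {"artist": 0.6, "engineer": 1.0, "status_quo": 0.2}

best_u1 = max(lives, key=u1.get)  # -> "artist"
best_u2 = max(lives, key=u2.get)  # -> "engineer"
assert best_u1 != best_u2             # the functions contradict each other,
assert {best_u1, best_u2} <= enjoyed  # yet each dictates an enjoyable life.

# An agent whose imagination only reaches the status quo may perceive
# it as optimal despite not enjoying it, so the perceived-optimal set
# is not a subset of the enjoyable set.
perceived_optimal = {"status_quo"}
assert not perceived_optimal <= enjoyed
```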
Like I said: there’s a large inferential distance here, so I’m drafting an entire post on the subject covering my notions of construction and optimality.
Well that post happened.