Wait a minute, I’m confused. I thought CEV meant something closer to “what we would want to do if we were much smarter”. What Stuart suggests sounds more like “what we think we want now, executed by someone much smarter”, i.e. basically the overly-literal genie problem.
But your answer seems to suggest… well, I’m not sure I get what you mean exactly, but it doesn’t sound like you’re pointing to that distinction. What am I missing?
Also, what we would want if we were more the person we wanted to be.
Is that “what we would want if we were more the person we wanted to be”, or “what we would want if we were more the person a much smarter version of us would want to be”? (My understanding of CEV leans towards the latter, and I think your problem is an instance of the former.)
I’m not sure the two are different in any meaningful way. The person we want to be today isn’t well defined; it takes a smarter intelligence to unwind our motivations (which is what CEV does) enough to figure out what we mean by “the person we wanted to be.”