What epistemic algorithms would you run to discover more about your arbitrary preferences and to make sure you were interpreting them correctly? (Assuming you don’t have access to an FAI.) For example, what kinds of reflection/introspection or empiricism would you do, given your current level of wisdom/intelligence and a lot of time?
It’s a good question, and ruling out the FAI takes away my favourite strategy!
One thing I consider is how my verbal expressions of preference will tend to be biased. For example, if I went around saying “I’d willingly give up immortality to prevent 28 strangers from starving”, then I would triple-check that belief to see whether it was an actual preference and not a pure PR soundbite. More generally, I try to bring the question down to the crude level of “what do I want?”, eliminating distracting thoughts about how things ‘should’ be. I visualize possible futures and simply pick the one I like more.
Another question I like to ask myself (and frequently find myself asked by other people while immersed in SIAI-affiliated culture) is “what if an FAI or Omega told you that your actual extrapolated preference was X?” If I find myself seriously doubting the FAI, that is rather significant evidence. (And it is also not an unreasonable position: the doubt is correctly directed at the method of extrapolating preferences instilled by the programmers, or at the Omega postulator.)