It seems I have to make decision theory a priority. Right now I don’t see why one shouldn’t actively help a uFAI in order to maximize expected utility. If uFAI is much more likely than FAI, then the expected costs of trying to prevent uFAI might outweigh the expected benefits, especially if the uFAI commits to outweighing any counterweight the FAI applies.
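As a toy sketch of that comparison (the probability p and all payoff symbols are my own placeholders, not anything from the OP): write p = P(uFAI) and 1 − p = P(FAI), let R_u and P_u be the uFAI’s reward for helpers and punishment for opponents, and R_f and P_f the FAI’s analogues (all ≥ 0). Then

EU(help) = p·R_u − (1−p)·P_f
EU(oppose) = −p·P_u + (1−p)·R_f

so helping is favored exactly when p(R_u + P_u) > (1−p)(R_f + P_f). If the uFAI can credibly commit to making P_u as large as needed, this inequality holds for any non-negligible p, which is the worry above.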
I’m also puzzled that this topic isn’t discussed more often on LW, since it is clearly being thought about, as the OP shows. Given the scope of the problem, could there be a more important topic?