Here’s a point of consideration: Kurzweil’s solution lets you avoid Pascal’s mugging if you are an agent whose utility function is defined over similar agents. However, it wouldn’t work for, say, a paperclip maximizer, which would still be vulnerable, since anthropic reasoning does not apply to paperclips.
While it might be useful to have Friendly-style AIs be more resilient to P-mugging than simple maximizers, it’s not exactly satisfying as an epistemological device.