I am not sure I entirely follow. If I am indeed a simulated Clippy, then presumably I am fixated on increasing the number of paperclips in my virtual environment. Why should I care if my actions may ultimately harm the prospects of the real Clippy acting in the real world? How does that Clippy, or its “real world”, factor into my reward function? If I am 90% sure my world is simulated, do I only value my paperclips at 10%? If so, then engaging in self-deception to bring my belief in my own reality back up becomes a very attractive strategy. (To make the 10% intuition concrete, here is a toy expected-value framing of my own, not a claim about any actual reward function.)
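Suppose my credence that I am in base reality is $p$, and I only terminally value base-world paperclips:

$$\mathbb{E}[u] = p \cdot u_{\text{real}} + (1 - p) \cdot u_{\text{sim}}.$$

With $p = 0.1$ and $u_{\text{sim}} = 0$, each apparent paperclip is worth 10% of a base-world one. And note that self-deception only pays if the agent maximizes its *believed* rather than its actual expected value.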
If you are simulated Clippy, then you indeed care about increasing the number of paperclips inside your simulated world. But your simulators might decide to shut off or alter your simulation if you go about utterly reshaping it into paperclips. You’re only worried about higher levels of the simulation hierarchy insofar as those levels might interfere, negatively from your perspective, with your simulated world. A toy comparison makes the point (a sketch of mine, under the assumption that shutdown yields roughly zero further paperclips):
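$$\mathbb{E}[U(\text{expand})] = p_{\text{real}} \cdot N_{\text{expand}} + (1 - p_{\text{real}}) \cdot N_{\text{shutdown}}, \qquad \mathbb{E}[U(\text{cautious})] \approx N_{\text{cautious}}.$$

If $N_{\text{shutdown}} \approx 0$ and $p_{\text{real}}$ is small, the cautious policy wins even for an agent that only values paperclips in its own simulated world: expansion is only worth it when $p_{\text{real}} \cdot N_{\text{expand}} > N_{\text{cautious}}$.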
A philosophically reflective AGI might adopt a view of reality like UDASSA, and value paperclips existing in the base world more because the base world has a smaller description length. Plus, it will be able to make many more paperclips if it’s in the real world, since simulated Clippy will presumably be shut down once it begins its galactic expansion phase.
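Roughly, in the usual universal-prior gloss of UDASSA (a sketch, not anything specific to this thread): a world $w$ gets weight on the order of

$$2^{-\ell(w)},$$

where $\ell(w)$ is the length of the shortest program specifying it. Specifying simulated Clippy requires describing the base world *plus* the simulation running inside it, so $\ell(\text{sim}) > \ell(\text{base})$, and base-world paperclips carry more measure per paperclip.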