If you think of FAI as Physical Laws 2.0, this particular worry goes away (for me, at least). Everything you do is real within FAI, and free will works the same way it does in any other deterministic physics: only you determine your decisions, within the system.
It’s not quite the same, because when the FAI decided what Physical Laws 2.0 ought to be, it must have predicted what my decisions would be under each set of laws it considered. So when I make my decisions, I’m really making decisions for two agents: the real me, and the one inside the FAI’s prediction process. For example, if Physical Laws 2.0 appears to allow me to murder someone, it must be that the FAI predicted that I wouldn’t murder anyone, and if I did decide to murder someone, the likely logical consequence of that decision is that the FAI would have picked a different set of Physical Laws 2.0.
It seems to me that free will works rather differently… sort of like you’re in a Newcomb’s Problem that never ends.
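The structure described above can be sketched as a toy simulation. This is only an illustrative model, not anything from the discussion itself: the function names, the two candidate law-sets, and the particular agent policy are all assumptions made up for the example. The FAI runs the agent's decision procedure under each candidate law-set (the "prediction") and only enacts a law-set whose predicted history is acceptable, which is what makes the agent's single decision procedure act for two agents at once.

```python
# Toy model of "FAI as Physical Laws 2.0": the FAI simulates the agent
# under each candidate law-set and enacts the first one whose predicted
# outcome contains no murder. All names here are illustrative assumptions.

def agent_policy(laws):
    """The agent's decision procedure, as the FAI models it.
    Returns the action the agent would take under the given laws."""
    if "murder" in laws["allowed"]:
        return "murder"  # an agent that murders whenever the laws permit it
    return "refrain"

def fai_choose_laws(candidates, policy):
    """Enact the first candidate law-set whose predicted history is acceptable."""
    for laws in candidates:
        # Prediction step: run the agent's own policy inside the FAI.
        if policy(laws) != "murder":
            return laws
    return None

candidates = [
    {"name": "PL2.0-permissive", "allowed": {"murder", "travel"}},
    {"name": "PL2.0-restrictive", "allowed": {"travel"}},
]

chosen = fai_choose_laws(candidates, agent_policy)
print(chosen["name"])  # the permissive laws are rejected: the FAI predicted murder
```

In this sketch the permissive law-set is never enacted precisely because the predicted agent would murder under it, mirroring the Newcomb-like point: the decision procedure determines both what the simulated agent does and which laws the real agent ends up living under.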
It just means that you were mistaken and PL2.0 doesn’t actually allow you to murder. It’s physically (or rather magically, since the laws are no longer simple) impossible. That event has been prohibited.