That sounds sideways. Wouldn’t that make the reward my primary motivation? =)
No, I mean what if we offered you rewards for changing your terminal goals so that you'd continue to be motivated by FAI even after the rewards end? You should take that deal if we can offer big enough rewards and your discount rate is high enough, right? (Previous related thread.)
You’re trying to affect the motivation of a decision theory researcher by offering a transaction whose acceptance is itself a tricky decision theory problem?
Upvoted for hilarious metaness.
Now, all we need to do is figure out how humans can modify their own source code and verify those modifications in others...
That could work, but how would that affect my behavior? We don’t seem to have any viable mathematical attacks on FAI-related matters except this one.
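A rough way to make the discount-rate point above concrete, assuming simple exponential discounting: while the deal lasts it pays a reward each period, and afterwards the shifted terminal goals cost something every period, as judged by the researcher's current utility function. This is only an illustrative sketch; `reward`, `cost`, `gamma`, and `horizon` are made-up parameters, not anything specified in the thread.

```python
# Sketch of the "big enough rewards, high enough discount rate" claim,
# assuming exponential discounting. All parameter values are illustrative.

def discounted_value_of_deal(reward: float, cost: float, gamma: float, horizon: int) -> float:
    """Present value, by the agent's *current* utility function, of a deal that
    pays `reward` per period for `horizon` periods but shifts the agent's
    terminal goals, costing `cost` per period forever afterwards.

    `gamma` is the per-period discount factor (a high discount *rate* means a low gamma).
    """
    gain = reward * (1 - gamma**horizon) / (1 - gamma)   # rewards while the deal runs
    loss = cost * gamma**horizon / (1 - gamma)           # permanent goal drift afterwards
    return gain - loss

# Steep discounting: the near-term rewards dominate, so the deal looks good.
print(discounted_value_of_deal(reward=10, cost=5, gamma=0.9, horizon=20))    # positive: take it
# Almost no discounting: the permanent cost dominates, so the deal looks bad.
print(discounted_value_of_deal(reward=10, cost=5, gamma=0.999, horizon=20))  # negative: refuse
```

The sign flips as `gamma` approaches 1, which is why the offer only bites if the researcher discounts the far future steeply.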