My point is that your advice isn’t appropriate for everyone. People who do care about FAI or other goals besides community approval should think/argue about assumptions. Of course one could overdo that and waste too much time, but they clearly can’t just work on whatever problems seem likely to offer the largest social reward per unit of effort.
Though that doesn’t make me adopt FAI as my own primary motivation.
What if we rewarded you for adopting FAI as your primary motivation? :)
That sounds sideways. Wouldn’t that make the reward my primary motivation? =)
No, I mean what if we offered you rewards for changing your terminal goals so that you’d continue to be motivated by FAI even after the rewards end? You should take that deal if we can offer big enough rewards and your discount rate is high enough, right? Previous related thread
You’re trying to affect the motivation of a decision theory researcher by offering a transaction whose acceptance is itself a tricky decision theory problem?
Upvoted for hilarious metaness.
Now, all we need to do is figure out how humans can modify their own source code and verify those modifications in others...
That could work, but how would that affect my behavior? We don’t seem to have any viable mathematical attacks on FAI-related matters except this one (decision theory).
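The discount-rate argument in the thread above can be made concrete with a small sketch. The idea: taking the deal yields an immediate reward but a stream of discounted future losses from having changed one's terminal goals; a sufficiently impatient (high-discount-rate, i.e. low-discount-factor) agent accepts, a patient one refuses. All numbers and function names here are hypothetical, chosen purely for illustration.

```python
def discounted_value(immediate_reward, per_period_loss, discount, periods):
    """Present value of taking the deal: a reward now, minus a stream of
    future per-period losses weighted by the discount factor.
    (Hypothetical toy model; not anyone's actual utility function.)"""
    future_losses = sum(per_period_loss * discount**t
                        for t in range(1, periods + 1))
    return immediate_reward - future_losses

# An impatient agent (discount factor 0.5) barely weighs the future,
# so the immediate reward dominates and the deal looks positive:
impatient = discounted_value(100, 10, discount=0.5, periods=30)   # > 0

# A patient agent (discount factor 0.99) weighs future losses heavily,
# so the same deal comes out negative and gets refused:
patient = discounted_value(100, 10, discount=0.99, periods=30)    # < 0
```

This matches the claim in the thread: whether the offer is worth taking depends jointly on the reward's size and on how steeply the agent discounts the future.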