I guess it appears to you that you are working on these problems because they seem like interesting math, or “interesting when taken on its own terms”, but I wonder why you find these particular math problems or assumptions interesting, and not the countless others you could choose instead. Maybe the part of your brain that outputs “interesting” is subconsciously evaluating importance and relevance?
An even more likely explanation is that my mind evaluates reputation gained per unit of effort. Academic math is really crowded; chances are that no one would read my papers anyway. Being in a frustratingly informal field with a lot of pent-up demand for formality allows me to get many people interested in my posts, while my mathematician friends get zero feedback on their publications. Of course it didn’t feel so cynical from the inside; it felt more like a growing interest fueled by constant encouragement from the community. If “Re-formalizing PD” had met with a cold reception, I don’t think I’d be doing this now.
In that case you’re essentially outsourcing your “interestingness” evaluation to the SIAI/LW community, and I think we are basing it mostly on relevance to FAI.
Yeah. Though that doesn’t make me adopt FAI as my own primary motivation, just like enjoying sex doesn’t make me adopt genetic fitness as my primary motivation.
My point is that your advice isn’t appropriate for everyone. People who do care about FAI or other goals besides community approval should think and argue about assumptions. Of course one could overdo that and waste too much time, but such people clearly can’t just work on whatever problems seem likely to offer the largest social reward per unit of effort.
What if we rewarded you for adopting FAI as your primary motivation? :)
That sounds sideways. Wouldn’t that make the reward my primary motivation? =)
No, I mean what if we offered you rewards for changing your terminal goals so that you’d continue to be motivated by FAI even after the rewards end? You should take that deal if we can offer big enough rewards and your discount rate is high enough, right? (Previous related thread.)
You’re trying to affect the motivation of a decision theory researcher by offering a transaction whose acceptance is itself a tricky decision theory problem?
Upvoted for hilarious metaness.
Now, all we need to do is figure out how humans can modify their own source code and verify those modifications in others...
That could work, but how would that affect my behavior? We don’t seem to have any viable mathematical attacks on FAI-related matters except this one.