My best post was a dunk on MIRI[1], and now I’ve written up another point of disagreement/challenge to the Yudkowsky view.
There’s a part of me that questions the opportunity cost of spending hours expressing takes of mine that are only valuable because they disagree with a MIRI position in some relevant respect? I could have spent those hours studying game theory or optimisation.
I feel like the post isn’t necessarily raising the likelihood of AI existential safety?
I think those are questions I should ask more often before starting on a new LessWrong post: “How does this raise the likelihood of AI existential safety? By how much? How does it compare to my other post ideas/unfinished drafts?”
Maybe I shouldn’t embark on a writing project until I have (a) compelling narrative(s) for why my writing would be useful/valuable for my stated ends.
This is an uncharitable framing of the post, but it is true that the post was written from a place of annoyance. It’s also true that I have many important disagreements with the Yudkowsky-Soares-Bensinger position, and expressing them is probably a valuable epistemic service.
Generally, I don’t think it’s good to gate “is subquestion X, related to great cause Y, true?” behind questions like “does addressing this subquestion contribute to great cause Y?” I don’t think it’s good in general, and I don’t think it’s good here.
I can’t justify this in a paragraph, but I’m basing it mostly on “Huh, that’s funny” being far more likely to lead to insight than “I must have insight!”, which means it’s a better way of contributing to great causes, generally.
(And honestly, at another level entirely, I think that saying true things that break up uniform blocks of opinion on LW is good for the health of the LW community.)
Edit: That being said, if the alternative to following your curiosity on one thing is, like, super high value, ofc it’s better. But meh, I’m glad that post is out there. It’s a good central source for a particular branch of criticism, and I think it helped me understand the world more.