The first response to a trade-off is to optimize it; it’s unlikely that each of us is already at exactly the right point on the efficiency/flexibility tradeoff. But we’re trying to do that here, I think, and the returns seem minimal.
The second response should be to ‘make the pie higher’. (To put this into programmer terms, once you’ve gotten the algorithm as good as it’s going to get, you look into what micro-optimizations are available and how to throw more resources at the problem.) Optimizing the tradeoff becomes less important if we have more learning or thinking capacity. Here I think LW-types aren’t doing so well. Are there really so few worthwhile cognitive prostheses that we should have had as few posts on them as we have had?
(I know of none, but perhaps I’m wrong—perhaps someone has written a good post about spaced repetition systems, say, or has calculated out how worthwhile drugs like modafinil are—and I simply missed or have forgotten about them.)
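To make the spaced-repetition example concrete: systems like SuperMemo and Anki schedule reviews using variants of the SM-2 algorithm, which widens the gap between reviews each time you recall an item successfully and resets it when you fail. A minimal sketch of one SM-2 review step (the numeric constants are SM-2's published defaults):

```python
def sm2_update(ef, reps, interval, quality):
    """One review step of the SM-2 spaced-repetition algorithm.

    ef: easiness factor (>= 1.3), reps: successful reviews in a row,
    interval: days until the next review, quality: recall grade 0-5.
    Returns the updated (ef, reps, interval).
    """
    if quality < 3:
        # Failed recall: restart the repetition sequence for this item.
        reps = 0
        interval = 1
    else:
        reps += 1
        if reps == 1:
            interval = 1
        elif reps == 2:
            interval = 6
        else:
            # Intervals grow geometrically with the easiness factor.
            interval = round(interval * ef)
    # Adjust easiness based on how hard the recall felt (harder -> lower ef),
    # clamped at SM-2's floor of 1.3.
    ef = max(1.3, ef + (0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)))
    return ef, reps, interval

# Starting from SM-2's default easiness of 2.5, two perfect recalls
# schedule reviews at 1 day and then 6 days; a third pushes the item
# out by roughly two weeks.
ef, reps, interval = 2.5, 0, 0
for quality in (5, 5, 4):
    ef, reps, interval = sm2_update(ef, reps, interval, quality)
    print(reps, interval)
```

The point of the geometric interval growth is exactly the "more learning capacity" argument above: the per-item review cost decays over time, so total maintained knowledge can grow far beyond what fixed-interval review allows.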