Intellectual Platforms
My most popular LW post wasn’t a post at all. It was a comment on John Wentworth’s post asking “what’s up with Monkeypox?”
Years earlier, in the first few months of COVID, I had spent considerable time building a scorecard of pandemic risk factors and backtesting it against historical pandemics. At the time, the initial post received a lukewarm reception, and the historical backtesting posts quickly fell off the frontpage.
But when I was able to bust it out for Monkeypox, it paid off (in karma). People could see its relevance to an issue they cared about, and at that moment it was probably a better answer than they could have gotten almost anywhere else.
Devising the scorecard and doing the backtesting built an “intellectual platform” that I can now reuse whenever a new potential pandemic threat appears. I liken it to engineering platforms, which don’t have an immediate payoff but are a long-term investment.
People won’t necessarily appreciate the hard work of building an intellectual platform while you’re assembling it. And that can make the platform feel not worthwhile: if people can’t see the obvious importance of what I’m doing, then maybe I’m on the wrong track?
Instead, I think it’s more helpful to read people’s reactions as a signal of whether they currently have a burning problem that your output helps with. Of course a platform-in-development won’t get much applause! But if you’ve chosen your project thoughtfully and executed passably well, then eventually, when the right moment comes, it can pay off.
For the last couple of years, I’ve been building toward a platform for “learning how to learn,” and I’m also working on an “aging research” platform. These turn out to be harder topics: a pandemic is, honestly, just a nice smooth logistic curve, which is far less complicated than monitoring your own brain and hacking your own learning process. And aging research is the Wild West. So I expect these to take longer.
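For what it’s worth, the “nice smooth logistic curve” remark refers, I assume, to the standard three-parameter logistic that cumulative case counts roughly follow: exponential growth early on, then saturation. A minimal sketch (the parameter values here are purely illustrative, not from my actual scorecard):

```python
import math

def logistic(t, K, r, t0):
    """Cumulative cases at time t: grows exponentially at rate r,
    then saturates toward the ceiling K, with midpoint at t=t0."""
    return K / (1 + math.exp(-r * (t - t0)))

# Sample the curve every 30 days; it rises monotonically toward K
# but never quite reaches it.
cases = [logistic(t, K=1_000_000, r=0.15, t0=60) for t in range(0, 121, 30)]
```

At the midpoint `t0` the curve is exactly at half its ceiling (`K/2`), which is part of what makes outbreaks comparatively tractable to model: one smooth function, three parameters.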