Piggyback question on this: why aren’t LessWrongers finding and exploiting cognitive biases in markets in order to raise funds for their projects?
Large, well-funded markets are smarter than LessWrongers.
But to the extent that LW tends to think that entire fields of experts can be blind in their disciplines in ways disciplined rationalists are not (theologians, philosophers, doctors, politicians, educators, physicists), there would seem to be the prospect of some massively profitable arbitrage or prediction somewhere. And it’s not like any of LessWrong’s projects are allergic to funding.
Experts with incentives that reward epistemic accuracy, and who get significant direct feedback from the universe, can usually be assumed to be reliable. All else being equal, this would lead us to trust index funds, be wary of managed funds, and be sceptical of paid financial advice.