Piggyback question on this: why aren’t LessWrongers finding and exploiting cognitive biases in markets in order to raise funds for their projects?
I realize that (a) it's really hard to do this, or everyone would already be doing it; and (b) there are probably individual LessWrongers working in finance. But to the extent that LW tends to think that entire fields of experts can be blind in their disciplines in ways disciplined rationalists are not (theologians, philosophers, doctors, politicians, educators, physicists), there would seem to be the prospect of some massively profitable arbitrage or prediction somewhere. And it's not like any of LessWrong's projects are allergic to funding.
My theory is that people who believe they can beat the experts in a variety of fields initially try to beat them at testable matters, the natural choice for someone wanting to demonstrate superiority or gain funding. At that point one of three things happens: (a) success that others recognize, (b) recalibration of self-assessment, or (c) maintenance of the belief by shifting to non-testable subject matters (those without strong feedback).
> Piggyback question on this: why aren’t LessWrongers finding and exploiting cognitive biases in markets in order to raise funds for their projects?
Large, well-funded markets are smarter than LessWrongers.
> But to the extent that LW tends to think that entire fields of experts can be blind in their disciplines in ways disciplined rationalists are not (theologians, philosophers, doctors, politicians, educators, physicists), there would seem to be the prospect of some massively profitable arbitrage or prediction somewhere. And it’s not like any of LessWrong’s projects are allergic to funding.
Experts whose incentives reward epistemic accuracy and who get significant, direct feedback from the universe can usually be assumed to be reliable. Traders in large liquid markets are exactly such experts, and buying an index fund buys their aggregated judgment, whereas active managers rarely beat that aggregate after fees. All else being equal, this would lead us to trust index funds, be wary of managed funds, and be sceptical of paid financial advice.
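To make the fee point concrete, here's a minimal sketch of how a roughly one-percentage-point difference in annual fees compounds over thirty years. All the numbers are illustrative assumptions, not real fund data:

```python
# Illustrative assumptions only: a 7% gross annual return, a 0.05%
# expense ratio for an index fund, and a 1% expense ratio for a
# managed fund. None of these are real fund figures.
GROSS_RETURN = 0.07
INDEX_FEE = 0.0005
MANAGED_FEE = 0.01
YEARS = 30
PRINCIPAL = 10_000

def final_value(principal: float, gross: float, fee: float, years: int) -> float:
    """Compound the net-of-fee annual return for the given number of years."""
    return principal * (1 + gross - fee) ** years

index_val = final_value(PRINCIPAL, GROSS_RETURN, INDEX_FEE, YEARS)
managed_val = final_value(PRINCIPAL, GROSS_RETURN, MANAGED_FEE, YEARS)

print(f"Index fund after {YEARS} years:   ${index_val:,.0f}")
print(f"Managed fund after {YEARS} years: ${managed_val:,.0f}")
print(f"The manager must beat the market by {MANAGED_FEE - INDEX_FEE:.2%}/yr "
      f"just to break even with the index.")
```

Under these assumptions the index investor ends up with about $75,000 and the managed-fund investor with about $57,000, which is why the fee drag alone is enough to justify the default of trusting the index.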