My view is that any world where the value of possible outcomes follows a heavy-tailed distribution (x-risk is one example, but so are more everyday things like income, and I’d guess life satisfaction) is a world where the best opportunities are nonobvious, and better epistemology will thus have very strong returns.
I’m maybe open to an argument that a 10th-century peasant literally has no way to have a better life, but I basically think the claim holds for everyone I talk to day-to-day.
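To make the heavy-tail point concrete, here’s a toy sketch (my own made-up distributions and parameters, nothing rigorous): if opportunity values are heavy-tailed, being able to evaluate more options before choosing (a stand-in for better epistemology) raises the expected outcome far more than it does when values are thin-tailed.

```python
# Toy illustration: gain from evaluating k options before choosing,
# comparing a heavy-tailed opportunity distribution to a thin-tailed one.
import numpy as np

rng = np.random.default_rng(0)
n_trials, k = 100_000, 10  # k = options you get to evaluate before choosing

def gain_from_search(sampler):
    """Average (best of k options) divided by average (single random option)."""
    draws = sampler((n_trials, k))
    return draws.max(axis=1).mean() / draws[:, 0].mean()

# Heavy-tailed opportunities (Pareto, alpha = 1.5) vs thin-tailed (normal around 1).
heavy = lambda size: rng.pareto(1.5, size) + 1.0
thin = lambda size: np.abs(rng.normal(1.0, 0.2, size)) + 1e-9

print("heavy-tailed gain from evaluating 10 options:", round(gain_from_search(heavy), 2))
print("thin-tailed  gain from evaluating 10 options:", round(gain_from_search(thin), 2))
```

Under the heavy-tailed distribution the best-of-ten pick is several times better than a random pick; under the thin-tailed one the gain is marginal.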
So you’re saying rationality is good if your utility is linear in the quantity of some goods? (For most people it is more like logarithmic, right?) But it seems that you want to say that independent thought is usually useful...
I’m sure the 10th-century peasant does have ways to have a better life, but they just don’t necessarily involve doing rationality training, which pretty obviously does not (and should not) help in all situations. Right?
Yes, it seems to me that we should care about some things linearly, though I’ll have to think some more about why I think that.
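To illustrate why the linear-vs-log question matters here, another toy sketch (again with made-up Pareto numbers, not anything from the discussion): under a heavy-tailed payoff, the rare huge outcomes carry a large share of the total linear value but barely register in log value, so how much the tail should drive your decisions depends on which utility shape you accept.

```python
# Toy illustration: share of total value carried by the top 1% of outcomes,
# under linear vs logarithmic utility, for a heavy-tailed payoff distribution.
import numpy as np

rng = np.random.default_rng(0)
# Heavy-tailed "income-like" outcomes (Pareto, alpha = 1.5, scaled to ~10k minimum).
payoffs = (rng.pareto(1.5, 1_000_000) + 1.0) * 10_000

top_1pct_cutoff = np.quantile(payoffs, 0.99)
tail = payoffs >= top_1pct_cutoff

linear_share = payoffs[tail].sum() / payoffs.sum()
log_share = np.log(payoffs[tail]).sum() / np.log(payoffs).sum()

print(f"top 1% of outcomes carry {linear_share:.0%} of total linear value")
print(f"but only {log_share:.0%} of total log value")
```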