There is a much simpler way of winning than carefully building up your abstract-reasoning ability to the point where it produces usefully accurate, unbiased, well-calibrated probability distributions over relevant outcome spaces.
The simpler way is just to recognize that, as a human in a Western society, you won’t lose much more or win much more than the other humans around you. So you may as well dump the abstract reasoning and rationality, and pick some humans who seem to live relatively non-awful lives (e.g. your colleagues/classmates) and take whatever actions they take. Believe what they believe, even if it seems irrational. Do what they do.
Carefully estimating probabilities and acting on anticipated consequences is the kind of cognitive algorithm befitting a lone agent who actually reaps what he or she sows. For a human, herd mentality seems to be the more elegant solution: elegant in the sense that you sidestep the epistemology, which is hard to get right, and lean instead on a robust argument about consequences and utilities: almost everyone in the herd playing a roughly average strategy will get roughly the same deal out of life.
Research from hedonic psychology on the “hedonic treadmill” effect backs this up further: even if you make more (or less) money than average, you probably won’t end up noticeably happier (or unhappier) than average.
Of course there are details and complications: which subgroup of humans do you join? How do you trade off between different subcultures? Etc. But still, you don’t even need a general solution to that problem; you only need to decide which of the handful of specific subcultures available to you seems best for you.
And of course this strategy is useless for someone who is determined to invest emotionally in a nonstandard life-narrative, like utilitarian charity or life extension. From that point of view, one might object that joining the herd is selfish, in the sense that it isn’t the action that maximizes utility across the herd; but then again, most people don’t have a utilitarian concept of selfishness and don’t count benefit to random strangers as part of their actual near-mode, actionable goal set, so from their own axiological point of view, herding is an acceptable solution.
The simpler way is just to recognize that, as a human in a Western society, you won’t lose much more or win much more than the other humans around you.
Well, unless you actually take specific steps to win more... which is kind of what this is about.
which subgroup of humans do you join? How do you trade off between different subcultures? Etc. But still, you don’t even need a general solution to that problem; you only need to decide which of the handful of specific subcultures available to you seems best for you.
Note that people probably tend to end up here by this very process. That is, of all the subcultures available to them, the subculture of people who are interested in
carefully building up [their] abstract-reasoning ability to the point where it produces usefully accurate, unbiased, well-calibrated probability distributions over relevant outcome spaces
is the most attractive.
True … but I suspect that people who end up here do so because they take the herd’s verbally endorsed beliefs more literally than the average person does. Rationality as memetic immune disorder, failure to compartmentalize, etc.
Perhaps I should amend my original comment to say that if you are cognitively very different from the herd, you may want to use a bit of rationality/self-development like a corrective lens. You’ll have to run compartmentalization in software.
Maybe I should try to start a new trend: use {compartmentalization} when you want to invalidate an inference which most people would not make because of compartmentalization?
E.g. “I think all human lives are equally valuable”
“Then why did you spend $1000 on an iPad rather than giving it to GiveWell?”
“I refute it thus: {compartmentalization: nearmode/farmode}”
What steps can a person actually take to really, genuinely win more, in the sense of “win” which most people take as their near-mode optimization target?
I suspect that happiness set-points mean that there isn’t really much you can do.
In fact, probably one of the few ways to genuinely affect your total lifetime well-being is to take seriously the notion that you have so little control over it: you’ll get depressed about it.
I recently read a book called 59 Seconds, which said that 50% of the variance in life satisfaction/happiness is directly genetically determined via your happiness set-point.
The advice the book gave was to just chill out about life: by far the easiest way to improve your life is to frame it more positively.
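To put that 50% figure in concrete terms, here is a minimal sketch (my own toy numbers and model, not anything from the book): treat satisfaction as a fixed genetic set-point plus a “circumstances” term of equal variance, and see how far a large, permanent improvement in circumstances actually moves you within the population spread.

```python
import random
import statistics

# Toy illustration of what "50% of the variance is the set-point" would mean.
# Assumption (mine, not the book's): satisfaction = fixed genetic set-point
# + circumstances you can act on, modelled as independent contributions of
# equal variance, so each accounts for ~50% of the spread.

random.seed(0)
N = 100_000

def satisfaction(circumstance_boost=0.0):
    set_point = random.gauss(0, 1)      # fixed at birth, ~50% of the variance
    circumstances = random.gauss(0, 1)  # everything you can change, ~50%
    return set_point + circumstances + circumstance_boost

baseline = [satisfaction() for _ in range(N)]
improved = [satisfaction(circumstance_boost=1.0) for _ in range(N)]

shift = statistics.mean(improved) - statistics.mean(baseline)
spread = statistics.pstdev(baseline)

print(f"raw shift from a 1-SD improvement in circumstances: {shift:.2f}")
print(f"shift measured in population SDs: {shift / spread:.2f}")  # ~0.71
```

Under that (admittedly crude) model, even a one-standard-deviation improvement in everything you can control moves you only about 0.7 population standard deviations, because the set-point supplies the other half of the spread.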
Happiness is a sham; focus on satisfaction. There don’t seem to be satisfaction set-points.
That said, I agree with what you seem to be saying: that optimization is a procedure that is itself subject to optimization.
There’s at least one very big problem with this sort of majoritarian herding: If everyone did it, it wouldn’t work in the least. You need a substantial proportion of people actually trying to get the right answer in order for “going with the herd” to get you anywhere. And even then, it will only get you the average; you’ll never beat the average by going with the average. (And don’t you think that, say, Einstein beat the average?)
And in fact there are independent reasons from evolutionary psychology and memetics to suspect that everyone IS doing it, or at least a lot of people are doing it a lot of the time. Ask most Christians why they are Christian, and they won’t give you detailed theological reasons; they’ll shrug and say “It’s how I was raised”.
This is somewhat analogous to the efficient market hypothesis and the famous argument that you should never try to bet against the market, because on average the market always wins. Well, if you actually look at the data, no it doesn’t, and people who bet against the market can in some cases become spectacularly rich. Moreover, the market is only as efficient as it is because millions of people buy their stocks not as a Keynesian beauty contest but based on the fundamental value of the underlying assets. With enough value investors, people who just buy market-wide ETFs can do very well. But if there were no value investors (or worse, no underlying assets at all: a casino is an example of a market with options that have no underlying assets), buying ETFs would get you nowhere.
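To make that mechanism concrete, here is a minimal toy simulation (made-up dynamics and coefficients, not a claim about how real markets behave): a fraction of traders pull the price toward a fixed fundamental value, the rest chase the last price move, and the price only gets anchored to fundamentals when the value-investing fraction is nonzero.

```python
import random
import statistics

# Toy model of the "value investors keep prices honest" argument. The
# dynamics are invented for illustration: each period, value investors pull
# the price a step toward a fixed fundamental value, while the remaining
# traders chase whatever the last price move was (a crude beauty contest).

random.seed(0)

FUNDAMENTAL = 100.0
START = 50.0

def final_price(value_fraction, periods=200):
    price, prev_price = START, START
    for _ in range(periods):
        momentum = price - prev_price
        value_push = value_fraction * 0.1 * (FUNDAMENTAL - price)
        herd_push = (1 - value_fraction) * 0.9 * momentum
        noise = random.gauss(0, 0.5)
        prev_price, price = price, price + value_push + herd_push + noise
    return price

# With even a modest fraction of value investors the price gets anchored to
# the fundamental; with none, nothing ever pulls it there and it just drifts.
for f in (0.5, 0.1, 0.0):
    runs = [final_price(f) for _ in range(20)]
    print(f"value-investor fraction {f:.1f}: "
          f"average final price {statistics.mean(runs):.1f} "
          f"(fundamental {FUNDAMENTAL:.0f}, started at {START:.0f})")
```

The particular coefficients are arbitrary; the point is only that the index tracks fundamentals because someone is doing the valuation work, and the passive buyer free-rides on it.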