So for us to understand what you’re even trying to say, you want us to read a bunch of articles, talk to one of your friends, listen to a speech, and only then will we become EAs good enough for you? No thanks.
Diego points to a variety of resources that all make approximately the same point, which I’ll attempt to summarize: If you apply probabilistic “outside view” reasoning to your projects and your career, in practice this means copying approaches that have worked well for other people. But if it’s clear that an approach is working well, then others will be copying it too, and you won’t outperform. So your only realistic shot at outperforming is to find a useful and underused “inside view” way of looking at things.
(FYI, I’ve found that keeping a notebook has been very useful for generating & recording interesting new ideas. If you do it for long enough you can start to develop your own ontology for understanding areas you’re interested in. Don’t worry too much about your notebook’s structure & organization: embrace that it will grow organically & unpredictably.)
your only realistic shot at outperforming is to find a useful and underused “inside view” way of looking at things.
This is wrong. Human beings are not a pool of identical rational agents competing in the same game from the same starting point aiming for the same endpoint.
- People make mistakes, systematically.
- Most people start with less IQ than you, dear reader. You have an unfair advantage, so go exploit it using perfectly standard methods like getting a technology job.
- If you have particular tastes, ambitions or goals (you might not even know about them; some self-exploration is required), then you may be aiming for a prize that few other people are trying to claim.
If someone took the time to analyze lots of historically important inventors, entrepreneurs, and thinkers, I doubt the important common factor would be that they made fewer mistakes than others.
Yes, you can “outperform” without much difficulty if you consider getting a nice job to be “outperforming” or you change the goalposts so you’re no longer trying to do something hard.
If someone took the time to analyze lots of historically important inventors, entrepreneurs, and thinkers, I doubt the important common factor would be that they made fewer mistakes than others.
I think this depends on reference class and what one means by ‘mistakes’. The richest financier is someone whose strategy is explicitly ‘don’t make mistakes.’ (Really, it’s “never lose money” plus the emotional willingness to do the right thing, even if it’s boring instead of clever.)
I think the heart of the disagreement here is the separation between things that are ‘known to someone’ and ‘known to no one’—the strategies one needs to discover what other people have already found are often different from the strategies one needs to discover what no one knows yet, and both of them are paths to success of varying usefulness for various tasks.
I think this depends on reference class and what one means by ‘mistakes’. The richest financier is someone whose strategy is explicitly ‘don’t make mistakes.’ (Really, it’s “never lose money” plus the emotional willingness to do the right thing, even if it’s boring instead of clever.)
Depends on the investment class. Even Charlie Munger (Warren Buffett’s partner) says “If you took our top fifteen decisions out, we’d have a pretty average record.”
I think the heart of the disagreement here is the separation between things that are ‘known to someone’ and ‘known to no one’—the strategies one needs to discover what other people have already found are often different from the strategies one needs to discover what no one knows yet, and both of them are paths to success of varying usefulness for various tasks.
Yes, even if success in the domain is basically about avoiding mistakes, I imagine that if there are huge winners in the domain they got there by finding some new innovative way to get their rate of mistakes down.
The richest financier is someone whose strategy is explicitly ‘don’t make mistakes.’ (Really, it’s “never lose money”)
Nope, finance doesn’t work like that. The richest financier is one who (1) has excellent risk management; and (2) got lucky.
Notably, risk management is not about avoiding risks (and so, possible mistakes). It’s about managing risk—acknowledging that mistakes will be made and making sure they won’t kill you.
So, obviously ‘never’ is hyperbole on Buffett’s part.
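The distinction between avoiding mistakes and surviving them can be made concrete with a toy simulation (all numbers here are invented for illustration, not anyone's actual strategy): two bettors face the same favorable coin, but the one who stakes everything each round is destroyed by a single loss, while the one who sizes modestly survives every mistake and keeps compounding.

```python
import random

def simulate(fraction, rounds=200, win_prob=0.6, seed=0):
    """Bet `fraction` of the bankroll each round on a favorable coin.
    A win adds the stake; a loss forfeits it."""
    rng = random.Random(seed)
    bankroll = 1.0
    for _ in range(rounds):
        stake = bankroll * fraction
        if rng.random() < win_prob:
            bankroll += stake
        else:
            bankroll -= stake
        if bankroll <= 0:  # ruin: no recovery is possible from zero
            return 0.0
    return bankroll

# All-in sizing is wiped out by its first losing round;
# modest sizing survives every loss and retains a positive bankroll.
print(simulate(fraction=1.0))
print(simulate(fraction=0.2))
```

The point is not the specific numbers but the asymmetry: a positive edge is worthless to the bettor whose sizing lets one mistake kill them.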
Nope, finance doesn’t work like that.
I’ll buy that value investing stopped working as well because of increased investor sophistication and a general increase in asset prices. As a somewhat related example, daily S&P 500 momentum investing worked up until 1980, and now you need to track more sophisticated momentum measurements. But to quote Cliff Asness (talking about momentum investing, not value investing):
If everyone did them yesterday, they would go away. They work in my opinion — again using my version of “work” — in kind of a sweet spot. Good enough to be really important if you can follow discipline, not so good enough that the world looks at it and goes, “this is easy.” They’re excruciating at times and I hate those times. I won’t pretend I’m neutral as to those times.
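For what it’s worth, the simple daily momentum rule under discussion can be sketched in a few lines (the price series below is made up; this illustrates the rule itself, not a backtest claim about the S&P 500):

```python
# Toy daily momentum rule: go long today iff yesterday's return was positive.
prices = [100, 101, 103, 102, 104, 103, 105, 108]  # invented price series
returns = [(b - a) / a for a, b in zip(prices, prices[1:])]

pnl = 0.0
for yesterday, today in zip(returns, returns[1:]):
    if yesterday > 0:  # momentum signal: ride yesterday's direction
        pnl += today
print(pnl)
```

Modern momentum measurements are more sophisticated than this sign-of-yesterday rule, which is exactly the point about simple signals being competed away.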
Yes, you can “outperform” without much difficulty if you consider getting a nice job to be “outperforming”
Getting a nice job, having a stable relationship, raising children well, keeping a good circle of friends you like, and indulging your particular tastes is outperforming the average person.
Perhaps what you’re talking about is radical outperformance—“being Steve Jobs”, changing the world etc.
In my opinion seriously aiming for that kind of life is a massive mistake—there is no recipe for it, those who achieve it do so through extraordinary luck + skill + genetic advantages which cannot be reliably replicated by any method whatsoever.
There are lots of bits and pieces—e.g. the notes outlined above that two billionaires have signed on to.
In my opinion seriously aiming for that kind of life is a massive mistake—there is no recipe for it, those who achieve it do so through extraordinary luck + skill + genetic advantages which cannot be reliably replicated by any method whatsoever.
Since when is a high probability of failure by itself a good reason not to do anything? If you’re a rational expected utility maximizer you do things according to their expected value, which means in some cases it makes sense to do things that initially seem impossible.
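The expected-value arithmetic here is simple enough to spell out (the probabilities and payoffs below are invented for illustration, not claims about any real career path):

```python
# Toy expected-value comparison: a near-certain modest payoff
# vs. a moonshot with a high failure probability but a huge payoff.
safe = 0.95 * 10          # 95% chance of a payoff worth 10
moonshot = 0.01 * 10_000  # 1% chance of a payoff worth 10,000

print(safe)      # 9.5
print(moonshot)  # 100.0
```

On these (made-up) numbers the moonshot dominates despite a 99% failure rate, which is the sense in which a high probability of failure is not by itself a reason to abstain.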
If you want to wuss out on life and take the path of least resistance, avoid all the biggest and most interesting bosses in the game, and live a life that has little greater challenge or purpose—fine by me. But frankly, if that’s the case, I’ll have to tap out of this conversation, since it’s a bad use of my time and I don’t really want to absorb the attitudes of people like you, who explicitly state that they’re totally uninterested in accomplishing anything meaningful.
Thanks. I will give some of those articles a look when I have the chance. However, it isn’t true that every activity is competitive in nature. Many projects are cooperative, in which case it’s not necessarily a problem if you and other people are taking similar approaches and doing them well. We also shouldn’t overestimate the competition and assume that they are going to be applying probabilistic reasoning, when in reality we can still outperform by applying basic rules of rationality.
No, that’s if you want to understand why a specific LessWrong aficionado became wary of probabilistic thinking to the point of calling it a problem of the EA community. If you don’t care about my opinions in general, you are welcome to ignore them. He asked for my thoughts, so I provided them.
But the reference class of Diego’s thoughts contains more wrong thoughts than true ones. So on priors, you might want to ignore them :p
You can’t reload.