We need a community that at once understands probability theory, doesn’t play reference class tennis, and doesn’t lose motivation by considering the base rates of other people trying to do something, because the other people were cooks, not chefs, and also because sometimes you actually need to try a one in ten thousand chance. But people are too proud of their command of Bayes to let go of the easy chance of showing off their ability to find mathematically sound reasons not to try.
Are you saying don’t think probabilistically here? I’d love a specific post on just your thoughts on this.
Yes I am.
Step 1: Learn Bayes
Step 2: Learn reference class
Step 3: Read 0 to 1
Step 4: Read The Cook and the Chef
Step 5: Reason about why the billionaires are saying that the people who do it wrong are basically the ones reasoning probabilistically
Step 6: Find the connection between that and reasoning from first principles, or the gear hypothesis, or whichever other term you have for when you use the inside view, and actually think technically about a problem, from scratch, without looking at how anyone else did it.
Step 7: Talk to Michael Valentine, who has recently been reasoning about this and about how to impart it at CFAR workshops.
Step 8: Find someone who can give you a recording of Geoff Anders’ presentation at EAGlobal.
Step 9: Notice how all those steps above were connected, become a Chef, set out to save the world. Good luck!
Note that the billionaires disagree on this. Thiel says that people should think more in terms of calculus and less in terms of probability, while Musk (the inspiration for The Cook and the Chef) says that people think in certainties when they should think in probabilities.
Not my reading. My reading is that Musk thinks people should not consider the probability of succeeding as a spacecraft startup (0% historically), but should instead reason from first principles: ask what materials a rocket is made of, then build the cost estimate from the ground up.
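To make that first-principles move concrete, here is a minimal sketch of a bottom-up cost estimate; every figure is an invented placeholder, not a real rocket number:

```python
# Toy first-principles estimate: price a rocket from its raw materials
# instead of from the historical price of finished rockets.
# Every number below is an invented placeholder, not real data.

materials = {                    # component: (tonnes, dollars per tonne)
    "aluminium alloy": (20, 3_000),
    "titanium":        (2, 30_000),
    "copper":          (1, 9_000),
    "carbon fibre":    (3, 25_000),
}

raw_material_cost = sum(mass * price for mass, price in materials.values())
going_market_price = 65_000_000  # placeholder price of a comparable launch

print(f"Raw materials:  ${raw_material_cost:,.0f}")
print(f"Market price:   ${going_market_price:,.0f}")
print(f"Materials are {raw_material_cost / going_market_price:.1%} of the price;"
      " the rest is manufacturing and overhead you can attack directly.")
```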
First, I think we should separate two ideas: creating a reference class, and thinking in probabilities.
“Thinking in probabilities” is a consistent talking point for Musk—every interview where he’s asked how he’s able to do what he does, he mentions this. Here’s an example I found with a quick Google search:
“Yeah, I think in general you always want to try to think about the future, try to predict the future. You’re going to generate some error between the series of steps you think will occur versus what actually does occur and you want to try to minimize the error. That’s a way that I think about it. And I also think about it in terms of probability streams. There’s a certain set of probabilities associated with certain outcomes and you want to make sure that you’re always the house. So things won’t always occur the way you think they’ll occur, but if you calculate it out correctly over a series of decisions you will come out significantly ahead…”
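To make the “always be the house” point concrete, here is a toy simulation (my own illustrative numbers, not anything from the interview): any single positive-expected-value bet may lose, but a long series of them reliably comes out ahead.

```python
# Toy "be the house" simulation. Each decision is a bet that loses its
# stake 60% of the time but pays 3x the stake otherwise, so the expected
# value per unit staked is 0.4 * 3 - 0.6 * 1 = +0.6. Numbers are illustrative.
import random

random.seed(0)

def net_payoff(decisions: int) -> float:
    total = 0.0
    for _ in range(decisions):
        total += 3.0 if random.random() < 0.4 else -1.0
    return total

print("After 1 decision:    ", net_payoff(1))      # a single bet often loses
print("After 1000 decisions:", net_payoff(1000))   # lands near 1000 * 0.6 = +600
```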
So that covers probability.
In terms of reference class, I think what Thiel and Musk are both saying is that previous startups are really bad to use as a reference class for new startups. I don’t know if that means they generally reject the idea of reference classes, but it does give me pause in using them to figure out the chances of my company succeeding based on other similar companies.
I model probabilistic thinking as something you build on top of all this. First you learn to model the world at all (your steps 3-8), then you learn the mathematical description of part of what your brain is doing when it does all this. There are many aspects of normative cognition that Bayes doesn’t have anything to say about, but there are also places where you come to understand what your thinking is aiming at. It’s a gears model of cognition rather than the object-level phenomenon.
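For anyone who wants that “mathematical description” spelled out, a minimal Bayes update looks like this (the numbers are stand-ins; the point is the mechanics, not the values):

```python
# Minimal Bayes update: P(H | E) = P(E | H) * P(H) / P(E).
# Hypothetical setup: H = "the project succeeds",
# E = "a working prototype exists after six months".

p_h = 0.05             # prior: made-up base rate of success
p_e_given_h = 0.8      # prototypes are common among eventual successes
p_e_given_not_h = 0.2  # and rarer among eventual failures

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e

print(f"P(success | prototype) = {posterior:.2f}")  # roughly 0.17, up from 0.05
```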
If you don’t have gears models at all, then yes, it’s just another way to spout nonsense. This isn’t because it’s useless; it’s because people cargo-cult it. Why do people cargo-cult Bayesianism so much? It’s not the only thing in the Sequences. The first post, The Simple Truth, big parts of Mysterious Answers to Mysterious Questions, and basically all of Reductionism are about the gears-model skill. Even the name rationalism evokes Descartes and Leibniz, who were all about this skill. My own guess is that Eliezer argued more forcefully for Bayesianism than for gears models in the Sequences because, of the two, it is the skill that came less naturally to him, and that stuck.
What would cargo-cult gears models look like? Presumably, scientism, physics envy, building big complicated models with no grounding in reality. This too is a failure mode visible in our community.
So for us to understand what you’re even trying to say, you want us to read a bunch of articles, talk to one of your friends, listen to a speech, and only then will we become EAs good enough for you? No thanks.
Diego points to a variety of resources that all make approximately the same point, which I’ll attempt to summarize: If you apply probabilistic “outside view” reasoning to your projects and your career, in practice this means copying approaches that have worked well for other people. But if it’s clear that an approach is working well, then others will be copying it too, and you won’t outperform. So your only realistic shot at outperforming is to find a useful and underused “inside view” way of looking at things.
(FYI, I’ve found that keeping a notebook has been very useful for generating & recording interesting new ideas. If you do it for long enough you can start to develop your own ontology for understanding areas you’re interested in. Don’t worry too much about your notebook’s structure & organization: embrace that it will grow organically & unpredictably.)
your only realistic shot at outperforming is to find a useful and underused “inside view” way of looking at things.
This is wrong. Human beings are not a pool of identical rational agents competing in the same game from the same starting point aiming for the same endpoint.
people make mistakes, systematically.
most people start with less IQ than you, dear reader. You have an unfair advantage, so go exploit it using perfectly standard methods like getting a technology job.
if you have particular tastes, ambitions, or goals (you might not even know about them; some self-exploration is required), then you may be aiming for a prize that few other people are trying to claim.
If someone took the time to analyze lots of historically important inventors, entrepreneurs, and thinkers, I doubt the important common factor would be that they made fewer mistakes than others.
Yes, you can “outperform” without much difficulty if you consider getting a nice job to be “outperforming” or you change the goalposts so you’re no longer trying to do something hard.
If someone took the time to analyze lots of historically important inventors, entrepreneurs, and thinkers, I doubt the important common factor would be that they made fewer mistakes than others.
I think this depends on reference class and what one means by ‘mistakes’. The richest financier is someone whose strategy is explicitly ‘don’t make mistakes.’ (Really, it’s “never lose money” plus the emotional willingness to do the right thing, even if it’s boring instead of clever.)
I think the heart of the disagreement here is the separation between things that are ‘known to someone’ and ‘known to no one’—the strategies one needs to discover what other people have already found are often different from the strategies one needs to discover what no one knows yet, and both of them are paths to success of varying usefulness for various tasks.
I think this depends on reference class and what one means by ‘mistakes’. The richest financier is someone whose strategy is explicitly ‘don’t make mistakes.’ (Really, it’s “never lose money” plus the emotional willingness to do the right thing, even if it’s boring instead of clever.)
Depends on the investment class. Even Charlie Munger (Warren Buffett’s partner) says “If you took our top fifteen decisions out, we’d have a pretty average record.”
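Munger’s point is easy to reproduce with toy numbers: when a handful of decisions generate most of the gains, removing them collapses the record to roughly average. The figures below are invented purely for illustration.

```python
# Illustration only: 100 "decisions", of which 15 big winners return 10x
# and the remaining 85 roughly track a 1.07x market return.
winners = [10.0] * 15
the_rest = [1.07] * 85

def average(xs):
    return sum(xs) / len(xs)

print(f"Average multiple with the winners:    {average(winners + the_rest):.2f}x")
print(f"Average multiple without the winners: {average(the_rest):.2f}x (about market)")
```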
I think the heart of the disagreement here is the separation between things that are ‘known to someone’ and ‘known to no one’—the strategies one needs to discover what other people have already found are often different from the strategies one needs to discover what no one knows yet, and both of them are paths to success of varying usefulness for various tasks.
Yes, even if success in the domain is basically about avoiding mistakes, I imagine that if there are huge winners in the domain they got there by finding some new innovative way to get their rate of mistakes down.
The richest financier is someone whose strategy is explicitly ‘don’t make mistakes.’ (Really, it’s “never lose money”)
Nope, finance doesn’t work like that. The richest financier is one who (1) has excellent risk management; and (2) got lucky.
Notably, risk management is not about avoiding risks (and so, possible mistakes). It’s about managing risk—acknowledging that mistakes will be made and making sure they won’t kill you.
So, obviously ‘never’ is hyperbole on Buffett’s part.
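A toy bankroll comparison shows the difference between avoiding mistakes and surviving them (arbitrary numbers, not a real trading strategy): the same positive-expectation bet is ruinous if you stake everything and merely bumpy if you size your positions.

```python
# Toy comparison: the same 55%-to-double bet, staking either the whole
# bankroll or 10% of it each round. Arbitrary numbers, illustration only.
import random

random.seed(1)

def simulate(stake_fraction: float, rounds: int = 200) -> float:
    bankroll = 1.0
    for _ in range(rounds):
        stake = bankroll * stake_fraction
        bankroll += stake if random.random() < 0.55 else -stake
        if bankroll <= 0:
            return 0.0   # ruined: one bad streak ends the game for good
    return bankroll

print("Stake everything each round:", simulate(1.0))  # almost surely ends at 0
print("Stake 10% each round:       ", simulate(0.1))  # losses happen, ruin doesn't
```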
Nope, finance doesn’t work like that.
I’ll buy that value investing stopped working as well because of increased investor sophistication and a general increase in asset prices. As a somewhat related example, daily S&P 500 momentum investing worked up until 1980, and now you need to track more sophisticated momentum measurements. But to quote Cliff Asness (talking about momentum investing, not value investing):
If everyone did them yesterday, they would go away. They work in my opinion — again using my version of “work” — in kind of a sweet spot. Good enough to be really important if you can follow discipline, not so good enough that the world looks at it and goes, “this is easy.” They’re excruciating at times and I hate those times. I won’t pretend I’m neutral as to those times.
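For readers who haven’t seen it, the naive daily momentum rule being discussed is roughly the following; the prices are synthetic and the rule is deliberately the simple version that, per the comment above, stopped working long ago.

```python
# Sketch of the naive daily momentum rule: hold the index the day after an
# up day, stay in cash after a down day. Prices are synthetic; this is the
# simple version that no longer works by itself.
import random

random.seed(2)

prices = [100.0]
for _ in range(250):                       # one synthetic "year" of daily prices
    prices.append(prices[-1] * (1 + random.gauss(0.0005, 0.01)))

momentum_growth = 1.0
for i in range(1, len(prices) - 1):
    if prices[i] > prices[i - 1]:          # yesterday was an up day
        momentum_growth *= prices[i + 1] / prices[i]

buy_and_hold_growth = prices[-1] / prices[0]
print(f"Buy and hold:   {buy_and_hold_growth:.3f}x")
print(f"Naive momentum: {momentum_growth:.3f}x")
```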
Yes, you can “outperform” without much difficulty if you consider getting a nice job to be “outperforming”
Getting a nice job, having a stable relationship, raising children well, keeping a good circle of friends that you like, and indulging your particular tastes is outperforming the average person.
Perhaps what you’re talking about is radical outperformance—“being Steve Jobs”, changing the world etc.
In my opinion seriously aiming for that kind of life is a massive mistake—there is no recipe for it, those who achieve it do so through extraordinary luck + skill + genetic advantages which cannot be reliably replicated by any method whatsoever.
There are lots of bits and pieces—e.g. the notes outlined above that two billionaires have signed on to.
In my opinion seriously aiming for that kind of life is a massive mistake—there is no recipe for it, those who achieve it do so through extraordinary luck + skill + genetic advantages which cannot be reliably replicated by any method whatsoever.
Since when is a high probability of failure by itself a good reason not to do anything? If you’re a rational expected utility maximizer you do things according to their expected value, which means in some cases it makes sense to do things that initially seem impossible.
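The arithmetic behind that is simple. With made-up stakes, even a one-in-ten-thousand shot can dominate a sure thing on expected value:

```python
# Expected-value comparison with invented magnitudes.
p_success = 1 / 10_000           # the "seems impossible" option
value_if_success = 10_000_000    # but the payoff is enormous
sure_thing = 500                 # the safe, modest alternative

ev_moonshot = p_success * value_if_success   # = 1,000
print(f"EV of the moonshot:   {ev_moonshot:,.0f}")
print(f"EV of the sure thing: {sure_thing:,.0f}")
# A pure expected-value maximizer takes the moonshot even though it fails
# 99.99% of the time. (Real decisions also have to weigh risk of ruin.)
```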
If you want to wuss out on life and take the path of least resistance, avoid all the biggest and most interesting bosses in the game, and live a life that has little greater challenge or purpose—fine by me. But frankly if that’s the case I’ll have to tap out of this conversation, since it’s a bad use of my time and I don’t really want to absorb the attitudes of people like you, who explicitly state that they’re totally uninterested in accomplishing anything meaningful.
You can’t reload.
Thanks. I will give some of those articles a look when I have the chance. However, it isn’t true that every activity is competitive in nature. Many projects are cooperative, in which case it’s not necessarily a problem if you and other people are taking similar approaches and doing them well. We also shouldn’t overestimate the competition and assume that they are going to be applying probabilistic reasoning, when in reality we can still outperform by applying basic rules of rationality.
No, that’s if you want to understand why a specific LessWrong aficionado became wary of probabilistic thinking to the point of calling it a problem of the EA community. If you don’t care about my opinions in general, you are welcome to take no action about it. He asked for my thoughts, I provided them.
But the reference class of Diego’s thoughts contains more thoughts that are wrong than thoughts that are true. So on priors, you might want to ignore them :p