If I have a noisy estimate and a prior, I should regress towards the mean. By the “ideal case” do you mean the case in which my estimates have no noise? That is a strange idealization, which people might implicitly use but probably wouldn’t advocate.
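To make the regression-to-the-mean point concrete, here is a minimal sketch; the Gaussian prior and Gaussian noise model (and all the numbers) are illustrative assumptions on my part, not something either of us specified:

```python
# Shrinkage of a noisy estimate toward the prior mean, assuming a
# Gaussian prior N(mu0, tau^2) and Gaussian measurement noise N(0, sigma^2).
# The posterior mean is a precision-weighted average of the prior mean and
# the raw estimate; the noisier the estimate, the more it gets pulled back.

def shrink(estimate, mu0, tau2, sigma2):
    """Posterior mean under a conjugate Gaussian model (illustrative only)."""
    w = tau2 / (tau2 + sigma2)   # weight on the raw estimate; w -> 1 as noise -> 0
    return w * estimate + (1 - w) * mu0

print(shrink(estimate=10.0, mu0=0.0, tau2=1.0, sigma2=4.0))  # 2.0: heavy shrinkage toward the prior
print(shrink(estimate=10.0, mu0=0.0, tau2=1.0, sigma2=0.0))  # 10.0: no noise, no regression
```

The zero-noise case is the "ideal case" reading above: with no noise the estimate is taken at face value and no regression toward the mean happens at all.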
I was primarily referring to the wide-eyed optimism prevalent on these boards: attend some workshops, become more rational, and win. It's not that people advocate not regressing to the mean; it's that they don't even know this is an issue (and it is a difficult issue when the probability distribution and its mean are themselves things you need to find out). In the ideal case you have a sum over all terms; it is not an estimate at all. You don't discard any terms: if you discard any terms, that makes it less ideal; if you apply any extra scaling, that makes it less ideal; and so on. And so you have people who see this as bias and imagine enormous gains to be had from doing something formally inspired instead. I have a cat test: can you explicitly determine whether something is a picture of a cat from a list of numbers representing pixel luminosities? That is the size of the gap between implicit processing of the evidence and explicit processing of the evidence.
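A minimal sketch of the "sum over all terms" point, with outcomes, probabilities, and payoffs all invented purely for illustration: the ideal expected value uses every term, and dropping a term can only move you away from it.

```python
# Expected value over an exhaustive set of mutually exclusive outcomes.
# All probabilities and payoffs below are made up for illustration only.
outcomes = {          # outcome: (probability, payoff)
    "A": (0.70,  10.0),
    "B": (0.25,   2.0),
    "C": (0.05, -40.0),
}

ideal = sum(p * v for p, v in outcomes.values())  # uses every term

# An "estimate" that silently drops the inconvenient tail term C:
truncated = sum(p * v for k, (p, v) in outcomes.items() if k != "C")

print(ideal)      # 5.5
print(truncated)  # 7.5 -- discarding a term moved the answer away from the ideal
```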
But I don’t see why either of these properties—reflecting symmetries, summing to one over exclusive alternatives—are necessary for good outcomes. Suppose that I am trying to estimate the relative goodness of two options in order to pick the best. Why should it matter whether my beliefs have these particular consistency properties, as long as they are my best available guess?
This needs a specific example. Some people were worrying over a very far-fetched scenario, being unable to assign it a low enough probability. The property of summing to 1 over the enormous number of likewise far-fetched, mutually exclusive scenarios would definitely have helped, compared to the state where, I suspect, the assigned probabilities summed to a very large number. Then they were taught a little bit of rationality, and they know probability is subjective, which makes them inclined to treat their numerical assessment of a feeling (which may well already incorporate the alleged impact) as a probability, and to multiply it by something. Other bad patterns include inversion of probability: why are you so extremely certain in the negation of an event? People expect that probabilities close to 1 require evidence, and without any they are reluctant to assign something close to 1, even though in that case it represents a sum over almost the entire hypothesis space.
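To illustrate the normalization point with invented numbers (the scenario count and the 0.01 "gut feeling" scores are made up): forcing the scores over mutually exclusive scenarios to sum to 1 drives each individual probability down, and the probability of the negation of any one scenario rises toward 1 without any special evidence for that negation.

```python
# Invented toy model: 1,000,000 mutually exclusive far-fetched scenarios,
# each assigned a "gut feeling" score of 0.01, treated as exhaustive.
n = 1_000_000
raw_scores = [0.01] * n

total = sum(raw_scores)              # ~10,000 -- nowhere near a probability distribution
normalized = [s / total for s in raw_scores]

p_single = normalized[0]             # 1e-6: each individual scenario becomes appropriately tiny
p_negation = 1 - p_single            # ~0.999999: the negation of one scenario is a sum over
                                     # almost the entire hypothesis space, so a probability
                                     # near 1 needs no special evidence of its own.
print(p_single, p_negation)
```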
With respect to the other points, I agree that estimation is hard, but the difficulties you cite seem to fit pretty squarely into the simple theoretical framework of computing a well-calibrated estimate of expected value. So to the extent there are gaps between that simple framework and reality, these difficulties don’t point to them.
not a question for which you actually know the expert consensus.
What I mean is that I do not see the people most educated in these matters (or, indeed, in the theory) running “rationality workshops” that advocate explicit theory-based reasoning. And the people I do see, I would not even suspect of expertise if they hadn't themselves claimed it.
This would be a fine response if I were trying to cast myself as better than experts because I have such an excellent clean theory (and I have little patience with Eliezer for doing this). But in fact I am just trying to say relatively simple things in the interest of building up an understanding.
Yes, I certainly agree here: first take simple steps in the right direction.
I think mostly you are arguing against LW in general, which seems fine but not particularly helpful here or relevant to my point.
Some people were worrying over a very far-fetched scenario, being unable to assign it a low enough probability. The property of summing to 1 over the enormous number of likewise far-fetched, mutually exclusive scenarios would definitely have helped, compared to the state where, I suspect, the assigned probabilities summed to a very large number.
What is the “very far-fetched scenario”? If you mean the intelligence explosion scenario, I do think this is reasonably unlikely, but:
Eliezer thinks this scenario is very likely, and many people around here agree. This is hardly a problem of unwillingness to assign a probability sufficiently close to 0.
In what sense is fast takeoff one hypothesis out of a very large number of equally plausible hypotheses? It seems like a fast takeoff is a priori reasonably likely, and the main reasons you think it unlikely are that experts don't take it seriously and that it seems incongruous with other tech progress. This seems unrelated to your critique.