The second is “optimism bias” (a good thing in this context): “If the predictors disagree about the probabilities conditional on any action, the decision maker acts as though they believe the more optimistic one.” (This is as opposed to taking the market average, which I assume is what Hanson had in mind with his futarchy proposal.) If you don’t have optimism bias, then you get failure modes like the ones pointed out in Obstacle 1 of Scott Garrabrant’s post “Two Major Obstacles for Logical Inductor Decision Theory”: one predictor/trader could claim that the optimal action will lead to disaster and thus cause the optimal action to never be taken and her prediction to never be tested.

This optimism bias is reminiscent of some other ideas. For example, some ideas for solving the 5-and-10 problem are based on first searching for proofs of high utility. Decision auctions also work based on this optimism. (Decision auctions work like this: auction off the right to make the decision on my behalf to the highest bidder. The highest bidder has to pay their bid (or maybe the second-highest bid) and gets paid in proportion to the utility I obtain.) Maybe getting too far afield here, but the UCB term in bandit algorithms also works this way in some sense: if you’re still quite unsure how good an action is, pretend that it is very good (as good as the upper bound of a confidence interval).
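To make the rule concrete, here is a minimal sketch of both the “believe the most optimistic predictor” rule and the UCB analogy. The forecasts, numbers, and function names are my own illustrative assumptions, not taken from any of the posts mentioned above.

```python
import math

# Hypothetical example: each predictor reports an expected utility for each action.
# Under the optimism-bias rule, the decision maker scores each action by the MOST
# optimistic predictor's forecast for it, then takes the best-scoring action.
def choose_with_optimism_bias(forecasts):
    """forecasts: dict mapping action -> list of expected utilities, one per
    predictor. Returns the action whose most optimistic forecast is highest."""
    return max(forecasts, key=lambda action: max(forecasts[action]))

# The UCB term in bandit algorithms is optimistic in a similar sense: an action
# you are still unsure about is credited with the upper end of its confidence
# interval, so a pessimistic estimate alone cannot keep it from ever being tried.
def ucb1_index(mean_reward, times_tried, total_pulls):
    """Standard UCB1 index: empirical mean plus an exploration bonus that is
    large when the action has been tried only a few times."""
    return mean_reward + math.sqrt(2 * math.log(total_pulls) / times_tried)

if __name__ == "__main__":
    # Two predictors disagree about action "b"; the optimistic one wins, so "b"
    # still gets taken and the pessimistic prediction actually gets tested.
    forecasts = {"a": [0.6, 0.55], "b": [0.9, 0.1]}
    print(choose_with_optimism_bias(forecasts))  # -> "b"
```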
I want to mention this because I think it is one of the reasons I get epistemically queasy about claims of future doom, and why the people who disagree with some AI doomers are more rational than the doomers think. In particular, it’s why claiming we should stop progress on AI isn’t actually a good thing: optimism bias serves a very useful epistemic purpose.
In particular, it keeps us from moving the goalposts on doom, because the problem with doom theories is that you can always move the goalposts to the next thing, or the next year, and that is extremely bad once you consider that we have confirmation biases.