[edit: looks like Rob posted elsethread a comment addressing my question here]
I’m a bit confused by this argument, because I thought MIRI-folk had been arguing against this specific type of logic. (I might be conflating a few different types of arguments, or might be conflating ‘well, Eliezer said this, so Rob automatically endorses it’, or some such).
But I recall recommendations to generally not try to get your expected value from multiplying tiny probabilities against big values, because (a) in practice that tends to lead to cognitive errors, and (b) in many cases people were saying things like “x-risk is a small probability of a Very Bad Outcome” when the actual argument was “x-risk is a big probability of a Very Bad Outcome.”
(Right now maybe you’re making a different argument, not about what humans should do, but about some underlying principles that would be true if we were better at thinking about things?)
Quoting the relevant excerpt from Eliezer’s Being Half-Rational About Pascal’s Wager is Even Worse:

[...] And finally, I once again state that I abjure, refute, and disclaim all forms of Pascalian reasoning and multiplying tiny probabilities by large impacts when it comes to existential risk. We live on a planet with upcoming prospects of, among other things, human intelligence enhancement, molecular nanotechnology, sufficiently advanced biotechnology, brain-computer interfaces, and of course Artificial Intelligence in several guises. If something has only a tiny chance of impacting the fate of the world, there should be something with a larger probability of an equally huge impact to worry about instead. You cannot justifiably trade off tiny probabilities of x-risk improvement against efforts that do not effectuate a happy intergalactic civilization, but there is nonetheless no need to go on tracking tiny probabilities when you’d expect there to be medium-sized probabilities of x-risk reduction.
[...] EDIT: To clarify, “Don’t multiply tiny probabilities by large impacts” is something that I apply to large-scale projects and lines of historical probability. On a very large scale, if you think FAI stands a serious chance of saving the world, then humanity should dump a bunch of effort into it, and if nobody’s dumping effort into it then you should dump more effort than currently into it. On a smaller scale, to compare two x-risk mitigation projects in demand of money, you need to estimate something about marginal impacts of the next added effort (where the common currency of utilons should probably not be lives saved, but “probability of an ok outcome”, i.e., the probability of ending up with a happy intergalactic civilization). In this case the average marginal added dollar can only account for a very tiny slice of probability, but this is not Pascal’s Wager. Large efforts with a success-or-failure criterion are rightly, justly, and unavoidably going to end up with small marginally increased probabilities of success per added small unit of effort. It would only be Pascal’s Wager if the whole route-to-an-OK-outcome were assigned a tiny probability, and then a large payoff used to shut down further discussion of whether the next unit of effort should go there or to a different x-risk.
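The marginal-impact point in the quoted passage can be made concrete with a toy expected-value comparison. All numbers below are hypothetical illustrations, not estimates from the thread, and the linear-returns assumption is a deliberate simplification:

```python
# Toy comparison of two x-risk projects by marginal impact per added dollar,
# in the common currency of "probability of an ok outcome".
# All numbers are hypothetical; real returns would not be linear.

def marginal_delta_p(delta_p_per_million: float, dollars: float) -> float:
    """Probability-of-ok-outcome added by `dollars` of funding,
    assuming (unrealistically) linear returns to effort."""
    return delta_p_per_million * (dollars / 1_000_000)

# Project A: each $1M adds 0.1 percentage points to P(ok outcome).
# Project B: each $1M adds 0.003 percentage points.
a = marginal_delta_p(0.001, 1000)    # one marginal $1000 to project A
b = marginal_delta_p(0.00003, 1000)  # one marginal $1000 to project B

# Each per-dollar slice is tiny (~1e-6 and ~3e-8), but that is unavoidable
# for any large success-or-failure project: the right move is to compare
# the slices, not to dismiss both as "Pascalian" because each is small.
assert a > b
print(a, b)
```

The point of the sketch is that "the average marginal added dollar can only account for a very tiny slice of probability" is compatible with ordinary expected-value comparison between projects; nothing in it requires multiplying a tiny probability of the whole route working by a huge payoff.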
From my perspective, the name of the game is ‘make the universe as a whole awesome’. Within that game, it would be silly to focus on unlikely fringe x-risks when there are high-probability x-risks to worry about; and it would be silly to focus on intervention ideas that have a one-in-a-million chance of vastly improving the future, when there are other interventions that have a one-in-a-thousand chance of vastly improving the future, for example.
That’s all in the context of debates between longtermist strategies and candidate megaprojects, which is what I usually assume is the discussion context. You could have a separate question that’s like ‘maybe I should give up on ~all the value in the universe and have a few years of fun playing sudoku and watching Netflix shows before AI kills me’.
In that context, the basic logic of anti-Pascalian reasoning still applies (easy existence proof: if working hard on x-risk raised humanity’s odds of survival from 1/10^10^100 to 5/10^10^100, it would obviously not be worth working hard on x-risk), but I don’t think we’re anywhere near the levels of P(doom) that would warrant giving up on the future.
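The existence proof above can be checked numerically. Since 10^10^100 has far too many digits to represent directly, a sketch can work in log10 space (illustrative Python, not anything from the thread):

```python
import math

# Work in log10 space: 10**(10**100) has ~1e100 digits, so the probability
# itself cannot be stored, but its base-10 logarithm can.
# log10(P) for P = 1/10^(10^100) is -(10**100).
log_p_before = -(10.0 ** 100)

# Quintupling P adds log10(5) ~ 0.699 to the log-probability.
log_p_after = log_p_before + math.log10(5)

# Floats near 1e100 are spaced ~1.9e84 apart, so the +0.699 is absorbed
# entirely by rounding -- a vivid version of "the gain is negligible":
assert log_p_after == log_p_before

# By contrast, moving P(survival) from 1/100 to 5/100 gains 0.04 of
# absolute probability: enormous stakes, nothing Pascalian about it.
gain_mundane = 5 / 100 - 1 / 100
assert abs(gain_mundane - 0.04) < 1e-12
```

The contrast between the two cases is the whole argument: what matters is the absolute probability gained per unit of effort, and at 1-in-100 odds that gain is still vast.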
‘There’s no need to work on supervolcano-destroying-humanity risk when there are much more plausible risks like bioterrorism-destroying-humanity to worry about’ is a very different sort of mental move than ‘welp, humanity’s odds of surviving are merely 1-in-100, I guess the reasonable utility-maximizing thing to do now is to play sudoku and binge Netflix for a few years and then die’. 1-in-100 is a fake number I pulled out of a hat, but it’s an example of a very dire number that’s obviously way too high to justify humanity giving up on its future.
(This is all orthogonal to questions of motivation. Maybe, in order to avoid burning out, you need to take more vacation days while working on a dire-looking project, compared to the number of vacation days you’d need while working on an optimistic-looking project. That’s all still within the framework of ‘trying to do longtermist stuff’, while working with a human brain.)
One additional thing adding confusion is Nate Soares’ latest threads on wallowing*, which I think are probably compatible with all this, but I couldn’t pass the ITT on them.
*I think his use of ‘wallowing’ is fairly nonstandard, you shouldn’t read into it until you’ve talked to him about it for at least an hour.
Where do I find these threads?
Ah, this was in-person. (“Threads” was more/differently metaphorical than usual)