Why are people boggling at the 1-in-a-billion figure? You think it’s not plausible that there are three independent 1-in-a-thousand events that would have to go right for EY to “play a critical role in Friendly AI success”? Not plausible that there are nine 1-in-10 events that would have to go right? Don’t I keep hearing “shut up and multiply” around here?
Edit: Explain to me what’s going on. I say that it seems to me that events A and B are likely to occur with probabilities P(A) and P(B). And you’re allowed to object that I must have made a mistake, just because P(A) times P(B) seems too small to you? (That’s leaving aside the idea that 10-to-the-minus-nine counts as one of these too-small-to-be-believed numbers, which is seriously making me physiologically angry, ha-ha.)
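To spell out the multiplication, here is a minimal sketch in Python, assuming the component events really are independent; the 1-in-1000 and 1-in-10 figures are just the illustrative ones above, not estimates of anything:

    # Three independent 1-in-1000 events, or nine independent 1-in-10 events:
    # under independence, the joint probability is just the product.
    p_three_rare = 0.001 ** 3   # ~1e-09
    p_nine_modest = 0.1 ** 9    # ~1e-09
    print(p_three_rare, p_nine_modest)  # both print roughly 1e-09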
The 1-in-a-billion figure follows not from it being plausible that there are three such events, but from it being virtually certain. Models that don’t require such events will end up dominating the final probability. I can easily imagine that if I magically happened upon a very reliable understanding of some factors relevant to future FAI development, the 1-in-a-billion figure would be the right thing to believe. But I can easily imagine it going the other way, and absent such understanding, I have to use estimates much less extreme than that.
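A minimal sketch of the “models dominate” point, with invented weights and conditional probabilities (none of these numbers is an estimate of anything real): if even one model in which only modest successes are required gets non-trivial weight, it sets the floor for the mixture.

    # P(critical role) = sum over models of P(model) * P(critical role | model).
    # Both the model weights and the conditional probabilities are made up for illustration.
    models = [
        ("many independent unlikely successes required", 0.99, 1e-9),
        ("only modestly improbable successes required",  0.01, 1e-2),
    ]
    p_final = sum(weight * p_conditional for _, weight, p_conditional in models)
    print(p_final)  # ~1e-4: the second model dominates despite its small weight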
I’m having trouble parsing your comment. Could you clarify?
A billion is not so big a number. Its reciprocal is not so small a number.
Edit: Specifically, what’s “it” in “it being virtually certain”? And in the second sentence: models of what, final probability of what?
Edit 2: −1 now that I understand. +1 on the child, namaste. (+1 on the child, but I just disagree about how big one billion is. So what do we do?)
“it being virtually certain that there are three independent 1 in 1000 events required, or nine independent 1 in 10 events required, or something along those lines”
“models of what, final probability of what?”

Models of the world that we use to determine how likely it is that Eliezer will play a critical role through a FAI team. The final probability of that happening.
A billion is big compared to the relative probabilities we’re rationally entitled to have between models where a series of very improbable successes is required, and models where only a modest series of modestly improbable successes is required.
Yes, this is of course what I had in mind.
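To put rough numbers on the point that “a billion is big compared to the relative probabilities we’re rationally entitled to have between models” (purely illustrative assumptions): for the mixture to come out anywhere near 1-in-a-billion, the models where only modestly improbable successes are required would have to get an extraordinarily small share of the probability mass.

    # How much weight can the "only modestly improbable successes" models carry
    # before they push the mixture well above 1e-9?  Illustrative numbers only.
    p_given_modest_model = 1e-2   # assumed P(critical role | modest-successes model)
    target_final = 1e-9           # the 1-in-a-billion figure
    max_weight = target_final / p_given_modest_model
    print(max_weight)  # 1e-07: such models must jointly get less than about 1-in-10-million weight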