If I roll a 20-sided die until I roll a 1, the expected number of times I will need to roll the die is 20. Also, according to my current expectations, immediately before I roll the 1, I expect myself to expect to have to roll 20 more times. My future self will say it will take 20 more times in expectation, when in fact it will only take 1 more time. I can predict this in advance, but I can't do anything about it.

I think everyone should spend enough time thinking about this to see why there is nothing wrong with this picture. This is what uncertainty looks like, and it had to be this way.
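To make the "expect to expect 20 more" claim concrete, here is a minimal simulation sketch (hypothetical Python using only the standard library; none of it comes from the original comment). It conditions on having already rolled k times without seeing a 1 and checks the expected number of remaining rolls:

```python
import random

rng = random.Random(0)

def rolls_until_one() -> int:
    """Roll a fair d20 until the first 1; return how many rolls it took."""
    n = 1
    while rng.randint(1, 20) != 1:
        n += 1
    return n

trials = [rolls_until_one() for _ in range(200_000)]

# Memorylessness: among trials that lasted more than k rolls, the average
# number of *remaining* rolls is still about 20, no matter what k is.
for k in (0, 5, 10, 20):
    remaining = [n - k for n in trials if n > k]
    print(k, round(sum(remaining) / len(remaining), 2))
```

Every conditional average sits near 20: having already failed k times tells you nothing about how much longer the wait will be. That is the memorylessness of the geometric distribution, and it is exactly the picture described above.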
Yes, but ideally our prediction methods would allow us to predict events more accurately than flipping a coin does.
That's not how rolling a die works. Each roll is completely independent. The expected value of a single roll of a 20-sided die is 10.5, but there's no logical way to assign an expected outcome to any given roll. What you can calculate is how many times you'd have to roll before you're more likely than not to have rolled a specific value: you need (1 - P)^n < 0.5, i.e. n > log(0.5)/log(1 - P), where P is the probability of that specific value. Here P = 1/20 = 0.05, so n > log(0.5)/log(0.95) ≈ 13.513. So you're more likely than not to have rolled a "1" after 14 rolls, but that still doesn't tell you what to expect your nth roll to be.
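For what it's worth, that threshold checks out numerically (a quick sketch, not part of the original comment):

```python
import math

p = 1 / 20  # chance of rolling a 1 on any single d20 roll

# Smallest n with (1 - p)**n < 0.5: the point where a 1 becomes
# more likely than not to have appeared.
n = math.log(0.5) / math.log(1 - p)
print(n)              # ~13.51
print(math.ceil(n))   # 14
print((1 - p) ** 14)  # ~0.488, the chance of still having no 1 after 14 rolls
```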
I don't see how your dice-rolling example supports a pacifist outlook. We're not rolling dice here. This is a subject we can study and gain more information about to understand the different outcomes better. You can't do that with a die. The outcomes of rolling a die are not so dire. Probability is quite useful for making decisions in the face of uncertainty if you understand it better.
You are saying the same thing as the comment you are replying to.
How? The person I'm responding to gets the math of probability wrong and uses it to make a confusing claim that "there's nothing wrong," as though we have no more agency over the development of AI than we do over the chaotic motion of a die.
It’s foolish to liken the development of AI to a roll of the dice. Given the stakes, we must try to study, prepare for, and guide the development of AI as best we can.
This isn't hypothetical. We've already built a machine that's more intelligent than any man alive and which brutally optimizes toward a goal that's incompatible with the good of mankind. We call it "Global Capitalism." There isn't a man alive who knows how to stock the shelves of stores all over the world with #2 pencils that cost only 2 cents each, yet it happens every day because *the system* knows how. The problem is that the system operates with a sociopathic disregard for life (human or otherwise) and has exceeded all limits of sustainability without so much as slowing down. It's a short-sighted, cruel leviathan, and there's no human at the reins.
At this point, it's not about waiting for the dice to settle; it's about figuring out how to wrangle such a beast and prevent the creation of more.
> uses it to make a confusing claim that "there's nothing wrong" as though we have no more agency over the development of AI than we do over the chaotic motion of a die.
>
> It's foolish to liken the development of AI to a roll of the dice. Given the stakes, we must try to study, prepare for, and guide the development of AI as best we can.

I think you're misinterpreting the original comment. Scott was talking about there being "nothing wrong" with this conception of epistemic uncertainty before the 1 arrives, where each new roll doesn't tell you anything about when the 1 will come. He isn't advocating pacifism about AI risk, though. Ironically enough, in his capacity as lead of the Agent Foundations team at MIRI, Scott is arguably one of the least AI-risk-passive people on the planet.
> The person I'm responding to gets the math of probability wrong.

No, they are correctly describing the geometric distribution of the wait for a first success, which is the right model for this situation and compatible with what you say too. AFAICT they're not saying anything about AI or morality.
A quick simulation of the expected number of rolls gives an empirical mean of about 20.05751, consistent with the expected value of 20.
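Here is a minimal sketch of such a simulation (hypothetical Python; the original snippet wasn't preserved, and the printed figure will vary with the seed):

```python
import random

rng = random.Random(42)

def rolls_until_one() -> int:
    """Number of d20 rolls needed to see the first 1."""
    n = 1
    while rng.randint(1, 20) != 1:
        n += 1
    return n

# Empirical mean over many trials: should land near 1/p = 20,
# e.g. something like 20.05751 on a given run.
print(sum(rolls_until_one() for _ in range(100_000)) / 100_000)
```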