I thought the standard (human) solution is to evaluate the probability first and reject anything at the noise level, before even considering utilities. Put the scope insensitivity to good use. Unfortunately, as an amateur, I don’t understand Eliezer’s (counter?) point; maybe someone can give a popular explanation. Or is this what Kindly did?
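A minimal sketch of that “noise floor” rule, assuming a made-up threshold and made-up payoffs (none of the numbers below come from the thread):

```python
# Sketch of the "reject anything at the noise level" heuristic: drop any
# outcome whose probability is below the noise floor of our estimates,
# and only then compare expected utilities. Threshold and payoffs are
# illustrative assumptions.

NOISE_FLOOR = 1e-9  # below this, treat the probability as indistinguishable from 0

def best_option(options, noise_floor=NOISE_FLOOR):
    """Return (name, expected utility) of the best option, ignoring
    outcomes whose probability is under the noise floor."""
    best_name, best_ev = None, float("-inf")
    for name, outcomes in options.items():
        ev = sum(p * u for p, u in outcomes if p >= noise_floor)
        if ev > best_ev:
            best_name, best_ev = name, ev
    return best_name, best_ev

# A Pascal's-mugging-style offer: tiny probability, astronomical payoff.
options = {
    "pay the mugger": [(1e-30, 1e40), (1.0 - 1e-30, -5.0)],
    "keep the money": [(1.0, 0.0)],
}
print(best_option(options))  # ('keep the money', 0.0): the 1e-30 branch is ignored
```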
I don’t think that’s it. If there’s an asteroid on an orbit that passes near Earth, and the measurement errors are such that there’s a 10^-6 chance of it hitting the Earth, we absolutely should spend money on improving the measurements. Likewise for money spent on the reliability of the software controlling nuclear missiles.
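For concreteness, a back-of-the-envelope version of the asteroid case; every figure here is my own illustrative assumption, not something taken from the comment:

```python
# Rough expected-value check on the 10^-6 asteroid scenario.
# All figures below are illustrative assumptions.

p_impact = 1e-6           # impact probability implied by current measurement error
deaths = 7e9              # worst case: everyone
value_per_life = 1e7      # illustrative dollar value of a statistical life

expected_loss = p_impact * deaths * value_per_life
print(f"expected loss: ${expected_loss:,.0f}")  # expected loss: $70,000,000,000

# Better measurements would (with probability ~1 - 1e-6) rule the impact out,
# so they are worth funding as long as they cost much less than that figure.
measurement_cost = 5e8    # e.g. a $500M tracking mission (assumption)
print("worth it:", measurement_cost < expected_loss)  # worth it: True
```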
On the other hand, prior probability assignment is imperfect, and there may be hypotheses (expressed as strings in English) which have unduly high priors; a sufficiently clever selfish agent can find and use one such hypothesis to earn a living, potentially diverting funds from legitimate issues.
That’s a good point, actually. (Your comment probably got downvoted owing to your notoriety, not its content.) But somehow the asteroid example does not feel like Pascal’s mugging. Maybe because we are more sure of the accuracy of the probability estimate, whereas for Pascal’s mugging all we get is an upper bound?
If we’re to talk directly about what Dmytry is talking about without beating around the bush: an asteroid killed the dinosaurs and AI did not, therefore discussion of asteroids is not Pascal’s mugging and discussion of AI risks is; the former makes you merely scientifically aware, and the latter makes you a fraudster, a crackpot, or a sucker. If AI risks were real, then surely the dinosaurs would have been killed by an AI instead.
So, let’s all give money to prevent an asteroid extinction event that we know only happens once in a few hundred million years, and let’s not pay any attention to AI risks, because AI risks after all never happened to the dinosaurs, and must therefore be impossible, much like self-driving cars, heavier-than-air flight, or nuclear bombs.
Downvoted for ranting.
I thought it was satire?
It is. It’s also a rant.
It makes a good point, mind, and I upvoted it, but it’s still needlessly ranty.
Then maybe my sarcasm/irony/satire detector is broken.
Well, I seriously doubt Aris actually thinks AI risks are Pascal’s Mugging by definition. That doesn’t prevent this from being a rant, it’s just a sarcastic rant.
What got it downvoted was the smirkiness of the last paragraph, which is yet another one of Dmytry’s not-so-veiled accusations that EY and all of MIRI are a bunch of fraudsters. I hate this asshole’s tactics. Whenever he doesn’t directly lie and slander, he prances about smirking and insinuating.
It was downvoted by a total of 3 people: 1 of them definitely ArisKatsaris, and 2 others uncomfortable about the “sufficiently clever selfish agent”.
I am thinking speculation is the key, especially guided speculation. Speculation comes together with misevaluation of probability, misevaluation of consequences, and misevaluation of the utility of the alternatives; basically every issue with how humans process hypotheses can get exploited, since the mugger is also human and would first try to convince himself. In real-world examples you can see tricks such as asking “why are you so certain of [the negation of the proposition]”, re-evaluating only the consequences of giving the money without re-evaluating the consequences of keeping it, teaching victims to evaluate claims in a way that makes them more susceptible (update one side of the utility comparison, then act on the spurious difference), and so on.
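A toy way to make the “update one side of the utility comparison” trick concrete; the numbers, and the assumption that both actions share the astronomical stakes if the claim were true, are mine:

```python
# Toy illustration of the one-sided-update trick (made-up numbers).
p = 1e-6                              # probability the extraordinary claim is true

# Symmetric update: if the claim is true, keeping the money also has
# astronomical consequences (it could be spent on the same cause later).
ev_give = p * 1e9 + (1 - p) * (-100)  # hand over $100: huge upside if true
ev_keep = p * 1e9 + (1 - p) * 0       # keep it: the same upside is still reachable
print(ev_give - ev_keep)              # about -100: the huge terms cancel

# The trick: attach the astronomical term only to "give" and quietly leave
# "keep" at its old, pre-claim value of 0.
ev_keep_biased = 0
print(ev_give - ev_keep_biased)       # about +900: a purely spurious advantage
```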
Low probabilities really are a red herring. Even if you were to do something formal, like Solomonoff induction (physics described by a Turing machine tape), you could make some ridiculous modification to the laws of physics which adds invisible consequences to something, in as little as 10 to 20 extra bits. All actions would then be dominated by the simplest modification to the laws of physics that leaves the most bits for making up huge consequences.
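A quick numeric version of the bits argument; the 20 bits and the stand-in payoff are both chosen arbitrarily:

```python
# Under a Solomonoff-style prior, a hypothesis k bits longer than the base
# physics is penalized only by a factor of 2**-k, while the bits it spends
# can describe a payoff that grows far faster than that penalty shrinks.
# All numbers here are purely illustrative.

extra_bits = 20
prior_penalty = 2.0 ** -extra_bits      # roughly 1e-6

claimed_utility = 10.0 ** 300           # stand-in for "absurdly large", still a float
print(prior_penalty * claimed_utility)  # ~9.5e293: swamps any ordinary consideration
```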