I don’t see what’s wrong with the idea that “extraordinary claims require extraordinary evidence”.
Me neither, but quite a few people on lesswrong don’t seem to share that opinion, or else they are in possession of vast amounts of evidence that I lack. For example, some people seem to take seriously ideas like “interference from an alternative Everett branch in which a singularity went badly” or “an unfriendly AI that might achieve complete control over our branch by means of acausal trade”. Fascinating topics for sure, but in my opinion far too detached from reality to be taken at all seriously. Those ideas are merely logical implications of theories that we deem to be reasonable. Another theory that is by itself reasonable is then used to argue that logical implications do not have to pay rent in future anticipations. And in the end, due to a combination of reasonable theories, one ends up with completely absurd ideas. I don’t see how this could have happened if one had followed the rule that “extraordinary claims require extraordinary evidence”.
I don’t understand in what way the linked comment says anything about interference from alternative Everett branches. Did you mean to link to something else?
I’m not sure what the majority view is on less wrong, but none of the people I have met in real life advocate making decisions based on (very) small probabilities of (very) large utility fluctuations. I think AI has probability at least 1% of destroying most human value under the status quo. I think 1% is a large enough number that it’s reasonable to care a lot, although it’s also small enough that it’s reasonable not to care. However, I also think that the probability is at least 20%, and that is large enough that I think it is unreasonable not to care (assuming that preservation of humanity is one of your principal terminal values, which it may or may not be).
Does this mean that I’m going to drop out of college to work at SingInst? No, because that closes a lot of doors. Does it mean that I’m seriously reconsidering my career path? Yes, and I am reasonably likely to act on those considerations.
I think AI has probability at least 1% of destroying most human value under the status quo. I think 1% is a large enough number that it’s reasonable to care a lot, although it’s also small enough that it’s reasonable not to care. However, I also think that the probability is at least 20%
Without machine intelligence, every single human alive today dies.
One wonders how that value carnage would be quantified—using the same scale.
However, I also think that the probability is at least 20%, and that is large enough that I think it is unreasonable not to care (assuming that preservation of humanity is one of your principal terminal values, which it may or may not be).
I agree.
I’m not sure what the majority view is on less wrong, but none of the people I have met in real life advocate making decisions based on (very) small probabilities of (very) large utility fluctuations.
No, I think some people here do use the +20% estimate of the risks from AI and act on some implications of its logical implications. See here, which is the post that the comment I linked to was discussing. I chose that post because it resembles ideas put forth in another post on lesswrong that was banned because of the perceived risks and because people got nightmares from it.
I don’t see what’s wrong with the idea that “extraordinary claims require extraordinary evidence”.
Me neither, but quite a few people on lesswrong don’t seem to share that opinion, or else they are in possession of vast amounts of evidence that I lack. For example, some people seem to take seriously ideas like “interference from an alternative Everett branch in which a singularity went badly” or “an unfriendly AI that might achieve complete control over our branch by means of acausal trade”. Fascinating topics for sure, but in my opinion far too detached from reality to be taken at all seriously.
I think you only get significant interference from “adjacent” worlds—but sure, this sounds a little strange, the way you put it.
If we go back to the Pascal’s wager post though—Eliezer Yudkowsky just seems to be saying that he doesn’t know how to build a resource-limited version of Solomonoff induction that doesn’t make the mistake he mentions. That’s fair enough—nobody knows how to build high-quality approximations of Solomonoff induction—or we would be done by now. The point is that this isn’t a problem with Solomonoff induction, or with the idea of approximating it. It’s just a limitation in Eliezer Yudkowsky’s current knowledge (and probably everyone else’s). I fully expect that we will solve the problem, though. Quite possibly, to do so we will have to approximate Solomonoff induction in the context of some kind of reward system or utility function, so that we know which mispredictions are costly (e.g. by resulting in getting mugged), which will guide us to the best points at which to apply our limited resources.
If we go back to the Pascal’s wager post though—Eliezer Yudkowsky just seems to be saying that he doesn’t know how to build a resource-limited version of Solomonoff induction that doesn’t make the mistake he mentions.
It has nothing to do with resource limitations; the problem is that Solomonoff induction itself can’t handle Pascal’s mugging. If anything, a resource-limited version of Solomonoff induction is less likely to fall for Pascal’s mugging, since it might round the small probability down to 0.
It has nothing to do with resource limitations; the problem is that Solomonoff induction itself can’t handle Pascal’s mugging.
In what way? You think that Solomonoff induction would predict enormous torture with a non-negligible probability if it observed the mugger not being paid? Why do you think that? That conclusion seems extremely unlikely to me—assuming that the Solomonoff inductor had had a reasonable amount of previous exposure to the world. It would, like any sensible agent, assume that the mugger was lying.
That’s why the original Pascal’s mugging post directed its criticism at “some bounded analogue of Solomonoff induction”.
In what way? You think that Solomonoff induction would predict enormous torture with a non-negligible probability if it observed the mugger not being paid?
Because Solomonoff induction bases its priors on minimum message length, and it’s possible to encode enormous numbers like 3^^^3 in a message of length much less than 3^^^3.
Why do you think that?
Because I understand mathematics. ;)
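To make the minimum-message-length point concrete, here is a minimal sketch in Python (the up_arrow helper and its name are mine, added purely for illustration): the expression 3^^^3 takes only a handful of symbols to write down, so the complexity penalty a Solomonoff-style prior charges for mentioning it is tiny, even though the number itself is beyond astronomical.

```python
# Illustrative sketch: a short program can name numbers far larger than itself.

def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow notation: a followed by n up-arrows, then b."""
    if n == 1:
        return a ** b
    result = 1
    for _ in range(b):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))   # 3^3  = 27
print(up_arrow(3, 2, 3))   # 3^^3 = 3**27 = 7625597484987
# up_arrow(3, 3, 3) is 3^^^3: a power tower of 3s about 7.6 trillion levels high.
# It will never finish running on real hardware, yet the program that names it
# is only a few dozen characters long, so its description length stays tiny.
```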
That’s why the original Pascal’s mugging post directed its criticism at “some bounded analogue of Solomonoff induction”.
What Eliezer was referring to is the fact that an unbounded agent would attempt to incorporate all possible versions of Pascal’s wager and Pascal’s mugging simultaneously and promptly end up with an ∞ − ∞ error.
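A toy illustration of that ∞ − ∞ problem (the priors and utilities below are invented for the example, not taken from Eliezer’s post): if hypothesis k has prior 2^-k but promises utility (-2)^k · k, every term of the expected-utility sum has magnitude k, so the positive and negative parts both diverge and the total depends on the order of summation.

```python
# Made-up hypotheses: hypothesis k has prior 2**-k and claimed utility (-2)**k * k,
# so each term contributes (-1)**k * k to the expected-utility sum.
terms = [(2 ** -k) * ((-2) ** k * k) for k in range(1, 20)]
print(terms[:6])    # [-1.0, 2.0, -3.0, 4.0, -5.0, 6.0]

# Partial sums swing between ever larger positive and negative values,
# so the "total" expected utility is undefined rather than merely large.
partials = [sum(terms[:i]) for i in range(1, len(terms) + 1)]
print(partials)     # [-1.0, 1.0, -2.0, 2.0, -3.0, 3.0, ...]
```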
You think that Solomonoff induction would predict enormous torture with a non-negligible probability if it observed the mugger not being paid?
Because Solomonoff induction bases its priors on minimum message length, and it’s possible to encode enormous numbers like 3^^^3 in a message of length much less than 3^^^3.
Sure—but the claim that there are large numbers of people waiting to be tortured also decreases in probability with the number of people involved.
I figure that Solomonoff induction would give a (correct) tiny probability for this hypothesis being correct.
Your problem is actually not with Solomonoff induction—despite what you say—I figure. Rather you are complaining about some decision theory application of Solomonoff induction—involving the concept of “utility”.
Because Solomonoff induction bases its priors on minimum message length, and it’s possible to encode enormous numbers like 3^^^3 in a message of length much less than 3^^^3.
Sure—but the claim that there are large numbers of people waiting to be tortured also decreases in probability with the number of people involved.
What does this have to do with my point?
I figure that Solomonoff induction would give a (correct) tiny probability for this hypothesis being correct.
It does, just not tiny enough to override the 3^^^3 utility difference.
Your problem is actually not with Solomonoff induction—despite what you say—I figure. Rather you are complaining about some decision theory application of Solomonoff induction—involving the concept of “utility”.
I don’t have a problem with anything, I’m just trying to correct misconceptions about Pascal’s mugging.
I’m just trying to correct misconceptions about Pascal’s mugging.
Well, your claim was that “Solomonoff induction itself can’t handle Pascal’s mugging”—which appears to be unsubstantiated nonsense. Solomonoff induction will give the correct answer based on Occamian priors and its past experience—which is the best that anyone could reasonably expect from it.
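To put rough numbers on the arithmetic being disputed here, a minimal sketch in Python, assuming (purely for illustration) that the mugger’s scenario costs about 100 extra bits to describe, and using 3^^4 as a stand-in for 3^^^3 since the latter cannot be represented at all: even a 2^-100 complexity penalty is nowhere near small enough to cancel a utility on that scale, which is the sense in which the prior alone is claimed not to settle the mugging.

```python
import math

# Assumed (made-up) cost of describing the mugger's scenario: 100 extra bits,
# giving it a Solomonoff-style prior of roughly 2**-100.
complexity_bits = 100
log2_prior = -complexity_bits

# 3^^^3 cannot be represented, so use the still-enormous stand-in
# 3^^4 = 3**(3^^3) and work entirely in log space.
three_up_up_three = 3 ** 3 ** 3                  # 3^^3 = 3**27 = 7,625,597,484,987
log2_utility = three_up_up_three * math.log2(3)  # log2(3^^4), about 1.2e13

# log2(prior * claimed utility): hugely positive, so the tiny prior does not
# come close to cancelling the claimed utility in an expected-value calculation.
print(log2_prior + log2_utility)                 # about 1.2e13
```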
Hold on. What does “extraordinary claim” mean? I see two possible meanings: (1) a claim that triggers the “absurdity heuristic”, or (2) a claim that is incompatible with many things that are already believed. The examples you gave trigger the absurdity heuristic, because they introduce large, weird structures into an area of concept space that does not normally receive updates. However, I don’t see any actual incompatibilities between them and my pre-existing beliefs.
It becomes extraordinary at the point where the expected utility of the associated logical implications demands taking actions that might lead to inappropriately high risks. Here “inappropriately” is measured relative to the original evidence that led you to infer those implications. If the evidence is insufficient, then discount some of the associated utility, where “insufficient” is measured intuitively. In conclusion: act according to your best formal theories, but don’t factor out your intuition.
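One way the heuristic above might be formalized, as a minimal sketch (the function, its name, and all of the numbers are hypothetical, invented for illustration): scale the expected utility of an implication by an intuitive measure of how well the original evidence supports the chain of inference.

```python
# Hypothetical formalization of the heuristic above; every number is a placeholder.

def discounted_expected_utility(probability: float, utility: float,
                                evidence_strength: float) -> float:
    """Expected utility scaled by how strongly the original evidence supports
    the chain of implications (0 = pure speculation, 1 = directly observed)."""
    return probability * utility * evidence_strength

# A far-fetched implication of otherwise reasonable theories:
print(discounted_expected_utility(1e-6, 1e9, 0.01))     # 10.0
# A mundane, well-evidenced claim:
print(discounted_expected_utility(0.5, -1000.0, 0.95))  # -475.0
```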
It becomes extraordinary at the point where the expected utility of the associated logical implications demands taking actions that might lead to inappropriately high risks.
So if I’m driving, and someone says “look out for that deer in the road!”, that’s an extraordinary claim because swerving is a large risk? Or did you push the question over into the word “inappropriately”?
Claims are only extraordinary with respect to theories.