Some really fast comments on the Pascal’s Mugging part:
1) For ordinary x-risk scenarios, the Hansonian inverse-impact adjustment for “you’re unlikely to have a large impact” is within conceivable reach of the evidence—if the scenario has you affecting 10^50 lives in a future civilization, that’s just 166 bits of evidence required.
2) Of course, if you’re going to take a prior of 10^-50 at face value, you had better not start spouting deep wisdom about expert overconfidence when it comes to interpreting the likelihood ratios—only invoking “expert overconfidence” on one kind of extreme probability really is a recipe for being completely oblivious to the facts.
3) The Hansonian adjustment starts everything out at expected-value ratios of around 1: it says that, on your priors, all scenarios that put you in a unique position to affect different large numbers of people in the same per-person way have around the same expected value. Evidence then modifies this. If Pascal’s Mugger shows you evidence with a million-to-one Bayesian likelihood ratio favoring the scenario where they’re a Matrix Lord who has put you in a position to affect 3^^^3 lives, the upshot is that you treat your actions as having the power to affect a million lives. It’s exactly the same if they say 4^^^^4 lives are at stake (see the toy sketch after this list). It’s an interesting question whether this makes sense. I’m not sure it does.
4) But the way the Hansonian adjustment actually works out (the background theory that actually implements it in a case like this) is that after seeing medium amounts of evidence favoring the would-be x-risk charity, the most likely Hanson-adjusted hypothesis then becomes the non-Bayesian-disprovable scenario that rather than being in one of those amazingly unique pre-Singularity civilizations that can actually affect huge numbers of descendants, you’re probably in an ancestor simulation instead; or rather, most copies of you are in ancestor simulations and your average impact is correspondingly diluted. Holden Karnofsky would probably not endorse this statement, and to be coherent should also reject the Hansonian adjustment.
5) The actual policy recommendation we get out of the Hansonian adjustment is not for people to be skeptical of the prima facie causal mechanics of existential risk reduction efforts. The policy recommendation we get is that you’re probably in a simulation instead, whereupon UDT says that the correct utilitarian policy is for everyone to, without updating on the circumstances of their own existence, try to think through a priori what sort of ancestor simulations they would expect to exist and which parts of the simulation would be of most interest to the simulator (and hence simulated in the greatest detail, with the largest amount of computing power expended on simulating many slightly different variants), and then expend extra resources on policies that would, if implemented across both real and simulated worlds, make the most intensely simulated part of ancestor simulations pleasant for the people involved. A truly effective charity should spend money on nicer accommodations and higher-quality meals for decision theory conferences, or better yet, seek out people who have already led very happy lives and convince them to work on decision theory. Holden would probably not endorse this either.
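To make point (3) concrete, here is a minimal toy sketch in Python of how a 1/N leverage prior interacts with a finite likelihood ratio. The 1/N form, the function name, and the stand-in numbers are my own illustrative assumptions rather than anyone's official formulation, and 10^100 and 10^200 stand in for 3^^^3 and 4^^^^4 only because the latter are far too large to write down:

```python
# Toy sketch of a 1/N leverage ("Hansonian") prior -- an illustrative
# assumption for this comment, not an exact statement of anyone's position.
def expected_lives_affected(claimed_lives, likelihood_ratio, prior_scale=1.0):
    prior = prior_scale / claimed_lives      # penalty proportional to claimed impact
    posterior = prior * likelihood_ratio     # unnormalized Bayesian update
    return posterior * claimed_lives         # expected impact = probability * stakes

# A million-to-one likelihood ratio yields the same effective impact no matter
# how large the claimed stakes are:
print(expected_lives_affected(10**100, 10**6))   # ~1e6 (standing in for 3^^^3)
print(expected_lives_affected(10**200, 10**6))   # ~1e6 (standing in for 4^^^^4)
```

The point of the sketch is just that the claimed stakes cancel out: under this kind of prior, only the likelihood ratio moves the effective expected impact.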
I just boggled slightly there: 166 completely independent bits of evidence is a lot for a novel argument, and “just” is a strange word to put next to it.
True, that was a strange word. I may have been spending too much time thinking about large numbers lately. My point is that it’s not literally unreachable the way a Levin-prior penalty on running speed makes quantum mechanics (in all forms) absolutely implausible relative to any amount of evidence you can possibly collect, or the Hansonian penalty makes ever being in a position to influence 3^^^3 future lives “absolutely implausible” relative to any amount of info you can collect in less than log(3^^^3) time, given that your sensory bandwidth is on the order of a few megabits per second.
As soon as you start trying to be “reasonable” or “skeptical” or “outside view” or whatever about the likelihood ratios involved in the evidence, 10^-50 obviously becomes an eternally unreachable prior penalty, since, after all, over the course of the human species people have completely hallucinated more unlikely things due to insanity on far fewer than 10^50 tries, etcetera. That’s part of what I was trying to get at with (2). But if you’re saying that, then it’s also quite probable that the Hansonian adjustment is inappropriate, or that you otherwise screwed up the calculation of the 10^-50 prior probability and it is actually higher. It is sometimes useful to be clever about adjustments, it is sometimes useful to at least look at the unadjusted utilities to see what the sheer numbers would say if taken at face value, and it is never useful to be clever about adjusting only one side of the equation while taking the other at face value.
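Spelling out the arithmetic behind the 166-bit figure, as a quick sanity check (the million-to-one ratio is just the one from the mugger example above):

```python
import math

# Bits of evidence needed to overcome a 10^-50 prior penalty:
# one bit corresponds to a 2:1 likelihood ratio.
bits_for_prior = 50 * math.log2(10)
print(bits_for_prior)        # ~166.1

# For comparison, the mugger's million-to-one likelihood ratio is worth only:
bits_for_million = math.log2(10**6)
print(bits_for_million)      # ~19.9
```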
That expresses what I thought better than I could have myself.
We can have a new site slogan. “Participate on LessWrong to increase your simulation measure!”
You should only do things that increase your simulation measure after receiving good personal news or when you are unusually happy, obviously.
This isn’t obvious. Or, rather, this is a subjective preference and people who prefer to increase their simulation measure independently of attempts to amplify (one way of measuring the perception of) good events are far from incoherent. For that matter people who see no value in increasing simulation measure specifically for good events are also quite reasonable (or at least not thereby shown to be unreasonable).
Your ‘should’ here prescribes preferences to others, rather than (merely) explaining how to achieve them.
Previously discussed here.
(EDIT: I see that you already commented on that thread, but I’m leaving this comment here for anyone else reading this thread.)