(meta) Well, I’m quite relieved, because I think we’re finally converging rather than diverging.
No. Low complexity is not the same thing as symmetry.
Yes, sorry, “symmetry” was just how I pictured it in my head, but it’s not the right word. My point was that the particles aren’t acting independently; they’re constrained.
Mostly correct. However, given a low-complexity program that uses a large random input, you can make a low-complexity program that simulates it by iterating through all possible inputs, and running the program on all of them.
By the same token, you can write a low-complexity program to iteratively generate every number. That doesn’t mean all numbers have low complexity: a number only has low complexity if it is the unique output of some short program. If you tried to generate every combination and then pick one out as the unique output, the picking-one-out step would require high complexity.
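To make that concrete, here’s a rough Python sketch (my own toy illustration; enumerate_all and pick_one are made-up names): the enumerating program stays short however many strings it produces, but a program whose unique output is one particular n-bit string still has to encode roughly n bits somewhere, typically as the index used to pick it out.

```python
from itertools import product

# Toy illustration: enumerating everything is cheap, but singling out one
# output is not. "Description length" here is loosely the size of the
# Python source plus any inputs the program needs.

def enumerate_all(n):
    """Short program: yields every n-bit string. Its description stays the
    same size no matter how large n gets (beyond the few bits to write n)."""
    for bits in product("01", repeat=n):
        yield "".join(bits)

def pick_one(n, index):
    """A program whose unique output is one particular n-bit string. The
    catch: index ranges over 2**n values, so specifying it costs about
    n bits -- the complexity has just moved into the picking-one-out step."""
    return format(index, "b").zfill(n)

n = 8
all_strings = list(enumerate_all(n))   # 256 strings from a tiny program
target = pick_one(n, 0b10110111)       # naming this one string costs ~n bits
assert target in all_strings
print(len(all_strings), target)
```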
I think as a result of this whole discussion I can simplify my entire “finite resources” section to this one statement, which I might even edit into the original post (though at this stage I don’t think many more people are ever likely to read it):
“It is not possible to simulate n humans without resources of complexity at least n.”
Everything else can be seen as simply serving to illustrate the difference between a complexity of n and a complexity of complexity(n).
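As a rough sketch of what I mean by that distinction (the up_arrow function below is just my own few-line rendering of Knuth’s notation, nothing from the post itself): a handful of lines is enough to describe a number like 3^^^^3, so complexity(n) is tiny, whereas having resources on the scale of n itself is another matter entirely.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a followed by n arrows and then b. A few lines are
    enough to *describe* numbers like 3^^^^3 (= up_arrow(3, 4, 3)), which is
    why complexity(n) is tiny even when n is astronomically large."""
    if n == 1:
        return a ** b
    result = 1
    for _ in range(b):
        result = up_arrow(a, n - 1, result)
    return result

# Only small cases are evaluable; actually computing 3^^^^3 (let alone
# holding n distinct simulated humans) needs resources on the scale of n,
# not on the scale of this short description.
print(up_arrow(2, 2, 3))   # 2^^3 = 2**(2**2) = 16
print(up_arrow(3, 2, 2))   # 3^^2 = 3**3 = 27
```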
It would be quite surprising if none of the “C-like” theories could influence action, given that there are so many of them.
It’s easy to give a theory a posterior probability of less than 1/3^^^^3: just give it zero. Any theory that’s actually inconsistent with the evidence is simply disproven. What’s left are either theories which accept the observed event, i.e. those which have priors < 1/3^^^^3 (e.g. that the number chosen was 7 in my example), or theories which somehow reject either the observation itself or the logic tying the whole thing together.
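As a toy version of the number-chosen-was-7 example (the range N below is only a stand-in for something astronomically large, and the numbers are mine):

```python
from fractions import Fraction

# Toy update for "a number was drawn uniformly from a huge range and came
# up 7". N is just a stand-in for an astronomically large range.
N = 10**30
observed = 7

prior = Fraction(1, N)                 # prior for each "the number was k"

def likelihood(k):
    """Probability of the observation under hypothesis k."""
    return 1 if k == observed else 0

evidence = prior * 1                   # only k == 7 is consistent with it
posterior_7 = prior * likelihood(7) / evidence
posterior_8 = prior * likelihood(8) / evidence

print(posterior_7)   # 1 -- a hypothesis that started at 1/N, now certain
print(posterior_8)   # 0 -- inconsistent with the evidence, simply disproven
```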
It’s my view that theories which reject either observation or logic don’t motivate action because they give you nothing to go on. There are many of them, but that’s part of the problem since they include “the world is like X and you’ve failed to observe it correctly” for every X, making it difficult to break the symmetry.
I’m not completely convinced there can’t be alternative theories which don’t fall into the two categories above (either disproven or unhelpful), but they’re specific to the examples, so it’s hard to argue about them in general terms. In some ways it doesn’t matter if you’re right: even if there were always compelling arguments not to act on a belief with a prior of less than 1/3^^^^3, Pascal’s Muggle could give those arguments and not look foolish by refusing to shift his beliefs in the face of strong evidence. All I was originally trying to say was that it isn’t wrong to assign priors that low to something in the first place. Unless you disagree with that, we’re ultimately arguing over nothing here.
Here’s my attempt at an analysis
This solution seems to work as stated, but I think the dilemma can dodge it by being constructed so that the population of people-to-be-tortured is separate from the population of people-to-be-mugged. In that case there aren’t on the order of 3^^^^3 people paying the $5.
(meta again) I have to admit it’s ironic that this whole original post stemmed from an argument with someone else (in a post about a median utility based decision theory), which was triggered by me claiming Pascal’s Mugging wasn’t a problem that needed solving (at least certainly not by said median utility based decision theory). By the end of that I became convinced that the problem wasn’t considered solved and my ideas on it would be considered valuable. I’ve then spent most of my time here arguing with someone who doesn’t consider it unsolved! Maybe I could have saved myself a lot of karma by just introducing the two of you instead.
“It is not possible to simulate n humans without resources of complexity at least n.”
Still disagree. As I pointed out, it is possible for a short program to generate outputs with a very large number of complex components.
It’s my view that theories which reject either observation or logic don’t motivate action because they give you nothing to go on. There are many of them, but that’s part of the problem since they include “the world is like X and you’ve failed to observe it correctly” for every X, making it difficult to break the symmetry.
Given only partial failure of observation or logic (where most of your observations and deductions are still correct), you still have something to go on, so you shouldn’t have symmetry there. For everything to cancel so that your 1/3^^^^3-probability hypothesis dominates your decision-making, it would require a remarkably precise symmetry in everything else.
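As a rough numerical sketch of that last point (all the figures below are arbitrary, chosen only to show the structure of the comparison):

```python
# Arbitrary numbers, chosen only to show the structure of the comparison.
epsilon = 1e-300            # stand-in for 1/3^^^^3 (really far smaller)
exotic_utility_gap = 1e100  # utility difference the exotic hypothesis claims

# Ordinary hypotheses: (probability, utility of act A minus utility of act B).
ordinary = [
    (0.6, +1.0),
    (0.3, -1.5),
    (0.1, -0.2),
]

ordinary_term = sum(p * du for p, du in ordinary)   # = 0.13
exotic_term = epsilon * exotic_utility_gap          # = 1e-200

# The exotic hypothesis decides the action only if the ordinary terms cancel
# to within ~1e-200 of each other -- the "remarkably precise symmetry".
print(ordinary_term, exotic_term, abs(ordinary_term) < exotic_term)
```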
Maybe I could have saved myself a lot of karma by just introducing the two of you instead.
I have also argued against the median utility maximization proposal already, actually.