The proof that lexicographic preferences can be embedded in a real-valued function can also be walked backwards: a decider who doesn’t know the upper limits of the goods they opine on can’t collapse their choices onto a single Archimedean class but must keep them separate, essentially having a necessity for infinite values. A system that tries to collapse anyway will have to decide on a “margin” between good classes and risks encountering a multiple of one class that crosses over the margin. That is, someone getting themselves killed over 1 million bananas might have the reason that their reasoning capabilities are not designed to work on more than 1000 bananas.
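A toy sketch of what I mean by that margin (the numbers are made up for illustration, not from the paper): collapse two good classes into one real value with a fixed exchange rate, and a large enough pile of the lesser good crosses over the class it was supposed to stay below.

```python
# Toy illustration (numbers made up, not from the paper): collapse two good
# classes into one real value with a fixed margin of 1000 bananas per life.
MARGIN = 1000  # the decider assumes no bundle ever holds more than ~1000 bananas

def collapsed_value(lives, bananas):
    # intended to be lexicographic: any number of lives should outweigh any bananas
    return lives * MARGIN + bananas

# within the assumed range the collapse respects the intended ordering
assert collapsed_value(1, 0) > collapsed_value(0, 999)

# but a large enough multiple of the lesser class crosses over the margin
print(collapsed_value(0, 1_000_000) > collapsed_value(1, 0))  # True: bananas now beat a life
```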
The arguments about Boltzmann brains seem a little strange. If I close my eyes and can’t tell a good state of the world from a bad state of the world, then yes, I can’t systematically use my sense data to get a good outcome. But this seems more like a statement about my epistemics than about the outside world. If a butterfly can’t predict a hurricane, does that mean that hurricanes are ethically irrelevant? Any given actor probably has a horizon on how far they can predict the future. But trying to get a result that the universe would have a limit beyond which nobody could predict what happens is tantamount to saying that causality will break down.
The first argument is correct, and if we take lexicographic preferences to be more than exaggerations, that implies that finding a bound is important.
The ethical relevance argument was not that we can’t tell, but that we cannot influence the end-state in a meaningful way. Prediction is different from influenceability. And yes, post heat-death, I would think that causality would have broken down in any meaningful sense.
If something is determined by a pseudorandom generator that is initialised with a seed, and I have control over what the seed is, then I can “influence” the result in the sense that if I switch the seed the outcome will be something different; but in another sense I can’t “influence” it, in that I can’t force it into a goal state. That I believe my actions will have the same effect doesn’t mean they will, and there is a difference between not knowing and something being unable to be known.
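A rough sketch of the distinction I have in mind (standard library PRNG, nothing specific to the paper’s construction):

```python
import random

def outcome(seed):
    # the "world" is fully determined by the seed
    return random.Random(seed).randrange(10**6)

# switching the seed changes the outcome, so in one sense I do influence it
print(outcome(1), outcome(2))  # almost certainly two different numbers

# but steering it into a goal state is a different matter: without inverting
# the generator, all I can do is brute-force search over seeds
goal = 123_456
print([s for s in range(10_000) if outcome(s) == goal])  # very likely empty
```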
I guess I am missing the detail of what part of their construction makes them uninfluencable. To my understanding, after different “orderly phases” of the universe the resulting Boltzmann soup is different, i.e. what happens before heat-death is correlated with what happens after heat-death.
It’s true that the actual evolution post-heat-death will depend on the state now, but 1) the distribution of states is not dependent on the seed, and 2) the result isn’t pseudorandom, it’s truly random.
I might be a bit out of my depth, but if there is a distinction between an “actual evolution” and a “potential evolution”, then the “representativeness” of the potential evolution has aspects of epistemology in it. If I have a large macrostate and let a thermodynamic simulation run, then it collapses more quickly into a single mess where the start-condition delineations don’t allow me to make useful distinctions. If I define my macrostates more narrowly, i.e. have more resolution in the simulation, this will take longer. For any finite horizon there should be a narrow enough accuracy in the detailedness of the start state that it retains usefulness, if an absolute-zero simulation is possible (as at least on paper, with assumptions, it can be).
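A rough sketch of that intuition with a classical chaotic toy (a logistic map, which is my own stand-in and not the paper’s Boltzmann setup): how long a start-state distinction stays usable grows with how finely the start state was specified.

```python
# toy chaotic system: the logistic map at r = 4
def logistic(x):
    return 4 * x * (1 - x)

def horizon(eps, tol=0.1, x0=0.2):
    # steps until two starts that differ by eps have drifted apart by more than tol
    a, b, t = x0, x0 + eps, 0
    while abs(a - b) < tol:
        a, b, t = logistic(a), logistic(b), t + 1
    return t

for eps in (1e-3, 1e-6, 1e-9, 1e-12):
    print(eps, horizon(eps))  # finer start-state resolution -> longer usable horizon
```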
If I just know that there is a door A and a door B, then I can’t make any meaningful distinction about which door is better (I guess I could arbitrarily prefer one over the other). If I know that behind one of the doors is a donkey and behind the other is a car, I can make much more informed decisions. In a given situation, how detailed a model I apply depends on my knowledge and sensory organs. However, my not being able to guess the right door doesn’t mean that cars cease to be valuable. In Monty Hall, switching is preferable. The point about the distributions being the same would be akin to saying that the decision procedure used to pick the door doesn’t matter, as any door is as good as any other. But if there are different states behind different doors, i.e. it is not an identical superposition of car and donkey behind each door but some doors have cars and some have donkeys, then door choice does matter.
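To be concrete about the Monty Hall point, a quick simulation (standard setup, my own toy code):

```python
import random

def monty_hall(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # Monty opens a door that is neither the contestant's pick nor the car
        opened = random.choice([d for d in doors if d not in (pick, car)])
        if switch:
            pick = next(d for d in doors if d not in (pick, opened))
        wins += pick == car
    return wins / trials

print(monty_hall(switch=False))  # ~1/3
print(monty_hall(switch=True))   # ~2/3
```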
I kinda maybe know that quantum mechanics has elements which are more properly random than pseudorandom. However, quantum computing is reversible, and the black-hole information paradox would suggest that physicists don’t treat quantum effects as turning states into an indistinct mess; it is a distinct mess, where entanglement and other things make it tricky to keep track of stuff, but that doesn’t come at the sacrifice of clockworkiness.
In particular, quantum mechanics has entanglement, which means that even if a classical mechanism is “fuzzed” by exposure to true quantum spread, that spread is often correlated; that is, entangled states are produced which have the potential to keep choices distinct. For example, if Monty chooses the valid door to reveal via a true quantum coin, the situation can still be benefited from by switching. Even if the car is in an equal superposition behind any of the doors, if Monty opens correct doors (i.e. Monty’s reveal is entangled to never reveal a car), then the puzzle remains solvable. The mere involvement of actual randomness isn’t sufficient to say that distinctions are impossible, but I lack the skill to say what the requirements for that would be.
However, if there were true “washing out”, then the correlation between the orderly and the random should be broken. If a coin is conditional on what happens before the flip, then it is not a fair coin.
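To make the correlation point concrete, a sketch extending the earlier simulation (these are standard Monty Hall variants, my own toy code): when Monty’s random reveal is constrained never to show the car, switching keeps its edge; when his reveal is independent of the car’s location and merely happens not to show it, the edge washes out.

```python
import random

def switch_wins(monty_correlated):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    unpicked = [d for d in doors if d != pick]
    if monty_correlated:
        # Monty's (genuinely random) reveal is still constrained never to show the car
        opened = random.choice([d for d in unpicked if d != car])
    else:
        # Monty's reveal is independent of the car's location ("Monty Fall")
        opened = random.choice(unpicked)
        if opened == car:
            return None  # discard: we condition on the reveal not showing the car
    switched = next(d for d in doors if d not in (pick, opened))
    return switched == car

def win_rate(monty_correlated, trials=200_000):
    results = [r for r in (switch_wins(monty_correlated) for _ in range(trials)) if r is not None]
    return sum(results) / len(results)

print(win_rate(monty_correlated=True))   # ~2/3: the correlation keeps switching useful
print(win_rate(monty_correlated=False))  # ~1/2: break the correlation and the edge washes out
```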
This seems confused in a bunch of ways, but I’m not enough of an expert in quantum mechanics, chaos theory, or teaching to figure out where you’re confused. Anders might be able to help—but I think we’d need a far longer discussion to respond and explain this.
But to appeal to authority, when Scott Aaronson looked at the earlier draft, he didn’t bring up any issues with quantum uncertainty as a concern, and when I check back in with him, I’ll double check that he doesn’t have any issues with the physics.
To the extent that Boltzmann brains can be understood as a classical process, I think they are, or can be viewed as, pseudorandom phenomena. For the quantum case I do not really know. I do not know whether the paper intends to invoke quantum mechanics to get them that property.
The claim in the paper that they are “inaccessible by construction” is very implicit, requires a lot of accompanying assumptions, and does a lot of work for the argument’s turn.
Numerology analog:
Say that some strange utility function wants to find the number that contains the maximum number of codings of the string “LOL”, as a kind of smiley-face maximiser. Any natural number, when turned into binary and read as a string, can only contain a finite number of such codings, because there are only finitely many 1s in the binary representation. For any rational number turned into a binary expansion there is going to be a period in the representation, and each period can only contain finitely many occurrences; the optimal rational number would be the one whose period is exactly “lol”. However, for transcendental numbers there is no period. Also, most transcendental numbers are “fair” in the sense that each digit appears approximately as often as any other, and additionally fair in that bigger combinations converge to even statistics. When the lol-maximiser tries to determine whether it likes pi or phi more as numbers, it is going to find infinitely many lols in both. However, it would be astonishing if they contained exactly the same amount of lols. The difference in lols is likely to be vanishingly small, i.e. infinitesimal. But even if we can’t computationally check the matter, the difference exists before it is made apparent to us. The utility function of the lol-maximiser over the reals probably can’t be expressed as a real function.
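A sketch of the counting setup (with a made-up 3-bit stand-in “101” for the coding of “lol”, and sqrt(2) standing in for pi or phi since its exact binary digits are easy to extract):

```python
from math import isqrt

def sqrt2_bits(n):
    # first n fractional binary digits of sqrt(2), via an exact integer square root
    return bin(isqrt(2 * 4**n))[2:][1:]

def rational_bits(p, q, n):
    # first n fractional binary digits of p/q by long division
    out, r = [], p % q
    for _ in range(n):
        r *= 2
        out.append("1" if r >= q else "0")
        r %= q
    return "".join(out)

PATTERN = "101"  # made-up 3-bit stand-in for some fixed coding of "lol"

def count(pattern, bits):
    # overlapping occurrences of the pattern in the digit string
    return sum(bits.startswith(pattern, i) for i in range(len(bits)))

N = 10_000
print(count(PATTERN, rational_bits(1, 7, N)))  # 0: the period "001" never produces "101"
print(count(PATTERN, sqrt2_bits(N)))           # roughly N/8 for a "fair"-looking expansion
```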
While the difference between Boltzmann histories might be small, if we want to be exact about preference preservation then the differences need to cancel exactly. Otherwise we are discarding lexicographic differences (it is common to treat a positive amount less than any real as exactly 0). There is a difference between vanishingly different and indifferent, and distributional sameness only gets you to vanishingly different.