The anthropic principle strikes me as too clever for its own good, at least as applied by the people who think you can sort a list in linear time by randomizing it, checking whether it’s sorted, and, if it isn’t, destroying the world.
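For concreteness, here’s a minimal Python sketch of the strategy being described; `destroy_world()` is, of course, a hypothetical placeholder for the quantum-suicide step, not anything real:

```python
import random

def destroy_world():
    # Hypothetical placeholder for the quantum-suicide step;
    # thankfully not implementable as written.
    raise NotImplementedError("left as an exercise for the multiverse")

def quantum_bogosort(xs):
    """'Quantum bogosort': shuffle once in O(n), then rely on quantum
    suicide to prune every branch in which the shuffle came out unsorted."""
    random.shuffle(xs)  # one linear-time shuffle
    if all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)):
        return xs       # the only branch a surviving observer ever sees
    destroy_world()
```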
That seems to be a rational decision for people with certain value systems, specifically those who don’t care about their quantum measure. (Yes, that value system is at least as insane as Clippy’s.)
“Quantum Sour Grapes” seems like a suitable label for the strategy. ;)
It just occurred to me that you would want to be REALLY careful that there wasn’t a bug in either your shuffling code or your list-checking code.
If you started using quantum suicide for all your problems eventually you’d make a mistake. :)
If I’m following the reasoning (if “reasoning” is in fact the right word, which I’m unconvinced of), you wouldn’t make any world-destroying mistakes that it’s possible for you not to make, since only the version of you that (by chance) made no such mistakes would survive.
And, obviously, there’s no point in even trying to avoid world-destroying mistakes that it’s not possible for you not to make.
Strictly speaking, how does one randomize a list in linear time?
Even sampling a uniformly random ordering from all n! possible sequences is out of reach for us in most scenarios with reasonably long lists.
A uniform randomization may not be possible, but you can get an arbitrarily well-randomized list in linear time, and that is all that is needed for the purposes of the sorting. (You would just end up destroying the world 1 + (1 / arbitrarily large) times as often as with a uniform distribution.)
Algorithms like a modified Fisher-Yates shuffle run in linear time if you’re just measuring reads and writes, but lg(n!) = Θ(n log n) bits are required to specify which permutation is being chosen, so unless generating random bits is free, a uniform shuffle is really Θ(n log n).
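To make the counting concrete, here’s a sketch of a standard Fisher-Yates shuffle in Python: it does n − 1 swaps, so it’s linear in array reads and writes, but each `randrange(i + 1)` call consumes roughly lg(i + 1) random bits, and those sum to lg(n!) over the whole loop:

```python
import random

def fisher_yates(xs):
    """In-place Fisher-Yates shuffle: linear in array reads and writes."""
    for i in range(len(xs) - 1, 0, -1):
        j = random.randrange(i + 1)   # uniform in [0, i]: ~lg(i+1) random bits
        xs[i], xs[j] = xs[j], xs[i]   # one swap per element
    return xs
```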
In real life, we don’t use PRNGs with sufficiently large state or cycle times, so we usually get linear-time shuffles by discarding the vast majority of the potential orderings.
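To put numbers on that: a generator with k bits of internal state has at most 2^k reachable states, so it cannot possibly produce all n! orderings once lg(n!) > k. A quick Python check of where the cutoffs fall (the 19937 figure is the Mersenne Twister’s state size):

```python
import math

def max_coverable_n(state_bits):
    """Largest n such that 2**state_bits >= n!, a necessary condition
    for every permutation of an n-element list to be reachable."""
    n = 1
    while math.factorial(n + 1) <= 2 ** state_bits:
        n += 1
    return n

print(max_coverable_n(32))     # 12: a 32-bit seed can't reach all 13! orderings
print(max_coverable_n(19937))  # 2080 for the Mersenne Twister's state size
```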