Speculation about sufficiently advanced future technologies is indistinguishable from magical thinking.
Unless there is scientific method in what you’re doing, and unless you’re producing something testable and testing it, you are certainly not in the reference class of scientists. Unless there is plenty of rigour, you are not in the reference class of people using mathematical methods (even if you have formulas in your papers). That is not up for grabs here either. Maybe the reference class is philosophers. If you wish, philosophers with PhDs.
As they do tend to honestly make actual arguments, rather than just try to manipulate for profit the people who already agree with the core ideas, one can examine the actual argumentation. Which is not very good. E.g. his simulation argument looks watertight at first glance, but it really relies on assumptions about physics (suppose MWI is correct, now the counting patently doesn’t work for probabilities), and does not account for a potentially enormous number of simulated beings that can easily tell their reality is simulated because the simulator cuts some corners. A typical example of philosophy: making arguments that seem true merely for lack of alternative propositions to the made-up assertions. You only spend time formalizing such stuff if you can’t see that you are building false precision, making far too many assumptions for the results to be meaningful in any way (in a field where intuition is unlikely to work, too). If you don’t notice that you are making a lot of extremely shaky assumptions when you are making a lot of extremely shaky assumptions, you’ll be susceptible to generating false precision, precisely as per Nick Szabo’s article.
Their musings about the singularity are in precise agreement with the hypothesis that, at the current point in time, it is early enough that the only people ‘working’ on this are those who for some reason fail to see when they aren’t making progress. I’m pretty sure all this movement will look very silly in 100 years: there will be dangers they did not see, and there won’t be the dangers they focused on.
suppose MWI is correct, now the counting patently doesn’t work for probabilities
What is the correct counting for MWI, exactly?
does not account for a potentially enormous number of simulated beings that can easily tell their reality is simulated because the simulator cuts some corners.
I think you are misunderstanding the SA, which is surprising since it’s formally pretty simple.
The SA is a trilemma; finding evidence that strongly supports one leg of the trilemma is not a problem with the trilemma itself. It just gives you reasons to bite a particular bullet: “the SA says ‘either X or Y or Z’, and here our reality looks like a cut-rate simulation, so I guess Z was right after all!”
So the trilemma remains ‘watertight’ even if the specific paper, in enumerating reasons to believe X (or Y, or Z), fails to cover some favorite bit of reasoning of yours. The reasoning still fits inside the trilemma framework and is not an argument against the framework itself.
(My own impression is that your cut-rate suggestion wouldn’t go very far, since it’s not clear, to me anyway, what a cut-rate simulation would look like, or whether our own universe is not cut-rate. One could validly argue that our failure to find good clear evidence that we’re in a simulation is evidence against being in a simulation, but quantifying how much evidence this would be is even harder. And given how many crude simulations we run for science & business & pleasure, and the Fermi paradox, it seems especially unlikely that this point is strong enough to move someone from biting the we’re-in-a-simulation bullet to biting one of the other bullets.)
I’m pretty sure all this movement will look very silly in 100 years: there will be dangers they did not see, and there won’t be the dangers they focused on.
Everyone looks silly from 100 years on. That’s not a useful point to make.
MWI: we don’t know what it is that works, but we can tell if something doesn’t work. Probabilities don’t seem to work out if you just count distinct observers. Plus, the number of distinct observers grows very rapidly with time, so you get an extreme case of the doomsday paradox. If you aren’t just counting distinct observers but count copies twice, then your probabilities could as well depend on, e.g., the thickness of the wires in the computer, not just the raw number of simulated realities.
Furthermore, and more significantly, under MWI it is not even clear what the first two statements could even mean.
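To make the counting problem concrete, here is a minimal sketch, with illustrative assumptions of my own (a repeatedly measured qubit with Born probabilities 0.9/0.1), of how weighting every distinct branch observer equally diverges from the Born rule:

```python
import itertools

# Illustrative setup: a qubit measured n times; Born probabilities per
# trial are 0.9 for outcome 0 and 0.1 for outcome 1. Every length-n
# outcome string labels one distinct "branch observer".
p0, p1 = 0.9, 0.1
n_trials = 10
branches = list(itertools.product([0, 1], repeat=n_trials))

# Naive counting: weight every distinct branch observer equally.
naive_freq = sum(b.count(0) for b in branches) / (n_trials * len(branches))

# Born weighting: weight each branch by its squared amplitude.
def born_weight(branch):
    return (p0 ** branch.count(0)) * (p1 ** branch.count(1))

born_freq = sum(born_weight(b) * b.count(0) for b in branches) / n_trials

print(f"frequency of outcome 0, counting branches equally: {naive_freq:.2f}")  # 0.50
print(f"frequency of outcome 0, Born-weighted:             {born_freq:.2f}")  # 0.90
```

Experiments show the 90/10 statistics, not the 50/50 that equal counting of distinct observers would predict, which is the sense in which the counting ‘patently doesn’t work’.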
I think you are misunderstanding the SA, which is surprising since it’s formally pretty simple.
We are discussing Nick Bostrom, and I take the http://en.wikipedia.org/wiki/Nick_Bostrom article to be at least somewhat representative of his contribution to the simulation argument. The trilemma as stated is:
1. No civilization will reach a level of technological maturity capable of producing simulated realities.
2. No civilization reaching the aforementioned technological status will produce a significant number of simulated realities, for any of a number of reasons, such as diversion of computational processing power to other tasks, ethical considerations about holding entities captive in simulated realities, etc.
3. Any entities with our general set of experiences are almost certainly living in a simulation.
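For reference, Bostrom’s paper quantifies this with a simple fraction (reconstructed here from memory of the paper, so treat the notation as approximate): with $f_P$ the fraction of civilizations that reach the simulation-capable stage, $\bar{N}$ the average number of ancestor-simulations such a civilization runs, and $\bar{H}$ the average number of pre-posthuman observers per civilization, the fraction of human-type observers who are simulated is

$$f_{\mathrm{sim}} \;=\; \frac{f_P \,\bar{N}\, \bar{H}}{f_P \,\bar{N}\, \bar{H} + \bar{H}} \;=\; \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1},$$

which is close to 1 unless $f_P \bar{N}$ is small; the three statements correspond to $f_P \approx 0$, $\bar{N} \approx 0$, and $f_{\mathrm{sim}} \approx 1$ respectively.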
I assumed that the last statement is to be taken as ‘we should expect to be in a sim if the first two conditions are false, given our general set of experiences’, on the assumption that this trilemma has at least rudimentary relevance.
In that case there is a fourth possibility, with probability overwhelmingly higher than that of this entire argument: the wild guess that there would ever be a good reason to believe we should expect to be among the most numerous group, with the same weight for the real thing and the simulations (or the same weight for different types of simulation), is simply not spot on. This is corroborated by our not being among those in weird sims of any kind (we’d detect a god speaking to us every day).
Furthermore, the distinction between a perfect simulation and reality strikes me as nonsensical. Until there is a measurement showing that we are in a simulation, we may most sensibly assume we are in both (this time merely drawing inspiration from the sort of intuitions we might have had if we believed in MWI). As for the probability of a measurement showing that we are in a simulation, that has an exceptionally good chance of being a much more complicated matter than assumed.
edit: To clarify, my point is that even putting this sort of stuff into words or equations is a great example of the false precision that Nick Szabo complains about. Too many assumptions have to be made, without noticing that those assumptions are made, for the statements to have any meaning at all.
Everyone looks silly from 100 years on. That’s not a useful point to make.
Those who aren’t grossly wrong (Newton, for example) don’t look as silly as the kind of silly I am speaking of.
Wikipedia is pretty bad on philosophy (the SEP is much better), and in this case, there’s no reason not to read Bostrom’s original paper and the correction: he writes clearly, and they are readily available on his website.
In that case there is a fourth possibility, with probability overwhelmingly higher than that of this entire argument: the wild guess that there would ever be a good reason to believe we should expect to be among the most numerous group, with the same weight for the real thing and the simulations (or the same weight for different types of simulation), is simply not spot on. This is corroborated by our not being among those in weird sims of any kind (we’d detect a god speaking to us every day).
What? Could you write this more clearly, I have no idea what you’re trying to say.
Those who aren’t grossly wrong (Newton, for example) don’t look as silly as the kind of silly I am speaking of.
Newton is actually a great example; if you don’t choose to ignore the areas which make him look bad, he looks like an incredible fool in many respects. His constant pursuit of alchemy was, even then, an object of derision, and while we would no longer hang or exile him for his bizarre theology & eschatology, we (even most Christians) would regard them as hilarious. Then there are his priority disputes...
If even Newton looks this foolish, what hope can the rest of us have? No, the suggestion ‘would this make me look foolish in 100 years?’ does us no good in practice.
Then we can pardon Bostrom for not taking them into account.
When pondering the possibilities of where we live, given the lack of a grand unified theory ‘of everything’, you can’t assume your physical intuitions hold true. In fact you should assume the opposite. The MWI is just an example of how a philosophical argument that looks entirely watertight admits defeat from possible physics, in a way in which mathematics, which philosophy mimics, does not. That means the validity of the argument requires the validity of the intuitions, and those are very unlikely to be valid in any grand sense. There’s also a historical example: a lot of philosophers assumed Euclidean geometry was the only logically possible kind of geometry, without even noticing that they were making such an assumption, up until mathematicians came up with alternatives.
What? Could you write this more clearly, I have no idea what you’re trying to say.
You assume that the probability of being among either group depends on the number of yous within that group (rather than on something entirely different), in order to do anthropic reasoning beyond the tautological ‘we can’t observe universes where we can’t be alive’. In my opinion this is a case of a wild guess over unknowns, totally false precision.
I came up with a clearer example of how something totally different may actually make a lot more sense: Solomonoff induction on codes that model various universes. A model of the universe outputting the data matching your internal self-perception must not only contain you, but must include the code that finds you within that model, so that the output begins with your sense data. It is then clear that the code that picks one of the yous out of a model full of all sorts of simulations may easily be larger than the code that picks you out of a world where there is just one you and far fewer others.
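A minimal sketch of this weighting, with toy ‘world programs’ and made-up code lengths (my illustration; real Solomonoff induction is uncomputable): the weight of a hypothesis is summed over pairs of world program and locator code, so a world that needs long locators to single you out is penalized even if it contains many copies of you.

```python
from collections import defaultdict

# Toy illustration: a hypothesis is a (world program, locator code) pair,
# with prior weight 2^-(total description length in bits). All code
# lengths below are made up for illustration.
hypotheses = [
    # (world name, world code length, locator code length)
    ("lone_world",    50,  5),   # one you; locating you is cheap
    ("big_sim_world", 45, 80),   # many yous among countless sims, but each
    ("big_sim_world", 45, 82),   # locator picking one out of the clutter is long
]

posterior = defaultdict(float)
for world, world_len, locator_len in hypotheses:
    posterior[world] += 2.0 ** -(world_len + locator_len)

total = sum(posterior.values())
for world, weight in posterior.items():
    print(f"P({world}) = {weight / total:.3g}")
# lone_world dominates despite its longer world program and single copy.
```

Under this kind of prior, the raw count of copies need not dominate; what matters is how cheaply each copy can be picked out, which is exactly the sense in which ‘probability goes with the number of yous’ is just one guess among many.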
I’m not arguing that Bostrom is bad for a philosopher. I am outlining how in philosophy you just make all sorts of assumptions that you don’t notice are wild guesses, and how the field is basically built on false precision. I.e. you assume that the probability of being within some huge set full of yous and non-yous is independent of the number of non-yous, which is just a wild guess. Connect together half a dozen implicit wild guesses, with a likelihood of correctness of an overly generous 1 in 100 each, and we’re speaking of a probability of correctness in the range of 10^-12. Philosophy is generally like this.
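Spelling out the arithmetic behind that figure: (10^-2)^6 = 10^-12, i.e. six independent 1-in-100 guesses compound to about one in a trillion.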
I believe this falls under Nick Szabo’s complaint about false precision.
Also, a paper by Steven Weinberg on the usefulness of philosophy.
It seems to me that funding philosophical work in this field may actually be actively harmful, due to the establishment of such false precision and prejudices. It’s like funding ‘embracing bias, imprecision, and making your mind up before checking where the mathematics will lead you’.
If even Newton looks this foolish, what hope can the rest of us have? No, the suggestion ‘would this make me look foolish in 100 years?’ does us no good in practice.
That’s the whole point: a very low probability of being right. There’s a crucial difference: the methods Newton employed managed to achieve a non-zero (and non-negligible) truth-finding rate. So he made something that does not look silly. Even with this, most of his stuff was quite seriously wrong.
you assume that the probability of being within some huge set full of yous and non-yous is independent of the number of non-yous, which is just a wild guess. Connect together half a dozen implicit wild guesses, with a likelihood of correctness of an overly generous 1 in 100 each
Do you think it’s “generous” to assign only 99% probability to the negation of “the probability of being within some huge set full of yous and non-yous is independent of the number of non-yous”, where “you” is interpreted to include all your observations? That seems like insane overconfidence in a view that goes haywire in simple finite discrete cases.
Typical philosophy: tear down strawman alternatives to prove a wild guess. (Why strawman: because the probability can also depend on something else entirely, like the positions of the copies.)
Also, the 99% confidence is not in dependence on non-yous (and yous, but nothing else); the 99% confidence is that the wild guess of independence from everything else, and dependence on the count of yous alone, is wrong. Also, consider two computer circuits right next to each other, running an identical you, separated by a thin layer of dielectric. Remove the dielectric: one copy, with thicker wires. Conclusion: it may depend on the thickness of a copy’s wires, or maybe on the speed of the copy.
Hell, the probability of being a specific copy may just as well be entirely undefined until a copy figures out which copy it is, and then depend solely on how it was figured out.
Let’s suppose that the probability of being sampled out of a model is the sum of 2^-l over all codes of length l that pluck you out of the model, as in Solomonoff induction. It may well depend on the presence or absence of stone dummies (provided those break some simple method of locating you). It will definitely depend on your position. Go show it broken.
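Written out (my notation, not the commenter’s): with $U$ a universal machine, $m$ the model, and $\ell(c)$ the length of a locator code $c$, the proposed sampling weight for a particular embedded observer $x$ is

$$P(x \mid m) \;\propto\; \sum_{c \,:\, U(c,\, m) \,=\, x} 2^{-\ell(c)},$$

which is position-dependent by construction: anything in the model that lengthens or shortens the shortest codes locating $x$ (position, decoys like the stone dummies, symmetry-breaking detail) shifts the weight.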
edit: actually, this alternative distribution over observers (and observer-moments) based on a Solomonoff-type prior has been proposed here before by Wei_Dai, and has also been mentioned by Marcus Hutter. I’m not at all impressed by Nick Bostrom, that’s the point, or by philosophy for that matter. The conclusions of philosophers, given the relative uselessness of philosophy compared to science, ought to be taken as very low-grade evidence.
Thank you for linking to Hutter’s talk, what an astounding mind. What a small world it is; I remember being impressed by him when I sat through his courses back in grad school, little knowing how much of my future perspective on map-building would eventually depend on his and his colleagues’ school of thought.
That presentation should be mandatory reading. In all Everett branches.