I’d just point out that this wasn’t the argument Komponisto was making—he was talking only about relatedness in the ancestry sense.
I know; nevertheless, I still wanted to stress that we don’t define religion by a single criterion.
Well, consider ancient Greek gods; they are not omniscient, not omnipresent, they can die… they’re not more powerful than the simulation runners, and arguably not very ontologically different; are they not deities, but aliens? Was that not religion?
Therefore I haven’t listed omni-qualities, immortality, and ontological distinctiveness among my criteria for religion. If you look at those criteria, the Greek religion satisfied almost all of them, save perhaps sacred texts and claims of unfalsifiability (it seems they did not have enough time to develop the former and no reason for the latter). Religion usually goes beyond the question of the existence and identity of gods.
(Now we can make a distinction between religion and theism, with the latter defined solely in terms of a god’s existence and qualities. I am not sure yet what to think about that possibility.)
So a proper account of what religions are actually out there makes your list of attributes much less universal, and the dividing line between religions and something like BTanism much less sharp.
The line is not sharp, of course. Many people argue that Marxism is a religion, even though it explicitly denies God, and they may base that opinion on good arguments. It is also not entirely clear what to think about Scientology: religion, or simply a cult? I don’t think the classification is important at all.
OK, it’s not a religion, so what? The really important thing is whether it’s like a religion in the respects that ought to keep a rationalist from glibly and gleefully dismissing one while being psyched about another. … Have you seen a good way to falsify a simulation claim recently?
No, I haven’t. Actually, my approach to simulation arguments is not much different from my approach to modern vague forms of theism: I notice them, but don’t take them seriously.
And among those things worship and sacred texts are arguably less important than e.g. falsifiability.
It depends. Belief in the importance, hidden messages, or even literal truth of ancient texts is generally a more reliable indicator of practical irrationality than having an opinion about some undecidable proposition is.
I think we’ve converged on violent agreement, except one point:
And among those things worship and sacred texts are arguably less important than e.g. falsifiability.
It depends. Belief in the importance, hidden messages, or even literal truth of ancient texts is generally a more reliable indicator of practical irrationality than having an opinion about some undecidable proposition is.
Have you seen a good way to falsify a simulation claim recently?
No, I haven’t. Actually, my approach to simulation arguments is not much different from my approach to modern vague forms of theism: I notice them, but don’t take them seriously.
You’re right. I retract this part.
I like the phrase.
So if I may take the implication: you don’t take the SA seriously because… it seems memetically similar to ideas espoused or held by agents you deem irrational? Do you believe in calculus? Gravitation?
So if I may take the implication: you don’t take the SA seriously because . . it seems memetically similar to ideas espoused or held by agents you deem irrational?
I thought it was clear from the previous discussion that the reason was the pretty weak testability of simulationism, rather than ad hominem reasoning.
Conflating simulationism with calculus or gravitation is absurd. Our universe would look very different if calculus or gravitation did not exist as we understand them, whereas we have no reason at all to suppose this is true of the simulation argument. There are statistical arguments for supposing it’s true, but not all the assumptions in the mathematical model are given, and it increases the complexity of our model of reality without providing any explanatory power.
Calculus is a generic algorithmic tool; gravitation is an algorithmic predictive model of some subset of reality; simulationism is a belief about reality derived from future predictions of current physical theory. Yes, these are distinct epistemological categories; my point was more that the similarity of simulationism to the older theism is an inadequate reason to dismiss simulationism.
There are statistical arguments for supposing it’s true, but not all the assumptions in the mathematical model are given, and it increases the complexity of our model of reality without providing any explanatory power.
This is, I believe, a common misunderstanding about the SA.
Suppose you are given a series of seemingly random numbers—say from a SETI signal. You put a crack team of mathematicians on it for many years and eventually they develop a complex model for the sequence that can predict it. It also appears that you can derive timing from the signal and determine how long it has been progressing. Then later you are able to run the model forward and predict that it in fact eventually repeats itself . . .
That last discovery is not a change to the model that need be justified by Ockham’s razor. It does not add one iota to the model’s complexity.
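To make that concrete, here is a toy sketch (the “signal model” below is entirely made up, just to illustrate the point): the model is the short program itself, and discovering that its output eventually repeats does not make the program any longer.

```python
# Toy stand-in for the mathematicians' model of the signal.
# The "discovery" that the sequence repeats is a consequence of running
# the model forward; the model's description length is unchanged by it.

def signal_model(n):
    """Predict the n-th value of the (made-up) signal."""
    return (3 * n * n + 5 * n + 7) % 11  # happens to be periodic with period 11

prefix = [signal_model(n) for n in range(11)]
assert all(signal_model(n) == prefix[n % 11] for n in range(10_000))  # it repeats
```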
The SA doesn’t add an iota of complexity to our model of reality—ie physics. It’s a predicted consequence of running physics forward.
The SA doesn’t add an iota of complexity to our model of reality—ie physics. It’s a predicted consequence of running physics forward.
Not necessarily. Given our understanding of the laws of physics, simulating our universe inside itself would be tough. Note that nothing in the simulation hypothesis requires that we are being simulated in a universe that has much resemblance to our apparent universe. (Digression: Even small amounts of monkeying with the constants of the universe can make universes that can plausibly give rise to life. See here (unfortunately everything beyond the summary is behind a paywall). And in some of those cases, it seems plausible that large scale computation might be easier. If certain inflationary models are correct then there should be lots of different universal bubbles with slightly different physical laws. Some of those could be quite hospitable to large-scale computation.)
The simulation argument isn’t a predicted consequence of running physics forward; the scenario you put forward doesn’t establish that we exist in a simulation, just that our universe follows predictable rules that can be forward computed. Postulating an entire universe outside the one we observe does add to the complexity of that model.
The simulation argument is a probabilistic argument that states that if certain assumptions hold then most apparent universes are in fact simulated by other universes, and thus our own is probably a simulation.
Postulating an entire universe outside the one we observe does add to the complexity of that model.
Not so at all. A model’s complexity is not determined by the entities it references or postulates.
For example, I have a model of the future which postulates new processors every few years. The model is not complex enough to capture every new processor from here to infinity. Nor does it need to be. The model is simple, yet it can generate new postulated entities.
You in effect are saying that my model, which postulates many new future processors, is somehow more ‘complex’ than a model which postulates just three, or none.
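A minimal sketch of that point, with made-up numbers (the doubling rule, base figures, and function name here are illustrative, not a real roadmap):

```python
from itertools import count, islice

def processor_model(base_transistors=2_000_000_000, start_year=2010):
    """Yield (year, transistor_count) indefinitely, doubling every two years."""
    for i in count():
        yield start_year + 2 * i, base_transistors * 2 ** i

# The model postulates unboundedly many future processors, yet its
# complexity is the length of the few lines above, not the entity count.
for year, transistors in islice(processor_model(), 3):
    print(year, transistors)
```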
An entire external universe adds to the complexity of the model itself, not just to the number of entities the model contains.
This may not be the case if the simulation itself was produced in the universe as we know it, and our own apparent universe is only a simulated fragment. That isn’t what I thought you were asserting, but that is untenable for completely separate reasons.
What do you mean by complexity and how is it at all relevant?
Take Conway’s Life, for example. Tons of apparent complexity can emerge from rules simple enough to write on a bar napkin.
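For reference, the napkin-sized rules, written out as a minimal sketch (my own implementation, not anything from this thread):

```python
from itertools import product
from collections import Counter

def life_step(live):
    """One generation of Conway's Life; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx, dy in product((-1, 0, 1), repeat=2)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step if it has 3 live neighbours,
    # or 2 live neighbours and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a pattern that walks forever
print(life_step(glider))
```

Everything a Life universe ever does is generated by those few lines plus an initial pattern.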
Was the Copernican model ‘wrong’ because it made our universe-model more complex? Was the discovery of multiple galaxies wrong for a similar reason? Many worlds?
The only formal definition of complexity that is well justified is algorithmic complexity, and it has some justification as a quality metric for deciding between theories in terms of Solomonoff induction.
The formal complexity of a universe-model is that of its simplest reduction.
The simplest reduction for any scientific model is universal physics.
So there is only one model, all complexity emerges from it, and saying things like “your premise X adds to the complexity of the model” is untrue and equivalent to saying “your premise X makes the model smell bad”.
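For reference, the usual definitions being invoked here: the algorithmic complexity of a string is the length of the shortest program producing it, and the Solomonoff prior weights hypotheses by that length,

\[
K(x) \;=\; \min\{\,\lvert p\rvert : U(p) = x\,\}, \qquad
M(x) \;=\; \sum_{p \,:\, U(p)=x*} 2^{-\lvert p\rvert},
\]

where \(U\) is a fixed universal (prefix) machine and \(x*\) means an output beginning with \(x\); shorter programs get exponentially more prior weight.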
Adding a universe external to this one doesn’t just add more stuff. To take the Conway’s Game of Life example, suppose that you simulated an entire universe inside it, from the beginning. For the inhabitants, a model that not only explained how their universe worked, but postulated the existence of our universe, would be more complex than one that merely explained their own. With evidence that their reality was a simulation, that proposition could become more likely than the proposition that their universe stood alone.
In terms of minimum message length, having to describe another universe superordinate to your own adds to the information of the model, not just the entities described in it. The addition of our own universe could not be encapsulated in a model that simply describes the working of the simulated Conway universe from the inside without adding more information.
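In the standard two-part formulation (stated here just to pin down the term), the quantity being compared is

\[
\mathrm{MsgLen}(H, D) \;=\; L(H) + L(D \mid H),
\]

the length of the statement of the hypothesis plus the length of the data encoded with its help. The claim above is that a superordinate universe lengthens \(L(H)\) itself, not merely the catalogue of entities encoded under it.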
Once you have a model that includes a universe and the capacity to simulate universes, you can add universes to the model without much additional complexity, because the model can be recursively defined. The minimum message length need not increase much to add new universes; you just edit the escape clause. Where we are in the model doesn’t matter.
You seem to be thinking in terms of time complexity. Space complexity also needs to be considered. It seems axiomatic to me that an outer universe simulation can only contain nested universe simulations of lower space complexity than itself.
If I am wrong, is there some discussion of this kind of issue online, or in a well-known paper or textbook?
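One way to make that intuition explicit (my own back-of-the-envelope assumption: exact, bit-for-bit simulation with a fixed bookkeeping overhead of \(c > 0\) bits): if a simulation at nesting depth \(n\) has \(S_n\) bits of state available, then

\[
S_{n+1} \;\le\; S_n - c, \qquad\text{so}\qquad S_n \;\le\; S_0 - n\,c,
\]

and an exact nesting chain bottoms out after at most \(S_0 / c\) levels. Coarser-grained, slower, or only partial inner simulations weaken the bound, but not its direction.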
Once you have a model that includes a universe and the capacity to simulate universes, you can add universes to the model without much additional complexity, because the model can be recursively defined.
This only follows if your universe can not only model other universes but can easily model universes that share its own rules of physics. This is a much stronger claim about the nature of a universe (for example, it seems likely that this is not true of our universe).
Adding a universe external to this one doesn’t just add more stuff.
The SA does not ‘add’ a universe external to the model. The SA is a deduction derived from the Singularity-model. The Singularity-model does not ‘add’ the external universes either; they emerge within it naturally, just as naturally as future AIs do.
For the inhabitants, a model that not only explained how their universe worked, but postulated the existence of our universe, would be more complex than one that merely explained their own.
That would only be true if their model was not also a full explanation of our universe, and thus isomorphic to some historical slice of our universe.
In terms of minimum message length, having to describe another universe superordinate to your own adds to the information of the model,
Not at all. The Singularity-model is a scientific extrapolation of our observed history into the future. As it is scientific, it reduces to physics (the model approximates what we believe would happen if we could simulate physics into the future).
The SA is not a model at all. It is a deduction which can be simplified down to:
If the Singularity-model is accurate,
then most observable universes are simulations,
and thus our observable universe is a simulation.
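For comparison, the probabilistic version of that deduction (roughly following Bostrom’s original formulation; the symbols are the standard ones, not anything introduced in this thread): the fraction of observers living in simulations is

\[
f_{\mathrm{sim}} \;=\; \frac{f_p\,\bar N\,\bar H}{f_p\,\bar N\,\bar H + \bar H}
\;=\; \frac{f_p\,\bar N}{f_p\,\bar N + 1},
\]

where \(f_p\) is the fraction of civilizations that reach a posthuman (Singularity) stage, \(\bar N\) the average number of ancestor-simulations such a civilization runs, and \(\bar H\) the average number of pre-Singularity observers per civilization. If \(f_p\,\bar N \gg 1\), then \(f_{\mathrm{sim}} \approx 1\), which is the “most observable universes are simulations” step above.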
You seem to think the minimum message length is somehow physics + extra simulations scrawled in. The physics generates everything, so it’s already minimal.
The addition of our own universe could not be encapsulated in a model that simply describes the working of the simulated Conway universe from the inside without adding more information.
No—but only because the physics differ substantially. You are right, of course, that if Conway beings evolved and somehow had some singularity of their own in their future that generated simulated Conway universes, they would assign a lower prior to being embedded in a String/M-theory universe like ours. (They could, of course, still be wrong, as complexity is just a reasonable bias measure.) They’d attach higher credence to being embedded in a Conway universe.
But if the simulated universe is based on the same physics, then it reduces to exactly the same minimal program, and it absolutely describes both universes.
This is very similar to the multiverse in physics and the space of universes string/M-whatever theory can generate.
As I mentioned before, I thought you were arguing the orthodox simulation argument, rather than one where the simulations are created from within our own universe. That would not necessarily increase the complexity of the model, but it’s untenable for its own reasons.
For one thing, it’s far from given that any civilization would ever want to simulate the universe at a previous point; the reasons you provided before don’t remotely justify such a project; it’s not a practical use of computing power. For another, assuming you’re only simulating small fractions of the history of existence, the majority of all sentient beings in the universe would not be ones in a simulation. In fact, you would have to defy a number of probable assumptions about our universe to fit as much universe space and time in the simulation as existed outside it.