Whether the distinction is worth making or not, it is irrelevant to my point, since both are very unlikely and therefore require much more evidence than we have now.
I assume that your idea is to prevent doomsday or make it less likely. If not, why bother with all these simulations?
Look, does this seem like solid reasoning to you? Because your arguments are beginning to sound quite like it.
I am not the first LessWronger to think of a causality-evading idea, btw.
Nope: there is sufficient evidence that the Earth is not flat, but there isn’t sufficient evidence that causality doesn’t exist. That is the difference. There are counterintuitive theories, like QM, relativity, or perhaps a round Earth, but all of them are supported by a lot of evidence, and there were actual experiments to confirm them. And those theories appeared because the old theories failed to explain the existing evidence.
Can you name a single real-world example where causality doesn’t work?
And you’re not the first LessWronger to think that if your idea sounds clever enough, you don’t actually need any evidence to prove it.
“Species can’t evolve, that violates thermodynamics! We have too much evidence for thermodynamics to just toss it out the window.”
Just realized how closely your argument mirrors this.
Er... what? Evolution doesn’t violate thermodynamics.
Bad analogies don’t count as solid arguments, either. The difference between the evolution/thermodynamics example and your case is that the relationship between thermodynamics and evolution is complicated, and in fact there is no contradiction, whereas your idea evidently works only if you can acausally influence something. That’s much closer to a perpetual motion machine (a direct contradiction) than to evolution (an indirect, questionable contradiction that turns out to be false).
Look, I explained the details in the OP. Create a lot of Earths and hope that yours turns out to be one of them. That already violates causality, according to your standards. I don’t see much of a way to make it clearer.
Ah—that’s much clearer than your OP.
FWIW—I suspect it violates causality under nearly everyone’s standards.
You asked if your proposal was plausible. Unless you can postulate some means to handle that causality issue, I would have to say the answer is “no”.
So—you are suggesting that if the AI generates enough simulations of the “prime” reality with enough fidelity, then the chances that a given observer is in a sim approach 1, because of the sheer quantity of them. Correct?
If so—the flaw lies in orders of infinity. For every way you can simulate a world, you can incorrectly simulate it an infinite number of other ways. So—if you are in a sim, then with probability approaching unity you are NOT in a simulation of the higher-level reality simulating you. And if it’s not the same, there is no causality violation, because the first sim is not actually the same as reality; it just seems to be from the POV of an inhabitant.
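For what it’s worth, here is a minimal toy calculation of both halves of that point, assuming one base reality, N faithful simulations of it, M unfaithful variants per faithful one, and roughly equal observer counts in every world (N and M are purely illustrative symbols, not anything from the thread):

\[
P(\text{sim}) = \frac{N}{N+1} \xrightarrow{\;N \to \infty\;} 1,
\qquad
P(\text{faithful} \mid \text{sim}) = \frac{N}{N(1+M)} = \frac{1}{1+M} \xrightarrow{\;M \to \infty\;} 0.
\]

On that toy model, piling on simulations does push “you are in some sim” toward certainty, but it does nothing to make “you are in a sim of the particular reality simulating you” likely.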
The whole thing seems a bit silly anyway—not your argument, but the sim argument—from a physics POV. Unless we are actually in a sim right now and our understanding of physics is fundamentally broken, doing what’s suggested would take more time and energy than has ever existed or ever will, and it is still mathematically impossible (another orders-of-infinity thing).
Oh, god damn it, LessWrong is responsible for every single premise of my argument. I’m just the first to make it!
As for the rest of your post: I have to admit I did not consider this, but I still don’t see why they wouldn’t just create a less complex physical universe for the simulation.
Or maybe I’m misunderstanding you. My brain is feeling more fried than usual at the moment.
Listing arguments that you find unconvincing, and simply declaring that you find your opponent’s argument to be similar, is not a valid line of reasoning, isn’t going to make anyone change their mind, and is kind of a dick move. This is, at its heart, simply begging the question: the similarity that you think exists is that you think all of these arguments are invalid. Saying “this argument is similar to another one because they’re both invalid, and because it’s so similar to an invalid argument, it’s invalid” is just silly.
“My argument shares some similarities to an argument made by someone respected in this community” isn’t much of an argument, either.
Sure, but I found the analogy useful because it is literally the exact same thing. Both draw a line between a certain mechanism and a broader principle with which it appears to clash if the mechanism were applied universally. Both then claim that the principle is very well established and that they do not need to condescend to address my theory unless I completely debunk the principle, even though the theory is very straightforward.
I was sort of hoping that he would see it for himself, and do better. This is a rationality site after all; I don’t think that’s a lot to ask.
You clearly expect estimator to agree that the other arguments are fallacious. And yet estimator clearly believes that zir argument is not fallacious. To assert that they are literally the same thing, that they are similar in all respects, is to assert that estimator’s argument is fallacious, which is exactly the matter under dispute. This is begging the question. I have already explained this, and you have simply ignored my explanation.
All the similarities that you cite are entirely irrelevant. Simply noting similarities between an argument, and a different, fallacious argument, does nothing to show that the argument in question is fallacious as well, and the fact that you insist on pretending otherwise does not speak well to your rationality.
Estimator clearly believes that there is no way that creating simulations can affect whether we are in a simulation. You have presented absolutely no argument for why it can. Instead, you’ve simply declared that your “theory” is “straightforward”, and that disagreeing is unacceptable arrogance. Arguing that your “theory” violates a well-established principle is addressing your “theory”. So apparently, when you write “do not need to condescend to address my theory”, what you really mean is “have failed to present a counterargument that I have deigned to recognize as legitimate”.