You are (mostly) a simulation.
This post was completely rewritten on July 17th, 2015, 6:10 AM. Comments before that are not necessarily relevant.
Assume that our minds really do work the way Unification tells us: what we are experiencing is actually the sum total of every possible universe that produces them. Some universes have more ‘measure’ than others, typically the stable ones; we do not experience chaos. I think this makes a great deal of sense: if our minds really are patterns of information, I do not see why a physical world should have a monopoly on them.
Now to prove that we live in a Big World. The logic is simple: why would something finite exist? If we’re going to reason that some fundamental law causes everything to exist, I don’t see why that law should restrict itself to this universe and nothing else. Why would it stop? It is, arguably, simply the nature of things for an infinite multiverse to exist.
I’m pretty terrible at math, so please try to forgive me if this sounds wrong. Take the ‘density’ of physical universes where you exist (the measure, if you will) and call it j. Then take the measure of universes where you are simulated and call it p. So the question becomes: is j greater than p? You might be thinking yes, but remember that it doesn’t have to be only one simulation per universe. According to our Big World model, there is a universe out there in which all processing power (or a significant portion of it) has been turned into simulations of you.
So we take the number of minds being simulated per universe and call that x. Then the real question becomes whether j > px. What sort of universe is common enough, and contains enough minds, to overcome j? If you say that approximately 10^60 simulated human minds could fit in such a universe (a reasonable guess for this one) but that such universes are five trillion times rarer than the universe we live in, then it’s clear that our own ‘physical’ measure is hopelessly lower than our simulated measure.
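A rough sketch of the j vs. px comparison, using the post’s own illustrative numbers (the 10^60 minds and the five-trillion rarity factor are guesses from the text, and j is normalized to 1 purely to get a ratio):

```python
# Back-of-the-envelope version of the j vs. p*x comparison above.
# All numbers are the post's illustrative guesses, not established figures.

j = 1.0          # measure of universes where you exist physically (normalized)
rarity = 5e12    # simulating universes assumed five trillion times rarer
p = j / rarity   # measure of universes that simulate you
x = 1e60         # simulated copies of you per such universe (post's guess)

ratio = (p * x) / j
print(ratio)  # 2e+47: simulated measure dwarfs physical measure here
```

Under these assumptions the rarity factor is negligible next to the mind count, which is the post’s point: p·x exceeds j by dozens of orders of magnitude.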
Should we worry about this? It seems highly probable that in most universes where I am being simulated, I (or humans generally) once existed, since the odds of randomly stumbling upon me in Mind Space seem low enough to ignore. Presumably the simulators are either AIs gone wrong or someone trying to grab some of my measure, for whatever reason.
As a way of protecting measure, pretty much all of our postsingularity universes would divide up the matter of the universe among each living person, create as many simulations of them as possible from birth, and allow them to go through the Singularity. I expect that my ultimate form is a single me, not knowing whether he is simulated, with billions of perfect simulations of himself across our universe, all reasoning the same way (he would be told this by the AI, since there isn’t any more reason for secrecy). This, I think, would guard my measure against nefarious or bizarre universes in which I am simulated. The AI cannot just simulate the last few moments of my life, because those other universes might try to grab younger versions of me. So if we take j to be safe measure rather than physical measure, and p to be unsafe or alien measure, the comparison becomes jx > px, which I think is quite reasonable.
I do not think of this as some kind of solipsist nightmare; the whole point of this is to simulate the ‘real’ you, the one that really existed, and part of your measure is, after all, always interacting in a real universe. I would suggest that by any philosophical standard the simulations could be ignored, with the value of your life being the same as ever.
I consider it bad form to do such a massive rewrite, thereby obsoleting the entire previous comment stream.
Regarding your new post, I think you need to taboo the word ‘measure’ and rewrite all your posts without it. It would make things much more clear for the rest of us. When communicating with others, it is more important to be clear and precise than it is to be compact, and your use of ‘measure’ is neither clear nor precise to a good number of your audience.
I’ve made three desperate threads in about a week; I don’t want to take over the website. I think we can agree that my original post wasn’t very useful, though.
Sorry. I’ll try and see how that goes in the future.
What the frakking Hell? Dust theory on its face gives almost uniformly false predictions. Now, we are admittedly confused about how to assign probabilities in cases like this, but confusion is a bad reason to adopt a radical new view of reality. That’s not how confusion is supposed to work.
Could you name a few?
Discontinuities in the observed laws of physics, before I finish typing this. Nope, still wrong on its face.
Which laws of physics in particular?
No laws in particular: all laws in general.
Why should it disobey every observed law of physics? Are you arguing that conscious observers would almost certainly experience chaos? If so I agree with you. I don’t accept ‘pure’ Dust Theory.
Eitan, I think you should set down exactly what you take “Dust Theory” to mean, for at least the following reasons: Not everyone has read “Permutation City”; those who have may have forgotten some details; the book may not nail down all those details firmly enough to make the term unambiguous; you might mean by it something slightly different from what Egan does.
(For the avoidance of doubt, that last one would not necessarily be a bad thing. The most credible thing deserving the name “Dust Theory” might not be quite the same as what’s described in what is, after all, a work of fiction with its own narrative constraints.)
Dust Theory isn’t actually relevant to this. I’m discussing practicality rather than interpretation, i.e. “shut up and calculate”.
But a lot of what you’ve been writing makes explicit use of that term. Your post begins “First of all, let’s assume that our minds really do work the way Dust Theory tells us”. If what follows is less than perfectly clear on its own (as, at least for me, it is) then it’s reasonable to try to use your allusion to Dust Theory to help disambiguate. But we can only do that in so far as we understand exactly what you’re taking Dust Theory to be.
Basically, all that is required is for two minds in the same conscious state to have only one phenomenological experience. This is something I think is absolutely true.
If you mean that there is literally one experience (numerical identity), not two identical experiences (qualitative identity), that would need support.
And you still need further assumptions to say something interesting about measure, expected experience, personal history etc.
If there are two identical experiences, it doesn’t actually affect the argument. Except that you would be wholly in a simulation (or not), and there would be less incentive for future FAIs to simulate you. Grim, if I took it seriously.
If there are two identical experiences, there is no problem of jumping, or of waking up as someone else. Identical twins don’t randomly swap identities. Jumping is a dynamical, causal process. You can make it happen by transplanting a brain or copying a neural pattern, but there is no reason it should happen because of pure logic.
While we’re on the subject, if there is a single experience threaded through multiple worlds, there is also no jumping. You can’t jump from one version of yourself to another, because there is only one version. You can’t jump from one world to another, in the sense of leaving one and arriving at another, because you are always in all of them.
That disposes of jumping, but you seem to have some further concern about simulation.
Sure, but haven’t I just said I don’t take Duplication seriously?
The whole point is about what happens when my self becomes less detailed. If it resumes its former detail (waking up), all may not be as it was. If a memory is completely extracted from my brain, then my brain ceases to anchor me predominantly in worlds where that memory happened. Other options could fill in the hole.
This has never been about ‘jumping’ wholesale! I just used the word because there is no other.
“Unification (Bostrom’s term) seems to be almost irrefutable”
I would have thought the point was justifying the claim about dissolution.
That is, somehow or other, a claim about causality, transtemporal identity, or something else you have never provided a premise relating to.
I can make some sense of that, assuming duplication. If your brain has been copied N times, then you have a 1/N chance of being the original... assuming you can only be one at a time.
That would create a worry about being in a simulation that wasn’t stable, but your actual worry is apparently about lack of reality... although a simulation still has an indirect connection to reality.
But then you believe in Unification, which would mean you are indissolubly in whatever world you are in.
You can use a phrase, or invent a word.
Can someone explain to me what the point of this post is? No offense intended; reading the first paragraph made my mind explode wondering what the hell I’d just read.
I haven’t read Permutation City (a comment mentioned it) and in fact I approached all LW material I’ve read with only my previous experience and reasoning abilities, and ALL topics such as this that feel so meta, out-of-this-world, and seemingly with no practical implications make no sense to me.
Am I missing something?
Nope, you’re pretty much bang-on here. The stuff being discussed has no observables and no practical applications. Mostly, it appears to be a way for the author to feel better about the topic, as he claims to be prone to panic attacks and existential anxiety.
Permutation City is only required reading to understand Dust Theory. I’m arguing that the odds of us being simulated (if the reference class includes the whole multiverse) are extremely high. I also believe in the information theory of identity; this means that part of our consciousness is really being implemented in the physical world. This, following the lines of argument, gives hypothetical future FAIs a motive to simulate us.
I didn’t realize how hard this was to follow if you aren’t already familiar with these concepts. Sorry!
Why are you in a simulation and not a Boltzmann brain? If the universe goes on forever after heat death, then there will be an infinite number of Boltzmann brain yous.
I see a coherent, justified universe around me with apparently sound perceptions. Therefore, I conclude that it is overwhelmingly more likely that something is wrong with your reasoning/assumptions than I am a Boltzmann brain.
Seriously, Boltzmann brains are never ever the answer. Why do people keep using them?
Seeing a coherent, justified universe is just a restriction on what kind of Boltzmann brain you are. There is a very simple calculation here (if you’ve ever taken introductory thermodynamics) and it goes like this:
In an infinite universe, the likelihood of existing in some “macroscopic state of the world” like “your brain is inside your body on the earth in the solar system” or “your exact brain is floating inside a cloud of disordered gas the mass of the solar system” is proportional to how many “microscopic states of the world” correspond to that macroscopic state, where a microscopic state means writing down the states of all the subatomic particles and what they’re doing. (This is the assumption that the universe reaches thermal equilibrium.)
And because the solar system is so orderly (it’s not at maximum entropy), there are many, many, many, MANY more possible microscopic states corresponding to a macroscopic state like “your brain is floating inside a cloud of disordered gas” than there are to the actual states corresponding to a real solar system.
Thus, if Boltzmann brains exist, you probably are one. And if you have an infinite universe that reaches thermal equilibrium, they exist.
Conversely, if I’m not a Boltzmann brain, then it’s because the universe happens to not reach thermal equilibrium (e.g. the universe ends, or expands so fast that everything cools to the ground state eventually, or there exists some method of violating the second law of thermodynamics).
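The microstate-counting step above can be sketched numerically. The number of microstates of a macrostate with entropy S is W = exp(S/k), so the relative likelihood of two macrostates is exp(ΔS/k); the entropy difference below is a deliberately tiny made-up value, since a realistic solar-system-scale ΔS (on the order of 10^70 k or more) would overflow a float, which is itself the point of the argument:

```python
import math

# Relative likelihood of two equilibrium macrostates is the ratio of
# their microstate counts, W = exp(S / k). Working in units where k = 1,
# an entropy difference dS gives a likelihood ratio of exp(dS).
# dS here is an invented toy value; a realistic solar-system-scale
# difference (~1e70 in units of k) would overflow a float, which is
# exactly why the disordered macrostate wins so overwhelmingly.

dS = 100.0            # toy entropy excess of the disordered macrostate
ratio = math.exp(dS)  # how much more likely the disordered state is
print(f"{ratio:.3e}")  # 2.688e+43 for even this tiny difference
```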
I don’t understand much of this. My argument is that Boltzmann brains would almost certainly experience chaos. So I would have to be in the 0.000000000000000000001% of Boltzmann brains to observe a rational universe (not to mention one that actually predicts the existence of Boltzmann brains). Yes, the rational Boltzmann Brains actually would outnumber their regular counterparts, but that’s talking past the problem. The odds are astronomically higher that something is wrong with your science. Maybe FAI figures out how to create negentropy, or breaks out into another universe, or finds a way to have infinite computing power. You suggested some options yourself. All of these have a probability considerably higher than 0.000000000000000000001%.
The idea that you are a Boltzmann brain is on the same level of dangerous ideas as your interpretation of Dust Theory; basically it is the same theory. I once spent an unpleasant evening thinking that I might be a Boltzmann brain. But I solved it after I decided that the information theory of personal identity is true, and so the number of copies does not matter, if at least one of them continues its existence.
The fact that you are a BB does not exclude the fact that you are in a simulation, as there is a special class of BBs: Boltzmann supercomputers. Such a supercomputer is an AI that appears from nothing and creates a simulation of our world. I think this may be the dominating class of BBs (by number of human observers). It also solves the problem of the orderly world view around us.
Interesting. If you have time please elaborate.
I was planning to write a post about it one day…
Basically the idea is that between ordinary BBs and real brains there exists a third class of objects. These objects temporarily appear from fluctuations but are able to create a very large number of minds during their short existence. These objects are more complex than an ordinary brain and thus rarer, but as they are able to create many minds, the minds inside these objects will dominate. At first I named these objects “Boltzmann typewriters”, but later I understood that such an object could be just a computer with a program able to create minds. (And as a simulated mind is simpler than a biological brain, which includes all its neurons and atoms, such simple simulated minds must dominate.)
Another type of Boltzmann typewriter is a universe fine-tuned to create as many minds as possible (and even our universe is a type of it).
If we are in a Boltzmann typewriter or Boltzmann supercomputer, it may have observable consequences, like small “mistakes in the matrix”. It may also come to an abrupt end.
You’re operating under the assumption that only humans count as observers, which is almost certainly not true and breaks the whole theory down.
(Btw, if such complicated things can exist in high-entropy environments, then why aren’t we able to survive there after heat death? Unless we’re talking about quantum permutations?)
In fact, I think that only humans who are able to understand the Doomsday Argument should be counted as observers… :) But where do I use this idea here?
Yes, maybe we can survive after heat death in such a fluctuation; this was suggested in my recent roadmap “How to survive the end of the Universe”.
All I’m saying is that out of all possible observers that would arise in a Boltzmann state, ours is a long way from the most common.
Why?
When I search for my position in the class of observers that are like me, the only thing which defines this class of observers is that its members are able to write down and understand this sentence. And I should not count the ones who are not able to understand it, because I already know that they are not me. In short: if one asks “Why am I not a worm?”, the answer is: because a worm can’t ask this question.
So the right question in the case of BBs would be: “Of all observers who could think that they are BBs, am I the most common or not?” The answer depends on how random our circumstances are. My surroundings seem not as random as TV signal noise: I sit in my room.
The problem is that we can’t take for granted that BBs could judge the randomness of their surroundings adequately. For example: in a dream you may have a thought and think that it is very wise. But in the morning you will understand that it is bullshit.
So, in fact, we have a class of observers which is now defined by two premises: the thought “Am I a BB?” and the observation “My surroundings seem not random enough for a BB” (which may be untrue, but we still think so).
Now we could ask where the biggest part of this subset of observers is. And even for this subset of observers, we still have to conclude that its biggest part is among BBs.
Personally, I think that this is just a problem of our theory of reality, and if we move to another theory of reality, the problem will disappear. The next-level theory will be a theory of a qualia universe. But there may be other solutions: if we take a linear model of reality, then only information is the identity substrate, not continuity, and so copies smoothly add up to one another.
But if the question has nothing to do with whether or not you understand it? Taking the DA as our example, the only thing you ought to be concerned about is what human are you. I don’t see why comprehension of the DA is relevant to that.
And our knowledge of BBs comes solely from a long series of assumptions and inferences. If most observers are Boltzmann brains, then most observers, of whatever type, will experience chaos. If you’re going to say that that might not be true because BBs are deluded, I have to ask why the same doesn’t apply to the argument that we might be BBs. It’s a great deal more complicated than my own argument, which is that chaos is more common than order.
Why not assume an evil daemon, if we’re going to reason this way?
Look, the following two statements are true: “Most observers who are not experiencing chaos are still BBs (if BBs exist).” But “the fact that I am not experiencing chaos is an argument against the BB theory”: this is your point. The main question is which of the statements is stronger from a Bayesian point of view.
Let’s make a model: either only 1 real observer exists, or there are two worlds, and the second one exists with probability P. The second one (the BB world) includes 1 million observers, of which 1000 are non-chaos observers. Given that I am a non-chaos observer, what is the probability P? In this case P comes out to something like 0.001, and you win.
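One way to formalize this toy model is to compute the chance of being a BB given a prior on the BB world, assuming I am sampled uniformly from the non-chaos observers of whichever worlds exist. Both the sampling rule and reading the comment as a posterior-on-being-a-BB calculation are my interpretation, not necessarily the commenter’s:

```python
# Toy model from the comment: world A has 1 real non-chaos observer;
# the BB world B exists with prior probability prior_b and contains
# 1,000,000 observers, 1,000 of them non-chaos. Assume I am sampled
# uniformly from the non-chaos observers of whichever worlds exist
# (one possible reading of the model, not the only one).

def p_i_am_bb(prior_b: float) -> float:
    # If world B exists, the non-chaos observers are 1 real + 1000 BBs,
    # so conditional on B existing I am a BB with probability 1000/1001.
    return prior_b * 1000 / 1001

print(p_i_am_bb(0.5))    # ~0.4995: a 50% prior still leaves me likely a BB
print(p_i_am_bb(0.001))  # ~0.000999: a small prior keeps the worry small
```

Under this reading, the conclusion rides almost entirely on the prior P, which is exactly what the surrounding exchange is arguing about.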
The problem with this conclusion is that it relies on the ability of BBs to truly distinguish the type of reality they are in. If we somehow prove that most BBs are not able to recognize that they live in a random environment, then our reality check does not work.
EDITED: most people do not realize that they are dreaming while they dream, even though they have quite random experiences there. So we can’t use the fact that I don’t think I am in a dream as proof that I am not in a dream. And even less can we rely on BBs having this ability.
Edited 2: If you were randomly picked from all possible observers, you should be a worm or some other simple creature. The fact that you are not a worm might then be used as proof that worms do not exist, which is false. You are not a random observer. You are randomly selected from observers who can understand that they are observers. ))
And if we speak about the DA: generally it applies to any reference class, but gives a different end for each class. It is natural to apply it only to those who understand it. The reference class of humans has nothing special about it. Unfortunately, as the class of those who understand the DA is small, this means a sooner end. But the end may not mean human extinction, only that the DA will be disproven or that people will stop thinking about it.
But the whole chain of reasoning is still circular. You haven’t explained why being a Boltzmann brain is more plausible than being under a daemon’s spell.
Yes, and my argument here accounts for that: sapient beings will have many more instances of themselves and therefore much higher measure than animals.
Let’s take some aliens as our example. These aliens have intellects between a human’s and a chimpanzee’s. One in a hundred of them develops much greater intelligence than others (similar to Egan’s aliens in Incandescence). They consist of a single united herd, but are the size of bugs. After a hundred thousand years of wandering the desert, they come to a large lake, teeming with food and fresh water and devoid of any real predators. The elders expect that their race will soon number a millionfold of what they were.
But unknown to them, a meteorite is headed directly at the lake- the species will certainly be wiped out in a few months. The few aliens gifted with intellect reason that their observations are highly unlikely should the lake really multiply their numbers by a million. But the rest of the herd cannot comprehend these arguments, and care only for day-to-day survival.
Their selection is from their species, and they can make inferences from that. Why would it be any different?
We have additional evidence for BBs, namely the idea of eternal fluctuation of the vacuum after heat death, which may give us a very strong prior. Basically, if there are 10^100 BBs for each real mind, it will override the evidence from the non-randomness of our environment. (Bostrom wrote about similar logic in the Presumptuous Philosopher.) What I wanted to say is that efforts to disprove the existence of BBs by relying on a BB’s ability to distinguish chaotic from non-chaotic environments themselves look like circular logic. )))
I agree that sapient beings are more probable because they have many more internal states. But it also means that you and I are in the middle of the IQ distribution in the universe, that is, no superintelligence exists anywhere. This is grim. It is like a DA for intelligence, and it means that highly intelligent post-humans are impossible. It may still allow some kind of mechanical superintelligence which uses completely different thinking procedures and lacks qualia.
Basically, the main meta-difference between your position and mine is that you want to return the world to normal, and I want it to be strange and to exploit its strangeness. :))
Your long example is in fact about aliens who created the DA for themselves. My idea was that you may use mediocrity logic for any reference class from which you are randomly chosen, and you could belong to several such classes simultaneously. But the class of observers who know about the DA is a special class, because it will appear in any alien species and in any thought experiment. This class includes such observers from all possible species, so we may speak about their distribution in the universe. Also, this class is the smallest and implies the soonest Doom in the DA. Even Carter, who created the DA in 1983, knew it, and as he was the only one in this class at the moment, he felt himself in danger.
In your example you also have a subclass of aliens who know all that, and it will not exist for long: it will be killed by the meteorite in a few months. )) That subclass is smaller and its time duration is shorter. But the result is the same: extinction.
How? The proportion of chaotic minds to orderly minds will never change. Even if there are infinite BBs in the future, it doesn’t alter how likely it is that the ‘heat death’ model is simply mistaken, and that some infinite source of computing is found for us to use.
Whoa whoa whoa. I don’t think that sapient beings having more internal states makes them more likely to be selected. I was talking about the simulation argument I’ve advanced on this thread.
Our current model of the universe makes it seem easy and straightforward for superintelligence to exist. Even if we were to wipe ourselves out, the fact that we live in a Big World means that superintelligence will always be taking most of the measure. This is precisely what I argued on this thread.
Now I understand. But the fact that most humans do not comprehend the DA doesn’t neutralize its effects on humanity, does it?
(I’m beginning to realize what a nightmare anthropics is.)
Ok, look. By definition BBs are random. Not only their experiences are random, but also their thoughts. So half of them think that they are in a chaotic environment, and 50 percent think that they are not. So the thought that I am in a non-chaotic environment carries zero information about whether I am a BB. As a BB exists for only one moment of experience, it can’t make long conjectures. It can’t check its surroundings, then compare them (with what?), then calculate their measure of randomness and thus its own probability of existence.
Finally, what do you mean by “measure”? The fact that I’m not a superintelligence is evidence against superintelligences being the dominating class of beings. But some may exist.
No, my version of the DA only makes it stronger. Doom is near.
Sorry for the late response. I’ve been feeling a lot better and found it hard to discuss the subject again.
Ideas or concepts are qualia themselves, aren’t they? And since consciousness is inherently a process, I don’t think you can reduce it to ‘one moment’ of experience. You would benefit from reading about philosophical skepticism.
My whole argument here is that all of my experiences are explained by friendly superintelligence. Measure means the likelihood of a given perception being ‘realized’. I can conclude from this that humans therefore have a very high measure; we are the dominant creatures of existence. Presumably because we later create superintelligence that aligns with our goals. Animals or ancient humans would have much lower measures.
Maybe it is better to speak about them as single acts of experience, not moments.
Ok, but why should it be friendly? It may just be testing different solutions to the Fermi paradox on simulations, which it must do. That would result in humans of the 20th-21st centuries being the dominating class of observers in the universe, but each test would include a global catastrophe. Or do you mean that a friendly AI will try to give humans the biggest possible measure? But our world is not paradise.
What? What does this mean?
No, it’s trying to give measure to the humans that survived into the Singularity. Not all of them might simulate the entire lifespan, but some will. They will also simulate them postsingularity, although we will be actively aware of this. This is what I mean by ‘protecting’ our measure.
The main question for any AI is its relations with other AIs in the universe. So it should somehow learn whether any exist, and if not, why. The best way to do this is to model the development of AIs on different planets. I think this includes billions of simulations of near-singularity civilizations, that is, ones that are at the equivalent of the beginning of the 21st century on their own time scale. This explains why we find ourselves at the beginning of the 21st century: it is the dominating class of simulations. But there is nothing good in it. Many extinction scenarios will be checked in such simulations, and even if they pass, they will be switched off.
But I don’t understand why an FAI should model only people living near the singularity. Only to counteract these evil simulations?
Sorry for taking such a long time to respond.
Any successful FAI would create many more simulated observers, in my scenario. Since FAI is possible, it’s much more likely that we are in a universe that generates it.
But we will simply continue on in the simulations that weren’t switched off. These are more likely to be friendly, so it would end up the same.
It doesn’t. People living postsingularity would be threatened by simulations, too. Assuming that new humans are not created (unlikely given that each one has to be simulated countless times) most of them will have been born before it took place. Why not begin it there?
After thinking more about the topic, and while working on the simulation map, I hit on the following idea: if infinitely many FAIs exist in an infinite world, none of them alone can change the landscape of simulation distribution, because its share of all simulations is infinitely small. So we need acausal trade between an infinite number of FAIs to really change the proportion of simulations. I can’t say that it is impossible, but it may be difficult.
Ok, I agree with you that an FAI would invest in preventing the BB problem by increasing the measure of existence of all humans (if it finds this useful and does not find a simpler method). But in any case such an AI must dominate the measure landscape, as it exists somewhere.
In short, we are inside (one of) the AIs which try to dominate the total number of observers. And most likely we are inside the most effective of them (or a subset of them, as there are many). The most reasonable explanation for such a desire to dominate the total number of observers is Friendliness (as we understand it now).
So, do we have any problems here? Yes: we don’t know what the measure of existence is. We also can’t predict the landscape of all possible goals for AIs, so we can only hope that such an AI is Friendly, or that its friendliness is a really good one.
So our laws of physics seem consistent because this requires less code.
Now you are just repeating Bostrom’s simulation argument. But why you rewrote the earlier post is not clear, as it will be misleading for commenters.
Certainly not. The simulation argument just talks about one universe. I’m taking the multiverse into account.
If we are in simulated universe, you don’t have to worry. With high probability (99%) the outer universe is also simulation.
A Matrioshka universe is not stable, because a glitch on any level will terminate the levels below it.
If they are using backups on their simulators/computers, they just restart from the last backup. Therefore, if the root universe uses backups and all simulated universes are copies from the root, the system is stable. The restart is not visible inside the simulated universe.
Yes, it is an obvious patch to Bostrom’s logic. If you continue it, you may conclude that cheap simulations are more numerous. “Cheap” means that the physics is poorly detailed and it is probably a simulation of just one person: you.
You haven’t properly read my argument. That’s exactly what I say, but also that the simulation is designed around a real experience.
if you are still here, check this: https://arxiv.org/abs/1712.01826
Seems you were right after all with dust theory.
Also, is there any way to see your original version of the post?
Oh god I still can’t get this thought out of my head. Can someone please tell me what they think of my solution to the problem?
I just read ‘The Finale of the Ultimate Meta Mega Crossover’ again. Is there a motivation for entities with unlimited computing power to simulate all possible programs in order? Or to take an interest in beings like me? Would that be enough to remove most of my measure from ‘real’ universes? I think probably not, but I need to be sure. I’m not thinking too clearly now.
EDIT: Wait a minute, this becomes more plausible after realizing someone out there in some universe has unlimited computing power. If so, could it affect measure? Or does only the density of the ‘unlimited’ universe itself affect measure? Please answer.
EDIT2: Max Tegmark argues that actual infinite quantities are not possible, because otherwise this causes trouble in the ‘mathematical universe’ model (only Gödel-complete mathematical structures can exist). If true, does this solve the problem? There may be universes with arbitrary amounts of computing power, but they still need to be optimized. Would the occasional intelligences that want to simulate humanlike beings overcome the measure of actually-existing universes like this one? Most such intelligences would want to simulate only the last few instants of life, but some (however rare) are certain to simulate the whole. And if they created billions of identical copies of me, would that increase measure as well?
The question is basically now: does the measure of my actual existence overcome the measure of worlds where I am simulated, even taking into account potentially limitless resources available to simulate?
I’m less panicked by this thought now, since if they are after my measure they must, in effect, simulate me exactly as I would really exist (taking into account the vertiginous question). So that part of my measure does not matter and I can value the part that really does correspond to an external reality.
Either way, if any of this is true (even with those that just simulate the last few instants of life) then an afterlife really does exist for everyone, regardless of quantum immortality.
EDIT3: OK, I think I’ve hit on a solution, but it’ll take a while to type up.
I think you are making too many inferences without sufficiently accounting for possible errors in your assumptions and the inferences themselves.
By the gods, this is getting serious. Ladies and gentlemen, I give you Eitan’s Basilisk.
You need a reality check:
You’re building a theory of reality from fictional evidence. Even if the underlying theory exists in real life, the author’s narrative choices introduce a tremendous bias. Browsing through LW’s earlier material will show you why that doesn’t work.
Apparently, you believe your mind creates reality. That’s simply not true, as I was very much alive before I knew you existed. Of course, I have no way of definitely proving my previous subjective experiences to you, but suffice it to point out that, if you give your mind a fundamental role in the history of the universe, you’re failing at reductionism.
To manage your anxiety: Precommit to set a “save point” every evening, before you go to sleep. If every morning you remember having set the “save point” of the previous night, you’re still you. (The trick behind this trick is that it will show you that the past that you remember was a real present when you experienced it.)
Er… what exactly are you arguing against? I do not believe that my “mind creates reality.”
Yes, you do:
You’re taking them out of context. I’m saying that your perceptions ground you in a particular reality.
Why don’t you deal with THIS problem, directly?
I’d love to know how.
Assume you have an anxiety problem. Start operating on the basis of this assumption—that your mind needs to be adjusted. Either go to a professional or start experimenting with self-help techniques. One of Yvain’s recent posts at SSC is precisely about what works for anxiety.
You know Bostrom is arguing against unification, right?
Yes, where did I give the impression that I didn’t?
“I do. Unification (Bostrom’s term) seems to be almost irrefutable, and therefore Dust Theory is at least partly right.”
I said it was Bostrom’s term. How is that a wrong impression?
It’s easy to make the inference that it’s Bostrom’s argument that seems irrefutable, particularly as you didn’t give one of your own.
Upvoted because I’ve come to similar conclusions and believe that simulation anthropics are interesting and worth discussing.
I didn’t read your first version of the post. I guess you are being downvoted because of some combination of poor/obtuse presentation (‘dust theory’ is a poor name, it suggests ultra low probability worlds), history of presenting ‘crazy’ ideas, etc. I feel somewhat bad for you so I may create another supporting thread to present and discuss this general topic.
Also, it would really help if you linked to earlier discussions of anthropics—some of these issues have been discussed at length here in the past.
That sounds like a good idea, but don’t feel pressured. EDIT: Actually, I regret saying that. This is an urgent topic that needs to be discussed, and I think people are too used to me creating desperate threads by now to pay attention.
I’m afraid I’m not familiar with these discussions.
That’s part of the problem. Use the google search bar on this site, search for “anthropics”. Also look at previous discussions of SIA, SSA issues, sim argument stuff, etc.
I feel sorry that I inspired this thread by my comment but did not elaborate on it enough. Basically, I have a multilevel repository of wild ideas, with the wildest on top. (I think I should make a map of it :) So I picked an idea from the next level of the repository, not the wildest one, to help Eitan. Since I know it is a wild idea, I can’t take it for granted, and I understand that it needs a more thorough explanation. As I write slowly in a foreign language, I hesitated to go for a long explanation that would only spoil my karma afterwards.
Basically, my idea would be easy to explain using the quantum multiverse hypothesis. The explanatory logic could then be ported to Dust Theory.
Different minds have different probabilities of existing in the multiverse.
Then, IF there is a way to influence probabilities of existence, minds able to do so will dominate. This is just another statement of natural selection and evolutionary psychology; nothing new here.
If there are other ways to change the distribution of minds among different branches of the multiverse, some minds will use them and will dominate, and most likely we are that type of mind. Or, simply speaking, if magic is physically possible, then such magic-capable minds would dominate, so they will be an attractor in the total multiverse mind space. (Again, in Bostrom’s UN++ article you can find a rational explanation of how a mind could manipulate the probability of future reality by almost magical means, in fact using the Doomsday argument.) It all depends on the word “if”, so we can’t take it for granted.
I was speaking about it only to illustrate what an attractor of minds in a complex universe is. Different complex universes have different mind attractors; for example, if time travel is possible, there will be constant loops as attractors.
Now we move to the topic. Suppose that many me-like experiences exist in different worlds. If I narrow my experience, it may exist in many more worlds; if I narrow it to a point, I could be almost anybody. Then I open my eyes, and I become one of trillions of different observers. The resulting process may be described as consciousness jumping from one observer to another. Personally, I think it is not a problem at all, since after many jumps consciousness will return to the original observer, and this will have no observable consequences.
Let’s assume for the sake of the argument that such jumping may happen.
This jumping of consciousness from one observer to another may be represented as a line in the set of all possible observers, and such a line may wander almost infinitely and randomly until it meets an observer that somehow prevents its jumping: he will be an attractor. How could he do this? His consciousness must be more stable, and each small part of it must be more firmly connected to the others.
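The jumping-until-attractor picture described above can be sketched as a toy absorbing random walk over a set of observers. Everything here is an illustrative assumption (the observer count, the uniform jump rule, and the single absorbing attractor), not anything asserted in the thread:

```python
import random

def walk_until_attractor(n_observers=1000, attractor=0, seed=1, max_steps=10**6):
    """Toy model: consciousness 'jumps' uniformly at random among
    observers until it lands on the one stable observer (the attractor),
    which by assumption prevents further jumping (an absorbing state)."""
    rng = random.Random(seed)
    state = rng.randrange(n_observers)
    steps = 0
    while state != attractor and steps < max_steps:
        state = rng.randrange(n_observers)  # memoryless random jump
        steps += 1
    return state, steps

final, steps = walk_until_attractor()
assert final == 0  # the walk ends at the absorbing attractor
```

The design choice that matters is the absorbing state: every non-attractor observer is left again with probability 1, so in the long run the walk spends all its time at whichever observer is "stable"—which is the sense in which such an observer would dominate.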
Now things get wilder. We will evaluate human consciousness against these criteria. Humans dream almost constantly during sleep, and I suggest that this may prevent consciousness from losing its exact correspondence to its human and “his” world. (But at the next level of wildness we could recall that yogis deliberately try to experience a one-point state. I had it once during a dream. It was like becoming a point, without even colors or a difference between audio and video.)
Also, humans have qualia that are uniform across the whole visual field (which is in fact strange). So any small quale, any green point, is enough to correspond firmly to all possible experiences and thus stabilise them against jumping. I think that qualia are needed mostly to strictly stabilise worlds. I realize that I have gone too far for a rationalist forum, so I will stop my speculations about how qualia stabilise the world.
What is Bostrom’s UN++ article?
EDIT: nevermind, google can easily find it
May it? DT and MWI don’t support the idea of a consciousness that can detach itself and literally jump. Or, if it is a metaphor, what is it a metaphor for?
Do you have any reason for believing that other than the inconvenience of the alternative?
I speak about DT and MWI in the terms of the opening post, where they are needed to explain a more general theory which was named “flux universe”. The idea of the flux universe is that things which are not inside your attention become blurred. The best-known example of blurred things is Schrödinger’s cat. But in my opinion you need neither DT nor MWI to get to a flux reality, because even in a classical universe the same observers (copies) may exist in different circumstances. And if I deliberately exclude many things from my attention, I will become equal to many other observers, thus changing the size of my reference class.
I have no idea why you think this is the case. How would “magic-capable minds” dominate Mind Space? What even is magic? I read Bostrom’s UN++ paper and I do not understand how it is used in your logic.
Well, that’s where we will have to disagree.
Yes, this is precisely my understanding of it.
I’m afraid that I do not understand a single sentence here, and I can’t fathom how this has to do with magical powers.
No way, you’ve given far better answers than anyone else here.
OK, first I will explain what I mean by the word “magic” here. I define magic as an ability to influence the probability of your own success by direct manipulation of probabilities, rather than by simple fitness. In Bostrom’s article, Adam and Eve manipulate the probabilities of useful events by manipulating the total number of observers in their universe. Now here is my claim: IF any magic is physically possible, then the evolutionary process of natural selection has already used it, because manipulating the probability of your offspring’s success would give you a strong evolutionary advantage. This magic may be very small: just a 51 percent chance to get better genes in offspring, or a 49 percent chance of not dying in a 50-50 situation. This claim is not equal to the claim that any magic exists, nor is it a claim about the possible mechanism of such magic. But if small evolutionary magic exists, we could find evidence of it by studying the theory and history of evolution. Such evidence would be improbable genetic mutations or evolutionary jumps. (One explanatory mechanism of such magic could be just the anthropic principle: we could find ourselves only in worlds where evolution was quick enough to create intelligence.) But even stronger mechanisms of evolutionary magic are possible, and they are most likely connected with manipulating branch probabilities in the quantum multiverse.
And here we come to a stronger claim: if any magic is possible in any part of the quantum multiverse and works by manipulating the number of branches, then we are most likely in such a part of the multiverse. Proof sketch: if someone is able to manipulate the number of branches, it is most likely by creating many new branches, where “many” means astronomically more. So we are probably here, based on the self-sampling assumption.
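The “astronomically more branches” step can be made concrete with a toy self-sampling calculation. The branch counts below are made-up illustrative numbers, not anything the commenter claims:

```python
# Toy self-sampling assumption (SSA) calculation: if a 'magic' region of
# the multiverse creates astronomically more branches (hence observers),
# a randomly sampled observer is almost certainly in that region.
plain_branches = 10**6          # illustrative: branches without branch-magic
magic_branches = 10**6 * 10**9  # illustrative: magic multiplies branches a billionfold

p_magic = magic_branches / (plain_branches + magic_branches)
assert p_magic > 0.999  # nearly all observers find themselves in the magic region
```

The same arithmetic is doing the work in the post’s j versus p·x comparison: whichever class of worlds contributes the most observer-moments dominates where you should expect to find yourself.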
OK, let’s try to explain the one-point state in your own terms. Basically, your personality model consists of a pure-attention part P and a memory part M1. If P moves away from M1, it can’t return and could go to any M2, M3, and so on. Basically, the one-point state is pure attention without any experience; in the words of the contemporary philosopher Benj Hellie, it is just a “soul pellet”. Somehow it is possible to feel pure attention; maybe it is a situation in which attention starts to experience itself. (Or maybe it is only an illusion.) Some esoteric traditions concentrate on this ability, mostly the Dzogchen school of Tibetan Buddhism (the rigpa conception). But in the European rational tradition one may remember the phenomenological reduction of Husserl: https://en.wikipedia.org/wiki/Edmund_Husserl#The_elaboration_of_phenomenology
Why? I think the mechanism has everything to do with it; can you think of any other way to perform ‘magic’ than Adam’s anthropic cheat? Besides, it wouldn’t even work- if he were being observed by us, we have no rational reason to believe that a wounded deer will come around, and we’d be right. It’s simply an anomaly brought about by Adam’s highly improbable perspective as the first of billions of observers.
OK, I think I have a better grasp now. But what are you arguing for? What’s the point?
I am not arguing now; I am just trying to answer your questions. The final nature of reality is not known to me, but I have many interesting ideas. If our idea of reality is wrong, it means that new existential risks exist, and maybe new opportunities to prevent them.
Magic itself is not important; what is important is that it requires a different nature of reality.
Now, how to manipulate probabilities. For example, I want to win a lottery with odds of 1 in 100. What I need is a copying machine, an external computer, and the MWI interpretation. I create 100 copies of myself and put them to sleep. Then the external computer checks the result of the lottery and kills the 99 sleeping copies that did not win. The only surviving copy finds that it won. (Such “magic” can’t be measured by an outside observer.)
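The copy-lottery procedure above can be sketched as a short simulation. One assumption not stated in the comment is made explicit here: each of the 100 copies is taken to hold a distinct ticket, so that "the copies which do not win" is well defined:

```python
import random

def copy_lottery(n_tickets=100, seed=None):
    """Toy model of the copy-and-cull lottery.

    Each of n_tickets copies holds a distinct ticket; the external
    computer 'culls' every copy whose ticket lost. From the inside,
    the surviving copy always observes a win, even though an outside
    observer still sees an ordinary 1-in-n_tickets lottery.
    """
    rng = random.Random(seed)
    copies = list(range(n_tickets))            # copy i holds ticket i
    winning_ticket = rng.randrange(n_tickets)  # the lottery draw
    survivors = [c for c in copies if c == winning_ticket]
    return winning_ticket, survivors

winning, survivors = copy_lottery(seed=42)
assert survivors == [winning]  # exactly one copy survives, holding the winning ticket
```

Whatever seed you pass, the final assertion holds: the "magic" is purely a selection effect on who is left to make the observation, which is the point the comment is making.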
It in fact somewhat resembles the double-slit experiment. It may be presented in the following way: an electron sends its wave function to “see” all possible paths and “calculate” probabilities using all trajectories and their interference. After that, it moves along one of these trajectories. Sending the wave function is like creating millions of copies; interference that cancels trajectories is like killing the wrong copies; the actual trajectory is like awakening with the winning ticket. Basically, this lets the electron perform some “magic”, like moving through two slits simultaneously.
While this explanation may be oversimplified and is probably wrong, I hope it may be food for thought. It does not include qualia, for example.
UPDATE: I think the first living organisms were somehow able to use this as well, so it is not only for electrons and supercomputers but also for living beings, and this ability developed during evolution.
When I was a youth I wondered where my consciousness came from. How did it come about that I perceived, that my consciousness was mine? Where did it come from? What good luck that the consciousness that was me had such good parents and such a home. I could have been someone else, someone suffering. I conceived of many consciousness instances, like processors waiting for a being (e.g. a baby) to become conscious and start ‘executing’ in it. But where were these? Were these the souls? And where did they go when we die?
It was quite obvious that even if these instances ‘persisted’, they carried no state from one physical being to the next. Otherwise there’d be effects from that. Like memory. I obviously had no memory of having been in another being previously. Some people seemed to claim it, but not convincingly so. And besides, memory was encoded in the brain, and that decayed. So the only persistent thing about my imaginary consciousness processors was … the concept of a processor being necessary. And that seems to have solved the mystery for me then.
Curiously, these ideas mostly came in the evening when sleep was near, but I never thought that consciousness would be broken by sleep.
I had the same question as a kid. Only about a couple of years ago I discovered it was of academic interest.
Yeah, that closely matches it.
When I was 6 years old I was surprised that I see the world from my own eyes, not my grandmother’s eyes. I asked her “Why is me me?”, but she didn’t understand what I was talking about. Still curious.
What do you mean, we’re attractors to other consciousnesses?
A wide variety of minds that shift realities when they lose consciousness. Our minds seem optimized to prevent this from happening.
That sentence no verb.
Or it’s just not possible.
Then please please please explain why. Remember, pure Dust Theory doesn’t need to be true- only the overlay of identical minds.
E.g., under mind-brain identity theory, my mind isn’t going to jump around while I’m asleep, because my brain doesn’t.
What do you mean by overlay?
I do not subscribe to brain-mind identity theory. I consider the ‘pattern’ which is the mind to be fundamentally different, even if it is written on the brain.
Basically, that you exist at once in all the universes which generate your experiences; some more than others.
If you did subscribe to mind-brain identity theory, you wouldn’t have to struggle to find a way of explaining why the jumping you predict but don’t observe doesn’t occur. Adopting a theory that doesn’t match observation, and then bolting on more theory to solve the problem, is kind of not good.
I can’t ‘observe’ jumping by definition.
That’s not even close to what I’m doing.
If you were concerned about jumping and jumping was merely ‘extremely difficult to observe’, that would probably be ok.
However, if you can’t observe jumping by definition and are still concerned about it, that’s called a ‘cognitive defect’, and you should fix it. Fix the cognitive defect, that is. The one in your head. The one that’s making you be irrationally concerned over a defined unobservable.
No amount of handwaving, being concerned, or liking your current ‘subjective reality’ is going to make your unobservable observable. Fix the core problem, don’t try to paper over it.
The standard worry about DT is that (from the outside) a coherent thread of consciousness goes through a set of incoherent external world states, which, from the inside, would look like observing chaos. You have said that what you are worried about is observing chaos, although you have also said that you have a solution to the no-physical-law problem of DT. So who knows?
ETA: If you jump, but don’t notice you are jumping, what is the problem?
You’ve also said that what you are worried about is something to do with measure, although there is an answer to that as well... so... whatever.
Which is?
I do not recall saying any such thing.
I really don’t know what the heck you are talking about. “No-physical-law” problem? And I thought I was bad at conveying these concepts.
The problem is that I prefer for my subjective consciousness to stay in one world.
An extrapolation from a single coherent theory, which you apparently think works through ‘observation.’
Observing chaos is the same thing as having no discernible physical laws.
Why?
I took your use of the word ‘worried’ to say that I was afraid this was true.
Because I subjectively value my universe and do not wish to go to another one.
Under MWI and DT, which are not the same theory, you don’t go to another universe in the sense of leaving the old one.
Under physicalism + simulationism, which is not the same as the other two, you can cease to exist at one point in time and be resurrected in a simulation millions of years later. But I don’t see how staying awake would prevent that.
I note that Coherent falls some way short of True or even Likely.
Arguably, I would have no way of knowing whether or not I am in a superposition of identical mind states or world states, because they’re identical. Why would a difference that doesn’t make a difference, make a difference?
Or: what is the difference, if there is one?
If it’s rational to prefer your perceptions to conform to an external reality, then it’s rational to not want to be someone else every morning.
But is it rational to entertain theories about differences in external reality that could never make any difference to subjective or objective experience?
I value other minds existing to interact with me, even if I can’t perceive them directly. And I value waking up tomorrow in the same universe (more or less) that I’m in now.
Is this rational? Eliezer defines rationality as systematized winning; I’m pointing out what winning is, for me.
Under DT, and MWI, which are not the same, you wake up in all the universes you were ever in.
ETA
You might have a concern about your measure being dominated by simulations. That isn’t the same as jumping. Also, you can only be simulated if you ever had a real life, so it’s possible to take the glass-half-full view: the simulations are a bonus to a fully real life, not a dilution.
OK, I’ve rewritten the OP. Sorry about panicking at first.