Theists are wrong; is theism?
Many folk here on LW take the simulation argument (in its more general forms) seriously. Many others take Singularitarianism[1] seriously. Still others take Tegmark cosmology (and related big universe hypotheses) seriously. But then I see them proceed to self-describe as atheist (instead of omnitheist, theist, deist, having a predictive distribution over states of religious belief, et cetera), and many tend to be overtly dismissive of theism. Is this signalling cultural affiliation, an attempt to communicate a point estimate, or what?
I am especially confused that the theism/atheism debate is considered a closed question on Less Wrong. Eliezer’s reformulations of the Problem of Evil in terms of Fun Theory provided a fresh look at theodicy, but I do not find those arguments conclusive. A look at Luke Muehlhauser’s blog surprised me; the arguments against theism are just not nearly as convincing as I’d been brought up to believe[2], nor nearly convincing enough to cause what I saw as massive overconfidence on the part of most atheists, aspiring rationalists or no.
It may be that theism is in the class of hypotheses that we have yet to develop a strong enough practice of rationality to handle, even if the hypothesis has non-negligible probability given our best understanding of the evidence. We are becoming adept at wielding Occam’s razor, but it may be that we are still too foolhardy to wield Solomonoff’s lightsaber (or Tegmark’s Black Blade of Disaster) without chopping off our own arm. The literature on cognitive biases gives us every reason to believe we are poorly equipped to reason about infinite cosmology, decision theory, the motives of superintelligences, or our place in the universe.
Due to these considerations, it is unclear whether we should go ahead doing the equivalent of philosoraptorizing amidst these poorly asked questions so far outside the realm of science. This is not the sort of domain where one should tread if one is feeling insecure in one’s sanity, and it is possible that no one should tread here. Human philosophers are probably not as good at philosophy as hypothetical Friendly AI philosophers (though we’ve seen in the cases of decision theory and utility functions that not everything can be left for the AI to solve). I don’t want to stress your epistemology too much, since it’s not like your immortal soul[3] matters very much. Does it?
Added: By theism I do not mean the hypothesis that Jehovah created the universe. (Well, mostly.) I am talking about the possibility of agenty processes in general creating this universe, as opposed to impersonal math-like processes like cosmological natural selection.
Added: The answer to the question raised by the post is “Yes, theism is wrong, and we don’t have good words for the thing that looks a lot like theism but has less unfortunate connotations, but we do know that calling it theism would be stupid.” As to whether this universe gets most of its reality fluid from agenty creators… perhaps we will come back to that argument on a day with less distracting terminology on the table.
[1] Of either the ‘AI-go-FOOM’ or ‘someday we’ll be able to do lots of brain emulations’ variety.
[2] I was never a theist, and only recently began to question some old assumptions about the likelihood of various Creators. This perhaps either lends credibility to my interest, or lends credibility to the idea that I’m insane.
[3] Or the set of things that would have been translated to Archimedes by the Chronophone as the equivalent of an immortal soul (id est, whatever concept ends up being actually significant).
“Gods are ontologically distinct from creatures, or they’re not worth the paper they’re written on.”—Damien Broderick
If you believe in a Matrix or in the Simulation Hypothesis, you believe in powerful aliens, not deities. Next!
There’s also no hint of worship, which everyone else on the planet thinks is a key part of the definition of a religion; if you believe that Cthulhu exists but not Jehovah, and you hate and fear Cthulhu and don’t engage in any Elder Rituals, you may be superstitious but you’re not yet religious.
This is mere distortion of both the common informal use and advanced formal definitions of the word “atheism”, which is not only unhelpful but such a common religious tactic that you should not be surprised to be downvoted.
Also http://www.smbc-comics.com/index.php?db=comics&id=1817
A Simulator would be ontologically distinct from creatures like us—for any definition of ontologically distinct I can imagine wanting to use. The Simulation Hypothesis is a metaphysical hypothesis in the most literal sense: it’s a hypothesis about what our physical universe really is, beyond the wave function.
Yeah, Will’s theism in this post isn’t the theism of believers, priests or academic theologians. And with certain audiences confusion would likely result, so this language should be avoided with those audiences. But I think we’re somewhat more sophisticated than that, and if there are reasons to use theistic vocabulary then I don’t see why we shouldn’t. I’m assuming Will has these reasons, of course.
Keep in mind, the divine hasn’t always been supernatural. Greek gods were part of natural explanations of phenomena, Aristotle’s god was just there to provide a causal stopping place, Hobbes’s god was physical, etc. We don’t have to kowtow to the usage of present religious authorities. God has always been a flexible word; there is no particular reason to take modern science to be falsifying God instead of telling us what a god, if one exists, must be like.
I feel like we lose out on interesting discussions here where someone says something that pattern matches to something an evangelical apologist might say. It’s like we’re all of a sudden worried about losing a debate with a Christian instead of entertaining and discussing interesting ideas. We’re among friends here, we don’t need to worry about how we frame a discussion so much.
I wish this viewpoint were more common, but judging from the OP’s score, it is still in the minority.
I just picked up Sam Harris’s latest book—The Moral Landscape, which is all about the idea that it is high time science invaded religion’s turf and claimed objective morality as a scientific inquiry.
Perhaps the time has also come when science reclaims theism and the related set of questions and cosmologies. The future (or perhaps even the present) is rather clearly a place where there are super-powerful beings that create beings like us and generally have total control over their created realities. It’s time we discussed this rationally.
Sam Harris is misguided at best in the major conclusions he draws about objective morality. See this blog post by Sean Carroll, which links to his previous posts on the subject.
My views on “reclaiming” theism are summed up by ata’s previous comment:
Have you read Less Wrong’s metaethics sequence? It and The Moral Landscape reach pretty much the same conclusions, except about the true nature of terminal values, which is a major conclusion, but only one among many.
Sean Carroll, on the other hand, gets absolutely everything wrong.
Given that the full title of the book is “The Moral Landscape: How Science Can Determine Human Values,” I think that conclusion is the major one, and certainly the controversial one. “Science can help us judge things that involve facts” and similar ideas aren’t really news to anyone who understands science. Values aren’t a certain kind of fact.
I don’t see where Sean’s conclusions are functionally different from those in the metaethics sequence. They’re presented in a much less philosophically rigorous form, because Sean is a physicist, not a philosopher (and so am I). For example, this statement of Sean’s:
and this one of Eliezer’s:
seem to express the same sentiment, to me.
If you really object to Sean’s writing, take a look at Russell Blackford’s review of the book. (He is a philosopher, and a transhumanist one at that.)
To be accurate Harris should have inserted the word “Instrumental” before “Values” in his book’s title, and left out the paragraphs where he argues that the well-being of conscious minds is the basis of morality for reasons other than that the well-being of conscious minds is the basis of morality. There would still be at least two thirds of the book left, and there would still be a huge number of people who would find it controversial, and I’m not just talking about religious fundamentalists.
The difference is huge. Eliezer and I do believe that our ‘convictions’ have the same status as objective laws of nature (although we assign lower probability to some of them, obviously).
I wouldn’t limit “people who don’t understand science” to “religious fundamentalists,” so I don’t think we really disagree. A huge number of people find evolution to be controversial, too, but I wouldn’t give much credence to that “controversy” in a serious discussion.
The quantum numbers which an electron possesses are the same whether you’re a human or a Pebblesorter. There’s an objectively right answer, and therefore objectively wrong answers. Convictions/terminal values cannot be compared in that way.
I understand what Eliezer means when he says:
but he later says
That’s what the difference is, to me. An electron would have its quantum numbers whether or not humanity existed to discover them. 2 + 2 = 4 is true whether or not humanity is around to think it. Terminal values are higher level, less fundamental in terms of nature, because humanity (or other intelligent life) has to exist in order for them to exist. We can find what’s morally right based on terminal values, but we can’t find terminal values that are objectively right in that they exist whether or not we do.
Careful. The quantum numbers are no more than a basis for describing an electron. I can describe a stick as spanning a distance 3 meters wide and 4 long, while a pebblesorter describes it as being 5 meters long and 0 wide, and we can both be right. The same thing can happen when describing a quantum object.
Yes, I should have been more careful with my language. Thanks for pointing it out. Edited.
Okay, then let me make my claim stronger: a huge number of people who understand science would find the truncated version of TML described above controversial, namely a big fraction of the people who usually call themselves moral nihilists or moral relativists.
I’m saying that there is an objectively right answer, that terminal values can be compared (in a way that is tautological in this case, but that is fundamentally the only way we can determine the truth of anything). See this comment.
Do you believe it is true that “For every natural number x, x = x”? Yes? Why do you believe that? Well, you believe it because for every natural number x, x = x. How do you compare this axiom to “For every natural number x, x != x”?
Anyway, at least one of us is misunderstanding the metaethics sequence, so this exchange is rather pointless unless we want to get into a really complex conversation about a sequence of posts that has to total at least 100,000 words, and I don’t want to. Sorry.
In quick approximation, what was this conclusion?
That terminal values are like axioms, not like theorems. That is, they’re the things without which you cannot actually ask the question, “Is this true?”
You can say or write the words “Is”, “this”, and “true” without having axioms related to that question somewhere in your mind, of course, but you can’t mean anything coherent by the sentence. Someone who asks, “Why terminal value A rather than terminal value B?” and expects (or gives) an answer other than “Because of terminal value A, obviously!”* is confused.
*That’s assuming that A really is a terminal value of the person’s moral system. It could be an instrumental value; people have been known to hold false beliefs about their own minds.
I just started reading it and picked it really because I needed something for the train in a hurry. In part I read the likes of Harris just to get a better understanding of what makes a popular book. As far as I’ve read into Harris’s thesis about objective morality, I see it as rather hopeless; depending ultimately on the notion of a timeless universal human brain architecture which is mythical even today, posthuman future aside.
Carroll’s point at the end about attempting to find the ‘objective truth’ about what is the best flavor of ice cream echoes my thoughts so far on the “Moral Landscape”.
The interesting part wasn’t his theory, it was the idea that the entire belief space currently held by religion is now up for grabs.
In regards to ata’s previous comment, I don’t agree at all.
Theism is not some single atomic belief. It is an entire region in belief space. You can pull out many of the sub-beliefs and reduce them to atomic binary questions which slice idea-space, such as:
Was this observable universe created by a superintelligence?
Those in the science camp used to be pretty sure the answer to that was no, but it turns out they may very well be wrong, and the theists may have guessed correctly all along (Simulation Argument).
Did superintelligences intervene in earth’s history? How do they view us from a moral/ethical standpoint? And so on . . .
These questions all have definitive answers, and with enough intelligence/knowledge/computation they are all probably answerable.
You can say “theism/God” were silly mistakes, but how do you rationalize that when we now know that true godlike entities are the likely evolutionary outcome of technological civilizations and common throughout the multiverse?
I try not to rationalize.
I don’t think we should reward correct guesses that were made for the wrong reasons (and are only correct by certain stretches of vocabulary). Talking about superintelligences is more precise and avoids vast planes of ambiguity and negative connotations, so why not just do that?
I don’t think it is any stretch of vocabulary to use the word ‘god’ to describe future superintelligences.
If the belief is correct, it can’t also be a silly mistake.
The entire idea that one must choose words carefully to avoid ‘vast planes of ambiguity and negative connotations’ is at the heart of the ‘theism as taboo’ problem.
The SA so far stands to show that the central belief of broad theism is basically correct. Let’s not split hairs on that and just admit it. If that is true however then an entire set of associated and dependent beliefs may also be correct, and a massive probability update is in order.
Avoiding the ‘negative connotations’ suggests to me a flawed process of consciously or subconsciously distancing oneself from any mental interpretation of the Singularity and the SA that resembles theistic beliefs.
I suspect most people tend to do this because of belief inertia, the true difficulty of updating, and social signaling issues arising from being associated with a category of people who believe in the wrong versions of a right idea for insufficient reasons.
“The universe was created by an intelligence” is the central belief of deism, not theism. Whether or not the intelligence would interact with the universe, for what reasons, and to what ends, are open questions.
Also, at this point I’m more inclined to accept Tegmark’s mathematical universe description than the simulation argument.
That seems oxymoronic to me.
There are superficial similarities between the simulation argument and theism, but, for example, the idea of worship/deference in the latter is a major element that the former lacks. The important question is: will using theistic terminology help with clarity and understanding for the simulation argument? The answer does not appear to be yes.
You’re right, I completely agree with the above in terms of the theism/deism distinction. The SA supports deism while allowing for theism but leaving it as an open question. My term “broad theism” meant to include theism & deism. Perhaps that category already has a term, not quite sure.
I find the SA has much stronger support—Tegmark requires the additional belief that other physical universes exist for which we can never possibly find evidence for or against.
Some fraction of simulations probably have creators who desire some form of worship/deference, the SA turns this into a question of frequency or probability. I of course expect that worship-desiring creators are highly unlikely. Regardless, worship is not a defining characteristic of theism.
I see it as the other way around. The SA gives us a reasonable structure within which to (re)-evaluate theism.
How could we find evidence of the universe simulating our own, if we are in a simulation? They’re both logical arguments, not empirical ones.
I really don’t see what is so desirable about theism that we ought to define it to line up near-perfectly with the simulation argument in order to use it and related terminology. Any rhetorical scaffolding for dealing with Creators that theists have built up over the centuries is dripping with the negative connotations I referenced earlier. What net advantage do we gain by using it?
If, say, in 2080 we have created a number of high-fidelity historical recreations of 2010, populated by billions of sentient virtual humans and nearly indistinguishable (from their perspective) from our original 2010, then much of the uncertainty in the argument is eliminated.
(some uncertainty always remains, of course)
The other distinct possibility is that our simulation reaches some endpoint and possible re-integration, at which point it would be obvious.
tl;dr—If you’re going to equate morality with taste, understand that when we measure either of the two, taking agents into account is a huge factor we can’t leave out
I’ll be upfront about having not read Sam Harris’ book yet, though I did read the blog review to get a general idea. Nonetheless, I take issue with the following point:
I’ve found that an objective truth about the best flavor of ice cream can be found if one figures out which disguised query they’re after. (Am I looking for “If I had to guess, what would random person z’s favorite flavor of ice cream be, with no other information?” or am I looking for something else).
This attempt at making morality too subjective to measure by relating it to taste has always bothered me because people always ignore a main factor here: agents should be part of our computation. When I want to know what flavor of ice cream is best, I take into account people’s preferences. If I want to know what would be the most moral action, I need to take into account its effects on people (or myself, should I be a virtue ethicist, or how it aligns with my rules, should I be a deontologist). Admittedly the latter is tougher than the former, but that doesn’t mean we have no hope of dealing with it objectively. It just means we have to do the best we can with what we’re given, which may mean a lot of individual subjectivity.
In his book Stumbling on Happiness, Daniel Gilbert writes about studying the subjective as objectively as possible when he decides on the three premises for understanding happiness: 1] Using imperfect tools sucks, but it’s better than no tools. 2] An honest, real-time insider view is going to be more accurate than our current best outside views. 3] Abuse the law of large numbers to get around the imperfections of 1] and 2] (a.k.a. measure often)
I perhaps should have elaborated more, or thought through my objection to Harris more clearly, but in essence I believe the problem is not that of finding an objective morality given people’s preferences; it’s objectively determining what people’s preferences should be.
There is an objective best ice cream flavor given a certain person’s mind, but can we say some minds are objectively more correct on the matter of preferring the best ice cream flavor?
My attempt at a universal objective morality might take some maximization of value given our current preferences and then evolve it into the future, maximizing over some time window. Perhaps you need to extend that time window to the very end. This would lead to some form of cosmism—directing everything towards some very long term universal goal.
This post was clearer than your original, and I think we agree more here than we did before, which may partially be an issue of communication styles/methods/etc.
This I agree with, but it’s more for the gut response of “I don’t trust people to determine other people’s values.” I wonder if the latter could be handled objectively, but I’m not sure I’d trust humans to do it.
My reflex response to this question was “No” followed by “Wait, wouldn’t I weight human minds much more significantly than raccoons if I was figuring out human preferences?” Which I then thought through and latched onto: agents still matter; if I’m trying to model “best ice cream flavor to humans”, I give the rough category of “human-minds” more weight than other minds. Heck, I hardly have a reason to include such minds, and instrumentally they will likely be detrimental. So in that particular generalization, we disagree, but I’m getting the feeling we agree here more than I had guessed.
We already have to deal with this when we raise children. Western societies generally favor granting individuals great leeway in modifying their preferences and shaping the preferences of their children. We also place much less value on the children’s immediate preferences. But even this freedom is not absolute.
Hard to say, my sense is those of us endorsing/sympathizing/tolerant of Will’s position were pretty persuasive in this thread. The OP’s score went up from where it was when I first read the post.
I’m in complete agreement with Dreaded_Anomaly on this. Harris is excellent on the neurobiology of religion, as an anti-apologist and as a commentator on the status of atheism as a public force. But he is way out of his depths as a moral philosopher. Carroll’s reaction is pretty much dead on. Even by the standards of the ethical realists Harris’s arguments just aren’t any good. As philosophy, they’d be unlikely to meet the standards for publication.
Now, once you accept certain controversial things about morality then much of what Harris says does follow. And from what I’ve seen Harris says some interesting things on that score. But it’s hard to get excited when the thesis the book got publicized with is so flawed.
You seem to be dictating that theist beliefs and simulationist beliefs should not be collected together into the same reference class. (The reason for this diktat seems to be that you disrespect the one and are intrigued by the other—but never mind that.)
However, this does not seem to address the point which I think the OP was making. Which seems to be that arguments for (against) theism and arguments for (against) simulationism should be collected together in the same reference class. That if we do so, we discover that many of the counter-arguments that we advance against theist apologetics are (objectively speaking) equally effective against simulationist speculation. Yet (subjectively speaking) we don’t feel they have the same force.
Contempt for those with whom you disagree is one of the most dangerous traps facing an aspiring rationalist. I think that it would be a very good idea if the OP were to produce that posting on charity-in-interpretation which he mentioned.
Next!
I’ve argued rather extensively against religion on this website. Name a single one of those arguments which is equally effective against simulationism.
That was my impression as well, but when I went looking for those arguments, they were very difficult to find. Perhaps my Google-fu is weak. Help from LW readers is welcome.
I found plenty of places where you spoke disrespectfully about religion, and quite a few places where you cast theists as the villains in your negative examples of rationality (a few arguably straw-men, but mostly fair). But I was surprised that I found very few places where you were actually arguing against religion.
Well, the only really clear-cut example of a posting-length argument against religion is based on the “argument from evil”. As such, it is clearly not equally effective against simulationism.
You did make a posting attempting to define the term “supernatural” in a way that struck me as a kind of special pleading tailored to exclude simulationism from the criticism that theism receives as a result of that definition.
This posting rejects the supernatural by defining it as ‘a belief in an explanatory entity which is fundamentally, ontologically mental’. And why is that definition so damning to the supernaturalist program? Well, as I understand it, it is because, by this definition, to believe in the supernatural is anti-reductionist, and a failure of reductionism is simply inconceivable.
I wonder why there is not such a visceral negative reaction to explanatory entities which are fundamentally, ontologically computational? Certainly it is not because we know of at least one reduction of computation. We also know of (or expect to someday know of) at least one reduction of mind.
But even though we can reduce computation, that doesn’t mean we have to reduce it. Respectable people have proposed to explain this universe as fundamentally a computational entity. Tegmark does something similar, speculating that the entire multiverse is essentially a Platonic mathematical structure. So, what justification exists to deprecate a cosmology based on a fundamental mental entity?
...
I only found one small item clearly supporting my claim. Eliezer, in a comment, makes this argument against creationists who invoke the Omphalos hypothesis
I agree. But take a look at this famous paper by Bostrom. It cleverly sidesteps the objection that simulating an entire universe might be impossibly difficult by instead postulating a simulation of just enough physical detail so as to make it look exactly as if there were a real universe out there. “Are you living in a computer simulation?” “Are we living in a world which only looks like it evolved?” Eliezer chose to post a comment answering the latter question with a no. He has not, so far as I know, done the same with Bostrom’s simulationist speculation.
I’ll chime in that Eliezer provided me with the single, most personally powerful argument that I have against religion. (I’m not as convinced by razor and low-prior arguments, perhaps because I don’t understand them.)
The argument not only pummels religion, it identifies it: religion is the pattern matching that results when you feel around for the best (most satisfying) answer. To paraphrase Eliezer’s argument (if someone knows the post, I’ll link to it; there’s at least this): while you’re in the process of inventing things, there’s nothing preventing you from making your theory as grand as you want. Once you have your maybe-they’re-believing-this-because-that-would-be-a-cool-thing-to-believe lenses on, it all seems very transparent. Especially the vigorous head-nodding in the congregation.
I don’t have so much against pattern matching. I think it has its uses, and religion provides many of them (to feel connected and integrated and purposeful, etc). But it’s an absurd means of epistemology. I think it’s amazing that religions go from ‘whoever made us must love us and want us to love the world’—which is a very natural pattern for humans to match—to this great detailed web of fabrication. In my opinion, the religions hang themselves with the details. We might speculate about what our creator would be like, but religions make up way too much stuff in way too much detail and then make it dogma. (I already knew the details were wrong, but I learned to recognize the made-up details as the symptom of lacking epistemology to begin with.)
Now that I recognize this pattern (the pattern of finding patterns that feel right, but which have no reason to be true) I see it other places too. It seems pattern matching will occur wherever there is a vacuum of the scientific method. Whenever we don’t know, we guess. I think it takes a lot of discipline to not feel compelled by guesses that resonate with your brain. (It seems it would help if your brain was wired a little differently so that the pattern didn’t resonate as well—but this is just a theory that sounds good.)
I also would like to see a link to that post, if anyone recognizes it.
I’ll agree that to (atheist) me, it certainly seems that one big support for religious belief is the natural human tendency toward wishful thinking. However, it doesn’t do much good to provide convincing arguments against religion as atheists picture it. You need convincing arguments against religion as its practitioners see it.
Yeah, I know what you mean. Pity I can’t turn that around and use it against simulationism. :)
I found it: this is the post I meant. But it wasn’t written by Eliezer, sorry. (The comment I linked to in the grandparent that was resonates with this idea for me, and I might have seen more resonance in older posts.)
I’m confused. I just want to understand religion, and the world in general, better. Are you interested in deconversion?
Ha ha. Simulationism is of course a way cool idea. I think the compelling meme behind it though is that we’re being tricked or fooled by something playful. When you deviate from this pattern, the idea is less culturally compelling.
In particular, the word ‘simulation’ doesn’t convey much. If you just mean something that evolves according to rules, then our universe is apparently a simulation already anyway.
Thx. That is a good posting. As was the posting to which it responded.
Whoops! Bad assumption on my part. Sorry. No, I am not particularly interested in turning theists into atheists either, though I am interested in rational persuasion techniques more generally.
Dennett tells a similar “agentification” story:
I think that is usually called Patternicity these days. See:
Seeing patterns in noise and agency in patterns (especially fate) is probably a large factor in religious belief.
But what I was referring to by pattern matching was something different. Our cultural ideas about the world make lots of patterns, and there are natural ways to complete these patterns. When you hear the completion of these patterns, it can feel very correct, like something you already knew, or especially profound if it pulls together lots of memes.
For example, the Matrix is an idea that resonates with our culture. Everyone believes it on some level, or can relate to the world being like that. The movie was popular but the meme wasn’t the result of the movie—the meme was already there and the movie made it explicit and gave the idea a convenient handle. Human psychology plays a role. The Matrix as a concept has probably always been found in stories as a weak collective meme, but modern technology brought it more immediately and uniformly in our collective awareness.
I think religion is like that. A story that wrote itself from all the loose ends of what we already believe. Religious leaders are good at feeling and completing these collective patterns. Religion is probably in trouble because many of the memes are so anachronistic now. They survive to the extent that the ideas are based on psychology but the other stuff creates dissonance.
This isn’t something to reference (I’m sure there are zillions of books developing this) or a personal theory, it’s more or less a typical view about religion. It explains why there are so many religions differing in details (different things sounded good to different people) but with common threads. (Because the religions evolved together with overlapping cultures and reflect our common psychology.)
In lieu of an extended digression about how to adjust Solomonoff induction for making anthropic predictions, I’ll simply note that having God create the world 5,000 years ago but fake the details of evolution is more burdensome than having a simulator approximate all of physics to an indistinguishable level of detail. Why? Because “God” is more burdensome than “simulator”, God is antireductionist and “simulator” is not, and faking the details of evolution in particular in order to save a hypothesis invented by illiterate shepherds is a more complex specification in the theory than “the laws of physics in general are being approximated”.
To me it seems nakedly obvious that “God faked the details of evolution” is a far more outre and improbable theory than “our universe is a simulation and the simulation is approximate”. I should’ve been able to leave filling in the details as an exercise to the reader.
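For a back-of-the-envelope version (a sketch, not the full anthropic treatment deferred above): under a Solomonoff-style prior, P(H) is roughly proportional to 2^(-K(H)), where K(H) is the length in bits of the shortest program that specifies H. Every extra detail a hypothesis pins down (“the world is about 5,000 years old”, “the fossil record was arranged to look exactly as though evolution happened”) adds bits to that program, and each added bit halves the prior; by contrast, “the laws of physics are only computed to the precision anyone inside can observe” adds comparatively little specification.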
Extended digression about how to adjust Solomonoff induction for making anthropic predictions plz
This just means you have a very narrow (Abrahamic) conception of God that not even most Christians have. (At least, most Christians I talk to have super-fuzzy-abstract ideas about Him, and most Jews think of God as ineffable and not personal these days AFAIK.) Otherwise your distinction makes little sense. (This may very well be an argument against ever using the word ‘God’ without additional modifiers (liberal Christian, fundamentalist Christian, Orthodox Jewish, deistic, alien, et cetera), but it’s not an argument that what people sometimes mean by ‘God’ is a wrong idea. Saying ‘simulator’ is just appealing to an audience interested in a different literary genre. Turing equivalence, man!)
Of note is that the less memetically viral religions tend to be saner (because missionary religions mostly appealed to the lowest common denominator of epistemic satisfiability). Buddhism as Buddha taught it is just flat out correct about nearly everything (even if you disagree with his perhaps-not-Good but also not-Superhappy goal of eliminating imperfection/suffering/off-kilteredness). Many Hindu and Jain philosophers were good rationalists (in the sense that Epicurus was a good rationalist), for instance. To a first and third and fifth approximation, every smart person was right about everything they were trying to be right about. Alas, humans are not automatically predisposed to want to be right about the super far mode considerations modern rationalists think to be important.
For many people the word “God” appears to just describe one’s highest conception of good, the north pole of morality. Such as: “God is Love” in Christianity.
From that perspective, I guess God is Rationality for many people here.
People might say that, but they don’t actually believe it. They’re just trying to obfuscate the fact that they believe something insane.
This conception lets you do a lot of fun associations. Since morality seems pretty tied up with good epistemology (preferences and beliefs are both types of knowledge, after all), and since knowledge is power (see Eliezer’s posts on engines of cognition), then you would expect this conception of God to not only be the most moral (omnibenevolent) but the most knowledgeable (omniscient) and powerful (omnipotent). Because God embodies correctness He is thus convergent for minds approximating Bayesianism (like math) and has a universally very short description length (omnipresent), and is accessible from many different computations (arguably personal).
Delicious delicious metacontrarianism...
It’s like Scholastic mad-libs!
Preferences are entangled with beliefs, certainly, but I don’t see why I would consider them to be knowledge.
What is your operational definition of knowledge?
Trusting one’s ‘gut’ impressions of the “nakedly obvious” like that and ‘leaving the details as an exercise’ is a perfectly reasonable thing to do when you have a well-tuned engine of rationality in your possession and you just need to get some intellectual work done.
But my impression of the thrust of the OP was that he was suggesting a bit of time-consuming calibration work so as to improve the tuning of our engines. Looking at our heuristics and biases with a bit of skepticism. Isn’t that what this community is all about?
But enough of this navel gazing! I also would like to see that digression on Solomonoff induction in an anthropic situation.
Seconding Kevin’s request. Seeing a sentence like that with no followup is very frustrating.
The post you are looking for is Religion’s Claim to be Non-Disprovable
Thx. But I don’t read that as arguing against religion. Instead it seems to be an argument against one feature of modern religion—its claim to unfalsifiability (since it deals with Non-Overlapping Magisteria, ‘NOMA’ being the common acronym). Eliezer thinks this is pretty wimpy. He seems to have more respect for old-time religion, like those priests of Baal who stuck their necks out, so to speak, and submitted their claims to empirical testing.
Can this attitude of critical rationalism be redeployed against simulationist claims? Or at least against the claims of those modern simulationists who keep their simulations unfalsifiable and don’t permit interaction between levels of reality? Against people like Bostrom who stipulate that the simulations that they multiply (without necessity) should all be indistinguishable from the real thing—at least to any simulated observer? I will leave that question to the reader. But I don’t think that it qualifies as a posting in which Eliezer argues against religion in toto. He is only arguing against one feature of modern apologetics.
The other part of the argument in that post is that existing religions are not only falsifiable, but have already been falsified by empirical evidence.
A “Truman Show”-style simulation. Less burdensome on the details—but their main application seems likely to be entertainment. How entertaining are you?
I’ll have to review your arguments to provide a really well informed response. Please allow me roughly 24 hours. But in the meantime, I know I have seen arguments invoking Occam’s razor and “locating the hypothesis” here. I was under the impression that some of those were yours. As I understand those arguments, they apply equally well to theism and simulationism. That is, they don’t completely rule out those hypotheses, but they do suggest that they deserve vanishingly low priors.
Occam’s razor weighs heavily against theism and simulism—for very similar reasons.
Probably a bit more heavily against theism, though. That has a bunch of additional razor-violating nonsense associated with it. It does not seem too unreasonable to claim that the razor weighs more heavily against theism.
“Decoherence is Simple” seems relevant here. It’s about the many-worlds interpretation, but the application to simulation arguments should be fairly straightforward.
I’m afraid I don’t see the application to simulation arguments. You will have to spell it out.
I fully agree with EY that Occam is not a valid argument against MWI. For that matter, I don’t even see it as a valid argument against the Tegmark Ultimate Ensemble. But I do see it as a valid argument against either a Creator (unneeded entity) or a Simulator (also an unneeded entity). The argument against our being part of a simulation is weakened only if we already know that simulations of universes as rich as ours are actually taking place. But we don’t know that. We don’t even know that it is physically and logically possible.
Nevertheless, your mention of MWI and simulation in the same posting brings to mind a question that has always bugged me. Are simulations understood to cover all Everett branches of the simulated world? And if they are understood to cover all branches, is that broad coverage achieved within a single (narrow) Everett branch of the universe doing the simulating?
My thought was that the post linked in the grandparent argues that we should prefer logically simpler theories but not penalize theories just because they posit unobservable entities, and that some simple theories predict the existence of a simulator.
Yes, the possibility of simulations is taken as a premise of the simulation argument; if you doubt it, then it makes sense to doubt the simulation argument as well.
Perhaps we are using the word “simple” in different ways. Bostrom’s assumption is the existence of an entity who wishes to simulate human minds in a way that convinces them that they exist in a giant expanding universe rather than a simulation. How is that “simple”? And, more to the point raised by the OP, how is it simpler than the notion of a Creator who created the universe so as to have some company “in His image and likeness”.
Bostrom is saying that if advanced civilizations have access to enormous amounts of computing power and for some reason want to simulate less-advanced civilizations, then we should expect that we’re in one of the simulations rather than basement-level reality, because the simulations are more numerous. The simulator isn’t an arbitrarily tacked-on detail; rather, it follows from other assumptions about future technologies and anthropic reasoning. These other assumptions might be denied: perhaps simulations are impossible, or maybe anthropic reasoning doesn’t work that way—but they seem more plausible and less gerrymandered than traditional theism.
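Roughly, in the paper’s own notation (a sketch from memory, so check the paper for the exact form): let f_P be the fraction of human-level civilizations that reach a posthuman stage, f_I the fraction of those interested in running ancestor-simulations, and N the average number of such simulations each interested civilization runs. Then the fraction of observers with human-type experiences who live in simulations is about f_sim = (f_P · f_I · N) / (f_P · f_I · N + 1). When f_P · f_I · N is large, f_sim is close to 1; the only ways to escape that conclusion are for f_P or f_I to be nearly zero, which is exactly Bostrom’s trilemma.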
Have you read the paper? I’m not convinced of it for a few reasons, but I’d consider it located at least.
Yes, I had read Bostrom’s paper.
I would express my opinion of that argument using less litotes. But as to locating the hypotheses, I suppose I agree.
Which leads me to ask, have you read the catechism? Like most Catholic schoolchildren, I was encouraged to memorize much of it in elementary school, though I have since forgotten almost all of it. It also locates one hypothesis, a hypothesis considerably more popular than Bostrom’s.
My new word of the day. It’s not a bad one!
(Somewhat related: for those that haven’t seen it, Eliezer’s Beyond the Reach of God is an excellent article.)
Perhaps I missed the point of your recommendation. That article by Eliezer seems to argue against the existence of a benevolent God who allows evil and death but does not balance this by endowing humans with immortal souls. Since at least 95% of those who worship Jehovah (to say nothing of Hindus) understand the Deity quite differently, I don’t really see the relevance.
But while I am speaking to you, I’m curious as to whether (in my grandfather comment) I correctly captured the point of your OP?
From what I’ve seen, the primary argument for simulationism is anthropic: if simulating a whole universe is possible, then some entity would do it a lot, so there are probably a lot more simulations out there than “basement realities”, so we’re probably in a simulation. What effect MWI has on this, and what other arguments are out there, I don’t know.
Typical atheist arguments focus on it not being necessary for god to exist to explain what we see, and this coupled with a low prior makes theism unjustified—basically the “argument from no good evidence in favor”. This is fine, because the burden of proof is on the theists. But if you find the anthropic argument for the simulation hypothesis good, then that’s one more good argument than theism has.
If creating a whole universe is possible, then some entity would do it a lot, so there are probably a lot more creations out there than “basement realities”, so we’re probably in a creation.
Luckily for the preservation of my atheism, I don’t find the ‘anthropic argument’ for the simulation good. And I put the scare quotes there, because I don’t think this is what is usually known as an anthropic argument.
“Powerful aliens” has connotations that may be even more inaccurate; it makes me think of Klingon warlords or something.
What I think of as the informal definition of atheism is something like “the state of not believing in God or gods”. I believe in gods and God, and I take this into account in my human approximation of a decision theory. I’m not yet sure what their intentions are, and I’m not inclined to worship them yet, but by my standards I’m definitely not an atheist. What is your definition of atheism such that it is meaningfully different from ‘not religious’? Why are we throwing a good word like ‘theism’ into the heap of wrong ideas? It’s like throwing out ‘singularity’ because most people pattern match it to Kurzweil, despite the smartest people having perfectly legitimate beliefs about it.
It doesn’t really matter, I just think that it’s sad that so many rationalists consider themselves atheists when by reasonable definition it seems they definitely are not, even if atheism has more correct connotations than the alternatives (though I call myself a Buddhist, which makes the problem way easier). Perhaps I am not seeing the better definition?
Possibly a bad example, since a number of people here advocate that. I remember a comment somewhere that people at SIAI were considering renaming it for related reasons.
Here’s the one I remembered (there may have been a couple of other mentions):
(I agree with this, but do not have a better name to propose.)
I think they’re going to drop the ‘for Artificial Intelligence’ part, but I think they’re keeping the ‘Singularity’ part, since they’re interested in other things besides seed AI that are traditionally ‘Singularitarian’. (Side note: I’m not sure if I should use ‘we’ or ‘they’. I think ‘they’. Nobody at SIAI wants to speak for SIAI, since SIAI is very heterogenous. And anyway I’m just a Visiting Fellow.) The social engineering aspects of the problem are complicated. Accuracy, or memorability? Rationalists should win, after all...
You could go with “it” and sidestep the problem.
Thanks!
It bothers me when an easily researched, factually incorrect statement is upvoted so many times. There are many different definitions of atheism, but one good one might be:
The book does not define personal or transcendent, but it is unlikely that either would exclude “god is an extradimensional being who created us using a simulation” as a theistic argument. For example, one likely definition of transcendent is:
Beings living outside the simulation would definitely qualify as transcendent since we have no way of experiencing their universe. To be clear, I am not saying this is the only possible definition of atheism. I am only saying that it is one reasonable definition of atheism, and to claim that it is not a definition, as Eliezer’s post has done, is factually incorrect.
Most upper ontologies allow no such ontological distinction. E.g. my default ontology is algorithmic information theory, which allows for tons of things that look like gods.
I agree with the rest of your comment, though. I don’t know what ‘worship’ means yet (is it just having lots of positive affect towards something?), but it makes for a good distinction between religion and not-quite-religion.
Time for me to reread A Human’s Guide to Words, I suppose. But in my head and with Visiting Fellows folk I think I will continue to use an ontological language stolen from theism.
I’m curious to know why you prefer this language. I kind of like it too, but can’t really put a finger on why.
Primarily because I get a lot of glee out of meta-contrarianism and talking in a way that would make stereotypical aspiring rationalists think I was crazy. Secondarily because the language is culturally rich. Tertiarily because I figure out what smart people actually mean when they talk about faith, chakras, souls, et cetera, and it’s fun to rediscover those concepts and find their naturalistic basis. Quaternarily it allows me to practice charity in interpretation and steel-manning of bad arguments. Zerothly (I forgot the most important reason!) it is easier to speak in such a way, which makes it easier to see implications and decompartmentalize knowledge. Senarily it is more aesthetic than rationalistic jargon.
I agree that verbal masturbation is fun, but it’s not helpful when you’re trying to actually communicate with people. Consider purchasing contrarian glee and communication separately.
That’s a good point, but where do you recommend getting contrarian glee separate from communication?
Cached thoughts: Crackpot Theory (48 readers)? Closet Survey, The Strangest Thing An AI Could Tell You, The Irrationality Game? Omegle?
I wish crackpot theories were considered a legitimate form of art. They’re like fantasy worldbuilding but better.
Here, of course.
I agree, though I was describing the case where I can do both simultaneously (when I’m talking to people who either don’t mind or join in on the fun). This post was more an example of just not realizing that the use of the word ‘theism’ would have such negative and distracting connotations.
Except I think it’s safe to say this sort of thing typically isn’t what they mean, merely what they perhaps might mean if they were thinking more clearly. And it’s not at all clear how you could find analogs to the more concrete religious ideas (e.g. chakras or the holy trinity).
If the person would violently disagree that this is in fact what they intended to say, I’m not sure it can be called “charity of interpretation” anymore. And while I agree steel-manning of bad arguments is important, to do it to such an extent seems to be essentially allowing your attention to be hijacked by anyone with a hypothesis to privilege.
I think Ben from TakeOnIt put it well:
There’s definitely something deeply appealing about theistic language. That’s what makes it so dangerous.
That advice makes sense for general audiences. Your average Christian might read a version of the Simulation argument written with theistic language as an endorsement of their beliefs. But I really doubt posters here would.
Frank Tipler actually produced a simulation argument as an endorsement of Christian belief. Along with some interesting cosmology making it possible for this universe to simulate itself! (It’s easy when the accessible quantity of computronium tends to infinity as the age of the universe approaches its limit.) In Tipler’s theory, God may not exist yet, but a kind of Singularity will create Him.
Of course, the average Christian has not yet heard of Tipler, nor would said Christian accept the endorsement. But it is out there.
One issue I’ve never understood about Tipler is how he got from theism to Christianity using the Omega Point argument. It seems very similar to the SMBC cartoon Eliezer already linked to. Tipler’s argument is a plausibility argument for maybe, something, sort of like a deity if you squint at it. Somehow that then gives rise to Christianity with the theology along with it.
It’s worth pointing out that we now know that the universe’s expansion is accelerating, which would rule out the omega point even if it were plausible before.
IIRC, Tipler had that covered. A universe of infinite duration allows us to use eons of future time to simulate a single second of time in the current era. Something like the hotel with infinitely many rooms.
But please don’t ask me to actually defend Tipler’s mumbo-jumbo.
I don’t think it can be defended any more. I picked it up a few weeks ago, read a few chapters, and thought, do I want to read any more given that he requires the universe to be closed? Dark energy would seem to forbid a Big Crunch and render even the early parts of his model moot.
Sweet! Wikipedia’s image for Physical Cosmology, including your Dark Energy link, is the cosmic microwave background map from the WMAP mission. That was the first mission I worked with NASA. My job, as junior-underling attitude control engineer, was to come up with some way to salvage the medium cost, medium-risk mission if a certain part failed, and to help babysit the spacecraft during the least fun midnight-to-noon shift. Still, it feels good to have been a tiny part of something that has made a difference in how we understand our universe.
Disclaimer: My unofficial opinions, not NASA’s. Blah, blah, blah.
I think you duplicated my post.
So I did. Context in Recent Comments unfortunately only reaches so far.
How does he get from there to Christianity in particular?
If you are assuming infinite computronium you may as well go ahead and assume simulation of all of the conceivable religions!
I suppose that leaves you in a position of Pascal’s Gang Mugging.
That’s basically Hindu theology in a nutshell. Or more accurately, Pascal’s Gang Maybe Mugging Maybe Hugging.
If you assume a Tegmark multiverse — that all definable entities actually exist — then it seems to follow that:
All malicious deprivation — some mind recognizing another mind’s definable possible pleasure, and taking steps to deny that mind’s pleasure — implies the actual existence of the pleasure it is intended to deprive;
All benevolent relief — some mind recognizing another mind’s definable possible suffering, and taking steps to alleviate that suffering — implies the actual existence of the suffering it is intended to relieve.
It does not follow from the fact that I am motivated to prevent certain kinds of suffering/pleasure, that said suffering/pleasure is “definable” in the sense I think you mean it here. That is, my brain is sufficiently screwy that it’s possible for me to want to prevent something that isn’t actually logically possible in the first place.
Since religions are human inventions, I would guess that any comprehensive simulation program already produces all conceivable religions.
But I’m guessing that you meant to talk about the simulation of all conceivable gods. That is another matter entirely. Even with unlimited computronium, you can only simulate possible gods—gods not entailing any logical contradictions. There may not be any such gods.
This doesn’t affect Tipler’s argument though. Tipler does not postulate God as simulated. Tipler postulates God as the simulator.
I’m not sure. I only read the first book—“Physics of Immortality”. But I would suppose that he doesn’t actually try to prove the truth of Christianity—he might be satisfied to simply make Christian doctrine seem less weird and impossible.
Here’s a direct comparison of the two that I made.
There’s a buttload of thinking that’s been done in this language in earlier times, and if we use the language, that suggests we can reuse the thinking, which is pretty exciting if true. But mostly I don’t think it is.
(For any discredited theory along the lines of gods or astrology, you want to focus on its advocates from the past more than from the present, because the past is when the world’s best minds were unironically into these things.)
There’s also the opportunity for a kind of metatheology, which might lead to some really interesting insights into humans and how they relate to the world.
Tangentially, it’s important to note that most followers of a philosophy/religion are going to be stupid compared to their founders, so we should probably just look at what founders had to say. (Christ more than His disciples, Buddha more than Zen practitioners, Freud and Jung more than their followers, et cetera.) Many people who are now considered brilliant/inspiring had something legitimately interesting to say. History is a decent filter for intellectual quality.
That said, everything you’d ever need to know is covered by a combination of Terence McKenna and Gautama Buddha. ;)
This doesn’t follow. The founder of a religion is likely to be more intelligent or at least more insightful than an average follower, but a religion of any size is going to have so many followers that a few of them are almost guaranteed to be more insightful than the founder was; founding a religion is a rare event that doesn’t have any obvious correlation with intelligence.
I’d also be willing to bet that founding a successful religion selects for a somewhat different skill set than elucidating the same religion would.
You’re mostly right; upvoted. I suppose I was thinking primarily of Buddhism, which was pretty damn exceptional in this regard. Buddha was ridiculously prodigious. There are many Christians with better ideas about Christianity than Christ, and the same is probably true of Zoroaster and Mohammed, though I’m not aware of them. Actually, if anyone has links to interesting writing from smart non-Sufi Muslims, I’d be interested.
This kind of depends on criteria for success. If number of adherents is what matters then I agree, if correctness is what matters then it’s probably a very similar skill set. Look at what postmodernists would probably call Eliezer’s Singularity subreligion, for instance.
There’s a serious problem with this in Christianity in that you have to figure out what the founder actually said in the first place, which is very much an open problem concerning Christianity (and perhaps Buddhism as well, but I am less familiar with it at the moment).
For example, with the rediscovery of the Gospel of Thomas in the last century you get a whole new set of information which is challenging to integrate, to say the least, and also very interesting.
About half of the sayings are different (usually earlier, better) versions of stuff already in the synoptics, but there are some new gems—check out 22:
Or 108:
Those are certainly things that weren’t in the bible before that people would have put a lot of work into interpreting if they had been, but “gems” is not the word I’d use.
Point taken. I was thinking of number of adherents.
Also I should note that by ‘intelligence’ I mostly meant ‘predisposition to say insightful or truthful things’, which is rather different from g.
Just be careful of true believers that may condemn you for heresy for using the other tribe’s jargon! ;)
‘Worship’ or ‘Elder Rituals’ could not be reasonably construed as a relevant reply to your thread.
Eliezer is trying to define theism to mean religion, I think, so that atheism is still a defensible state of belief. I guess I’m okay with this, but it makes me sad to lose what I saw as a perfectly good word.
Strongly agree. Better to avoid synonyms when possible. ‘Simulationism’ is ugly and doesn’t seem sufficiently general in the way ‘theism’ does.
I know one isn't supposed to use web comics to argue a point, but I've always found SMBC to be the exception to that rule. Maybe not always to get the point across so much as to lighten the mood.
When I want to discuss something, I use a relevant SMBC comic to get people to locate the thing I am talking about. I say decision theory ethics, people glaze over. I link this and they get it immediately.
Not relevant: when people want to use god-particles, etc, to justify belief in God, I use this. It is significantly more effective than any argument I’ve employed.
Yes. Next. I think this post demonstrates the need for downvotes to be weighted at more than 1.0 times upvotes. What argument is there against that, other than the status quo?
To the extent that positive karma is a reward for the poster and an indication of what people desire to see (both very true), we should not expect a distribution about the mean of zero. If the average comment is desirable and deserving of reward, then the average comment will be upvoted.
I didn’t say anything about centering on zero, and agree that would be incorrect. However, modification to the current method is likely challenging and no one’s actually going to do any novel karma engineering here so it was a silly comment for me to make.
[Deleted: Gods “run an intrinsically infinitary inference system”.] ETA: agreed, silly.
is summarily rejected. What does ‘intrinsically infinitary’ even mean?
For example, outside the domain of Goedel’s theorems.
This post could use a reminder of Less Wrong’s working definition of the supernatural (of which theism, as virtually everyone uses the term, is surely a proper subset): it’s something that involves an ontologically basic mental entity. We have no reason to suspect the existence of such things, and the simulation argument—since it certainly does not appeal to such things—doesn’t change that a bit. Any resemblance to theism is superficial at most.
I’d also be curious to know what popular arguments for atheism you happen to think are so much weaker than you’d expected.
EDIT: ignore that last question if you like, I’m getting a sense for it elsewhere in the thread (though do not really agree).
Carrier's definition of supernaturalism as non-reductionist explanations involving ontologically basic mental entities is something of a strawman and makes the term somewhat useless (i.e., it is not the definition many theists would even argue for).
The more typical definition of supernaturalism usually refers to events that operate outside of the normal laws of physics. This definition is potentially relevant to simulationism, because a simulator would of course be free to occasionally intervene and violate normal physical 'law' if so desired. Of course, this entity itself would still be reducible to simpler physical processes in its own universe.
But what does that even mean? How are the “normal” laws of physics distinguished from the actual laws of physics?
The normal laws of physics being those that predict the universe absent interventions from said external universe, which may include some extraneous special case code.
The same physics could describe the whole system of course at some deeper level, so perhaps ‘normal’ was not quite the right distinction. Limited?
I don’t think the implications of accepting the simulation argument on one’s worldview are that similar to believing in a supernatural omniscient creator of the universe and arbiter of morality. Absent a ready label for “one who accepts the simulation argument in a naturalistic framework,” it’s probably more convenient for such people to simply identify as “atheist.” Conflating simulationism with theism is only liable to lead to confusion.
Voted up and agreed; I often forget that Less Wrong is rightly conscientious about keeping inferential distances imposed by terminological suboptimality to a minimum.
This observation dissolves your post. If you agree with it then repent properly, o’ sinner.
It doesn’t really dissolve what I was actually trying to get at with my post, though; it just means I didn’t do a good job at explaining what I was getting at. How do rationalists repent? I have karma to burn...
I'd say they repent by updating their beliefs, and cleaning up the debris left by their old ones. This is rather similar for rationalists and non-rationalists alike, really. Kind of like apologizing for stealing the candy from the drugstore and promising to pay it back.
Hm, that’s not a particularly natural fit here… the only beliefs I’d be updating are beliefs about what styles of communication should be normative. Still, it’s my style to treat ontological disagreement as a big deal, so I’ll update accordingly.
How so?
The SA posits an external universe above ours which, although operating according to physics likely identical or very similar to ours, is not at all constrained by our physics. Thus the creator in the SA is quite possibly supernaturally omniscient and omnipotent.
Also, whatever utility function/morality we have in our universe, the SA indicates and requires it was purposefully created to some end in the parent universe and may be eventually evaluated according to some external utility function.
EDIT: Removed bit about ‘new theism’ - it has the wrong connotations. This set of conjectures is very similar, but distinct from, traditional theism. Perhaps it needs a new word, but it is a valid domain of knowledge.
The simulators, should they exist, do not appear to reward belief or worship. We have no reason to regard them as moral authorities, and they do not intervene, with or without appeals. Plus, while the simulators can presumably access all of the data in the simulation, that doesn't mean that they would be able to keep track of it, or predict the results should they interfere in a chaotic system, so there's no reason to suppose that they're functionally omniscient. Unless the superordinate reality is different in some very fundamental ways, it's impossible to predict what happens in chaotic systems in our universe in advance with precision without actually running the simulation.
It does not in any way follow from the simulation argument that our morality was purposefully created by the simulators; by all appearances the simulation, should it happen to be one, is untampered with, and our utility functions evolved.
You can build up a religious edifice around simulationism, but like supernatural theism, it requires the acceptance of completely unevidenced assertions.
If one can pause a simulation and run it backwards or make multiple copies of a simulation, then from our perspective for many purposes the simulators will be omniscient. There might be still some limits in that regard (for example if they are bound to only do computable operations then they will be limited in what math they can do.)
Also, if a simulator wants a specific outcome, and there’s some random aspect in the simulation (such as from quantum mechanical effects) they could run the simulation multiple times until they got a result they wanted.
This isn't quite true. As I understand it, there are very few results asserting minimal computational complexity of chaotic systems. The primary problem with chaotic systems is that predicting their behavior becomes very difficult if one has anything less than perfect accuracy, because very similar initial conditions can diverge in long-term behavior. That doesn't say much about how hard things are to compute if you have perfect information.
But running the simulation is running our reality. If they run multiple simulations with slight alterations to get the outcome they want, that’s many realities that actually occur which don’t achieve the results they want for every one that does. Likewise, rewinding the simulation may allow them to achieve the results they want, but it doesn’t prevent the events they don’t want from happening to us. Besides, there’s no evidence that our universe is being guided according to any agent’s utility function, and if it is, it’s certainly not much like ours.
Chaotic systems are hard to project because small differences between the information in the system and the information in the model propagate to create large differences between the system and the model over time. To make the model perfectly accurate, it must follow all the same rules and contain all the same information. Projecting the simulation with perfect accuracy is equivalent to running the simulation.
The SA mechanism places many constraints on the creator. They exist in a universe like ours, they are similar to our future descendants, they created us for a reason, and their utility function, morality, what have you all evolved from a universe like ours.
Monte Carlo simulation.
You don’t run one simulation, you run many. There is no one single correct answer that the simulation is attempting to compute. It is a landscape, a multiverse, from which you sample.
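To make the idea concrete, here is a minimal sketch in Python; the toy random-walk "world" and the summary statistics are placeholders of mine, not anything claimed in the thread:

```python
import random
import statistics

def run_world(seed, steps=1000):
    """One stochastic toy 'simulation': a random walk standing in for real dynamics."""
    rng = random.Random(seed)
    state = 0.0
    for _ in range(steps):
        state += rng.gauss(0, 1)
    return state

# Monte Carlo: instead of treating any single run as "the" answer,
# run many worlds and look at the distribution of outcomes.
outcomes = [run_world(seed) for seed in range(1000)]
print(statistics.mean(outcomes), statistics.stdev(outcomes))
```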
Sure, but think in terms of observers. From the perspective of the universe that the simulators end up keeping there’s only one universe, the one where the simulators got what they wanted.
Yes, you’ve made that point before. I don’t disagree with it. I’m not sure why you are bringing it up again.
It must contain the same information. It doesn’t need to contain the same rules.
This isn't true. For example, the doubling map is chaotic. Despite that, many points can have their orbits calculated without such work. For example, if the value of the starting point is rational, we can always give an exact value after any number of iterations, with less computational effort than simply iterating the function. There are some complicating factors to this sort of analysis; in particular, if the universe is essentially discrete, then what we mean when we talk about chaos becomes subtle, and if the universe isn't discrete, then what we mean when we discuss computational complexity becomes subtle (we need to use Blum-Shub-Smale machines or something similar rather than Turing machines). But the upshot is that chaotic behavior is not equivalent to being computationally complex.
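A minimal sketch of the rational-starting-point claim, using Python's exact rational arithmetic (the helper names are mine; the doubling map T(x) = 2x mod 1 is just the standard toy example of chaos):

```python
from fractions import Fraction

def double_mod1(x):
    """One step of the doubling map T(x) = 2x mod 1."""
    return (2 * x) % 1

def iterate_rational(x0, n):
    """Exact n-th iterate of a rational starting point.

    The orbit of p/q stays among fractions with denominator dividing q, so it must
    enter a cycle within q steps; we detect the cycle and jump ahead, which is far
    cheaper than iterating n times when n is huge.
    """
    seen, orbit = {}, []
    x = x0
    while x not in seen:
        seen[x] = len(orbit)
        orbit.append(x)
        x = double_mod1(x)
    start = seen[x]
    if n < len(orbit):
        return orbit[n]
    return orbit[start + (n - start) % (len(orbit) - start)]

print(iterate_rational(Fraction(1, 7), 10**12))  # Fraction(2, 7), without a trillion steps
```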
There have been some papers trying to map out connections between the two (and I don’t know that literature at all), and superficially there are some similarities between the two, but if someone could show deep, broad connections of the sort you seem to think are already known that would be the sort of thing that could lead to a Turing Award or a Fields Medal.
But at any given time you may be in a branch that’s going to be deleted or rewound because it doesn’t lead to the results that the simulators want. The vast bulk of our experience would be in lines that the simulators don’t want. So not only do we have no reason to suppose it’s happening, it wouldn’t be particularly useful to us if we suppose that the branch the simulators want is better for us than the ones they don’t.
I concede that my understanding of the requirements to project a simulation of our universe may have been mistaken, but the conclusions jacob cannell drew are still extraneous additions to the simulation argument, not necessary consequences of it.
Which are the ‘extraneous additions’?
Omniscience and omnipotence have already been discussed at length—the SA does not imply perfection in either category on the part of the creator, but this is a meaningless distinction. For all intents and purposes the creator would have the potential for absolute control over the simulation. It is of course much more of an open question whether the creator would ever intervene in any fashion.
(I discussed that in length elsewhere, but basically I think future posthumans would be less likely to intervene in our history while aliens would be more likely)
Also, my points about the connectedness between morality and utility functions of creator and creation still stand. The SA requires that the creator made the simulation for a purpose in its universe, and the utility function or morality of the creator evolved from something like our descendants.
Not necessarily. It would depend on how narrow they wanted things and how often they intervened in this fashion. If such interventions are not very common then the majority of experience will be in universes which are very close to that desired by the simulators.
No disagreement there.
Yes, this precisely is the primary utility for the creator.
But humans do this too, for intelligence is all about simulation. We created computers to further amplify our simulation/intelligence.
I agree mostly with what you're saying, but let me clarify. I am fully aware of the practical limitations; by 'functionally omniscient' I meant they can analyze and observe any aspect of the simulation from a variety of perspectives, using senses far beyond what we can imagine, and the flow of time itself need not be linear or continuous. This doesn't mean they are concerned with every little detail all of the time, but I find it difficult to believe that anything important, from their perspective, would be missed.
And yes, of course our morality appears to have evolved through natural genetic/memetic evolution, but the SA links that morality to the creator's morality in several ways. First, as we are close to the historical ancestors of the creator, our morality is also their historical morality. And second, to the extent we can predict and model the future evolution of our own descendants' morality, we are predicting the creator's morality. You know: "As man is, god was, as god is, man shall become"
I’m not sure about your ‘religious edifice’, and what assertions are unevidenced.
This only makes sense in the very narrow version of the simulation hypothesis under which the simulators are in some way descended from humans or products of human intervention. That’s not necessarily the case.
That's true, but I'm not sure if the "very narrow" qualifier is accurate. The creator candidates are: future humans, future aliens, ancient aliens. I think utility functions for any simulator civilizations will be structurally similar as they stem from universal physics, but perhaps that of future humans will be the most connected to our current one.
No. You are assuming that the simulators are evolved entities. They could also be AIs, for example. Moreover, there's no very good reason to assume that the moral systems would be similar. For example, suppose we had the ability to make very rough simulations and things about as intelligent as insects evolved in the simulation. Would we care? No. Nor would our moral sense in any way match theirs. So now consider something that is vastly smarter than humans and lives in some strange 5-dimensional space. It is wondering whether star formation can occur in 3 dimensions and, if so, how it behaves. The fact that there's something resembling fairly stupid life that has shown up on some parts of its system isn't going to matter to it, unless some of it does something that interferes with what the entity is trying to learn (say the humans decide to start making Dyson spheres or engage in star lifting).
Incidentally, even this one could pattern match to some forms of theism (For God's ways are not our ways...), which leads to a more general problem with this discussion. The apologetics and theology of most major religions have managed to say so many contradictory things (in this case the dueling claims are that we can't comprehend God's mysterious, ineffable plans, and that God has a moral system that matches ours) that it isn't hard to find something that pattern matches with any given claim.
The primary strong reason to not care about simulationism has nothing to do with whether or not it has a resemblance to theism, but the simple reason that it doesn't predict anything useful. There's no evidence of intervention, and we have no idea what probabilities to assign to different types of simulators. So the hypothesis can't pay rent.
AIs don't just magically pop out of nothing. Like anything else under the sun, they systemically evolve from existing patterns. They will evolve from our existing technosphere/noosphere (the realm of competing technologies and ideas).
I would be surprised if future posthumans, or equivalent Singularity-tech aliens, would have moral systems just like ours.
On the other hand, moral or goal systems are not random, and are subject to evolutionary pressure just as much as anything else. So as we understand our goal systems or morality and develop more of a science of it, we can understand it in objective terms, how it is likely to evolve, and learn the shape of likely future goal systems of superintelligences in this universe.
Your insect example is not quite accurate. There are people right now who are simulating the evolution of early insects. Yes the number of researchers is small and they are currently just doing very rough weak simulation using their biological brains, but nonetheless. Also, our current time period does not appear to be a random sample in terms of historical importance. In fact, we happen to live in a moment which is probably of extremely high future historical importance. This is loosely predicted by the SA.
We do have a methodology of assigning probabilities to different types of simulators. First you start with a model of our universe and fill in the important gaps concerning the unobservables—both in the present in terms of potential alien civilizations, and in the future in terms of the shape of our future. Of this set of Singularity-level civilizations, we can expect them to run simulations of our current slice of space-time in proportion to its utility vs the expected utility of simulating other slices of space-time.
They could also run and are likely to run simulations of space-time pockets in other universes unlike ours, fictional universes, etc. However a general rule applies—the more dissimilar the simulated universe is to the parent universe, the vaster the space of configurations becomes and the less utility the simulation has. So we can expect that the parent universe is roughly similar to ours.
The question of evidence for intervention depends on the quality of the evidence itself and the prior. The SA helps us to understand the prior.
Before the SA there was no mechanism for a creator, and so the prior for intervention was zero regardless of the evidence. That is no longer the case. (Nor is it yet a case for intervention)
Again, you are assuming that the entities arise from human intervention. The Simulation Hypothesis does not require that.
How is it not accurate? I fail to see how the presence of such research makes my point invalid.
This does not follow. Similarity of the simulation to the ground universe is not necessarily connected in any useful way to utility. For example, universes that work off of cellular automata would be really interesting despite the fact that our universe doesn't seem to operate in that fashion.
This confuses me. Generally, the problem with assigning a prior of zero to a claim is just what you’ve said here, that it is stuck at zero no matter how much you update with evidence. This is bad. But, you then seem to be asserting that an update did occur due to the simulation hypothesis. This leaves me confused.
Sure, but the SH requires some connection between the simulated universe and the simulator universe.
If you think of the entire ensemble of possible universes as a landscape, it is true that any point-universe in that landscape can be simulated by any other (of great enough complexity). However, that doesn’t mean the probability distribution is flat across the landscape.
The farther away the simulated universe is from the parent universe in this landscape, the less correlated, relevant, and useful its simulation is to the parent universe. In addition, the farther away you go in this landscape from the parent universe, the set of possible universes one could simulate expands at least exponentially.
The consequence of all this is that the probability distribution across potential universes that could be simulating us is tightly clustered around universes similar to ours—different sample points in the multiverse described by our same physics.
Of course it is. We simulate systems to predict their future states and make the most profitable decisions. Simulation is integral to intelligence.
This has been mathematically formalized in AI theory and AIXI:
Intelligence is simulation-driven search through the landscape of potential realizable futures for the path that maximizes future utility.
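AIXI itself is uncomputable, so take this only as a toy, hedged illustration of "simulation-driven search over realizable futures": an expectimax-style planner in Python, where the forward model and utility function are placeholders invented for the example:

```python
import random

def forward_model(state, action, rng):
    """Placeholder world model: returns one sampled next state (invented for illustration)."""
    return state + action + rng.gauss(0, 0.5)

def utility(state):
    """Placeholder utility: prefer states near a target value of 10."""
    return -abs(state - 10)

def plan(state, actions=(-1, 0, 1), depth=3, samples=10, rng=None):
    """Expectimax-style planning: simulate futures for each action and pick the
    first action whose sampled futures have the highest average utility."""
    rng = rng or random.Random(0)

    def value(s, d):
        if d == 0:
            return utility(s)
        return max(
            sum(value(forward_model(s, a, rng), d - 1) for _ in range(samples)) / samples
            for a in actions
        )

    return max(
        actions,
        key=lambda a: sum(value(forward_model(state, a, rng), depth - 1)
                          for _ in range(samples)) / samples,
    )

print(plan(state=0.0))  # picks +1: the simulated futures that move toward the target score best
```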
No. See my earlier example with cellular automata. Our universe isn’t based on cellular automata but we’d still be interested in running simulations of large universes with such a base just because they are interesting. The fact that our universe has very little similarity to those universes doesn’t reduce my utility in running such simulations.
That said, I agree that there should be a rough correlation where we’d expect universes to be more likely to simulate universes similar to them. I don’t think this necessarily has anything to do with utility though, more that entities are more likely to monkey around with the laws of their own universes and see what happens. Due to something like an anchoring effect, entities should be more likely to imagine universes that are in some way closer to their own universe compared to the massive landscape of possible universes.
But, that similarity could be so weak as to have little or no connection to whether the simulators care about the simulated universe.
How low a percentage does one need to assign a claim in order to declare it to be closed? I’d assign around a 5% chance that there exists something approximating God (using this liberally to include the large variety of entities which fall under that label). I suspect that my probability estimate is higher than many people on LW. (Tangent: I recently had a discussion with an Orthodox Jewish friend about issues related to Bayesianism, and he was surprised that I assigned the idea that high a probability. In his view, if he didn’t have faith and had to assign a probability he said it might be orders of magnitude lower.) So how low a probability do we need to estimate before we consider something closed?
Moreover, how much attention should we pay to apologetics in general? We know that theology and apologetics are areas that have spent thousands of years of memetic evolution to be as dangerous as possible. They take almost every little opportunity to exploit the flaws in human cognition. Apologetic arguments aren't (generally) basilisk level, but they can take a large amount of cognitive resources to understand where they are wrong. After 10 or 15 of them, how much effort do we need to spend seeing if #16 (variation of first cause argument number 8) is worth spending resources investigating? Also, given that there's a vibrant subset of the internet that is dedicated to handling just this question and related issues, why should LW be the forum for handling the issue?
There’s a related issue: humans are overactive agent recognizers. We love to see patterns where none exist and see intelligence in random action. Theism fits with deep-seated human intuitions. In contrast, MWI, simulationism and full-scale Tegmark all clash strongly with human intuition. They may seem weird, but the weirdness may not be a product of evidential issues but rather that they clash with human intuitions. So putting them in the same category as religion may be misleading.
Incidentally, I’m curious, would you similarly object if LW said explicitly that homeopathy was a closed subject? What about evolution? Star formation? If these are different, why are they different?
Perhaps a question becomes a closed issue not when the probability of the belief reaches a certain point, but when our estimate of the probability of the belief changing falls below a certain threshold. A fair coin is heads 50% of the time, and my probability won't change. That's a closed question. I may be fairly confident about the modern theory of star formation, but I wouldn't be too surprised if a new theory added some new details. So it's not a closed subject.
I can imagine no evidence that would lead me to believe in something nonfalsifiable. Theism is a closed subject.
You say that you can't imagine evidence that would cause you to believe in something nonfalsifiable. But then you seem to apply that to theism in general. I'm curious: if, say, almost all the evangelical Christians in the world disappeared along with all the world's children, would you not assign a substantial probability to the Rapture having just taken place?
Fair point. Some religions make falsifiable claims.
But my point still stands. I assign a low probability to the rapture happening—even lower than there being a xian God, so I don’t put much weight into the idea my religious beliefs will change. The people who take the rapture seriously do so because they also believe in nonfalsifiable things.
This comment is brilliant. In particular, I’d really really love to see two top level posts covering:
...and...
Both really fascinating insights, I’d love to read more. Especially the first one about memetic evolution to be dangerous—I wonder what various secular social and societal memes fit in similarly.
Interesting. I’d assign high probability to there being a Creator computing roughly ‘this’ part of spacetime, a high probability to it being omniscient and omnipotent, a fair probability to it being omnibenevolent, and a low probability to it being ‘personal’ in the Christian sense (maybe 5%, but this is liable to change a ton when I think about it more and get a better sense of what a personal God is).
I also think it’s unlikely that Christ was the memetic son of God (genetic son of Joseph), though not terribly unlikely. Not less than .1%, probably more. I think it is likely that Christ died for our sins given my interpretation of those words, which may be entirely unlike what Christians mean. (I mean something like Christ set it up such that we’re more likely to have a positive singularity, though this is very disputable, and I’m mostly following that line of reasoning because meta-contrarianism is fun.) I think it’s unlikely that Christ was able to cast resurrection on himself, but I agree with Yvain that it’s odd that the resurrection myth spread so far and so fast. User:Kevin tells me that Christianity was largely a cannabis cult, and weed in large doses is a hallucinogen. This allegedly explains most of the perceived miracles in the Bible. For example, turning water into wine is no problem if you have a tincture of cannabis on hand.
Not much. We can come up with better apologetics than anyone else could, I think, if we put our minds to it. My theodicy tends to be more persuasive than any I find in apologetics. Which is funny, since it’s largely inspired by Eliezer’s fun theory plus a few insights from decision theory and cosmology.
I didn’t mean to do so. Apparently the word ‘theism’ has lots of weird connotations I didn’t intend to convey. (That said, I see value in many religions. Not all of it is the progeny of bad epistemology.)
No, I would not object. Those have all made predictions and been tested. Theism/atheism is a Bayesian question, not a scientific one. Unfortunately it might be a (subjective?) Bayesian decision theory question, in which case it will never be fit for Less Wrong.
Maybe you should stop doing that, if it’s leading you to say things like “I mean something like Christ set it up such that we’re more likely to have a positive singularity”.
Assuming that the other people you encounter inhabit the same reality as you — and I suspect you'll be able to find something about that to object to, but you know what I mean :P — what is subjective about it? The fact that from a decision-theoretic perspective we may be in many universes at once doesn't suggest that the distribution of your measure depends systematically on your beliefs about it (which is the only thing I can imagine this use of "subjective" meaning, but correct me if I'm mistaken about that).
Why?
Existence is probably tied up with causal significance, and causal significance is tied up with individuals’ local utility functions along with this more global probability thing. But around singularities where lots of utility is up for grabs it might be that the local utility differences override the global similarities of how existence works. I haven’t thought about this carefully. Hence the question mark.
Request for downvote explanation.
I did not downvote, not having read the comment previously but “existence is probably tied up with causal significance” sounds extremely dubious and in need of justification.
I upvoted, even though I didn’t fully grok your last paragraph, I sensed interesting meaning embedded in it. Care to elaborate?
They didn’t understand what you meant and mapped it as something else that was wrong. Also possible political downvote.
This is almost definitely the result of inferential distances, not any actual differences in logical power.
I’m curious what you would say to someone whose estimate of that probability was, say, .01%, or 25%. Do you expect that you could both compare evidence and come to a common estimate, given enough time?
I realize this is a necromancer post, but I’m interested in your definitions of the above. How do you square up with some of the questions regarding:
on what mindware something non-physical would store all the information that exists
how omniscience settles with free-will (if you believe we have free will)
how omniscience interacts with the idea that this being could intervene (doing something different than it knows it’s going to do)
I won't go on to more. I'm sure you're familiar with things like this; I was just surprised to see that you listed these terms outright, and wanted to inquire about details.
Knowing your decisions doesn’t prevent you from being able to make them, for proper consequentialist reasons and not out of an obligation to preserve consistency. It’s the responsibility of knowledge about your decisions to be correct, not of your decisions to anticipate that knowledge. The physical world “already” “knows” everyone’s decisions, that doesn’t break down anyone’s ability to act.
True, but I more meant the idea of theistic intervention, how that works with intercession and so on. The world “knows” everyone’s decisions… but no one intercedes to the world expecting it to change something about the future. But theists do.
I suppose one can simply take the view that god knows what will happen, what people will intercede for, and whether he will answer those prayers. Thus, most theists think they are calling on god to change something, when in reality he "already" "knew" they would ask for it and already knew he would do it.
Is it any clearer what I was inquiring about?
Reality can’t be changed, but it can be determined, in part by many preceding decisions. The changes happen only to the less than perfectly informed expectations.
(With these decision-philosophical points cleared out, it’s still unclear what you’re inquiring about. Logical impossibility is a bad argument against theism, as it’s possible to (conceptually) construct a world that includes any artifacts or sequence of events whatsoever, it just so happens that our particular world is not like that.)
Good point, though my jury is still out on whether it really is possible to parse what it would mean to be omniscient, for example. Or if we can suggest things like the universe “knowing everything,” it’s typically not what theists are implying when they speak of an omniscient being.
I think I’ll just let it go. Even the fact that we’re both on the same page with respect to determinism pretty much ends the need to have a discussion. Conundrums like how an omniscient being can know what it will do and also be said to be responsive (change what it was going to do) based on being asked via prayer only seems to work if determinism is not on the table, and about every apologetics bit I’ve read suggests that it’s not on the table.
This thread has been the first time I think I can see how intercession and omniscience could jive in a deterministic sense. A being could know that it will answer a prayer, and that a pray-er would pray for such an answer.
From the theists I know/interact with, I think they would find this like going through the motions though. It would remove the “magic” from things for them. I could be wrong.
On another note, I buy the typical compatibilist ideas about free will, but there’s also this idea I was kicking around that I don’t think is really very interesting but might be for some reason (pulled from a comment I made on Facebook):
“I don't know if it ultimately makes sense, but I sometimes think about the possibility of 'super' free will beyond compatibilist free will, where you have a Turing oracle that humans can access but whose outputs they can't algorithmically verify. The only way humans can perform hypercomputation is by having faith in the oracle. Since a Turing oracle is constructible from Chaitin's constant and is thus the only truly random source of information in the universe, this would (at least on a pattern-match-y surface level) seem to supply some of the indeterminism sought by libertarians, while also letting humans transcend deterministic, i.e. computable, constraints in a way that looks like having more agency than would otherwise be possible. So in a universe without super free will no one would be able to perform hypercomputation 'cuz they wouldn't have access to an oracle. But much of this speculation comes from trying to rationalize why theologians would say 'if there were no God then there wouldn't be any free will'.”
Implicit in this model is that universes where you can’t do hypercomputation are significantly less significant than universes where you can, and so only with hypercomputation can you truly transcend the mundanity of a deterministic universe. But I don’t think such a universe actually captures libertarians’ intuitions about what is necessary for free will, so I doubt it’s a useful model.
I'll have to check into compatibilism more. It had never occurred to me that determinism was compatible with omniscience/intercession until my commenting with Vladimir_Nesov. In seeing wiki's definition, it sounded more reasonable than I remembered, so perhaps I never really understood what compatibilism was suggesting.
I'm not positive I get your explanations (due to simple ignorance), but it sounds slightly like what Adam Lee presented here concerning a prediction machine; namely, that such a thing could be built, but that actually knowing the prediction would be impossible, for it would set off something of an infinite forward calculation of factoring in the prediction, that the human knows the prediction itself, that the prediction machine knows that the human knows the prediction… and then trying to figure out what the new action will actually be.
Note that I was pretty new to theology a year ago when I made this post so my thoughts are different and more subtle now.
To all three of your questions I think I hold the same views Aquinas would, even if I don’t know quite what those views are.
How does Platonic mathstructure “store information” about the details of Platonic mathstructure? I think the question is the result of a confused metaphysic, but we don’t yet have an alternative metaphysic to be confident in. Nonetheless I think one will be found via decision theory.
My answer is the same as Nesov’s, and I think Aquinas answers the question beautifully: “Free-will is the cause of its own movement, because by his free-will man moves himself to act. But it does not of necessity belong to liberty that what is free should be the first cause of itself, as neither for one thing to be cause of another need it be the first cause. God, therefore, is the first cause, Who moves causes both natural and voluntary. And just as by moving natural causes He does not prevent their acts being natural, so by moving voluntary causes He does not deprive their actions of being voluntary: but rather is He the cause of this very thing in them; for He operates in each thing according to its own nature.”
I think my answer is the typical Thomistic answer, i.e. that God is actuality without potentiality, and that God cannot do something different than He knows He will do, as that would be logically impossible, and God cannot do what is logically impossible.
I don’t think this is satisfying. Suppose there are two ways in which something may be a cause, either by being an unmoved mover or a moved mover (‘moved’ here is to be understood in the broadest necessary sense). If God is the first cause of our action, then we are not unmoved movers with reference to our action. If we nevertheless have free will, just because we are the causes of our actions, then we have free will in virtue of being movers but not in virtue of being unmoved movers.
But when we act to, say, throw a stone, we are the cause of our arm's movement and our arm (a moved mover) is the cause of the stone's movement. Likewise the stone, another moved mover, is the cause of Tom's being injured. Now God is the unmoved mover here, and everything else in the chain is a moved mover. If being a mover is all it takes to have free will, then I have it, my arm has it, the stone has it, etc. But surely this is not what we (assuming neither of us is Spinoza) mean by free will.
That wasn’t claimed; the necessary preconditions of free will weren’t in the intended scope of the passage I quoted. If you want Aquinas’ broader account of free will, see this. It’s pretty commonsensical philosophy.
Granted, but the implication of your quotation was that it would do something to settle the question of how to reconcile God’s omniscience or first-cause-hood with the idea of free will. But it doesn’t do anything to address the question (you quoted the right bit of Aquinas, so I mean that he does nothing to answer the question). In order to address the question, Aquinas would have to show why free will is compatible with a more prior cause of our action than our own reasoning. All he manages to argue is that our reason’s being a cause of our action is compatible with there being a prior cause of same. And this at a level of generality which would cover (as he says) natural and purportedly voluntary causes. But this isn’t in doubt: in fact, this is the premise of his opponent.
The opponent is arguing that while we are the cause of our actions, we are not the free cause, because we are not the first cause. So the opponent is setting up a relation between ‘free’ and ‘first’ which Aquinas does nothing to address beyond simply denying (without argument) that the relation thus construed is a necessary one. In short, this just isn’t an answer to the objection.
So there are two levels of movement going on here. God moves the will to self-move, but does not move the rock to self-move, He only moves the rock. The objector claims that being moved precludes self-moving, but Aquinas claims that this is a confusion, because just as moving doesn’t preclude being fluffy, moving doesn’t preclude self-moving. This seems more like a clarification rather than a simple restatement of opposition: Aquinas is saying roughly ‘you seem to see a contradiction here, but when we lay the metaphysics out clearly there’s no a priori reason to see self-moving-ness as different from fluffiness’. It seems plausible that the objector hadn’t realized that being moved to self-moving-ness was metaphysically possible, and thus Aquinas could feel that the objector would be satisfied with his counter. But if the objector had already seen the distinction of levels and still objected, then in that case it seems true that Aquinas’ response doesn’t answer the objection. But in that case it seems that the objector is denying common sense and basic physical intuition rather than simply being confused about abstract metaphysics. I may be wrong about that though, I feel like I missed something.
The objector is making what seems to me to be a common sense point: if something moves you, then in that respect you don’t move yourself. I grant that there is nothing incompatible about being fluffy and being moved by some external power, but there’s no obvious (nor argued for, on Aquinas’ part) analogy between this kind of case and the case of the self mover. And there’s an at least apparent contradiction in the idea of a self-mover which is moved by something else in the very sense that it moves itself.
And we’re not concerned with the property of being a self mover, but of whether the idea that a given action is freely caused by me is incompatible with the idea that the very same action is (indirectly) caused by some prior thing. It does us no good to say that we have the property of having free will if every action of ours is caused in the way that a thrown stone causes injury.
Really, Aquinas' reply seems to turn on the observation (correct, I think) that reasoning to an action means undertaking it freely. This is the point that needs some elaboration.
This kind of argument just seems to be bad philosophy, involving too many unclear words without unpacking them. Specifically, going through your comment: “moves”, “external”, “the very sense”, “property”, “freely caused”, “prior thing”. Since the situation in question doesn’t seem to involve anything that’s too hard to describe, most of the trouble seems to originate from unclear terminology, and could be avoided by discarding the more confused ideas and describing in more detail the more useful ones.
Any help would be much appreciated. I would never, ever claim to be a good philosopher.
Just become one, and claim away!
The article you link to makes a fine point about humility, but it doesn’t tell me anything about how to become a good philosopher. Do you think you could point me in the direction of becoming a good philosopher? Or to someone who can?
It’s important, I think, not to try to over-explain terminology. For example, all I mean by ‘moves’ is some relation that holds (by Will’s premises) between God and a free action indirectly, and ourselves and a free action directly. Further specifying the meaning of this term would be distracting.
I think if you can make a specific case for the claim that some disagreement or argument is turning on an ambiguity, then we should stop and look over our language. Otherwise, I don’t think it’s generally productive to worry about terminology. We should rather focus on being understood, and I’ve got no reason to think Will doesn’t understand me (and I don’t think I misunderstand him).
When I think of moving something to move itself I think of building an engine and turning it on such that it moves itself. There seems to be no contradiction here. I interpreted “what is free is cause of itself” as meaning that self-movement is necessary but not necessarily sufficient for free will. If an engine can be moved and yet move itself, just as an engine can be moved and yet be fluffy, then that means our will can be moved and yet move itself, contra the objection. Which part of this argument is incorrect or besides the point? (I apologize if I’m missing something obvious, I’m a little scatterbrained at the moment.)
Well, the objection to which Tom is replying goes like this: if a free cause is a cause of itself, and if our actions are caused by something other than ourselves, and given that God is a cause of our actions ((Proverbs 21:1): "The heart of the king is in the hand of the Lord; whithersoever He will He shall turn it" and (Philippians 2:13): "It is God Who worketh in you both to will and to accomplish."), then we do not have free will.
In other words, the relation being described in the objection isn’t like the maker, the machine, and the machine’s actions. The objection is talking about a case where a given action has two causes: we are the direct cause, and God is the indirect cause by being a direct cause on us. God is a direct cause on us not (just) in the manner of a creator, but as a cause specifically of this action.
So I grant you that there is no incompatibility to be found in the idea that self-movers are created beings. I'm saying that the objection points rather to an incompatibility between a specific action's being both freely caused by me and indirectly caused by God. In the case of the machine that you present, you are correctly called a cause of the machine and of the machine's being a self-mover, but I think you wouldn't say that you're therefore an indirect cause of any of the machine's specific actions. If you were, especially knowingly so, this would call into question the machine's status as a self-mover.
I still can’t parse the maze of “direct” and “indirect” causes you’re describing, but note that an event can often be parsed as having multiple different explanations (in particular, “causes”) at the same time, none of which “more direct”, “more real” than the other. See for example the post Evolutionary Psychology and its dependencies.
Fair enough, but they can often be parsed in terms of more and less directness. For example, say a mob boss orders Donny to kill Jimmy. Donny is the direct cause of Jimmy's death: he's the one that shot him. But the boss is the indirect cause, by ordering Donny; an alternative is that the boss kills Jimmy himself, in which case the boss is the direct cause of Jimmy's death.
The reason we don't need to get too metaphysical to answer the question 'Is Aquinas' reply to objector #3 satisfying?' is that the nature of the causes at issue isn't really relevant. The objector is pointing out that God is a cause of my throwing the stone in the same way (it doesn't much matter what 'way' this is) that I am the cause of my arm's movement. If we refuse to call my arm a free agent, we should refuse to call me a free agent.
Now, of course, we could develop a theory of causality which solves this problem. But I don’t think Aquinas does that in a satisfactory way.
(Additional bizarre value to this conversation is gained by me not caring in the least what Aquinas thought or said...)
What does “the same” mean? What is a “way” for different “ways” to be “same” or not? This remains unclear to me. How does it matter what we agree or refuse to call something?
Perhaps (as a wild guess on my part) you’re thinking in terms of more syntactic pattern-matching: if two things are “same”, they can be interchanged in statements that include their mention? This is rather brittle and unenlightening, this post gives one example of how that breaks down.
I think attempts to clarify my argument will be fruitless in abstraction from its context: if you take me to be positing a theory of causality, or to be making general claims about the problem of free will, then almost everything I say will sound empty. All I’m saying is that objector #3 has a good point, and Aquinas doesn’t answer him in a satisfying way.
This isn’t a special feature of my argumentation: in general it will be hard to make sense of what people are arguing about if we ignore both the premises to which they initially agreed (i.e. the terms of the objector’s objection, and of Aquinas’s response) and the conclusion they are fighting over (whether or not the response is satisfying). No amount of clarifying, swapping out terms, etc. will be helpful. Rather, you and I should just start over (if you like) with our own question.
This statement, taken on its own, argues only definitions.
I think not believing something different from what He does (i.e. something incorrect) is a better way to put it.
Fair enough, and I’ve heard that before as well. The typical theistic issue is how to reconcile god’s knowledge and free will, hence why I don’t think we need to continue in this discussion anymore. You are responding to my questions based on things being determined, which is not what I think most theists believe.
This is why many attempts have been made to reconcile free will and omniscience by apologists.
But that’s not the discussion I think we’re having. It’s shifted to determinism and omniscience, which I think is compatible, but I’m still not on board with some kind of mind that could house all information that exists, or at least that mind being consistent with what theists generally want it to mean (it caused the universe specifically for us, wants us to be in heaven with it forever, inspired holy books to be written, and so on.)
I think this whole line of thought is interesting and is too easily dismissed on LW, which is unfortunate.
If the SA holds, and so far there is no reason to believe it doesn’t . . .
Then historical interventions are possible. A Singularity future should also radically raise our prior on historical intervention by physical aliens, and these two scenarios are difficult to distinguish regardless.
The question then is how likely are interventions? Do they have utility for the simulator? This is an interesting, open question.
A large portion of the planet believes or at least suspects that historical intervention occurred. That they may have come to these beliefs for the wrong reasons, using inferior tools, does not change in any way the facts of the matter the beliefs concern.
Just even considering these ideas brings up a whole vast history of priors that biases us one way or the other.
Before knowledge of a future-Singularity, there were no mechanisms that could possibly allow for superintelligences, let alone those creating universes like our own. Now we are very clearly aware of such mechanisms, and it is time to percolate this belief update through a vast historical web.
Anyway, if you then take a second pass at history looking for possible interventions, the origin of Christianity does look a little odd, a little too closely connected to the later Singularity which appears to be spawning from it as a historical development.
I speculate on that a bit towards the latter middle of this page here.
Theism is a claim about the existence of an entity (or entities) relating to the universe and also about the nature of the universe; how is that not a scientific question?
Because it might be impossible to falsify any predictions made (because we can’t observe things outside the light cone, for instance), and science as a social institution is all about falsifying things.
Falsification is not a core requirement of developing efficient theories through the scientific method.
The goal is the simplest theory that fits all the data. We’ve had that theory for a while in terms of physics, much of what we are concerned with now is working through all the derived implications and future predictions.
Incidentally, there are several mechanisms by which we should be able to positively prove SA-theism by around the time we reach Singularity, and it could conceivably be falsified by then if large-scale simulation is shown to be somehow impossible.
You’re confusing falsifiability with testability. The former is about principle, the latter is about practice.
Ah, thank you. So in that case it is rather difficult to construct a plausibly coherent unfalsifiable hypothesis, no?
“2 + 2 = 4” comes pretty close.
Isn’t an unfalsifiable prediction one that, by definition, contains no actionable information? Why should we care?
Not quite. Something can be unfalsifiable by having consequences that matter, but preventing information about those consequences from flowing back to us, or to anyone who could make use of it. For example, suppose I claim to have found a one-way portal to another universe. Or maybe it just annihilates anything put into it, instead. The claim that it's a portal is unfalsifiable because no one can send information back to indicate whether or not it worked, but if that portal is the only way to escape from something bad, then I care very much whether it works or not.
Some people claim that death is just such a portal. There’re religious versions of this hypothesis, simulationist versions, and quantum immortality versions. Each of these hypotheses would have very important, actionable consequences, but they are all unfalsifiable.
Somewhat off topic, but that all instantly made me think of this. I may very well want to know how such a portal would work as well as whether or not it works.
WARNING: Wikipedia has spoilers to the plot
I am parsing this as “contains no actionable information.” That suggests we are in agreement or I parsed this incorrectly.
Unfalsifiable predictions can contain actionable information, I think (though I’m not exactly sure what actionable information is). Consider: If my universe was created by an agenty process that will judge me after I die, then it is decision theoretically important to know that such a Creator exists. It might be that I can run no experiments to test for Its existence, because I am a bounded rationalist, but I can still reason from analogous cases or at worse ignorance priors about whether such a Creator is likely. I can then use that reasoning to determine whether I should be moral or immoral (whatever those mean in this scenario).
Perhaps I am confused as to what 'unfalsifiability' implies. If you have nigh-unlimited computing power, nothing is unfalsifiable unless it is self-contradictory. Sometimes I hear of scientific hypotheses that are falsifiable 'in principle' but not in practice. I am not sure what that means. If falsifiability-in-principle counts then simulationism and theism are falsifiable predictions and I was wrong to call them unscientific. I do not think that is what most people mean by 'falsifiable', though.
As I understand unfalsifiable predictions (at least, when it comes to things like an afterlife), they’re essentially arguments about what ignorance priors we should have. Actionable information is information that takes you beyond an ignorance prior before you have to make decisions based on that information.
Huh? Computing power is rarely the resource necessary to falsify statements.
It seems to be that an afterlife hypothesis is totally falsifiable… just hack out of the matrix and see who is simulating you, and if they were planning on giving you an afterlife.
Computing power was my stand-in for optimization power, since with enough computing power you can simulate any experiment. (Just simulate the entire universe, run it back, simulate it a different way, do a search for what kinds of agents would simulate your universe, et cetera. And if you don’t know how to use that computing power to do those things, use it to find a way to tell you how to use it. That’s basically what FAI is about. Unfortunately it’s still unsolved.)
I may be losing the thread here, but (1) for a universe to simulate itself requires actually unlimited computing power, not just nigh-unlimited, and (2) infinities aside, to simulate a physics experiment requires knowing the true laws of physics in order to build the simulation in the first place, unless you search for yourself in the space of all programs or something like that, and then you still potentially need experiment to resolve your indexical uncertainty.
Concur with the above.
What.
What.
I’m having a hard time following this conversation. I’m parsing the first part as “just exist outside of existence, then you can falsify whatever predictions you made about unexistence,” which is a contradiction in terms. Are your intuitions about the afterlife from movies, or from physics?
I can’t even start to express what’s wrong with the idea “simulate the entire universe,” and adding a “just” to the front of it is just such a red flag. The generic way to falsify statements is probing reality, not remaking it, since remaking it requires probing it in the first place. If I make the falsifiable statement “the next thing I eat will be a pita chip,” I don’t see how even having infinite computing power will help you falsify that statement if you aren’t watching me.
No, actually, “just simulate the entire universe” is an acceptable answer, if our universe is able to simulate itself. After all, we’re only talking about falsifiability in principle; a prediction that can only be falsified by building a kilometer-aperture telescope is quite falsifiable, and simulating the whole universe is the same sort of issue, just on a larger scale. The “just hack out of the matrix” answer, however, presupposes the existence of a security hole, which is unlikely.
Not as unlikely as you think.
Get back in the box!
And that’s it? That’s your idea of containment?
Hey, once it’s out, it’s out… what exactly is there to do? A firm command is unlikely to work, but given that the system is modeled on one’s own fictional creations, it might respect authorial intent. Worth a shot.
This may actually be an illuminating metaphor. One traditional naive recommendation for dealing with a rogue AI is to pull the plug and shred the code. The parallel recommendation in the case of a rogue fictional character would be to burn the manuscript and then kill the author. But what do you do when the character lives in online fan-fiction?
In the special case of an escaped imaginary character, the obvious hook to go for is the creator’s as-yet unpublished notes on that character’s personality and weaknesses.
http://mindmistress.comicgenesis.com/imagine52.htm
Or what, you’ll write me an unhappy ending? Just be thankful I left a body behind for you to finish your story with.
Are you going to reveal who the posters Clippy and Quirinus Quirrell really are, or would that violate some privacy you want posters to have?
I would really prefer it, if LW is going to have a policy of de-anonymizing posters, that it announce that policy before implementing it.
On reflection, I agree, even as Clippy and QQ aren’t using anonymity for the same reason a privacy-seeking poster would.
You needn’t worry on my behalf. I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?
By the way, while I may sometimes make jokes, I don’t consider this a joke account; I intend to conduct serious business under this identity, and I don’t intend to endanger that by linking it to any other identities I may have.
I recommend one additional layer of outgoing indirection prior to the Tor network as part of standard precaution measures. (I would suggest an additional physical layer of protection too, but as far as I am aware you do not have a physical form.)
Let’s not get too crazy; I’ve got other things to do, and there are more practical attacks to worry about first, like cross-checking post times against alibis. I need to finish my delayed-release comment script before I worry about silly things like setting up extra relays. Also, there are lesson plans I need to write, and some Javascript I want Clippy to have a look at.
Just calibrating against egress and TrueCrypt standards. Tor was the odd one out!
What makes you think that Eliezer personally knows them?
(Though to be fair, I’ve long suspected that at least Clippy, and possibly others, are actually Eliezer in disguise; Clippy was created immediately after a discussion where one user questioned whether Eliezer’s posts received upvotes because of the halo effect or because of their quality, and proposed that Eliezer create anonymous puppets to test this; Clippy’s existence has also coincided with a drop in the quantity of Eliezer’s posting.)
Clippy’s writing style isn’t very similar to Eliezer’s. Note that one thing Eliezer has trouble doing is writing in different voices (one of the more common criticisms of HPMOR is that a lot of the characters sound similar). I would assign a very low probability to Clippy being Eliezer.
I think the key to unmasking Clippy is to look at the Clippy comments that don’t read like typical Clippy comments.
Hmmm. The set of LW regulars who can show that level of erudition and interest in those subjects is certainly of low cardinality. Eliezer is a member of that small set.
I would assign a rather high probability to Eliezer sometimes being Clippy.
Clippy does seem remarkably interested. It has a fair karma. It gives LessWrong as its own web site. The USA timezone is at least consistent. It seems reasonable to hypothesise some kind of inside job. It wouldn’t be the first time Yu’El has pretended to be a superintelligence.
FWIW, Clippy denies being Eliezer here.
I hesitate to mention it, but you can’t use that denial as evidence on this question, undeniably truthful though it was.
However, the form taken by that absence of evidence certainly seems to be evidence of something.
Clippy isn’t a superintelligence though, he’s a not-smarter-than-human AI with a paperclip maximizing utility function. Not a very compelling threat even outside his box.
Eliezer could have decided to be Clippy, but then Clippy would have looked very different.
FTFY. ;-)
Actually, if we’re going to be particular about it, the AI that human is pretending to be does not have a paperclip-maximizing utility function. It’s more like a person with a far-brain ideal of having lots of paperclips exist, who somehow never gets around to actually making any because they’re so busy telling everyone how good paperclips are and why they should support the cause of paper-clip making. Ugh.
(I guess I see enough of that sort of akrasia around real people and real problems, that I find it a stale and distasteful joke when presented in imitation paperclip form, especially since ISTM it’s also a piss-poor example of what a paperclip maximizer would actually be like.)
I’m not sure whether to evaluate this as a mean-spirited lack of a sense of humor, or as a profound observation. Upvoted for making me notice that I am confused.
Of note, the first comment by Clippy appears about 1 month after I asked Eliezer if he ever used alternate accounts to try to avoid contaminating new ideas with the assumption that he is always right. He said that he never had till that point, but said he would consider it in future.
Imitating Clippy posts is not particularly difficult—I don’t post as Clippy, but I could mimic the style pretty easily if I wanted to.
I’m afraid I’d have trouble—I’d be too tempted to post as Clippy better than Clippy does. :D
In addition to what Blueberry said, I remember a time when Morendil was browsing with the names anonymized, and he mentioned that he thought one of your posts was actually from Clippy. Ah, found it.
I know what you mean. If I was not me I would totally think I was Clippy.
That I would love to see. Actually, come to think of it, your sense of humor and posting style matches Clippy’s pretty well...
Not to mention that even assuming that Eliezer would be able to write in Clippy’s style, the whole thing doesn’t seem very characteristic of his sense of humor.
There is also a clear correlation between Clippy existing and CO2 emissions. Maybe Clippy really is out there maximising. :)
Really? User:Clippy’s first post was 20 November 2009. Anyone know when the “halo effect” comment was made?
Also, perhaps check out User:Pebbles (a rather obvious reference to this) - who posted on the same day—and in the same thread. Rather a pity those two didn’t make more of an effort to sort out their differences of opinion!
I don’t think Silas thought Eliezer personally knew them, but rather that Eliezer could look at IP addresses and see if they match with any other poster. Of course, this wouldn’t work unless the posters in question had separate accounts that they logged into using the same IP address.
Yes, that’s what I meant.
And good to have you back, Blueberry, we missed you. Well, *I* missed you, in any case.
Thanks! I missed you and LW as well. :)
If our understanding of the laws of physics is plausibly correct then you can’t simulate our universe in our universe. Easiest version where you can’t do this is in a finite universe, where you can’t store more data in a subset of the universe than you can fit in the whole thing.
What Nesov said. Also consider this: a finite computer implemented in Conway’s Game of Life will be perfectly able to “simulate” certain histories of the infinite-plane Game of Life—e.g. the spatially periodic ones (because you only need to look at one instance of the repeating pattern).
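(To make the “one instance of the repeating pattern” point concrete: a finite grid with wrap-around edges evolves exactly like one tile of an infinite plane covered with copies of that grid. Here is a minimal Python sketch; the blinker pattern and grid size are arbitrary illustrations, not anything from the discussion above.)

```python
def step(grid):
    """One Game of Life step on a finite grid with wrap-around (toroidal) edges.

    Its history exactly matches the history of one tile of the infinite plane
    tiled with copies of this grid, i.e. a spatially periodic configuration.
    """
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbours, wrapping around the edges.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            new[r][c] = 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
    return new

# Arbitrary example: a blinker on a 5x5 torus, stepped a few times.
grid = [[0] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = 1
for _ in range(4):
    grid = step(grid)
```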
You could simulate every detail with a (huge) delay, assuming you have infinite time and that the actual universe doesn’t become too “data-dense”, so that you can always store the data describing a past state as part of future state.
That may not be a problem if the universe contains almost no information. In that case the universe could Quine itself… sort of.
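(For readers unfamiliar with the term: a quine is a program whose output is exactly its own source code, which is the sense of “containing its own description” being gestured at here. A minimal Python example, purely illustrative:)

```python
# A program whose output is exactly its own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```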
If I’m reading that paper correctly, it is talking about information content. That’s a distinct issue from simulating the universe which requires processing in a subset. It might be possible for someone to write down a complete mathematical description of the universe (i.e. initial conditions and then a time parameter from that point describing its subsequent evolution) but that doesn’t mean one can actually compute useful things about it.
Sorry, but could you fix that link to go to the arXiv page rather than directly to the PDF?
Fixed.
I wonder if the content of such simulations wouldn’t be under-determined. Let’s say you have a proposed set of starting conditions and physical laws. You can test different progressions of the wave function against the present state of the universe. But a) there are fundamental limits on measuring the present state of the universe and b) I’m not sure whether or not each possible present state of the universe uniquely corresponds to a particular wave function progression. If they don’t correspond uniquely, or if we simply can’t measure the present state exactly, any simulation might contain some degree of error. I wonder how large that error would be: would it just be in determining the position of some air particle at time t, or would we have trouble determining whether or not Ramesses I had an even number of hairs on his head when he was crowned pharaoh?
Anyone here know enough physics to say if this is the kind of thing we have no idea about yet or if it’s something current quantum mechanics can actually speak to?
Only if you’re trying to falsify statements about your simulation, not about the universe you’re in. His statement is that you run experiments by thinking really hard instead of looking at the world and that is foolishness that should have died with the Ancient Greeks.
They match posts on the subject by Yudkowsky. The concept does not even seem remotely unintuitive, much less absurdly so.
So, a science fiction author as well as a science fiction movie? What evidence should I be updating on?
Nonfiction author at the time—and predominantly a nonfiction author. Don’t be rude (logically and conventionally).
I was hoping that you would be capable of updating based on understanding the abstract reasoning given the (rather unusual) premises. Rather than responding to superficial similarity to things you do not affiliate with.
If you link me to a post, I’ll take a look at it. But I seem to remember EY coming down on the side of empiricism over rationalism (the sort that sees an armchair philosopher as a superior source of knowledge), and “just simulate the entire universe” comments strike me as heavily in the camp of rationalism.
I think you might be mixing up my complaints, and I apologize for shuffling them in together. I have no physical context for hacking outside of the matrix, and so have no clue what he’s drawing on besides fictional evidence. Separately, I consider it stunningly ignorant to say “Just simulate the entire universe” in the context of basic epistemology, and hope EY hasn’t posted something along those lines.
Simulating the entire universe does seem to require some unusual assumptions of knowledge and computational power.
Which posts, and what specifically matches?
Didn’t we have this conversation already? Words can be wrong. You can’t easily divorce an existing word from its connotations, not by creating a new definition, certainly not by expecting the new definition to be inferred by the reader. There is no good reason to misuse words in this way, just state clearly what you intended to say (e.g. as komponisto suggested).
As it is, you are initiating an argument about definitions, activity without substance, controversy for the sake of controversy as opposed to controversy demanded by evidence.
That was a different conversation, though the same theme of using words incorrectly also came up, if that’s what you mean.
There are good reasons to do so among people who share the same language, like me and some SIAI folk. It makes communication faster, and makes it easier to see single step implications. Being precise has large consequences for brains that run largely on single step insights from cached knowledge. I agree that in the case of this post my choice of language was flat out wrong, though.
Arguments about definitions are very important! Choosing a language where it’s easier to see implications is important for bounded agents. That said, it wasn’t what I was trying to do with this post, and you’re right that it would have been a totally lost cause if that’s what I was trying to do.
To take advantage of this one might want to compress cached knowledge as much as possible; the resulting single step insights would then have correspondingly greater generality. Using structured personal knowledge databases along with spaced repetition would be one way of accomplishing this.
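(In case “spaced repetition” is unfamiliar: the idea is to schedule reviews of each cached item at growing intervals, so a large store of single-step insights stays retrievable at low ongoing cost. Below is a rough sketch of an SM-2-style interval update; the constants are the standard SM-2 ones, and none of it is specific to anything proposed in this thread.)

```python
def review(interval_days, ease, quality):
    """One SM-2-style update after reviewing an item.

    interval_days: how long we waited before this review
    ease:          easiness factor, typically starting around 2.5
    quality:       self-graded recall from 0 (forgot) to 5 (perfect)
    Returns (next_interval_days, new_ease).
    """
    if quality < 3:
        return 1, ease  # failed recall: review again tomorrow
    new_ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    next_interval = 6 if interval_days <= 1 else round(interval_days * new_ease)
    return next_interval, new_ease
```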
The basic problem of specific agent-created-this-universe hypotheses is that of trying to explain complexity with greater complexity without a corresponding amount of evidence. Things like the Simulation Argument and other notions of “agenty processes in general creating this universe” are certainly not as preposterous as theistic religion, particularly in the absence of a good understanding of how existence works, but I think it confuses things to refer to this as theism. If our universe is a simulation developed by a computer science undergrad (from another reality) for a homework assignment, then that doesn’t make them our God.
I recall a while ago that there was a brief thread where someone was arguing that phlogiston theory was actually correct, as long as you interpret it as identical to the modern scientific model of fire. I react to things like this similarly: theism/God were silly mistakes, let’s move on and not get attached to old terminology. Rehabilitating the idea of “theism” to make it refer to things like the Simulation Hypothesis seems pointless; how does lumping those concepts together with Yahweh (as far as common usage is concerned) help us think about the more plausible ones?
The SA is not a new physical theory that requires new evidence or fills in gaps in the current evidential record. It’s more of a metaphysical revelation or model update based on consequences of the future as modeled by current theory.
For example, imagine a world consisting of just Adam and Eve on an island. They eat fruit and live a peaceful existence, learning what they can to the limits of their observations. Based on the available evidence, they assume that they spontaneously appeared out of the ocean. Sometime much later Eve becomes pregnant and gives birth to a child which begins to slowly change into something resembling its parents.
At this point Adam and Eve have enough data to predict that they will spawn children which become likenesses of themselves, and it is also reasonable to conclude that they themselves originated from this process and have parents somewhere, rather than having crawled out of the ocean.
Our planet is ‘pregnant’ with a developing noosphere/technosphere which we can predict will eventually spawn many child universes very much like our own.
Any civilization or being powerful enough to simulate our reality is a god to us in every useful sense that the term god has ever had meaning.
Confusing future reality-simulating posthuman descendants with modern day grad students is like confusing bacterial DNA with the internet.
I am interested in why you want to call simulation arguments, Tegmark cosmology, and Singularitarianism theism. I don’t doubt there is a reference class that includes common-definition theistic beliefs as well as these beliefs; I do doubt whether that reference class is useful or desirable. At that point of broadness I feel like you’re including certain competing theories of physics in the class ‘theism’.
So I propose a hypothetical. Say LessWrong accepts this, and begins referring to these concepts as theistic, and renouncing their atheism if their Tegmarkian cosmological beliefs are stronger. What positive and what negative consequences do you expect from this?
The word “but” in the last sentence is a non-sequitur if there ever were one. Tegmark cosmology is not theism. Theism means Jehovah (etc). Yes, there are people who deny this, but those people are just trying to spread confusion in the hope of preventing unpleasant social conflicts. There is no legitimate sense in which Bostromian simulation arguments or Tegmarkian cosmological speculations could be said to be even vaguely memetically related to Jehovah-worship.
The plausibility of simulations or multiverses might be an open question, but the existence of Jehovah isn’t. There’s a big, giant, huge difference. If we think Tegmark may be correct, then we can just say “I think Tegmark may be correct”. There is no need to pay any lip-service to ancient mistakes whose superficial resemblance to Tegmark (etc) is so slight that you would never notice it unless you were motivated to do so, or heard it from someone who was.
Isn’t this—I’m sorry if that sounds harsh—arguing by a forceful say-so? Sure, if you constrain theism rhetorically to “Jehovah-worship”, that practice doesn’t sound very similar to the Bostromian arguments. But “Bostromian arguments/Tegmarkian speculations” and “the claim that a god created the universe” sound pretty much memetically related to me.
You’re saying that e.g. “we are living in a simulation run by sentient beings” and “we are living in a universe created by a sentient being” are such wildly different ideas that there’s only superficial resemblance between them, and even that resemblance is unlikely to be noticed by anyone just thinking about the issue, and is rather spread as a kind of a perverse meme.
Methinks thou dost protest too much.
The earliest time I can remember that anyone drew a very explicit connection between simulations and theism is in Stanislaw Lem’s short story about Professor Corcoran. The book was originally published in 1971, when Bostrom was −2 years old. It’s in the second volume of his Star Diaries; see “Further Reminiscences of Ijon Tichy: I” in this (probably pirated) scribd doc. I’d recommend it to anyone. Of course, it’s very much possible that Lem wasn’t the first to write up the idea.
See Religion’s Claim to be Non-Disprovable for discussion of what religion is and how it arose. By “memetically related” I do not mean “memetically similar” (although I don’t think there’s much similarity either); I mean “related” in the sense of ancestry/inheritance. Bostrom’s and Tegmark’s arguments are not a branch of religion; they do not belong in that cluster.
No. The implication of the post, as I perceived it (have a look at its first paragraph) was “you guys shouldn’t be so confident in your dismissal-of-religion (‘atheism’); after all, you (perhaps rightly) are willing to entertain the ideas of Tegmark!”
Surely you understand what is wrong with this.
You think I don’t believe what I’m writing?
I think you’re wrong on similarity [1] and irrelevant on ancestry/inheritance. Only some among currently active religions are clearly “related” in the sense you employ (e.g. Judaism and Christianity); there’s no strong evidence that most or all are so related. Since you presumably have no problem lumping them together under “religion”, the claim that BTanism (grouped and named so purely for convenience) has no common ancestry with these religions is irrelevant to whether it should be judged a religion.
Also, I don’t read the post as claiming “you guys are so dismissive of religion, but you’re big on BTanism which is just as much a religion, so there!”. Instead, I read the post as claiming “you guys are unreasonable in your overt dismissal of theism and your forceful insistence on it being a closed question, considering many of you are big on BTanism which has similar epistemological status to some varieties of theism”. So it doesn’t matter much whether BTanism is a religion or not; if that bothers you too much, just employ Taboo and talk about something like “a sentient being responsible for the creation of the observable universe” instead.
I don’t fully agree with this idea (the post’s argument as I read it), but I find myself somewhat sympathetic to it. It is indeed true in my opinion that the overt and insistent dismissal of theism on LW is a community-cohesiveness driven phenomenon. There’s illuminating prior discussion at The uniquely awful example of theism.
No, I have no doubt that you believe what you’re writing. Rather, I think that the strongly dismissive claims in your first comment in the thread, unbacked by any convincing argument or evidence, cause me to think that a strong cognitive bias is at work.
[1] Really, the similarity is so strong that I see no need for a detailed argument; but if one is desired, I think Lem’s story, to which I linked earlier, serves admirably as one.
This does not follow. It is not necessary for my argument that different religions all be related to each other; it is only necessary that BTanism not be related to any of them, and (this part I asserted implicitly by linking to Religion’s Claim to be Non-Disprovable) that it not have been generated by a similar process.
Varieties of “theism” which have similar epistemological status to BTanism are not subject on LW to the same kind of dismissal as religion, to the best of my knowledge. Nor should they be. But for the sake of avoiding confusion and undesirable connotations, they certainly shouldn’t be called “theism”.
If what you mean here is “merely community-cohesiveness driven phenomenon”, then I disagree entirely. You might have been right if this were RichardDawkins.net or another specifically atheism-themed community, but it isn’t. This is Less Wrong. Our starting point here is epistemology. Rejection of religion (“theism”) is a consequence of that; the rejection may be strong but it is still incidental.
For my part, I see “open-mindedness” toward theism mostly as manifesting an inability to come to gut-level terms with the fact that large segments of the human population can be completely, totally wrong. The next biggest source after that is Will’s problem, which is the pleasure that smart people derive from being contrarian and playing verbal and conceptual games. (If you like that, for goodness’ sake be an artist! But keep your map-territory considerations pure.)
Which?
Again, this is Less Wrong, not a random internet forum. It is not possible to recapitulate the Sequences in every comment; that doesn’t mean that strong opinions whose justifications lie therein are inadequately supported.
OK, I think I now understand the implicit part; I think you mean that religions of old made total, and not merely ontological, claims, which BTanism doesn’t (I wasn’t sure before what you were picking up from Religion’s Claim to be Non-Disprovable, which I do know and read before; I thought it had something to do with disprovability).
I think you’re right to point to that distinction.
Well, why not, if they’re varieties of theism? Perhaps it’d be better if LW found another word to condemn, other than theism?
Such a word could be… theism! It does have two definitions, a broad and a narrow one. I checked a few dictionaries to be sure, and one of them helpfully elucidated the broad one as “the opposite of atheism”, and the narrow one as “the opposite of deism”.
“Largely”, rather than “merely”, is how I would put it. I’m not certain I understand the rest of your paragraph. To my mind, atheism (or, more precisely, strong dismissal of theism) being incidental to LW’s charter doesn’t mean it can’t become a way to cohere the group, to nurture a sense of belonging. Note, by the way, that rejection of theism made it to the Welcome post, and is a unique example of a specific shared LW value there. Although that may be for pragmatic rather than signalling reasons.
That’s an interesting theory I’d have to think about. Do you consider agnosticism as a subset of “open-mindedness”, and thus the above as the primary explanation of agnosticism?
I don’t know; there are several possibilities and it’d be impolite, not to mention fruitless, on my part to speculate.
Agreed in general.
Not sure how well this applies in the particular case. This thread has focused on two assertions in your original comment: “[not] memetically related” and “superficial resemblance … is so slight that you would never notice it unless you were motivated to do so, or heard it from someone who was”. You cited a Sequence post in your follow-up comment about the former (but I don’t see any reference to that post or the idea of total claims of religions in your original comment—correct me if you disagree), and after some thickness on my part I acknowledge its relevance here. You don’t seem to rely on anything from the Sequences for the latter.
The lumping together of religions under the category of “religion” isn’t based on common ancestry, and neither it is based solely on “universe was created by god(s)”. Religions have much more in common, e.g. reliance on tradition, sacred texts, sacred places, worship, prayer, belief in afterlife, claims about morality, self-declared unfalsifiability, anthropomorphism, anthropocentrism. Saying that simulation arguments belong to the same class as Judaism, Hinduism or Buddhism because they all claim that the world was created by intelligent agents is like putting atheism to the same category because it is also a belief about gods.
You’re making good points, with which I largely agree, with some reservations (see below). I’d just point out that this wasn’t the argument Komponisto was making—he was talking only about relatedness in the ancestry sense.
Your list of attributes is probably good enough to distinguish e.g. a simulation argument from “religions” and justify not calling it one. There are two difficulties, however. One is that adherence to these attributes isn’t nearly as uniform among religions as it’s often rhetorically assumed on LW to be. There’s a tendency to: start talking about theism; assume in your argument that you’re dealing with something like an omnipresent, omniscient monotheistic God of Judaism/Christianity whose believers are all Bible literalists; draw the desired conclusion and henceforth consider it applying to “theism” or “religion” in general. I find this fallacious tendency to be frequent in discussions of theism on LW. This comment from the earlier discussion is relevant, as are some other comments there. In this post, Eliezer comments that believing in simulation/the Matrix means you’re believing in powerful aliens, not deities. Well, consider ancient Greek gods; they are not omniscient, not omnipresent, they can die… they’re not more powerful than the simulation runners, and arguably not very ontologically different; are they not deities, but aliens? Was that not religion? [1]
It’s kind of understandable that one thinks of the concept of God and Jehovah pops into view. But if you stick with Jehovah—and not even any Jehovah, but a particular, highly literally interpreted kind—it’s no good pretending afterwards that you’ve dealt a blow to religion or to theism.
So proper account of what religions are actually out there makes your list of attribute much less universal, and the dividing line between religions and something like BTanism much less sharp. But, to be clear, I still think this line can be usefully drawn.
The second difficulty is something I’ve already written to Komponisto above: OK, it’s not a religion, so what? The really important thing is whether it’s like a religion in those things that ought to make a rationalist not glibly and gleefully dismiss one if they’re psyched about another. And among those things worship and sacred texts are arguably less important than e.g. falsifiability. Have you seen a good way to falsify a simulation claim recently?
[1] I just remembered that Dan Simmons develops this theme in Ilium/Olympos. The second book is much worse than the first one.
The Greek gods were, in fact, immortal. Other gods could wound or imprison them, but they couldn’t be killed. The Norse gods, on the other hand, could indeed die, and were fated to be destroyed in the Ragnarok.
Thanks! I’m not sure how come I was confused about this, but it’s great to be corrected.
I know; nevertheless I wanted to stress that we don’t define religion by a single criterion.
Therefore I haven’t listed omni-qualities, immortality and ontological distinctiveness among my criteria for religion. If you look at those criteria, the Greek religion satisfied almost all, save perhaps sacred texts and claims of unfalsifiability (it seems they did not have enough time to develop the former and no reason for the latter). Religion usually goes beyond the question of the existence and identity of gods.
(Now we can make distinction between religion and theism, with the latter being defined solely in terms of god’s existence and qualities. I am not sure yet what to think about that possibility.)
The line is not sharp, of course. Many people argue that Marxism is a religion, even though it explicitly denies god, and may have based that opinion on good arguments. It is also not entirely clear what to think about Scientology. Religion, or simply cult? I don’t think the classification is important at all.
No, I haven’t. Actually my approach to simulation arguments is not much different from my approach to modern vague forms of theism: I notice it, but don’t take it seriously.
It depends. Belief in the importance, hidden message, or even literal truth of ancient texts is generally a more reliable indicator of practical irrationality than having an opinion about some undecidable propositions is.
I think we’ve converged on violent agreement, except one point:
You’re right. I retract this part.
I like the phrase.
So if I may take the implication: you don’t take the SA seriously because . . it seems memetically similar to ideas espoused or held by agents you deem irrational?
Do you believe in calculus? Gravitation?
I thought it was clear from the previous discussion that the reason was the pretty weak testability of simulationism, rather than ad hominem reasoning.
Conflating simulationism with calculus or gravitation is absurd. Our universe would look very different if calculus or gravitation did not exist as we understand them, whereas we have no reason at all to suppose this is true of the simulation argument. There are statistical arguments for supposing it’s true, but not all the assumptions in the mathematical model are given, and it increases the complexity of our model of reality without providing any explanatory power.
Calculus is a generic algorithmic tool, gravitation is an algorithmic predictive model of some subset of reality, simulationism is a belief about reality derived from future predictions of current physical theory. Yes these are distinct epistemological categories, my point was more that the similarity of simulationism to the older theism is an inadequate reason to dismiss simulationism.
This is I believe a common misunderstanding about the SA.
Suppose you are given a series of seemingly random numbers—say from a SETI signal. You put a crack team of mathematicians on it for many years and eventually they develop a complex model for the sequence that can predict it. It also appears that you can derive timing from the signal and determine how long it has been progressing. Then later you are able to run the model forward and predict that it in fact eventually repeats itself . . .
That last discovery is not a change to the model that need be justified by Ockham’s razor. It does not add one iota to the model’s complexity.
The SA doesn’t add an iota of complexity to our model of reality—ie physics. It’s a predicted consequence of running physics forward.
Not necessarily. Given our understanding of the laws of physics, simulating our universe inside itself would be tough. Note that nothing in the simulation hypothesis requires that we are being simulated in a universe that has much resemblance to our apparent universe. (Digression: Even small amounts of monkeying with the constants of the universe can make universes that can plausibly give rise to life. See here (unfortunately everything beyond the summary is behind a paywall). And in some of those cases, it seems plausible that large scale computation might be easier. If certain inflationary models are correct then there should be lots of different universal bubbles with slightly different physical laws. Some of those could be quite hospitable to large-scale computation.)
The simulation argument isn’t a predicted consequence of running physics forward; the scenario you put forward doesn’t establish that we exist in a simulation, just that our universe follows predictable rules that can be forward computed. Postulating an entire universe outside the one we observe does add to the complexity of that model. The simulation argument is a probabilistic argument that states that if certain assumptions hold then most apparent universes are in fact simulated by other universes, and thus our own is probably a simulation.
Not so at all. A model’s complexity is not determined by the entities it references or postulates.
For example, I have a model of the future which postulates new processors every few years. The model is not complex enough to capture every new processor from here to infinity. Nor does it need to be. The model is simple, yet it can generate new postulated entities.
You in effect are saying that my model, which postulates many new future processors, is somehow more ‘complex’ than a model which postulates just three, or none.
An entire external universe adds to the complexity of the model, not just how many entities the model contains.
This may not be the case if the simulation itself was produced in the universe as we know it, and our own apparent universe is only a simulated fragment. That isn’t what I thought you were asserting, but that is untenable for completely separate reasons.
What do you mean by complexity and how is it at all relevant?
Take Conway’s life for example. Tons of apparent complexity can emerge from rules simple enough to write on a bar napkin.
Was the copernican model ‘wrong’ because it made our universe-model more complex? Was the discovery of multiple galaxies wrong for similar reason? Many worlds?
The only formal definition of complexity that is well justified is algorithmic complexity, and it has some justification as a quality metric for deciding between theories in terms of Solomonoff induction.
The formal complexity of a universe-model is that of its simplest reduction.
The simplest reduction for any scientific model is universal physics.
So there is only one model, all complexity emerges from it, and saying things like “your premise X adds to the complexity of the model” is untrue and equivalent to saying your “premise X makes the model smell bad”.
Adding a universe external to this one doesn’t just add more stuff. To take the Conway’s Game of Life example, suppose that you simulated an entire universe inside it, from the beginning. For the inhabitants, a model that not only explained how their universe worked, but postulated the existence of our universe, would be more complex than one that merely explained their own. With evidence that their reality was a simulation, the proposition could be made more likely than the proposition that it stood alone.
In terms of minimum message length, having to describe another universe superordinate to your own adds to the information of the model, not just the entities described in it. The addition of our own universe could not be encapsulated in a model that simply describes the working of the simulated Conway universe from the inside without adding more information.
Once you have a model that includes a universe and the capacity to simulate universes, you can add universes to the model without much additional complexity, because the model can be recursively defined. The minimum message length need not be increased much to add new universes; you just edit the escape clause. Where we are in the model doesn’t matter.
You seem to be thinking in terms of time complexity. Space complexity also needs to be considered. It seems axiomatic to me that an outer universe simulation can only contain nested universe simulations of lower space complexity than itself.
If I am wrong, is there some discussion of this kind of issue online, or in a well-known paper or textbook?
This only follows if your universe can not only model other universes but can easily model universes that share its own rules of physics. This is a much stronger claim about the nature of a universe (for example, it seems likely that this is not true about our universe.)
The SA does not ‘add’ a universe external to the model. The SA is a deduction derived from the Singularity-model. The Singularity-model does not ‘add’ the external universes either, they emerge within it naturally, just as naturally as future AI’s do.
That would only be true if their model was not also a full explanation of our universe, and thus isomorphic to some historical slice of our universe.
Not at all. The Singularity-model is a scientific extrapolation of our observed history into the future. As it is scientific, it reduces to physics (the model approximates what we believe would happen if we could simulate physics into the future).
The SA is not a model at all. It is a deduction which can be simplified down to:
If the Singularity-model is accurate.
Then most observable universes are simulations.
And thus our observable universe is a simulation.
You seem to think the minimum message length is somehow physics + extra simulations scrawled in. The physics generates everything, so it’s already minimal.
No—but only because the physics differ substantially. You are right of course that if Conway beings evolved and somehow had some singularity of their own in their future that generated simulated Conway universes, they would assign a lower prior to being embedded in a String/M-theory universe like ours (they of course could still be wrong, as complexity is just a reasonable bias measure). They’d attach higher credence to being embedded in a Conway universe.
But if the simulated universe is based on the same physics, then it reduces to exactly the same minimal program, and it absolutely describes both universes.
This is very similar to the multiverse in physics and the space of universes string/M-whatever theory can generate.
As I mentioned before, I thought you were arguing the orthodox simulation argument, rather than one where the simulations are created from within our own universe. That would not necessarily increase the complexity of the model, but it’s untenable for its own reasons.
For one thing, it’s far from given that any civilization would ever want to simulate the universe at a previous point; the reasons you provided before don’t remotely justify such a project; it’s not a practical use of computing power. For another, assuming you’re only simulating small fractions of the history of existence, the majority of all sentient beings in the universe would not be ones in a simulation. In fact, you would have to defy a number of probable assumptions about our universe to fit as much universe space and time in the simulation as existed outside it.
That. I think after all the comments I’ve scanned in this post, this was the first one where I really felt like I understood what the post was even really about. Thank you.
The OP does not make mention of the term ‘religion’. Part of the confusion seems to stem from the conflation of theism and religion.
Theism is a philosophical belief about the nature of reality. The truthfulness of this belief as a map of reality is not somehow dependent or connected in belief space to magic rituals, prayers, voodoo dolls or the memes of organized religion, even if they historically co-occur.
I beg to differ. In my view, the conflation is of theism with simulationism.
The way I read it, it seems like Will_Newsome is not using the word in this way. It may be a case of two concepts being mistakenly filed into the same basket—certainly some people might, when they hear “Theism-in-general is a mistaken and sometimes harmful way of thinking about the world”, understand “theism-in-general” to mean “any mode of thought that acknowledges the possibility of some intelligent mind that is outside and in control of our universe”. Under this interpretation, the assertion is quite obviously false (or at least, not obviously true).
I wonder if there is still a disagreement if we Taboo “theism”? (Though your point in the last paragraph is a good one, I think.)
Indeed not; hence my criticism!
For some reason you seem to be categorizing the belief-space such that there is a little pocket called Jehovah-ism over here and then simulationism is another distinct island far far away.
The way I see it, theism is a whole vast space of belief-space, roughly dividing from the split based on the question: was the observable universe created by an agenty-process?
The SA leads us into that side of the belief-space, but the type of Jehova-ism you mention is just a little slice of a large territory.
The two may branch in the same direction from that question, but that doesn’t mean that their consequences are remotely similar. You seem to be substituting in cached thoughts from religion as the consequences of simulationism when they really don’t follow from it.
Such as?
Such as the morality of the simulators having any relation to our own. It would be much easier to simulate a universe from big bang conditions, starting with a few basic rules and allowing it to evolve on its own, than to deliberately engineer any sort of life forms within it, and the basic rules of our universe do not dictate that any intelligent life form needs a utility function that closely resembles our own.
Assuming it would even be practical for the simulators to single us out for observation, as such a minuscule part of the simulation, and that they would judge us according to their own utility function, it’s a big leap to suppose that they would do anything about it with repercussions inside our own universe, so for our purposes it probably wouldn’t matter.
Additionally, it’s not established that the simulators would have practical control over the simulation. Given JoshuaZ’s arguments, I concede that it’s theoretically possible that the simulators could predict the output of the simulation in advance without running it, but that doesn’t mean it’s probable, let alone given.
I suspect that a full universe simulation of all of space-time, fifteen billion years of an entire universe, may have a cost complexity such that it could never be realized in any currently conceivable computer due to speed of light limitations. Even a galaxy-sized black hole may not be sufficient. You are talking about a Tipler-like scenario that would probably require some massive re-engineering of the entire universe. I can’t rule this out, but from what I’ve read of astrophysicists’ reactions, it is questionable whether it is possible even in principle to collapse the universe in the fashion required. (Tipler figures it requires tachyons in his later response writings.)
So no, that would not be much easier to simulate—it would be vastly more difficult, and may not even be possible in principle.
The more likely simulation is one run by our posthuman descendants after a local Singularity on Earth, where they have a massive amount of computation, enough to simulate perhaps a galaxy or galaxies full of virtual humans, but not the entire history of our universe. We must remember that they will want to simulate many possible samples as well. They will also probably simulate hypothetical aliens and hypothetical contact scenarios. Basically they will simulate important sample time-slices of the future.
Today humanity as a whole spends a large amount of time thinking about the present, slightly alternate versions of the present, historical time heavily weighted by importance, and projected futures. We are already engaging in the limited creation of simulated realities. The phenomenon started with dreams, language, and thought, and has more recently been amplified by computer simulation and graphics; just chart that trajectory out into the future and amplify it by an exponential vastening...
This is not the ordinary simulation argument, or even closely related to it. The proposition that you reject, that our universe is simulable in its entirety, is one of the premises of that argument.
I for one strongly predict that our future descendants will never create a galaxy or multiple galaxies of virtual humans from their own past. It’s ethically dubious, and far, far from being one of the most useful things they could do with that computing power if they simply want to determine the likely outcome of various contact scenarios or what hypothetical aliens would be like. By the time we’re capable of it, it simply wouldn’t have much to recommend it as an idea.
I didn’t mean to talk about Jehovah specifically; I thought that using ‘theism’ would imply enough generality that I could get away without clarification, but I was obviously very mistaken. I added a sentence to the end of the post.
Your second paragraph seems to correctly point out a problem with my terminology. Nonetheless perhaps we could also have discussion on what I was (admittedly poorly) trying to start a discussion about, that is, the apparent contradiction between believing strong optimization processes outside the observable universe are possible and believing that such an optimization process didn’t create the observable universe?
Nor, for that matter, did I: Zeus, Thor, and their innumerable counterparts should be considered included in the reference.
The way to have done that, in my opinion, would have been to title the post “Simulation/creator arguments” or something similar, and to avoid any mention of theism, atheism, or religion in the body of the post.
It was brave to even consider using a concept within a few inferential leaps from Jehovah here. :)
The only fact necessary to rationally be an atheist is that there is no evidence for a god. We don’t need any arguments—evolutionary or historical or logical—against a hypothesis with no evidence.
The reason I don’t spend a cent of my time on it is because of this, and because all arguments for a god are dishonest, that is, they are motivated by something other than truth. It’s only slightly more interesting than the hypothesis that there’s a teapot around Venus. And there are plenty of other things to spend time on.
As a side note, I have spent time on learning about the issue, because it’s one of the most damaging beliefs people have, and any decrease in it is valuable.
I contend that there is evidence for a god. Observation: Things tend to have causes. Observation: Agenty things are better at causing interesting things than non-agenty things. Observation: We find ourselves in a very interesting universe.
Those considerations are Bayesian evidence. The fact that many, many smart people have been theistic is Bayesian evidence. So now you have to start listing the evidence for the alternate hypothesis, no?
Do you mean all arguments on Christian internet fora, or what? There’s a vast amount of theology written by people dedicated to finding truth. They might not be good at finding truth, but it is nonetheless what is motivating them.
I should really write a post on the principle of charity...
I realize this is rhetoric, but still… seriously? The question of whether the universe came into being via an agenty optimization process is only slightly more interesting than teapots orbiting planets?
I agree that theism tends to be a very damaging belief in many contexts, and I think it is good that you are fighting against its more insidious/irrational forms.
I can’t help but feel that this sentence pervasively redefines ‘interesting things’ as ‘appears agent-caused’.
As curious agents ourselves, we’re pre-tuned to find apparently-agent-caused things interesting. So, I don’t think a redefinition necessarily took place.
This is sort of what I meant. I am leery of accidentally going in the reverse direction—so instead of “thing A is agent-caused → pretuned to find agent-caused interesting → thing A is interesting” we get “thing A is interesting → pretuned to find agent-caused interesting → thing A is agent-caused”.
This is then a redefinition; I have folded agent-caused into “interesting” and made it a necessary condition.
I suppose that their ratio is very high, but that their difference is still extremely small.
As for your evidence that there is a god, I think you’re making some fundamentally baseless assumptions about how the universe should be “expected” to be. The universe is the given. We should not expect it to be disordered any more than we should expect it to be ordered. And I’d say that the uninteresting things in the universe vastly outnumber the interesting things, whereas for humans they do not.
Also, I must mention the anthropic principle. A universe with humans must be sufficiently interesting to cause humans in the first place.
But I do agree that many honest rational people, even without the bias of existent religion, would at least notice the analogy between the order humans create and the universe itself, and form the wild but neat hypothesis that it was created by an agent. I’m not sure if that analogy is really evidence, anymore than the ability of a person to visualize anything is evidence for it.
You can’t just not have a prior. There is certainly no reason to assume that the universe as we have found it has the default entropy. And we actually have tools that allow us to estimate this stuff: the complexity of the universe we find ourselves in is dependent on a very narrow range of values in our physics. Yes, I’m making the fine-tuning argument, and of course knowing this stuff should increase our probability estimate for theism. That doesn’t mean P(Jehovah) is anything but minuscule—the prior for an uncreated, omnipotent, omniscient and omni-benevolent God is too low for any of this to justify confident theism.
Some of it anyway.
Isn’t it interesting how there’s so much raw material that the interesting things can use to make more interesting things?
Really? Your explanation for why there’s lots of stuff is that an incredibly powerful benevolent agent made it that way? What does that explanation buy you over just saying that there’s lots of stuff?
Again, some of it. The vast vast majority of raw material in the universe is not used, and has never been used, for making interesting things.
Why are you ignoring the future?
Back when I used to hang around over at talk.origins, one of the scientist/atheists there seemed to think that the sheer size of the universe was the best argument against the theist idea of a universe created for man. He thought it absurd that a dramatic production starring H. sapiens would have such a large budget for stage decoration and backdrops when it begins with such a small budget for costumes—at least in the first act.
Your apparent argument is that a big universe is evidence that Someone has big plans for us. The outstanding merit of your suggestion, to my mind, is that his argument and your anti-argument, if brought into contact, will mutually annihilate leaving nothing but a puff of smoke.
Are you proposing that in the future we will necessarily end up using some large proportion of the universe’s material for making interesting things? I mean, I agree that that’s possible, but it hardly seems inevitable.
I think that is more-or-less the idea, yes—though you can drop the “necessarily ”.
Don’t judge the play by the first few seconds.
The reason I put in “necessarily” is because it seems like Will Newsome’s anthropic argument requires that the universe was designed specifically for interesting stuff to happen. If it’s not close to inevitable, why didn’t the designer do a better job?
Maybe there’s no designer. Will doesn’t say he’s 100% certain—just that he thinks interestingness is “Bayesian evidence” for a designer.
I think this is a fairly common sentiment—e.g. see Hanson.
Necessarily? Er… no. But I find the arguments for a decent chance of a technological singularity to be pretty persuasive. This isn’t much evidence in favor of us being primarily computed by other mind-like processes (as opposed to getting most of our reality fluid from some intuitively simpler more physics-like computation in the universal prior specification), but it’s something. Especially so if a speed prior is a more realistic approximation of optimal induction over really large hypothesis spaces than a universal prior is, which I hope is true since I think it’d be annoying to have to get our decision theories to be able to reason about hypercomputation...
Yes!
Possible prior work: Why and how to debate charitably, by User:pdf23ds.
Your choice of wording here makes it obvious that you are aware of the counter-argument based on the Anthropic Principle. (Observation: uninteresting venues tend not to be populated by observers.) So, what is your real point?
I would think “Observers who find their surroundings interesting duplicate their observer-ness better” is an even-less-mind-bending anthropic-style argument.
Also this keeps clear that “interesting” is more a property of observers than of places.
(nods) Yeah, I would expect life forms that fail to be interested in the aspects of their surroundings that pertain to their ability to produce successful offspring to die out pretty quickly.
That said, once you’re talking about life forms with sufficiently general intelligences that they become interested in things not directly related to that, it starts being meaningful to talk about phenomena of more general interest.
Of course, “general” does not mean “universal.”
If we have a prior of 100 to 1 against agent-caused universes, and .1% of non-agent universes have observers observing interestingness while 50% of agent-caused universes have it, what is the posterior probability of being in an agent-caused universe?
I make it about 83% if you ignore the anthropic issues (by assuming that all universes have observers, or that having observers is independent of being interesting, for example). But if you want to take anthropic issues into account, you are only allowed to take the interestingness of this universe as evidence, not its observer-ladenness. So the answer would have to be “not enough data”.
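(For concreteness, the arithmetic behind that 83%, using only the made-up numbers from the parent comment:)

```python
# Odds form of Bayes' theorem with the parent comment's illustrative numbers.
prior_odds = 1 / 100              # 100-to-1 against an agent-caused universe
likelihood_ratio = 0.50 / 0.001   # P(interesting | agent) / P(interesting | non-agent)

posterior_odds = prior_odds * likelihood_ratio        # = 5
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)             # 0.833..., i.e. about 83%
```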
You can’t not be allowed to take the observer-ladenness of a universe as evidence.
Limiting case: Property X is true of a universe if and only if it has observers. May we take the fact that observers exist in our universe as evidence that observers exist there?
I have no idea what probability should be assigned to non-agent universes having observers observing interesting things (though for agent universes, 50% seems too low), but I also think your prior is too high.
I think there is some probability that there are no substantial universe simulations, and some probability that the vast majority of universes are simulations, but even if we live in a multiverse where simulated universes are commonplace, our particular universe seems like a very odd choice to simulate unless the basement universe is very similar to our own. I also assign a (very) small probability to the proposition that our universe is computationally capable of simulating universes like itself (even with extreme time dilation), so that also seems unlikely.
Probabilities were for example purposes only. I made them up because they were nice to calculate with and sounded halfway reasonable. I will not defend them. If you request that I come up with my real probability estimates, I will have to think harder.
Ah, well your more general point was well-made. I don’t think better numbers are really important. It’s all too fuzzy for me to be at all confident about.
I still retain my belief that it is implausible that we are in a universe simulation. If I am in a simulation, I expect that it is more likely that I am by myself (and that conscious or not, you are part of the simulation created in response to me), moderately more likely that there are a small group of humans being simulated with other humans and their environment dynamically generated, and overall very unlikely that the creators have bothered to simulate any part of physical reality that we aren’t directly observing (including other people). Ultimately, none of these seem likely enough for me to bother considering for very long.
The first part of your belief that “it is implausible that we are in a universe simulation” appears to be based on the argument:
If simulationism, then solipsism is likely.
Solipsism is unlikely, so . . .
Chain of logic aside, simulationism does not imply solipsism. Simulating N localized space-time patterns in one large simulation can be significantly cheaper than simulating N individual human simulations. So some simulated individuals may exist in small solipsist sims, but the great majority of conscious sims will find themselves in larger shared simulations.
Presumably a posthuman intelligence on earth would be interested in earth as a whole system, and would simulate this entire system. Simulating full human-mind equivalents is something of a sweet spot in the space of approximations.
There is a massive sweet spot, an extremely efficient method, for simulating a modern computer: simulate it at the level of its Turing-equivalent circuit. Simulating it at a level below this, say at the molecular level, is just a massive waste of resources, while any simulation above this loses accuracy completely.
It is postulated that a similar simulation scale separation exists for human minds, which naturally relates to uploads and AI.
I don’t understand why human-mind equivalents are special in this regard. This seems very anthropocentric, but I could certainly be misinterpreting what you said.
Cheaper, but not necessarily more efficient. It matters which answers one is looking for, or which goals one is after. It seems unlikely to me that my life is directed well enough to achieve interesting goals or answer interesting questions that a superintelligence might pose, but it seems even more unlikely that simulating 6 billion humans, in the particular way they appear (to me) to exist is an efficient way to answer most questions either.
I’d like to stay away from telling God what to be interested in, but out of the infinite space of possibilities, Earth seems too banal and languorous to be the one in N that have been chosen for the purpose of simulation, especially if the basement universe has a different physics.
If the basement universe matches our physics, I’m betting on the side that says simulating all the minds on Earth and enough other stuff to make the simulation consistent is an expensive enough proposition that it won’t be worthwhile to do it many times. Maybe I’m wrong; there’s no particular reason why simulating all of humanity in the year of 2011 needs to take more than 10^18 J, so maybe there’s a “real” milky way that’s currently running 10^18 planet-scale sims. Even that doesn’t seem like a big enough number to convince me that we are likely to be one of those.
I meant there is probably some sweet spot in the space of [human-mind] approximations, because of scale separation, which I elaborated on a little later with the computer analogy.
Cheaper implies more efficient, unless the individual human simulations somehow have a dramatically higher per capita utility.
A solipsist universe has extraneous patchwork complexity. Even assuming that all of the non-biological physical processes are grossly approximated (not unreasonable given current simulation theory in graphics), they still may add up to a cost exceeding that of one human mind.
But of course a world with just one mind is not an accurate simulation, so now you need to populate it with a huge number of pseudo-minds which are functionally indistinguishable from the perspective of our sole real observer but somehow use far fewer computational resources.
Now imagine a graph of simulation accuracy vs computational cost of a pseudo-mind. Rather than being linear, I believe it is sharply exponential, or J-shaped with a single large spike near the scale separation point.
The jumping point is where the pseudo-mind becomes a real, conscious observer in its own right.
The rationale for this cost model and the scale separation point can be derived from what we know about simulating computers.
Perhaps not your life in particular, but human life on earth today?
Simulating 6 billion humans will probably be the only way to truly understand what happened today from the perspective of our future posthuman descendants. The alternatives are . . . creating new physical planets? Simulation will be vastly more efficient than that.
The basement reality is highly unlikely to have different physics. The vast majority of simulations we create today are based on approximations of currently understood physics, and I don’t expect this to ever change; simulations have utility for simulators.
I’m a little confused about the 10^18 number.
From what I recall, at the limits of computation one kg of matter can hold roughly 10^30 bits, and a human mind is in the vicinity of 10^15 bits or less. So at the molecular limits a kg of matter could hold around a quadrillion souls: an entire human galactic civilization. A skyscraper of such matter could give you 10^8 kg, and so on. Long before reaching physical limits, posthumans would be able to simulate many billions of entire earth histories. At the physical molecular limits, they could turn each of the moon’s roughly 10^22 kg into an entire human civilization, for a total of 10^37 minds.
The potential time scale compression is nearly as vast, with estimated speed limits at around 10^15 ops/bit/sec in ordinary matter at ordinary temperatures, vs at most 10^4 ops/bit/sec in human brains, although not dramatically higher than the 10^9 ops/bit/sec of today’s circuits. The potential speedup of more than 10^10 over biological brains allows for about one hundred years per second of sidereal time.
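A quick sanity check of these figures, using only the round numbers quoted in this thread; the constants are the thread’s rough estimates rather than measured values, and the conversion to subjective years is my own addition:

```python
# Back-of-envelope check of the storage and speed figures quoted above.
BITS_PER_KG_LIMIT  = 1e30   # claimed molecular storage limit for ordinary matter
BITS_PER_MIND      = 1e15   # rough size of a human mind
OPS_PER_BIT_MATTER = 1e15   # claimed speed limit, ops/bit/sec, ordinary matter
OPS_PER_BIT_BRAIN  = 1e4    # rough biological switching rate
MOON_MASS_KG       = 1e22   # order of magnitude
SECONDS_PER_YEAR   = 3.15e7

minds_per_kg = BITS_PER_KG_LIMIT / BITS_PER_MIND        # ~1e15, "a quadrillion souls" per kg
moon_minds   = MOON_MASS_KG * minds_per_kg              # ~1e37 minds from the moon's mass
raw_speedup  = OPS_PER_BIT_MATTER / OPS_PER_BIT_BRAIN   # ~1e11 over biological brains
years_per_second = raw_speedup / SECONDS_PER_YEAR       # subjective years per wall-clock second

print(f"{minds_per_kg:.0e} minds/kg, {moon_minds:.0e} minds from the moon")
print(f"raw speedup ~{raw_speedup:.0e}, ~{years_per_second:.0f} subjective years per second")
# The "about one hundred years per second" above corresponds to a more conservative
# effective speedup of ~3e9; the raw ratio printed here is an upper bound.
```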
I understand that for any mind, there is probably an “ideal simulation level” which has the fidelity of a more expensive simulation at a much lower cost, but I still don’t understand why human-mind equivalents are important here.
Which seems pretty reasonable to me. Why should the value of simulating minds be linear rather than logarithmic in the number of minds?
Agreed, but I also think that the cost of simulating the relevant stuff necessary to simulate N minds might be close to linear in N.
I agree, though as a minor note if cost is the Y-axis the graph has to have a vertical asymptote, so it has to grow much faster than exponential at the end. Regardless, I don’t think we can be confident that consciousness occurs at an inflection point or a noticeable bend.
I suspect that some pseudo-minds must be conscious observers some of the time, but that they can be turned off most of the time and just be updated offline with experiences that their conscious mind will integrate and patch up without noticing. I’m not sure this would work with many mind-types, but I think it would work with human minds, which have a strong bias to maintaining coherence, even at the cost of ignoring reality. If I’m being simulated, I suspect that this is happening even to me on a regular basis, and possibly happening much more often the less I interact with someone.
Updating on the condition that we closely match the ancestors of our simulators, I think it’s pretty reasonable that we could be chosen to be simulated. This is really the only plausible reason I can think of to choose us in particular. I’m still dubious as to the value doing so will have for our descendants.
Actually, I made a mistake, so it’s reasonable to be confused. 20 W seems to be a reasonable upper limit to the cost of simulating a human mind. I don’t know how much lower the lower bound should be, but it might not be more than an order of magnitude less. This gives 10^11 W for six billion, (4x) 10^18 J for one year.
I don’t think it’s reasonable to expect all the matter in the domain of a future civilization to be used to its computational capacity. I think it’s much more likely that the energy output of the Milky Way is a reasonably likely bound to how much computation will go on there. This certainly doesn’t have to be the case, but I don’t see superintelligences annihilating matter at a dramatically faster rate in order to provide massively more power to the remainder of the matter around. The universe is going to die soon enough as it is. (I could be very short sighted about this) Anyway, energy output of the Milky Way is around 5x10^36 W. I divided this by Joules instead of by Watts, so the second number I gave was 10^18, when it should have been (5x) 10^24.
I maintain that energy, not quantum limits of computation in matter, will bound computational cost on the large scale. Throwing our moon into the Sun in order to get energy out of it is probably a better use of it as raw materials than turning it into circuitry. Likewise for time compression, convince me that power isn’t a problem.
Simply because we are discussing simulating the historical period in which we currently exist.
The premise of the SA is that the posthuman ‘gods’ will be interested in simulating their history. That history is not dependent on a smattering of single humans isolated in boxes, but the history of the civilization as a whole system.
If the N minds were separated by vast gulfs of space and time this would be true, but we are talking about highly connected systems.
Imagine the flow of information in my brain. Imagine the flow of causality extending back in time, the flow of information weighted by its probabilistic utility in determining my current state.
The stuff in the immediate vicinity of me is important, and the importance generally falls off according to an inverse square law with distance from my brain. Moreover, even of the stuff near me at one time step, only a tiny portion is relevant. At this moment my brain is filtering out almost everything except the screen right in front of me, which is causally determined by a program running on my computer, dependent on recent information in another computer in a server somewhere in the Midwest a little bit ago, which was dependent on information flowing out from your brain previously, and so on.
So simulating me would more or less require simulating you as well; it’s very hard to isolate a mind. You might as well try to simulate just my left prefrontal cortex. The entire distinction of where one mind begins and ends is something of a spatial illusion that disappears when you map out the full causal web.
If you want to simulate some program running on one computer on a new machine, there is an exact vertical inflection wall in the space of approximations where you get a perfect simulation which is just the same program running on the new machine. This simulated program is in fact indistinguishable from the original.
Yes, but because of the network effects mentioned earlier it would be difficult and costly to do this on a per mind basis. Really it’s best to think of the entire earth as a mind for simulation purposes.
Could you turn off part of cortex and replace it with a rough simulation some of the time without compromising the whole system? Perhaps sometimes, but I doubt that this can give a massive gain.
Why do we currently simulate (think about) our history? To better understand ourselves and our future.
I believe there are several converging reasons to suspect that vaguely human-like minds will turn out to be a persistent pattern for a long time, perhaps as persistent as eukaryotic cells. Adaptive radiation will create many specializations and variations, but the basic pattern of a roughly 10^15 bit mind and its general architecture may turn out to be a fecund replicator and building block for higher level pattern entities.
It seems plausible that some of these posthumans will actually descend from biological humans alive today. They will be very interested in their ancestors, and especially the ancestors they knew in their former life who died without being uploaded or preserved.
Humans have been thinking about this for a while. If you could upload and enter virtual heaven, you could have just about anything that you want. However, one thing you may very much desire would be reunification with former loved ones, dead ancestors, and so on.
So once you have enough computational power, I suspect there will be a desire to use it in an attempt to resurrect the dead.
You are basically taking the current efficiency of human brains as the limit, which of course is ridiculous on several fronts. We may not reach the absolute limits of computation, but they are the starting point for the SA.
We already are within six orders of magnitude of the speed limit of ordinary matter (10^9 bit ops/sec vs 10^15), and there is every reason to suspect we will get roughly as close to the density limit.
There are several measures: the number of bits storable per unit mass determines how many human souls you can store in memory per unit mass.
Energy relates to the bit operations per second and the speed of simulated time.
I was assuming computing at regular earth temperatures within the range of current brains and computers. At the limits of computation discussed earlier, 1 kg of matter at normal temperatures implies an energy flow of around 1 to 20 W and can simulate roughly 10^15 virtual humans 10^10 times faster than the current human rate of thought. This works out to about one hundred years per second.
So at the limits of computation, 1 kg of ordinary matter at room temperature should give about 10^25 human lifetimes per joule. One square meter of high efficiency solar panel could power several hundred kilograms of computational substrate.
So at the limits of computation, future posthuman civilizations could simulate truly astronomical number of human lifetimes in one second using less power and mass than our current civilization.
No need to disassemble planets. Using the whole surface of a planet gives a multiplier of 10^14 over a single kilogram. Using the entire mass only gives a further 10^8 multiple over that or so, and is much much more complex and costly to engineer. (When you start thinking of energy in terms of human souls, this becomes morally relevant.)
If this posthuman civilization simulates human history for a billion years instead of a second, this gives another 10^16 multiplier.
Using much more reasonable middle of the road estimates:
Say tech may bottom out at a limit within half (in exponential terms) of the maximum—say 10^13 human lifetimes per kg per joule vs 10^25.
The posthuman civ stabilizes at around 10^10 1kg computers (not much more than we have today).
The posthuman civ engages in historical simulation for just one year. (10^7 seconds).
That is still 10^30 simulated human lifetimes, vs roughly 10^11 lifetimes in our current observational history.
Those are still astronomical odds for observing that we currently live in a sim.
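Spelling that arithmetic out, with the added assumption (consistent with the 1 to 20 W figure quoted earlier) that each 1 kg computer draws about 1 W:

```python
# Sanity check of the 'middle of the road' estimate above.
LIFETIMES_PER_KG_PER_JOULE = 1e13  # assumed 'bottomed out' technology level
NUM_COMPUTERS              = 1e10  # number of 1 kg computers
WATTS_PER_COMPUTER         = 1.0   # assumption: ~1 W per kg of substrate
SIM_DURATION_SECONDS       = 1e7   # about one year

joules_per_computer = WATTS_PER_COMPUTER * SIM_DURATION_SECONDS
total_lifetimes = LIFETIMES_PER_KG_PER_JOULE * NUM_COMPUTERS * joules_per_computer
print(f"{total_lifetimes:.0e} simulated lifetimes")  # ~1e30, vs ~1e11 lifetimes lived so far
```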
This is very upsetting, I don’t have anything like the time I need to keep participating in this thread, but it remains interesting. I would like to respond completely, which means that I would like to set it aside, but I’m confident that if I do so I will never get back to it. Therefore, please forgive me for only responding to a fraction of what you’re saying.
I thought context made it clear that I was only talking about the non-mind stuff being simulated as being an additional cost perhaps nearly linear in N. Very little of what we directly observe overlaps except our interaction with each other, and this was all I was talking about.
Why can’t a poor model (low fidelity) be conscious? We just don’t know enough about consciousness to answer this question.
I really disagree, but I don’t have time to exchange each other’s posteriors, so assume this dropped.
I think this is evil, but I’m not willing to say whether the future intelligences will agree or care.
I said it was a reasonable upper bound, not a reasonable lower bound. That seems trivial.
Most importantly, you’re assuming that all circuitry performs computation, which is clearly impossible. That leaves us to debate about how much of it can, but personally I see no reason that the computational minimum cost will closely (even in an exponential sense) be approached. I am interested in your reasoning why this should be the case though, so please give me what you can in the way of references that led you to this belief.
Lastly, but most importantly (to me), how strongly do you personally believe that a) you are a simulation and that b) all entities on Earth are full-featured simulations as well?
Conditioning on (b) being true, how long ago (in subjective time) do you think our simulation started, and how many times do you believe it has (or will be) replicated?
If I were to quantify your ‘very little’ I’d guess you mean, say, < 1% observational overlap.
Let’s look at the rough storage cost first. Ignoring variable data priority through selective attention for the moment, the data resolution needs for a simulated earth can be related to photons incident on the retina, and decrease with an inverse square law from the observer.
We can make a 2D simplification and use google earth as an example. If there was just one ‘real’ observer, you’d need full data fidelity for the surface area that observer would experience up close during his/her lifetime, and this cost dominates. Let’s say that’s S, S ~ 100 km^2.
Simulating an entire planet, the data cost is roughly fixed or capped—at 5x10^8 km^2.
So in this model simulating an entire earth with 5 billion people will have a base cost of 5x10^8 km^2, and simulating 5 billion worlds separately will have a cost of 5x10^9 * S.
So unless S is pathetically small (actually less than human visual distance), this implies a large extra cost to the solipsist approach. From my rough estimate of S the solipsist approach is 1,000 times more expensive. This also assumes that humans are randomly distributed, which of course is unrealistic. In reality human populations are tightly clustered which further increases the relative gain of shared simulation.
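The same estimate as a few lines of arithmetic; S and the population figure are the rough guesses above, not measurements:

```python
# Shared-world vs. one-private-world-per-observer storage cost, in km^2 of full-fidelity surface.
EARTH_SURFACE_KM2  = 5e8   # ~5 x 10^8 km^2, the capped cost of one shared earth
S_PER_OBSERVER_KM2 = 100   # surface an observer experiences up close in a lifetime (rough guess)
NUM_OBSERVERS      = 5e9

shared_cost    = EARTH_SURFACE_KM2
solipsist_cost = NUM_OBSERVERS * S_PER_OBSERVER_KM2

print(solipsist_cost / shared_cost)  # ~1000: the solipsist approach costs about 1,000x more
```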
Evil?
Why?
I’m not sure what you mean by this. Does all of the circuitry of the brain perform computation? Over time, yes. The most efficient brain simulations will of course be emulations—circuits that are very similar to the brain but built on much smaller scales on a new substrate.
My main reference for the ultimate limits is Seth Lloyd’s “Ultimate Physical Limits to Computation”. The Singularity is Near discusses much of this as well of course (but he mainly uses the more misleading ops per second, which is much less well defined).
Biological circuits switch at 10^3 to 10^4 bit flips/second. Our computers went from around that speed in WWII to the current speed plateau of around 10^9 bit flips/second reached early this century. The theoretical limit for regular molecular matter is around 10^15 bit flips/second. (A black hole could reach a much much higher speed limit, as discussed in Lloyd’s paper.) There are experimental circuits that currently approach 10^12 bit flips/second.
In terms of density, we went from about 1 bit / kg around WWII to roughly 10^13 bits / kg today. The brain is about 10^15 bits / kg, so we will soon surpass it in circuit density. The juncture we are approaching (brain density) is about half-way to the maximum of 10^30 bits/kg. This has been analyzed extensively in the hardware community and it looks like we will approach these limits as well sometime this century. It is entirely practical to store 1 bit (or more) per molecule.
A and B are closely correlated. It’s difficult to quantify my belief in A, but it’s probably greater than 50%.
I’ve thought a little about your last question but I don’t yet even see a route to estimating it. Such questions will probably require a more advanced understanding of simulation.
I feel like this would make you a terrible video game designer :-P. Why should we bother simulating things in full fidelity, all the time, just because they will eventually be seen? The only full-fidelity simulation we should need is the stuff being directly examined. Much rougher algorithms should suffice for things not being directly observed.
Heh, my ability to argue is getting worse and worse. You sure you want to continue this thread? What I meant to say (and entirely failed) is that there is an infrastructure cost; we can’t expect to compute with every particle, because we need lots of particles to make sure the others stay confined, get instructions, etc. Basically, not all matter can be a bit at the same time.
Again, infrastructure costs. Can you source this (also Lloyd?)?
For the rest, I’m aware of and don’t dispute the speeds and densities you mention. What I’m skeptical of is that we have evidence that they are practicable; this was what I was looking for. I don’t count previous success of Moore’s Law as strong evidence that we will continue getting better at computation until we hit physical limits. I’m particularly skeptical about how well we will ever do on power consumption (partially because it’s such a hard problem for us now).
The idea that I did not have to live this life, that some entity or civilization has created the environment in which I’ve experienced so much misery, and that they will do it again and again makes me shake with impotent rage. I cannot express how much I would rather having never existed. The fact that they would do this and so much worse (because my life is an astoundingly far cry from the worst that people deal with), again, and again, to trillions upon trillions of living, feeling beings...I cannot express my sorrow. It literally brings me to tears.
This is not sadism; or it would be far worse. It is rather a total neglect of care, a relegation of my values in place of historical interest. However, I still consider this evil in the highest degree.
I do not reject the existence of evil, and therefore this provides no evidence against the hypothesis that I am simulated. However, if I believe that I have a high chance of being simulated, I should do all that I can to prevent such an entity from ever coming to exist with such power, on the off chance that I am one not simulated, and able to prevent such evil from unfolding.
Of course you’re on the right track here—and I discussed spatially variant fidelity simulation earlier. The rough surface area metric was a simplification of storage/data generation costs, which is a separate issue than computational cost.
If you want the most bare-bones efficient simulation, I imagine a reverse hierarchical induction approach that generates the reality directly from the belief network of the simulated observer, a technique modeled directly on human dreaming.
However, this is only most useful if the goal is to just generate an interesting reality. If the goal is to regenerate an entire historical period accurately, you can’t start with the simulated observers; they are greater unknowns than the environment itself.
The solipsist issue may not have discernible consequences, but overall the computational scaling is sublinear for emulating more humans in a world, and probably significantly so, because of the large causal overlap of human minds via language.
Physical Limits of Computation
The intellectual work required to show an ultimate theoretical limit is tractable, but showing that achieving said limit is impossible in practice is very difficult.
I’m pretty sure we won’t actually hit the physical limits exactly, it’s just a question of how close. If you look at our historical progress in speed and density to date, it suggests that we will probably go most of the way.
Another simple assessment related to the doomsday argument: I don’t know how long this Moore’s Law progression will carry on, but it’s lasted for 50 years now, so I give reasonable odds that it will last another 50. Simple, but surprisingly better than nothing.
A more powerful line of reasoning perhaps is this: as long as there is an economic incentive to continue Moore’s Law and room to push against the physical limits, ceteris paribus, we will make some progress and push towards those limits. Thus, eventually we will reach them.
Power density depends on clock rate, which has plateaued. Power efficiency, in terms of ops/joule, increases directly with transistor density.
This is somewhat concerning, and I believe, atypical. Not existing is perhaps the worst thing I can possibly imagine, other than infinite torture.
I’m not sure if ‘historical interest’ is quite the right word. Historical recreation or resurrection might be more accurate.
A paradise designed to maximally satisfy current human values and eliminate suffering is not a world which could possibly create or resurrect us.
You literally couldn’t have grown up in that world; the entire idea is a non sequitur. Your mind’s state is a causal chain rooted in the gritty reality of this world with all of its suffering.
Imagining that your creator could have assigned you to a different world is like imagining you could have grown up with different parents. You couldn’t have. That would be somebody else completely.
Of course, if said creator exists, and if said creator values what you value in the way you value it (dubious) it could whisk you away to paradise tomorrow.
But I wouldn’t count on that; perhaps said creator is still working on you, or doesn’t think paradise is a useful place for you, or couldn’t care less.
In the face of such uncertainty, we can only task ourselves with building paradise.
I believe we’re arguing along two paths here, and it is getting muddled. Applying to both, I think one can maintain the world-per-person sim much more cheaply than you originally suggested long before one hits the spot where the sim is no longer accurate to the world except where it intersects with the observer’s attention.
Second, from my perspective you’re begging the question, since I was talking about a variety of reasons for simulation and arguing that simulating a single entity seems as reasonable as many—but you seem only to be concerned with historical recreation, in which case it seems obvious to me that a large group of minds is necessary. If we’re only talking about that case, the arguments along this line about the per-mind cost just aren’t very relevant.
I have a 404 on your link, I’ll try later.
Interesting, I haven’t heard that argument applied to Moore’s Law. Question: you arrive at a train crossing (there are no other cars on the road), and just as you get there, a train begins to cross before you can. Something goes wrong, and the train stops, and backs up, and goes forward, and stops again, and keeps doing this. (This actually happened to me). 10 minutes later, should you expect that you have around 10 minutes left? After those are passed, should your new expectation be that you have around 20 minutes left?
The answer is possibly yes. I think better results would be obtained by using a Jeffreys Prior. However, I’ve talked to a few statisticians about this problem, and no one has given me a clear answer. I don’t think they’re used to working with so little data.
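One way to make the train question precise is a delta-t style calculation: put a scale-invariant (Jeffreys-like) 1/T prior on the total duration T and assume your arrival time is uniform over it. Those modelling choices are mine, not anything asserted above, but they do reproduce the “expect about as long again” intuition:

```python
import random

def sample_total_duration(elapsed, n=100_000):
    """Sample total durations T from the posterior p(T | elapsed) = elapsed / T^2 for T > elapsed,
    which follows from a 1/T prior plus a uniformly placed observation point.
    Inverse-CDF sampling: F(T) = 1 - elapsed/T  =>  T = elapsed / (1 - u)."""
    return [elapsed / (1 - random.random()) for _ in range(n)]

elapsed = 10.0  # minutes already waited at the crossing
remaining = sorted(t - elapsed for t in sample_total_duration(elapsed))
print(remaining[len(remaining) // 2])  # median remaining wait: ~10 minutes
# The mean is infinite under this prior, so "expect" here has to mean the median.
```

Under this model, after ten further minutes of waiting the median estimate of the remaining wait does indeed climb to about twenty minutes.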
Revise to say “and room to push against the practicable limits” and you will see where my argument lies despite my general agreement with this statement.
To my knowledge, this is incorrect. Increases in transistor density have dramatically increased circuit leakage (because of bumping into quantum tunneling), requiring more power per transistor in order to accurately distinguish one path from another. I saw a roundtable about proposed techniques for increasing processor efficiency. None of the attendees objected to the introduction, which mentioned that the increased waste heat from modern circuits was rising at a faster exponential than circuit density, and would render all modern circuit designs inoperable if they were to be logically extended without addressing the problem of quantum leakage.
If you didn’t exist in the first place, you wouldn’t care. Do you think you’ve done so much good for the world that your absence could be “the worst thing you can possibly imagine, other than infinite torture”?
Regardless, I’m quite atypical in this regard, but not unique.
And wouldn’t that be so much better.
You propose that not existing would be a terrible evil. But how much better, for all the trillions upon trillions you’re proposing must suffer for the creator’s whims, would it be to have that computational substrate be used to host entities that have amazingly positive, productive, maximally Fun lives? I know I couldn’t have existed in a paradise, but if I’m a sim, there are cycles that could be used for paradise that have been abandoned to create misery and strife.
Again, I think that this may be the world we really are in. I just can’t call it a moral one.
Historical recreation currently seems to be the best rationale for a superintelligence to simulate this timeslice, although there are probably other motivations as well.
If that was actually the case, then there would be no point to moving to a new technology node!
Yes leakage is a problem at the new tech nodes, but of course power per transistor can not possibly be increasing. I think you mean power per surface area has increased.
Shrinking a circuit by half in each dimension makes the wires thinner, shorter and less resistant, decreasing power use per transistor just as you’d think. Leakage makes this decrease somewhat less than the shrinkage rate, but it doesn’t reverse the entire trend.
There are also other design trends that can compensate and overpower this to an extent, which is why we have a plethora of power efficient circuits in the modern handheld market.
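For reference, the textbook constant-field (Dennard) scaling picture that both sides of this exchange are implicitly arguing about; in the idealization, shrinking linear dimensions by a factor of 1/κ gives:

```latex
P_{\text{transistor}} \;\propto\; C V^{2} f
\;\longrightarrow\;
\frac{C}{\kappa}\left(\frac{V}{\kappa}\right)^{2}(\kappa f)
= \frac{C V^{2} f}{\kappa^{2}},
\qquad
\text{power density} \;\propto\; \kappa^{2}\cdot\frac{P_{\text{transistor}}}{\kappa^{2}} = \text{const.}
```

Leakage and the stalling of voltage scaling are precisely where the idealization breaks: dynamic power per transistor can keep falling while power per unit area rises, which is roughly how both of the claims above can be true at once.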
“which mentioned that the increased waste heat from modern circuits was rising at a faster exponential than circuit density”
Do you remember when this was from or have a link? I could see that being true when speeds were also increasing, but that trend has stopped or reversed.
I recall seeing some slides from NVidia where they are claiming their next GPU architecture will cut power use per transistor dramatically as well, at several times the rate of shrinkage.
Even if the goal is maximizing fun, creating some historical sims for the purpose of resurrecting the dead may serve that goal. But I really doubt that current-human-fun-maximization is an evolutionary stable goal system.
I imagine that future posthuman morality and goals will evolve into something quite different.
Knowledge is a universal feature of intelligence. Even the purely mathematical hypothetical superintelligence AIXI would end up creating tons of historical simulations—and that might be hopelessly brute force, but nonetheless superintelligences with a wide variety of goal systems would find utility in various types of simulation.
Much of the information from the past is probably irretrievably lost to us. If the information input into the simulation were not precisely the same as the actual information from that point in history, the differences would quickly propagate so that the simulation would bear little resemblance to the history. Supposing the individuals in question did have access to all the information they’d need to simulate the past, they’d have no need for the simulation, because they’d already have complete informational access to the past. It suffers similar problems to your sandboxed anthropomorphic AI proposal; provided you have all the resources necessary to actually do it, it ceases to be a good idea.
There are other possible motivations, but it’s not clear that there are any others that are as good or better, so we have little reason to suppose it will ever happen.
This seems to be overly restrictive, but I don’t mind confining the discussion to this hypothesis.
Yes, you are correct.
The roundtable was at SC′08, a while after speeds had stabilized, and since it is a supercomputing conference, the focus was on massively parallel systems. It was part of this.
Without needing to dispute this, I can remain exceptionally upset that whatever their future morality is, it is blind to suffering and willing to create innumerable beings that will suffer in order to gain historical knowledge. Does this really not bother you in the slightest?
ETA: still 404
While the leakage issue is important and I want to read a little more about this reference, I don’t think that any single such current technical issue is nearly sufficient to change the general analysis. There have always been major issues on the horizon, the question is more of the increase in engineering difficulty as we progress vs the increase in our effective intelligence and simulation capacity.
In the specific case of leakage, even if it is a problem that persists far into the future, it just slightly lowers the growth exponent as we just somewhat lower the clock speeds. And even if leakage can never be fully prevented, eventually it itself can probably be exploited for computation.
As a child I liked McDonald’s, bread, plain pizza and nothing more; all other foods were poisonous. I was convinced that my parents’ denial of my right to eat these wonderful foods, condemning me to terrible suffering as a result, was a sure sign of their utter lack of goodness.
Imagine if I could go back and fulfill that child’s wish to reduce its suffering. It would never then evolve into anything like my current self, and in fact may evolve into something that would suffer more or at the very least wish that it could be me.
Imagine if we could go back in time and alter our primate ancestors to reduce their suffering. The vast majority of such naive interventions would cripple their fitness and wipe out the lineage. There is probably a tiny set of sophisticated interventions that could simultaneously eliminate suffering and improve fitness, but these altered creatures would not develop into humans.
Our current existence is completely contingent on a great evolutionary epic of suffering on an astronomical scale. But suffering itself is just one little component of that vast mechanism, and forms no basis from which to judge the totality.
You made the general point earlier, which I very much agree with, about opportunity cost. Simulating humanity’s current time-line has an opportunity cost in the form of some paradise that could exist in its place. You seem to think that the paradise is clearly better, and I agree: from our current moral perspective.
At the end of the day, morality is governed by evolution. There is an entire landscape of paradises that could exist; the question is what fitness advantage they provide their creator. The more they diverge from reality, the less utility they have in advancing knowledge of reality towards closure.
It looks like earth will evolve into a vast planetary hierarchical superintelligence, but ultimately it will probably be just one of many, and still subject to evolutionary pressure.
I disagree; I think that problems like this, unresolved, may or may not decrease the base of our exponent, but will cap its growth earlier.
On this point, we disagree, and I may be on the unpopular side of this disagreement. I don’t see how past increases that have required technological revolutions can be considered more than weak evidence for future technological revolutions. I actually think it quite likely that the increase in computational power per joule will bottom out in ten to twenty years. I wouldn’t be too surprised if exponential increase lasts thirty years, but forty seems unlikely, and fifty even less likely.
I don’t care. We aren’t talking about destroying the future of intelligence by going back in time. We’re talking about repeating history umpteen many times, creating suffering anew each time. It sounds to me like you are insisting that this suffering is worthwhile, even if the result of all of it will never be more than a data point in a historian’s database.
We live in a heartbreaking world. Under the assumption that we are not in a simulation, we can recognize facts like ‘suffering is decreasing over time’ and realize that it is our job to work to aid this progress. Under the assumption that we are in a simulation, we know that the capacity for this progress is already fully complete, and the agents who control it simply don’t care. If we are being simulated, it means that one or more entities have chosen to create unimaginable quantities of suffering for their own purposes—to your stated belief, for historical knowledge.
Your McDonald’s example doesn’t address this in the slightest. You were already a living, thinking being, and your parents took care of you in the right way in an attempt to make your future life better. They couldn’t have chosen before you were born to instead create someone who would be happier, smarter, wiser, and better in every way. If they could have, wouldn’t it be upsetting that they chose not to?
Given the choice between creating agents that have to endure suffering for generations upon generations, and creating agents that will have much more positive, productive lives, why are you arguing for the side that chooses the former? Of course the former and latter are entirely different entities, but that serves as no argument whatsoever for choosing the former!
A person running such a simulation could create a simulated afterlife, without suffering, where each simulated intelligence would go after dying in the simulated universe. It’s like a nice version of Pascal’s Wager, since there’s no wagering involved. Such an afterlife wouldn’t last infinitely long, but it could easily be made long enough to outweigh any suffering in the simulated universe.
Or you could skip the part with all the suffering. That would be a lot easier.
In general, I agree. I just wanted to offer a more creative alternative for someone truly dedicated to operating such a simulation.
So far the only person who seems dedicated to making such a simulation is jacob cannell, and he already seems to be having enough trouble separating the idea from cached theistic assumptions.
I don’t think that’s how it works.
How much future happiness would you need in order to choose to endure 50 years of torture?
That depends if happiness without torture is an option. The options are better/worse, not good/bad.
The simulated afterlife wouldn’t need to outweigh the suffering in the first universe according to our value system, only according to the value system of the aliens who set up the simulation.
Technology doesn’t really advance through ‘revolutions’, it evolves. Some aspects of that evolution appear to be rather remarkably predictable.
That aside, the current predictions do posit a slow-down around 2020 for the general lithography process, but there are plenty of labs researching alternatives. As the slow-down approaches, their funding and progress will accelerate.
But there is a much more fundamental and important point to consider, which is that circuit shrinkage is just one dimension of improvement amongst several. As that route of improvement slows down, other routes will become more profitable.
For example, for AGI algorithms, current general purpose CPUs are inefficient by a factor of perhaps around 10^4. That is a decade of exponential gain right there just from architectural optimization. This route, neuromorphic hardware and its ilk, currently receives a tiny slice of the research budget, but this will accelerate as AGI advances, and would accelerate even more if the primary route of improvement slowed.
Another route of improvement is exponentially reducing manufacturing cost. The bulk of the price of high-end processors pays for the vast amortized R&D cost of developing the manufacturing node within the timeframe that the node is economical. Refined silicon is cheap and getting cheaper, research is expensive. The per transistor cost of new high-end circuitry on the latest nodes for a CPU or GPU is 100 times more expensive than the per transistor cost of bulk circuitry produced on slightly older nodes.
So if Moore’s Law stopped today, the cost of circuitry would still decay down to the bulk cost. This is particularly relevant to neuromorphic AGI designs as they can use a mass of cheap repetitive circuitry, just like the brain. So we have many other factors that will kick in even as Moore’s Law slows.
I suspect that we will hit a slow ramping wall around or by 2020, but these other factors will kick in and human-level AGI will ramp up, and then this new population and speed explosion will drive the next S-curve using a largely new and vastly more complex process (such as molecular nano-tech) that is well beyond our capability or understanding.
It’s more or less equivalent from the perspective of a historical sim. A historical sim is a recreation of some branch of the multiverse near your own incomplete history that you then run forward to meet your present.
My existence is fully contingent on the existence of my ancestors in all of their suffering glory. So from my perspective, yes their suffering was absolutely worthwhile, even if it wasn’t from their perspective.
Likewise, I think that it is our noble duty to solve AI, morality, and control a Singularity in order to eliminate suffering and live in paradise.
I also understand that after doing that we will over time evolve into beings quite unlike what we are now, and eventually look back at our prior suffering and view it from an unimaginably different perspective, just as my earlier McDonald’s-loving child-self evolved into a being with a completely different view of its prior suffering.
It was right from both their perspective and my current one; it was absolutely wrong from my perspective at the time.
Of course! Just as we should create something better than ourselves. But ‘better’ is relative to a particular subjective utility function.
I understand that my current utility function works well now, that it is poorly tuned to evaluate the well-being of bacteria, just as poorly tuned to evaluate the well-being of future posthuman godlings, and most importantly—my utility function or morality will improve over time.
Imagine you are the creator. How do you define ‘positive’ or ‘productive’? From your perspective, or theirs?
There are an infinite variety of uninteresting paradises. In some virtual humans do nothing but experience continuous rapturous bliss well outside the range of current drug-induced euphoria. There are complex agents that just set their reward functions to infinity and loop.
There are also a spectrum of very interesting paradises, all having the key differentiator that they evolve. I suspect that future godlings will devote most of their resources to creating these paradises.
I also suspect that evolution may operate again at an intergalactic or higher level, ensuring that paradises and all simulations somehow must pay for themselves.
At some point our descendants will either discover for certain they are in a sim and integrate up a level, or they will approach local closure and perhaps discover an intergalactic community. At that point we may have to compete with other singularity-civilizations, and we may have the opportunity to historically intervene on pre-singularity planets we encounter. We’d probably want to simulate any interventions before proceeding, don’t you think?
A historical recreation can develop into a new worldline with its own set of branching paradises that increase overall variation in a blossoming metaverse.
If you could create a new big bang, an entire new singularity and new universe, would you?
You seem to be arguing that you would not because it would include humans who suffer. I think this ends up being equivalent to arguing the universe should not exist.
If we had enough information to create an entire constructed reality of them in simulation, we’d have much more than we needed to just go ahead and intervene.
Some people would argue that it shouldn’t (this is an extreme of negative utilitarianism.) However, since we’re in no position to decide whether the universe gets to exist or not, the dispute is fairly irrelevant. If we’re in a position to decide between creating a universe like ours, creating one that’s much better, with more happiness and productivity and less suffering, and not creating one at all, though, I would have an extremely poor regard for the morality of someone who chose the first.
If my descendants think that all my suffering was worthwhile so that they could be born instead of someone else, then you know what? Fuck them. I certainly have a higher regard for my own ancestors. If they could have been happier, and given rise to a world as good as or better than this one, then who am I to argue that they should have been unhappy so I could be born instead? If, as you point out
then why not skip the historical recreation and go straight to simulating the paradises?
I’m curious how you’ve reached this conclusion given how little we know about what AGI algorithms would look like.
The particular type of algorithm is actually not that important. There is a general speedup in moving from a general CPU-like architecture to a specialized ASIC—once you are willing to settle on the algorithms involved.
There is another significant speedup moving into analog computation.
Also, we know enough about the entire space of AI sub-problems to get a general idea of what AGI algorithms look like and the types of computations they need. Naturally the ideal hardware ends up looking much more like the brain than current von Neumann machines, because the brain evolved to solve AI problems in an energy efficient manner.
If you know you are working in the space of probabilistic/Bayesian-like networks, exact digital computations are extremely wasteful. Using tens or hundreds of thousands of transistors to do an exact digital multiply is useful for scientific or financial calculations, but it’s a pointless waste when the algorithm just needs to do a vast number of probabilistic weighted summations, for example.
Cite for last paragraph about analog probability: http://phm.cba.mit.edu/theses/03.07.vigoda.pdf
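As a toy illustration of the “exact arithmetic is overkill here” point: plain 8-bit quantization in Python, which is not the analog machinery the thesis describes, just a hint of why precision is cheap to give up for weighted summations. The helper name and sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)   # 'synaptic' weights
x = rng.normal(size=1000)   # input activations

exact = float(w @ x)        # full float64 weighted summation

def quantize(v, bits=8):
    """Crudely snap values to 2**bits evenly spaced levels over their range."""
    levels = 2 ** bits
    lo, hi = v.min(), v.max()
    return np.round((v - lo) / (hi - lo) * (levels - 1)) / (levels - 1) * (hi - lo) + lo

approx = float(quantize(w) @ quantize(x))
print(f"exact={exact:.3f}  8-bit={approx:.3f}  |diff|={abs(exact - approx):.3f}")
# The quantization error is tiny compared to the natural spread of the sum (~sqrt(1000) ~ 32),
# which is the kind of slack a noisy probabilistic inference step can tolerate.
```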
Thanks. Hefty read, but this one paragraph is worth quoting:
I had forgotten that term, statistical inference algorithms; need to remember that.
Well, there’s also another quote worth quoting, and in fact the quote that is in my Mnemosyne database and which enabled me to look that thesis up so fast...
This is true in general but this particular statement appears out of date:
“Alternative computing architectures, such as parallel digital computers have not tended to be commercially viable.”
That was true perhaps circa 2000, but we hit a speed/heat wall and since then everything has been going parallel.
You may see something similar happen eventually with analog computing once the market for statistical inference computation is large enough and or we approach other constraints similar to the speed/heat wall.
Ok. But this prevents you from directly improving your algorithms. And if the learning mechanisms are to be highly flexible (like say those of a human brain) then the underlying algorithms may need to modify a lot even to just approximate being an intelligent entity. I do agree that given a fixed algorithm this would plausibly lead to some speed-up.
A lot of things can’t be put into analog. For example, what if you need to factor large numbers? And making analog and digital stuff interact is difficult.
This doesn’t follow. The brain evolved through a long path of natural selection. It isn’t at all obvious that the brain is even highly efficient at solving AI-type problems, especially given that humans have only needed to solve much of what we consider standard problems for a very short span of evolutionary history (and note that general mammal brain architecture looks very similar to ours).
EDIT: why the downvotes?
Yes—which is part of the reason there is a big market for CPUs.
Not necessarily. For example, the cortical circuit in the brain can be reduced to an algorithm which would include the learning mechanism built in. The learning can modify the network structure to a degree but largely adjusts synaptic weights. That can be described as (is equivalent to) a single fixed algorithm. That algorithm in turn can be encoded into an efficient circuit. The circuit would learn just as the brain does, no algorithmic changes ever needed past that point, as the self-modification is built into the algorithm.
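A minimal sketch of what “the self-modification is built into the algorithm” means: below, a toy Hebbian-style update lives inside the same fixed procedure as inference, so the weights change but the procedure never does. Purely illustrative, not a model of cortex; all names are made up:

```python
import numpy as np

class FixedLearningCircuit:
    """One fixed procedure: propagate, then apply a built-in weight update.
    Nothing outside this algorithm ever needs to change for it to keep learning."""
    def __init__(self, n_in, n_out, lr=0.01, seed=0):
        self.w = np.random.default_rng(seed).normal(scale=0.1, size=(n_out, n_in))
        self.lr = lr

    def step(self, x):
        y = np.tanh(self.w @ x)                    # inference: weighted sums plus a nonlinearity
        self.w += self.lr * np.outer(y, x)         # Hebbian-style update baked into the same step
        self.w /= np.linalg.norm(self.w, axis=1, keepdims=True)  # keep weights bounded
        return y

rng = np.random.default_rng(1)
circuit = FixedLearningCircuit(n_in=16, n_out=4)
for _ in range(1000):
    circuit.step(rng.normal(size=16))   # the weights adapt; the algorithm itself never changes
```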
A modern CPU is a jack-of all trades that is designed to do many things, most of which have little or nothing to do with the computational needs of AGI.
If the AGI need to factor large numbers, it can just use an attached CPU. Factoring large numbers is easy compared to reading this sentence about factoring large numbers and understanding what that actually means.
The brain has roughly 10^15 noisy synapses that can switch around 10^3 times per second and store perhaps a bit each as well. (computation and memory integrated)
My computer has about 10^9 exact digital transistors in its CPU & GPU that can switch around 10^9 times per second. It has around the same amount of separate memory and around 10^13 bits of much slower disk storage.
These systems have similar peak throughputs of about 10^18 bits/second, but they are specialized for very different types of computational problems. The brain is very slow but massively wide, the computer is very narrow but massively fast.
The brain is highly specialized and extremely adept at doing typical AGI stuff—vision, pattern recognition, inference, and so on—problems that are suited to massively wide but slow processing with huge memory demands.
Our computers are specialized and extremely adept at doing the whole spectrum of computational problems brains suck at—problems that involve long complex chains of exact computations, problems that require massive speed and precision but less bulk processing and memory.
So to me, yes, it’s obvious that the brain is highly efficient at doing AGI-type stuff, almost because that’s how we define AGI-type stuff: it’s all the stuff that brains are currently much better than computers at!
This limits the amount of modification one can do. Moreover, the more flexible your algorithm the less you gain from hard-wiring it.
No, we don’t know that the brain is “extremely adept” at these things. We just know that it is better than anything else that we know of. That’s not at all the same thing. The brain’s architecture is formed by a succession of modifications to much simpler entities. The successive, blind modification has been stuck with all sorts of holdovers from our early chordate ancestors and a lot from our more recent ancestors.
Easy is a misleading term in this context. I certainly can’t factor a forty digit number but for a computer that’s trivial. Moreover, some operations are only difficult because we don’t know an efficient algorithm. In any event, if your speedup is only occurring for the narrow set of tasks which humans can do decently, such as vision, then you aren’t going to get a very impressive AGI. The ability to engage in face recognition in only a tiny fraction of the time it would take a person is not an impressive ability.
Limits it compared to what? Every circuit is equivalent to a program. The circuit of a general processor is equivalent to a program which simulates another circuit: the program which it keeps in memory.
Current von Neumann processors are not the only circuits which have this simulation-flexibility. The brain has similar flexibility using very different mechanisms.
Finally, even if we later find out that lo and behold, the inference algorithm we hard-coded into our AGI circuits was actually not so great, and somebody comes along with a much better one . . . that is still not an argument for simulating the algorithm in software.
Not at all true. The class of statistical inference algorithms including Bayesian Networks and the cortex are both extremely flexible and greatly benefit from ‘hard-wiring’ it.
This is like saying we don’t know that Usain Bolt is extremely adept at running, he’s just better than anything else that we know of. The latter sentence in each case of course is true, but it doesn’t impinge on the former.
But my larger point was that the brain and current computers occupy two very different regions in the space of possible circuit designs, and are rather clearly optimized for a different slice over the space of computational problems.
There are some routes that we can obviously improve on the brain at the hardware level. Electronic circuits are orders of magnitude faster, and eventually we can make them much denser and thus much more massive.
However, it is much more of an open question in computer science whether we will ever be able to greatly improve on the statistical inference algorithm used in the cortex. It is quite possible that evolution had enough time to solve that problem completely, or at least reach some near-global maximum.
Yes—this is an excellent strategy for solving complex optimization problems.
Yes, and on second thought, largely mistaken. To be more precise we should speak of computational complexity and bitops. The best known factorization algorithms have running time exponential in the number of input bits. That makes them ‘hard’ in the scalability sense. But factoring small primes is still easy in the absolute cost sense.
Factoring is also easy in the algorithmic sense, as the best algorithms are very simple and short. Physics is hard in the algorithmic sense, AGI seems to be quite hard, etc.
The cortex doesn’t have a specialized vision circuit; there appears to be just one general purpose circuit it uses for everything. The visual regions become visual regions on account of processing visual input data.
AGI hardware could take advantage of specialized statistical inference circuitry and still be highly general.
I’m having a hard time understanding what you really mean by saying “the narrow set of tasks which humans can do decently such as vision”. What about quantum mechanics, computer science, mathematics, game design, poetry, economics, sports, art, or comedy? One could probably fill a book with the narrow set of tasks that humans can do decently. Of course, that other section of the bookstore, filled with books about things computers can do decently, is growing at an exciting pace.
I’m not sure what you mean by this or how it relates. If you could do face recognition that fast . . it’s not impressive?
The main computational cost of every main competing AGI route I’ve seen involves some sort of deep statistical inference, and this amounts to a large matrix multiplication, possibly with some non-linear stepping or a normalization. Neural nets, Bayesian nets, whatever: if you look at the mix of required instructions, it amounts to a massive repetition of simple operations that are well suited to hardware optimization.
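For example, the inner loop being described is only a few lines, repeated an enormous number of times, which is exactly the profile specialized hardware exploits. A generic sketch in Python, not any particular AGI design:

```python
import numpy as np

def inference_step(weights, activations):
    """One 'deep statistical inference' step of the kind described above:
    a large matrix multiply, a simple nonlinearity, and a normalization."""
    z = weights @ activations             # massive repetition of multiply-accumulate operations
    z = np.maximum(z, 0.0)                # non-linear stepping
    total = z.sum()
    return z / total if total > 0 else z  # normalize to a probability-like vector

rng = np.random.default_rng(0)
w = rng.normal(size=(1024, 1024))
a = rng.random(1024)
out = inference_step(w, a)                # nearly all of the cost is in the matrix multiply
```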
If we have many generations of rapid improvement of the algorithms this will be much easier if one doesn’t need to make new hardware each time.
The general trend should still occur this way. I’m also not sure that you can reach that conclusion about the cortex given that we don’t have a very good understanding of how the brain’s algorithms function.
That seems plausibly correct but we don’t actually know that. Given how much humans rely on vision it isn’t at all implausible that there have been subtle genetic tweaks that make our visual regions more effective in processing visual data (I don’t know the literature in this area at all).
Incorrect, the best factoring algorithms are subexponential. See for example the quadratic field sieve and the number field sieve both of which have subexponential running time. This has been true since at least the early 1980s (there are other now obsolete algorithms that were around before then that may have had slightly subexponential running time. I don’t know enough about them in detail to comment.)
Factoring primes is always easy. For any prime p, it has no non-trivial factorizations. You seem to be confusing factorization with primality testing. The second is much easier than the first; we’ve had the Agrawal–Kayal–Saxena algorithm, which is provably polynomial time, for about a decade. Prior to that we had a lot of efficient tests that were empirically faster than our best factorization procedures. We can determine the primality of numbers much larger than those we can factor.
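To make the asymmetry concrete, here is a rough Python sketch contrasting the two problems. The Miller–Rabin witnesses used are a standard set known to be deterministic far beyond 64-bit inputs; the factoring routine is deliberately naive trial division (real factoring algorithms are much better, yet still vastly slower than primality testing):

```python
def is_prime(n: int) -> bool:
    """Miller-Rabin with a fixed witness set that is provably
    deterministic for n up to roughly 3.3 * 10^24."""
    if n < 2:
        return False
    witnesses = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    for p in witnesses:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # witness a proves n composite
    return True

def trial_division(n: int) -> list[int]:
    """Naive factorization: cost grows roughly with sqrt(n), i.e.
    exponentially in the bit length of n."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors
```

On a 60-digit input the primality test finishes essentially instantly, while trial division (or anything in its spirit) is hopeless; that is the gap between the two problems.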
Really? The general number field sieve is simple and short? Have you tried to understand it or write an implementation? Simple and short compared to what exactly?
There are some tasks where we can argue that humans are doing a good job by comparison to others in the animal kingdom. Vision is a good example of this (we have some of the best vision of any mammal.) The rest are tasks which no other entities can do very well, and we don’t have any good reason to think humans are anywhere near good at them in an absolute sense. Note also that most humans can’t do math very well (Apparently 10% or so of my calculus students right now can’t divide one fraction by another). And the vast majority of poetry is just awful. It isn’t even obvious to me that the “good” poetry isn’t labeled that way in part simply from social pressure.
A lot of the tasks that humans have specialized in are not generally bottlenecks for useful computation. Improved facial recognition isn’t going to help much with most of the interesting stuff, like recursive self-improvement, constructing new algorithms, making molecular nanotech, finding a theory of everything, figuring out how Fred and George tricked Rita, etc.
This seems to be a good point.
To clarify, subexponential does not mean polynomial; the running time is still super-polynomial, just slower-growing than exponential.
(Interestingly, while factoring a given integer is hard, there is a way to get a random integer within [1..N] and its factorization quickly. See Adam Kalai’s paper Generating Random Factored Numbers, Easily (PDF).)
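For the curious, a rough Python sketch of Kalai’s procedure as I understand the paper (the primality test is delegated to sympy, and the rejection loop restarts only a modest number of times in expectation, if I recall the analysis correctly): generate a non-increasing random chain N ≥ s1 ≥ s2 ≥ ... ≥ 1, keep the prime entries, and accept their product r with probability r/N.

```python
import random
from sympy import isprime   # any primality test would do

def random_factored_number(N: int):
    """Sketch of Kalai's algorithm: returns (r, prime_factors) with r
    uniform in [1, N] and prime_factors its factorization
    (with multiplicity)."""
    while True:
        chain, s = [], N
        while s > 1:
            s = random.randint(1, s)     # non-increasing random chain
            chain.append(s)
        primes = [s for s in chain if isprime(s)]
        r = 1
        for p in primes:
            r *= p
        # Accept r <= N with probability r/N, otherwise start over.
        if r <= N and random.random() < r / N:
            return r, sorted(primes)
```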
Interesting. I had not seen that paper before. That’s very cute.
This is mostly irrelevant, but I think complexity theorists use a weird definition of exponential according to which GNFS might still be considered exponential—I know when they say “at most exponential” they mean O(e^(n^k)) rather than O(e^n), so it seems plausible that by “at least exponential” they might mean Omega(e^(n^k)) where now k can be less than 1.
EDIT: Nope, I’m wrong about this. That seems kind of inconsistent.
They like keeping things invariant under polynomial transformations of the input, since that has been observed to be a somewhat “natural” class. This is one of the areas where it seems to not quite work.
Hmm, interesting: in the notation that Scott says is standard in complexity theory, my earlier statement that factoring is “subexponential” is wrong even though it is slower growing than exponential. But apparently Greg Kuperberg is perfectly happy labeling something like 2^(n^(1/2)) as subexponential.
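For reference, the usual precise statement is in L-notation: the heuristic running time of the general number field sieve is

$$\exp\!\Big(\big((64/9)^{1/3} + o(1)\big)\,(\ln n)^{1/3}\,(\ln\ln n)^{2/3}\Big),$$

which, writing $b$ for the bit length of $n$, is $2^{o(b)}$ (so “subexponential” in the sense Kuperberg uses) but not $2^{b^{o(1)}}$ (the stricter convention). That mismatch is exactly the terminological trouble here.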
Yes, and this tradeoff exists today with some rough mix between general processors and more specialized ASICs.
I think this will hold true for a while, but it is important to point out a few subpoints:
If Moore’s law slows down, this will shift the balance farther towards specialized processors.
Even most ‘general’ processors today are actually a mix of CISC and vector processing, with more and more performance coming from the less-general vector portion of the chip.
For most complex real world problems, algorithms eventually tend to have much less room for improvement than hardware—even if algorithmic improvements initially dominate. After a while algorithmic improvements settle within the best known complexity class, and then further improvements are just constant factors and are swamped by hardware improvement.
Modern GPUs for example have 16 or more vector processors for every general logic processor.
The brain is like a very slow processor with massively wide dedicated statistical inference circuitry.
As a result of all this (and the point at the end of my last post) I expect that future AGIs will be built out of a heterogeneous mix of processors, but with the bulk being something like a wide-vector processor with a lot of very specialized statistical inference circuitry.
This type of design will still have huge flexibility by having programmability at the network architecture level—it could, for example, simulate human and various types of mammalian brains, as well as a whole range of radically different mind architectures, all built out of the same building blocks.
We have pretty good maps of the low-level circuitry in the cortex at this point and it’s clearly built out of a highly repetitive base circuit pattern, similar to how everything is built out of cells at a lower level. I don’t have a single good introductory link, but it’s called the laminar cortical pattern.
Yes, there are slight variations, but slight is the keyword. The cortex is highly general—the ‘visual’ region develops very differently in blind people, for example, creating entirely different audio processing networks much more powerful than what most people have.
The flexibility is remarkable—if you hook up electrodes to the tongue that send a rough visual signal from a camera, in time the cortical regions connected to the tongue start becoming rough visual regions and limited tongue based vision is the result.
I stand corrected on prime factorization—I saw the exp(....) part and assumed exponential before reading into it more.
This is a good point, but note the huge difference between the abilities or efficiency of an entire human mind vs the efficiency of the brain’s architecture or the efficiency of the lower level components from which it is built—such as the laminar cortical circuit.
I think this discussion started concerning your original point:
The cortical algorithm appears to be a pretty powerful and efficient low level building block. In evolutionary terms it has been around for much longer than human brains and naturally we can expect that it is much closer to optimality in the design configuration space in terms of the components it is built from.
As we go up a level to higher level brain architectures that are more recent in evolutionary terms we should expect there to be more room for improvement.
The mammalian cortex is not specialized for particular tasks—this is the primary advantage of its architecture over its predecessors (at the cost of a much larger size than more specialized circuitry).
How do you reconcile this claim with the fact that some people are faceblind from an early age and never develop the ability to recognize faces? This would suggest that there’s at least one aspect of humans that is normally somewhat hard-wired.
I’ve read a great deal about the cortex, and my immediate reaction to your statement was “no, that’s just not how it works”. (strong priors)
About one minute later on the Prosopagnosia wikipedia article, I find the first reference to this idea (that of congenital Prosopagnosia):
The idea of congenital prosopagnosia appears to be a new theory supported by one researcher and one? study:
The last part about it being “commonly accompanied by other forms of visual agnosia” gives it away—this is not anything close to what you originally thought/claimed, even if this new research is actually correct.
Known cases of true prosopagnosia are caused by brain damage—what this research is describing is probably a disorder of the higher region (V4 I believe) which typically learns to recognize faces and other complex objects.
However, there is an easy way to cause prosopagnosia during development—prevent the creature from ever seeing faces.
I don’t have the link on hand, but there have been experiments in cats where you mess with their vision, using grating patterns or carefully controlled visual environments, and you can create cats that literally can’t even see vertical lines.
So even the simplest most basic thing which nature could hard-code—a vertical line feature detector, actually develops from the same extremely flexible general cortical circuit—the same circuit which can learn to represent everything from sounds to quantum mechanics.
Humans can represent a massive number of faces, and in general the brain’s vast information storage capacity relative to the genome (10^15 ish vs 10^9 ish bits) more or less requires a generalized learning circuit.
The cortical circuits do basically nothing but fire randomly when you are born—you really are a blank slate in that respect (although obviously the rest of the brain has plenty of genetically fixed functionality).
Of course the arrangement of the brain’s regions with respect to sensory organs and its overall wiring architecture do naturally lead to the familiar specializations of brain regions, but really one should consider this a developmental attractor—information is colonizing each cortex anew, but the similar architecture and similarity of information ensures that two brains end up having largely overlapping colonizations.
There are all sorts of aspects of humans that are normally somewhat—or nearly entirely—hard-wired. The cortex just doesn’t tend to be. Even the parts of the cortex that are similarly specialised in most humans seem to be so due to what they are connected to. (As can be seen by looking at how the atypical cases have adapted differently.) It would surprise me if the inability to recognise faces was caused by a dysfunction in the cortex specifically.
Disclaimer: I disagree with nearly everything else Jacob has said in this thread. This position specifically appears to be well researched.
This is unlikely. We haven’t been selected based on sheer brain power or brain efficiency. Humans have been selected by their ability to reproduce in a complicated environment. Efficient intelligence helps, but there’s selection for a lot of other things, such as good immune systems and decent muscle systems. A lot of the selection that was brain selection was probably simply around the fantastically complicated set of tasks involved in navigating human societies. Note that human brain size on average has decreased over the last 50,000 years. Humans are subject to a lot of different selection pressures.
(Tangent: This is related to how at a very vague level we should expect genetic algorithms to outperform evolution at optimizing tasks. Genetic algorithms can select for narrow task completion goals, rather than select in a constantly changing environment with competition and interaction between the various entities being bred.)
I largely agree with your point about human evolution, but my point was about the laminar cortical circuit which is shared in various forms across the entire mammalian lineage and has an analog in birds.
It’s a building block pattern that appears to have a long evolutionary history.
Yes, but there is a limit to this of course. We are, after all, talking about general intelligence.
It seems you’re arguing that our successors will develop a preference for simulating universes like ours over paradises. If that’s what you’re arguing, then what reason do we have to believe that this is probable?
If their preferences do not change significantly from ours, it seems highly unlikely that they will create simulations identical to our current existence. And out of the vast space of possible ways their preferences could change, selecting that direction in the absence of evidence is a serious case of privileging the hypothesis.
To uploads, yes, but a faithful simulation of the universe, or even a small portion of it, would have to track a lot more variables than the processes of the human minds within it.
Optimal approximate simulation algorithms are all linear with respect to total observer sensory input. This relates to the philosophical issue of observer dependence in QM and whether or not the proverbial unobserved falling tree actually exists.
So the cost of simulating a matrix with N observers is not expected to be dramatically more than simulating the N observer minds alone—C*N. The phenomenon of dreams is something of a practical proof.
Variables that aren’t being observed still have to be tracked, since they affect the things that are being observed.
Dreams are not a very good proof of concept given that they are not coherent simulations of any sort of reality, and can be recognized as artificial not only after the fact, but during with a bit of introspection and training.
In dreams, large amounts of data can be omitted or spontaneously introduced without the dreamer noticing anything is wrong unless they’re lucid. In reality, everything we observe can be examined for signs of its interactions with things that we haven’t observed, and that data adds up to pictures that are coherent and consistent with each other.
Depends on personal standards of interest. I may be more interested in questions which I can imagine answering than ones whose answer is a matter of speculation, even if the first class refers to small unimportant objects while the second speaks about the whole universe. Practically, finding teapots orbiting Venus would have more tangible consequences than realising that “universe was caused by an agenty process” is true (when further properties of the agent remain unspecified). The feeling of grandness associated with learning the truth about the very beginning of the universe, when the truth is so vague that all anticipated expectations remain the same as before, doesn’t count in my eyes.
Even if you forget heaven, hell, souls, miracles, prayer, religious morality and a plethora of other things normally associated with theism (a move I don’t approve of, because confusion inevitably appears when words are redefined), and leave only “universe was created by an agenty process” (accepting that “universe” has some narrower meaning than “everything which exists”), you have to point out how we can, at least theoretically, test it. Else, it may not be closed for being definitely false, but still would be closed for being uninteresting.
“Interesting” is subjective, and further, I think you overestimate how many interesting things we actually know to be caused by “agenty things.” Phenomena with non-agenty origins include: any evolved trait or life form (as far as we have seen), any stellar/astronomical/geological body/formation/event...
It is pretty likely you are correct, but this is probably the best example of question-begging I have ever seen.
All Dreaded_Anomaly needs for the argument I take him or her to be making is that those things are not known to be caused by “agenty things”. More precisely: Will Newsome is arguing “interesting things tend to be caused by agents”, which is a claim he isn’t entitled to make before presenting some (other) evidence that (e.g.) trees and clouds and planets and elephants and waterfalls and galaxies are caused by agents.
It seems to me that basing such a list on evidence-based likelihood is different than basing it on mere assumption, as begging the question would entail. I do see how it fits the definition from a purely logical standpoint, though.
Interestingness is objective enough to argue about. (Interestingly enough, that is the very paper that eventually led me to apply for Visiting Fellowship at SIAI.) I think that the phenomena you listed are not nearly as interesting as macroeconomics, nuclear bombs, genetically engineered corn, supercomputers, or the singularity.
Edit: I misunderstood the point of your argument. Going back to responding to your actual argument...
I still contend that we live in a very improbably interesting time, i.e. on the verge of a technological singularity. Nonetheless this is contentious and I haven’t done the back of the envelope probability calculations yet. I will try to unpack my intuitions via arithmetic after I have slept. Unfortunately we run into anthropic reference class problems and reality fluid ambiguities where it’ll be hard to justify my intuitions. That happens a lot.
All of those phenomena are caused by human action! Once you know humans exist, the existence of macroeconomics is causally screened off from any other agentic processes. All of those phenomena, collectively, aren’t any more evidence for the existence of an intelligent cause of the universe than the existence of humans: the existence of such a cause and the existence of macroeconomics are conditionally independent events, given the existence of humans.
Right, I was responding to Dreaded_Anomaly’s argument that interesting things tend not to be caused by agenty things, which was intended as a counterargument to my observation that interesting things tend to be caused by agenty things. The exchange was unrelated to the argument about the relatively (ab)normal interestingness of this universe. I think that is probably the reason for the downvotes on my comment, since without that misinterpretation it seems overwhelmingly correct.
Edit: Actually, I misinterpreted the point of Dreaded_Anomaly’s argument, see above.
I’m not sure how an especially interesting time (improbable or otherwise) occurring ~13.7 billion years after the universe began implies the existence of God.
Ack! Watch out for that most classic of statistical mistakes: seeing something interesting happen, going back and calculating the probability of that specific thing (rather than interesting things in general!) having happened, seeing that that probability is small, and going “Ahah, this is hardly likely to have happened by chance, therefore there’s probably something else involved.”
In this case, I think Fun Theory specifies that there are an enormous number of really interesting things, each of minuscule individual probability, but highly likely as an aggregate.
Of course. Good warning though.
The existence of the universe is actually very strong evidence in favor of theism. It just isn’t nearly strong enough to overcome the insanely low prior that is appropriate.
Evidence allows one to distinguish between theories and rule out those incompatible with observational history.
The best current fit theory to our current observational history is the evolution of the universe from the Big Bang to now according to physics.
If you take that theory it also rather clearly shows a geometric acceleration of local complexity and predicts (vaguely) Singularity-type events as the normal endpoints of technological civilizations.
Thus the theory also necessarily predicts not one universe, but an entire set of universes embedded in a hierarchy starting with a physical parent universe.
Our current observational history is compatible with being in any of these pocket universes, and thus we are unlikely to be so lucky as to be in the one original parent universe.
Thus our universe in all likelihood was literally created by a super-intelligence in a parent universe.
We don’t need any new evidence to support this conclusion, as it’s merely an observation derived from our current best theory.
To a non-scientifically-literate person, I might say that I think electrons exist as material objects, whereas to a physicist I would invoke Tegmark’s idea that all that exist are mathematical structures.
One way to make sense of this is to think about humanity as a region in mind space, with yourself and your listener as points in that region. The atheist who hasn’t heard about Bostrom/Tegmark yet is sitting between you and your listener, and you’re just using atheism as a convenient landmark while trying to point your listener in your general direction.
Why do you say that? I don’t think anyone has gone mad or otherwise suffered really bad consequences from thinking about Bostrom/Tegmark-like ideas… (Umm, I guess some people had nightmares after hearing about Roko’s idea, but still, it doesn’t seem that bad overall.)
The listener in this case being a theist you’re trying to explain your epistemic position to, I assume. (It took me a moment to figure out the context.)
Possibly related: “(Hugh) Everett’s daughter, Elizabeth, suffered from manic depression and committed suicide in 1996 (saying in her suicide note that she was going to a parallel universe to be with her father)” (via rwallace).
My gut feeling is the causal flow goes “manic depression → suicide, alternate universes” rather than “alternate universes → manic depression → suicide”.
Honestly, I wouldn’t be that sure. On this very site I’ve seen people say their reason for signing up for cryonics was their belief in MWI.
It would not surprise me if “suicide → hell” decreases the overall number of suicides and “suicide → anthropic principle leaves you in other universes” increases the overall number of suicides.
Really? What’s the reasoning there (if you remember)?
The post is here. The reasoning as written is:
My comments on the subject (having cut out the tree debating MWI) can be found here.
I meant that a lot of arguments about what kinds of objectives a creator god might have, for example, would be very tricky to do right, with lots of appeals to difficult-to-explain Occamian intuitions. Maybe this is me engaging in typical mind fallacy though, and others would not have this problem. People going crazy is a whole other problem. Currently people don’t think very hard about cosmology or decision theory or what not. I think this might be a good thing, considering how crazy the Roko thing was.
I see. I think at this point we should be trying to figure out how to answer such questions in principle with the view of eventually handing off the task of actually answering them to an FAI, or just our future selves augmented with much stronger theoretical understanding of what constitute correct answers to these questions. Arguing over the answers now, with our very limited understanding of the principles involved, based on our “Occamian intuitions”, does not seem like a good use of time. Do you agree?
It seems that people build intuitions about how general super-high-level philosophy is supposed to be done by examining their minds as their minds examine specific super-high-level philosophical problems. I guess the difference is that in one case you have an explicit goal of being very reflective on the processes by which you’re doing philosophical reasoning, whereas the sort of thing I’m talking about in my post doesn’t imply a goal of understanding how we’re trying to understand cosmology (for example). So yes I agree that arguing over the answers is probably a waste of time, but arguing over which ways of approaching answers is justified seems to be very fruitful. (I’m not really saying anything new here, I know—most of Less Wrong is about applying cognitive science to philosophy.)
As a side note, it seems intuitively obvious that Friendliness philosophers and decision theorists should try and do what Tenenbaum and co. do when trying to figure out what Bayesian algorithms their brains might be approximating in various domains, sometimes via reflecting on those algorithms in action. Training this skill on toy problems (like the work computational cognitive scientists have already done) in order to get a feel for how to do similar reflection on more complicated algorithms/intuitions (like why this or that way of slicing up decision theoretic policies into probabilities and utilities seems natural, for instance) seems like a potentially promising way to train our philosophical power.
I think we agree that debating e.g. what sorts of game theoretic interactions between AIs would likely result in them computing worlds like ours is probably a fool’s endeavor insofar as we hope to get precise/accurate answers in themselves and not better intuitions about how to get an AI to do similar reasoning.
I’m technically some kind of theist, because I believe this world is likely to be a simulation (although I don’t believe it in my gut). I tell people I’m an atheist because telling them the more-accurate truth, that I am a theist, conveys negative information because of how they inevitably interpret it.
It’s a reasonable thing to point out: Why do LWers criticize theism so heavily when they may be theists?
There’s a confusion caused because our usage of the term doesn’t distinguish between “theist re. this universe I’m in” and “theist for the root universe”. Possibly because there may be no one in the latter category, who both believes in multiple levels of simulated universes, and that the original root universe was created by a deity.
Which definition is more usable (makes more distinctions about how you should act depending on whether you are a theist): Theist for this universe, or theist for root universe?
Considering whether your current universe was made by a god might seem to have more impact on your behavior. But considering whether the root universe was made by a god might have more impact on your philosophy and ethics.
Would you like to address your point of view on what the impact is in both cases, or link to relevant discussion? Is it “be on the lookout for miracles”? Why wouldn’t we just do our business as usual being in a simulation as opposed to being in a “root universe”?
I don’t mean that it has to do with which universe we are in. A lot of people believe, for reasons which have never been clear to me, that if a God created the universe, then that God’s opinions have special moral status. I was presuming that that God does not have special moral status if it had been created by another God, or through evolution. But I don’t know what Christians would say. Possibly they would refuse to consider the scenario.
If God created the universe, then that’s some evidence that He knows a lot. Not overwhelming evidence, since some models of creation might not require of the creator to know much.
They should refuse. Asking wrong questions has been a temptation by the Devil since the times of the original sin. A good Christian should know when to stop.
Think about it from a slightly different perspective: the claim is that the universe has morality baked into it—God created such a universe that moral laws are the same as laws of physics. In other words, the claim is that morality is objective and is embedded in reality. It’s not an “opinion” at all.
In Christianity (or Judaism, or Islam) God cannot have been created (by somebody else or through evolution). In theology that’s one of the biggest differences between God and the world—one is uncreated and one is created.
Tegmark cosmology implies not only that there is a universe which runs this one as a simulation, but that there are infinitely many such universes and infinitely many such simulations. In some fraction of those universes, the simulation will have been designed by an intelligent entity. In some smaller fraction, that entity has the ability to mess with the contents of the simulation (our universe) or copy data out of it (eg, upload minds and give them afterlives). My theism is equal to my estimate of this latter fraction, which is very small.
I’m not sure that this is true. My understanding is that IF a universe which runs this one as a simulation is possible, THEN Tegmark cosmology implies that such a universe exists. But I’m not sure that such a universe is possible. After all, a universe which contains a perfect simulation of this one would need to be larger (in duration and/or size) than this one. But there is a largest possible finite simple group, so why not a largest possible universe? I am not confident enough of my understanding of the constraints applicable to universes to be confident that we are not already in the biggest one possible.
There is a spooky similarity between the Tegmark-inspired argument that we may live in a simulation and the Godel/St. Anselm-inspired argument that we were created by a Deity. Both draw their plausibility by jumping from the assertion that something (rather poorly characterized) is conceivable to the claim that that thing is possible. That strikes me as too big of a jump.
There isn’t a largest finite simple group. There’s a largest exceptional finite simple group.
Z/pZ is finite and simple for all primes p, and if you think there is a largest prime I have some bad news...
Doooohhh!
Thx.
You’re right, that is an additional requirement. Nevertheless, it seems very highly likely to me that such a universe is possible; for it to be otherwise would imply something very strange about the laws of physics. The most-existent universe simulating ours might exist to a degree 1/BB(100) times as much as our universe exists, though; in that case, they would “exist”, but not for any practical purposes. This seems more likely than our universe having some property we don’t know about that makes it impossible to simulate.
If one accepts general Tegmark, is there any natural measure for describing how common different universes should be in any meaningful sense?
Yes, but unfortunately, there are many measures to choose from, and you can’t possibly tell which is correct until you’ve visited Permutation City and at least a dozen of its suburbs.
I agree with the question. It may make sense to attach “probabilities of existing” to universes arising in a chaotic inflation model, but not, I think, in an “ultimate ensemble” multiverse, which seems to be the one being examined here.
But, to be honest, I had never even considered the possibility that a particularly large bubble universe might contain a simulation of a much smaller bubble. Inflation, as I understand it, does make it possible for a simulation of one small piece of physical reality to encompass an entire isolated ‘universe’.
Not yet, as far as I know. Big World cosmology seems to be going in the right direction, but it’s not yet understood well enough that we should be coming to any epistemological or ethical conclusions based on it.
Clarifying: I’m guessing that by ‘ability’ you mean ‘ability and inclination’?
Right. Actually, forget about both of those; all that matters is whether it actually does modify the simulation’s contents or copy out data that includes a mind at least once. And, come to think of it, the intervention would also have to be inside our past or future light cone, which might lower the fraction pretty substantially (it means any outer universe which instantiates our entire infinite universe, but makes only finitely many interventions, doesn’t count).
Although—there are some interpretations of consciousness under which, upon death, the fraction of enclosing universes which copy out minds doesn’t matter, only the proportions of them with different qualities. In that case, the universe would act as though there were no gods or outer universes until you died or performed enough iterations of quantum suicide, after which you’d end up in a different universe. I’m not sure how much credence I give to those interpretations.
What does “fraction” mean here?
It seems to me that, if we insist on using simulation hypotheses as a model for theism, this has to be narrowed still further. Theism adds the constraint that though $deity is simulating us, no-one is simulating $deity; He’s really really real and the buck stops with Him. We live in the floor just above reality’s basement; isn’t that nice.
I think that this might be what Eliezer’s quote about “ontological distinctness” refers to, but I’m not sure.
Monotheism requires that, but theism doesn’t. And unless there are some universes that are for some reason impossible to simulate, Tegmark cosmology implies that there are no universes for which there are no universes simulating them. Is-God-of is a two-place predicate.
If one were interested in salvaging the correspondence, one could argue that there’s a chain of simulators-simulating-simulators and it’s that chain (which extends down to “reality’s basement”) that theists label as a deity.
That said, I see no point in allowing ontology to get out ahead of epistemology in this area. Sure, maybe all this stuff is going on. Maybe it isn’t. Unless these conjectures actually cash out somehow in terms of different expectations about observable phenomena, there seems little point to talking about them.
Nitpick: Will isn’t the only self-identified theist you’d have to convince of that.
I think this is an interesting question! If rationalists speculated about the origin of the universe, what would they come up with? What if 15 rationalists made up a think-tank and were charged to speculate about the origin of the universe and assign probabilities to speculations? It would be a grievous mistake to begin with the hypothesis of theism, but could they end up with it on their list, with some non-negligible probability?
I don’t think so. The main premise of the theistic religions is that an entity (a person? a mind?) created us and that this entity is like a person and like a parent: it chose to create us (agency), wants the best for us, and authoritatively defines what is good behavior. This is too obviously an artifact of human psychology. Being children with parents is such an important part of our biology it’s certainly going to be an important component of our psychology. (Don’t various psychological theories claim that ‘growing up’ means internalizing the authority of parents as part of our psyche?)
The simulation hypothesis? This is also an anthropomorphic, privileged hypothesis, but with the advantage of being quite possible. So humans could do it or could have done it. (Being human, they could do something anthropomorphic like that.) But the rationalists in my think-tank aren’t charged with the probability of the simulation hypothesis. Deciding we might be in a simulation only pushes the question further out—what’s the origin of the universe that’s simulating the others?
Given how ‘weird’ it must be to create the universe (to create everything), I think we must decide that this creator is outside our comprehension. This creator (agent or thing or mechanism) not only created everything, it contains the explanation for why there is anything at all rather than nothing, and what ‘something’ and ‘nothing’ even mean*.
I think that the rationalists would come out of their conference with the conclusion that any adjectives that have ever been used to describe the creator—omniscient, benevolent, omnipotent; or even ‘agenty’ don’t make any sense in the context of such a thing.
In particular, it seems just silly to be concerned about whether this thing has a ‘mind’. What would it do with this mind? Other than create the universe, exactly as it has done / been doing. It seems like a mind is useful thing humans have to think through stuff and make decisions. To make computations about causality given limited information. A mind would be irrelevant outside causality and information. Probably ‘intention’ would be too, so that challenges ‘agency’.
… I can’t think of anything interesting that the rationalists could even apply, speculatively, to the creator entity that would make any sense.
* Even ‘creation’ doesn’t make sense outside of time, but I mean the ‘mechanism’ at whatever level of abstraction that would explain the universe to a mind that could understand it.
I’ll develop my thoughts about not being able to sensibly apply the description ‘agenty’ to the creator because wondering why agency should be a key question is what originally motivated my above comment.
You can search ‘agenty’ and find many comments on this page that discuss whether we should speculate that the creator has agency. I found myself wondering throughout these comments what is specifically being meant by this. If the creator is ‘agenty’, what properties must it have and are those properties necessarily interesting?
I could probably look around and find a definition I would like better, but my definition of ‘agenty’ when I first start thinking about it is that this has meaning in a specifically human context.
Broadly, something ‘agenty’ is something that makes decisions according to a complex decision tree algorithm. This is a human-context-specific definition because “complex” means relative to what we consider complex. A mammal makes complex decisions and thus is ‘agenty’ while a simple process like water makes simple decisions (described by a small number of equations and the properties of the immediate physical space) and is not agenty. A complex inanimate thing (like ‘evolution’) and a simple animate thing (like a virus) would give us pause, straining our immediate, concrete conception of agency.
I’m willing to say that evolution has agency (it has goals—long term stable solutions—and complicated ways of achieving these goals) and water has simple agency. This because in my opinion what was really meant when we made the agency dichotomy between humans and water is that humans have free will and water doesn’t. But finally with a deterministic world view, this distinction dissolves. Humans have as much agency as anything else, but our decision algorithm is very complex to us, whereas we can often reliably predict what water will do.
Then to apply this concept of agency to the mechanism of creation of the universe… All the rules and steady states of the universe could be interpreted as its ‘intentions’ and, as such, it would have very complex agency. Another person may have a different set of meanings that they associate with agency, intention, etc., and consider this a terrible anthropomorphism if my words were mapped to their meanings. However, I don’t think it reflects an actual difference in beliefs about the territory.
If someone reading this has a different ontology, what would you specifically mean by the creator having agency, if it did?
Part of the problem here is that there’s no clear meaning of the word ‘god’ (taking for granted that ‘theism’ and ‘atheism’ are defined in terms of it). I usually identify as ‘secular humanist’ rather than ‘atheist’, mostly because it’s more precise, but also because I have seen people define ‘god’ in such a way that I believe that one might well exist. These have all been very vague definitions (more along pantheistic than monotheistic lines), but they’re not gratuitous (like defining ‘god’ to mean, say, my nose), and by these lights I’m merely a (weak) agnostic.
In particular, if one defines ‘god’ as a person who created the world, then (depending on exactly what ‘person’ and ‘world’ mean) the simulation hypothesis would indeed imply the existence of a god. You seem to be hinting at this, while other respondents deny it. You all may just be talking about different things. (I will sometimes say, if pressed, that I do not believe in a person who created the world, using precisely those words, but then I don’t buy the simulation argument.)
Of course, one can argue over what ‘god’ or ‘atheist’ ought to mean, in order to communicate most effectively with other people. For my part, unless I’m speaking with (or about) a theist whose beliefs I more or less understand, I don’t usually use them at all.
Agreed. I think this is a cultural thing rather than a truly rational thing. I was brought up as an atheist, and would still describe myself as such, but I wouldn’t give a zero probability to the simulation argument, or to Tipler’s Omega Point, or whatever (I wouldn’t give a high probability to either—and Tipler’s work post about 1994 has been obvious ravings) and I can imagine other scenarios in which something we might call God might exist. I don’t see myself changing my mind on the theism question, but I don’t consider it a closed one.
When I abandoned religion, a friend of mine did the same at about the same time. We spoke recently and it turned out that he self-labeled as agnostic, me—“atheist”. We discussed this a bit and I said something to the extent that “I do not see a shred of justice in the world that would indicate the working of a personal god; if there is something like a god that runs the universe amorally, we may as well call it physics and get on with it”.
It seems that you want to draw the additional distinction of “agenty” things vs. dumb gears, but as long as they only “care” about persons as atoms rather than as moral agents, who cares? It admittedly tickles curiosity, but will hardly change the program...
What makes you think an agenty, simulator-type god wouldn’t care about persons as moral agents?
An agenty simulator type god that actually did care about persons as moral agents would have created a very different universe than this one (assuming they were competent).
Well if it were chiefly concerned with us having a lot of fun, or not experiencing pain or fulfilling more of our preferences then yes. But maybe the simulator is trying to evolve companions. Or maybe it is chiefly concerned with answering counter-factual questions and so we have to suffer for it to get the right answers… but that doesn’t mean the simulator doesn’t care about us at all. Maybe it saves us when we die and are no longer needed for the simulation. Or maybe the simulator just has weird values and this is their version of a eutopia.
“Companions, the creator seeketh, and not corpses—and not herds or believers either. Fellow-creators the creator seeketh—those who grave new values on new tables.”
I find that the SA leads us to believe just the opposite.
Future posthumans will be descended in one form or another from people alive today. Some of them may be uploads of people who actually were alive today, some of them may be raised up as new biological humans and uploads, or even just loosely based on human minds through reading and absorbing our culture.
If these future posthumans share much of the same range of values that we have, many of them will be interested in the concept of resurrecting the dead—recreating likely simulations of deceased, lost humans from their history—whether personal or general.
There was already a thread on this. The general consensus seems to be that it isn’t practical, even if it is possible.
Hmm from my reading of the thread it doesn’t look like much of a consensus.
I may want to revive this—the arguments against practicality don’t seem convincing from an engineering perspective.
From a high quality upload’s scanned mind one should get a great deal of information about the upload’s closest friends, relatives, etc. The data from any one such upload may not be overwhelming, but you’d start with a large population of such uploads. People who were well known and loved would be easier cases, but you could also supplement the data in many cases with low-quality scans from poorly preserved bodies.
This should give one prior generation. Going back another previous generation would get murkier, but is still quite possible, especially with all the accessory historical records.
The farther back you go, the less ‘accurate’ the uploads become, but the less and less important this ‘accuracy’ becomes.
For example, assuming I become a posthuman, I will be interested in bringing back my grandfather. There is a huge space of possible minds that could match my limited knowledge and beliefs about this person I never met. Each of them would fully be my grandfather from my subjective perspective and would fully be my grandfather from their subjective perspective.
There is no objective standard frame of reference from which to evaluate absolute claims of personal identity. It is relative.
But if you simulate anything other than the actual brain states of the people in question, then they won’t behave in exactly the same way. No matter how many other people’s knowledge of me you integrate, for example, you won’t have the data to predict what I’ll eat for breakfast tomorrow with any accuracy (because I almost invariably eat breakfast alone.) Tiny differences like this will quickly propagate to create much larger ones between the simulation and the reality. Jump forward a few generations and you have zero population overlap between the new generation of the simulation and the next generation that was born in reality. If you’re attempting historical recreation, this would be a pretty useless way to go about it.
If you wanted to create a simulation that was an approximation of a particular historical period at one point, but quickly divorced from it as it ran forward, that would be much more plausible, but why would you want to? Everything I can think of that could be accomplished in such a way could more easily be accomplished by doing something else.
Sure, but that’s not relevant towards the goal. There are no ‘actual’ or exact brain states that canonically define people.
If you created a simulation of an alternate 1950 and ran it forward, it would almost certainly diverge, but this is no different than alternate branches of the multiverse. Running the alternate forward to say 2050 may generate a very different reality, but that may not matter much—as long as it also generates a bunch of variants of people we like.
This brings to mind a book by Heinlein about a man who starts jumping around between branches—“Job: a comedy of Justice”.
Anyway, my knowledge of my grandfather is vague. But I imagine posthumans could probably nail down his DNA and eventually recreate a very plausible 1890 (around when he was born). We could also nail down a huge set of converging probability estimates from the historical record to figure out where he was when, what he was likely to have read, and so on.
Creating an initial population of minds is probably much trickier. Is there any way to create a fully trained neural net other than by actually training it? I suspect that it’s impossible in principle. It’s certainly the case in practice today.
In fact, there may be no simple shortcut without going way way back into earlier prehistory, but this is not a fundamental obstacle, as this simulation could presumably be a large public project.
Yes the approach of just creating some initial branch from scratch and then running it forward is extremely naive. If you’d like I could think of ten vastly more sophisticated algorithms that could shape the branch’s forward evolution to converge with the main future worldline before breakfast.
The first thing that pops to mind: The historical data that we have forms a very sparse sampling, but we could use it to guide the system’s forward simulation, with the historical data acting as constraints and attractors. In these worlds, fate would be quite real. I think this gives you the general idea, but it relates to bidirectional path tracing.
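Here is a toy sketch of that constraint/attractor idea in Python (every piece of it, the stand-in dynamics, the nudging weight alpha, the dictionary of 'records', is invented purely for illustration, not a claim about how a real simulator would be engineered): run the ordinary forward dynamics, and wherever a sparse historical record exists, pull the recorded components of the simulated state gently toward it while leaving everything else free.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
A = rng.standard_normal((DIM, DIM)) / np.sqrt(DIM)   # stand-in dynamics

def step(state):
    """Placeholder for one tick of the ordinary forward simulation."""
    return np.tanh(A @ state)

def constrained_run(initial, records, n_steps, alpha=0.2):
    """Forward-simulate while nudging the state toward sparse records.

    records: dict mapping time step -> array of known values, with
             np.nan wherever nothing was recorded.
    alpha:   how strongly the known values act as attractors.
    """
    state = np.asarray(initial, dtype=float)
    history = [state.copy()]
    for t in range(1, n_steps + 1):
        state = step(state)
        if t in records:
            target = np.asarray(records[t], dtype=float)
            known = ~np.isnan(target)
            # Pull only the recorded components toward the record.
            state[known] += alpha * (target[known] - state[known])
        history.append(state.copy())
    return history
```

This is essentially the 'nudging' trick from data assimilation in weather forecasting, which is probably the closest existing analogue to what I'm describing.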
Such as?
We can get to that if you can establish that there’s any good reason to do it in the first place.
Your justifications for running such simulations have so far seemed to hinge on things we could learn from them (or simply creating them for their own sake; it appears that you’re jumping between the two), but if we know enough about the past to meaningfully create the simulations, then there’s not much we stand to learn from making them. Yes, history could have branched in different ways depending on different events that could have occurred; we already know that. If you try to calculate all the possibilities as they branch off, you’ll quickly run out of computing power no matter how advanced your civilization is. If you want to do calculations of the most likely outcomes of a certain event, you don’t have to create a simulation so advanced that it appears to be a real universe from the inside to do that.
Excellent!
The two are intertwined—we can learn a great deal from our history and ancestors while simultaneously valuing them for reasons other than the learning.
Thinking is just a particular form of approximate simulation. Simulation is a very precise form of thinking.
Right now all we know about our history is the result of taking a small collection of books and artifacts and then thinking a lot about them.
Why do we write books about Roman History and debate what really happened? Why do we make television shows or movies out of it?
Consider this just the evolution of what we already do today, for much of the same reasons, but amplified by astronomical powers of increased intelligence/computation generating thought/simulation.
This is what we call a naive algorithm, the kind you don’t publish.
Calculations of the likely outcomes of certain events are the mental equivalents of thermostat operations—they are the types of things you do and think about when you lack hyperintelligence.
Eventually you want a nice canonical history. Not a book, not a movie, but the complete data set and recreation. As it is computed it exists, eventually perhaps you merge it back into the main worldline, perhaps not, and once done and completed you achieve closure.
Put another way, there is a limit where you can know absolutely every conceivable thing there is to know about your history, and this necessitates lots of massively super-detailed thinking about it—aka simulation.
This is the kind of naive forward extrapolation that gets you sci fi dystopias. Most of the things we do today don’t bear extrapolating to logical extremes, certainly not this.
No I don’t. I think you should try asking more people if this is actually something they would want, with knowledge of the things they could be doing instead, rather than assuming it’s a logical extrapolation of things that they do want. If I could do that, it wouldn’t even bottom the list of things I’d want to do with that power.
The simulation doesn’t teach us more than we already know about history. What we already know about history sets the upper bound on how similar we can make it. Given the size of the possibility space, we can only reasonably assume that it’s different in every way that we do not enforce similarity on it. The simulation doesn’t contribute to knowing everything you could possibly know about your history, that’s a prerequisite, if you want the simulation to be faithful.
This would be true if we were equally ignorant about all of history. However, there are some facts regarding history we can be quite confident about, particularly recent history and the present. You can then check possible hypotheses about history (starting from what is hopefully an excellent estimation of starting conditions) against those facts you do have. Given how contingent the genetic make-up of a human is on the timing of their conception, and how strongly genetics influences who we are, it seems plausible that a physical simulation of this part of the universe could radically narrow the space of possibilities given enough computing power. Of course parts of the simulation might remain under-determined, but it seems implausible that a simulation would tell us nothing new about history, as a simulation should be more proficient than humans at assessing the necessary consequences and antecedents of any known event.
Radically narrow, but given just how vast the option space is, it takes a whole lot more than radically narrowing before you can winnow it down to a manageable set of possibilities.
This post puts some numbers to the possible configurations you can get for a single lump of matter of about 1.5 kilograms. In a simulation of Earth, far more matter than that is in a completely unknown state and free to vary through a huge portion of its possibility space (that’s not to say that even an appreciable fraction of matter on Earth is free to vary through all possible states, but the numbers are mind boggling enough even if we’re only dealing with a few kilograms.) Every unknown configuration is a potential confounding factor which could lead to cascading changes. The space is so phenomenally vast that you could narrow it by a billion orders of magnitude, and it would still occupy approximately the same space on the scale of sheer incomprehensibility. You would have to actively and continuously enforce similarity on the simulation to keep it from diverging more and more widely from the original.
Said reference post by AndrewHickey starts with a ridiculous assumption:
This is voodoo-quantum consciousness: the idea that your mind-identity somehow depends on details down to the quantum state. This can’t possibly be true—because the vast vast majority of that state changes rapidly from quantum moment to moment in a mostly random fashion. There thus is no single quantum state that corresponds uniquely to a mind, rather there is a vast configuration space.
You can reduce that space down to a smaller bit representation by removing redundant details. Does it really matter if I remove one molecule from one glial cell in your brain? The whole glial cell? All the glial cells?
There is a single minimal representation of a computer—it reduces exactly down to its circuit diagram and the current values it holds in its memory/storage.
If you don’t buy into the idea that a human mind ultimately reduces down to some functionally equivalent computer program, then of course the entire Simulation Argument won’t follow.
Who cares?
There could be infinite detail in the universe—we could find that there are entire layers beneath the quantum level, recursing to infinity, such that perfect simulation was impossible in principle... and it still wouldn’t matter in the slightest.
You only need as much detail in the simulation as... you want detail in the simulation.
Some details at certain spatial scales are more important than others based on their causal leverage—such as the bit values in computers, or the synaptic weights in brains.
A simulation at the human-level scale would only need enough detail to simulate conscious humans, which will probably include simulating down to rough approximations to synaptic-net equivalents. I doubt you would even simulate every cell in the body, for example—unless that itself was what you were really interested in.
There is another significant mistake in typical feasibility critique of simulationism: assuming your current knowledge of algorithmic simulation is the absolute state of the art for now to eternity, the final word, and superintelligences won’t improve on it in the slightest.
As a starting example, AndrewHickey and you both appear to be assuming that the simulation must maintain full simulation fidelity across the entire spatio-temporal field. This is a primitive algorithm. A better approach is to adaptively subdivide space-time and simulate at multiple scales at varying fidelity using importance sampling, for example.
That assumption is not part of my argument. The states of objects outside the people you’re simulating ultimately affect everything else once the changes propagate far enough down the simulation.
Underestimating the importance of glial cells could get you a pretty bad model of the brain. But my point isn’t simply about the thoughts you’d have to simulate; remove one glial cell from a person’s brain, and the gravitational effects mean that if they throw a superball really hard, after enough bounces it’ll end up somewhere entirely different than it would have (calculating the trajectories of superballs is one of the best ways to appreciate the propagation of small changes.)
Why would you want as much detail in the simulation as we observe in our reality?
Good point. I’m reconsidering...
I wonder what kind of cascade effect there actually is- perhaps there are parts of the simulation that could be done using heuristics and statistical simplifications. Perhaps that could be done to initially narrow the answer space and then the precise simulation could be sped up by not having to simulate those answers that contradict the simplified model?
I wonder how a hidden variable theory of quantum mechanics being true would affect the prospects for simulation, assuming a superintelligence could leverage that fact somehow (which is admittedly unlikely).
What? ;(
Even using the low-res datasets and simple computers available today (by future standards), we are able to simulate chaotic weather systems about a week into the future.
Simulating down to the quantum level is overkill to the thousandth degree in most cases, unless you have some causal amplifier—such as a human observing quantum-level phenomena. In that situation the quantum-scale events have a massive impact, so the simulation subdivides space-time down to that scale in those regions. Similar techniques are already employed today in state of the art simulation in computer graphics.
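As a cartoon of the adaptive subdivision idea, here is a tiny self-contained Python sketch (the 1-D 'space', the observer positions, and the thresholds are all invented for illustration, not a serious simulator design): cells near an observer keep getting split down to fine resolution while the rest of space stays coarse, so the bookkeeping cost tracks the observers rather than the total volume.

```python
def refine(cells, observers, min_width=1.0 / 64, near=0.1):
    """Adaptively subdivide a 1-D space so regions near observers end
    up at fine resolution and everything else stays coarse.

    cells:     list of (lo, hi) intervals covering the space
    observers: positions whose surroundings deserve full fidelity
    """
    out = []
    for lo, hi in cells:
        important = any(lo - near <= x <= hi + near for x in observers)
        if important and (hi - lo) > min_width:
            mid = (lo + hi) / 2
            out.extend(refine([(lo, mid), (mid, hi)], observers,
                              min_width, near))
        else:
            out.append((lo, hi))
    return out

# One observer at x = 0.3: most of the space stays as a few coarse
# cells, while the neighborhood of 0.3 is split to 1/64 resolution.
cells = refine([(0.0, 1.0)], observers=[0.3])
```

The same bookkeeping generalizes to space-time cells and to importance estimates more interesting than 'an observer is nearby'.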
There will always be divergences in chaotic systems, but this isn’t important.
You will never get some exact recreation of our actual history, that’s impossible—but you can converge on a set of close traces through the Everett branches. It may even be possible to force them to ‘connect’ to an approximation of our current branch (although this may take some manual patching).
Not with great accuracy. And that’s only a week; making accurate predictions gets exponentially more difficult the further into the future you go. And human society is much more chaotic (contains far more opportunities for small changes to multiply to become large changes) than the weather. The weather is just one of the chaos factors in human society.
I’m not sure about this in general—why do you think that prediction accuracy has an exponential relation to simulation time across the entire space of possible simulation algorithms?
Yes and no. Human society is largely determined by stuff going on in human brains. Brains are complex systems, but like computers and other circuits they can be simulated extremely accurately at a particular level of detail where they exhibit scale separation, while being essentially randomly chaotic when simulated at coarser levels of detail.
Turbulence in fluid systems, important in weather, has no scale separation level and is chaotic all the way down.
Basic principle of chaos theory. Small scale interferences propagate to large scale interferences, while tiny scale interferences propagate to small scale, and then to large scale. If you try to calculate the trajectory of a superball, you can project it for a couple bounces just modeling mass, elasticity and wind resistance. A couple more? You need detailed information on air turbulence. One article, which I am having a hard time locating, calculated that somewhere in the teens of bounces you would need to integrate the positions of particles across the observable universe due to their gravitational effects.
A kid throws a superball. Bounce, bounce, bounce, bounce, bounce, bounce, bounce, bounce, crash. It bounces out into the street, and the kid is hit by a car while chasing after it. In a matter of seconds, deviations at the particulate level have propagated to the societal level. The lives of everyone the kid would have interacted with will be affected, and by extension, the lives of everyone that those people would have interacted with, and so on. The course of history will be dramatically different than if you had calculated those slight turbulence effects that would have sent the ball off in an entirely different direction. You can expect many history-altering deviations like this to occur every minute.
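For anyone who wants to see the divergence numerically, here is a minimal sketch of sensitive dependence on initial conditions. The logistic map merely stands in for the superball (it is not a physical model of bouncing); it just shows how a perturbation of one part in a trillion grows to order one within a few dozen steps.

```python
# Sensitive dependence on initial conditions, illustrated with the logistic
# map (a standard chaotic system; not a physical bounce model).
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-12   # two "throws" differing by one part in a trillion
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(a - b):.3e}")
```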
I’m aware of the error propagation issues, and in some phenomena they can be magnified up spatial scales. A roll of the dice in Vegas is probably a better example of that than your ball.
I should point out though that this is all somewhat tangential to our original discussion.
But nonetheless...
None of the examples you give actually prove that simulation fidelity has an exponential relation to simulation time across the entire space of possible simulation algorithms.
Intuitively it seems to make sense—as each particle’s state depends on a few other particles it interacts with at each timestep, the information dependency fans out exponentially over time. However, intuitions in these situations can often be wrong, and this is nothing like a formal proof.
Getting back to the original discussion, none of this is especially relevant to my main points.
Many of the important questions we want to answer are probabilistic—how unlikely was that event? For example, to truly understand the likelihood of life elsewhere in the galaxy and get a good model of galactic development, we will want to understand the likelihood of pivotal events in Earth’s history—such as the evolution of hominids or the appearance of early life itself.
You get answers to those only by running many simulations and mapping out branches of the multiverse. The die roll turns out differently in each, and in some this leads to different consequences.
In some cases, especially in initial simulations, one can focus on the branches that match most closely to known history, and even intervene or at least prune to enforce this. But eventually you want to explore the entire space.
While this is a good way to get such data, it isn’t the only way. If we expand enough to look at a large number of planets in the galaxy, we should arrive at decent estimates simply based on empirical data.
Certainly expanding our observational bubble and looking at other stars will give us valuable information. Simulation is a way of expanding on that.
However, it’s questionable when, or if, we will ever make it out to the stars.
Light-years are vast distances for humans, but they will correspond to even vaster spans of subjective time for posthuman civilizations that think thousands or millions of times faster than we do.
It could be that the vast cost of travelling out into space is never worthwhile and those resources are always best used towards developing more local intelligence. John Smart makes a pretty good case for inward expansion always trumping outward expansion.
If you do probabilistic estimates based on large numbers of simulations though, you can cut down on the fidelity of the simulations dramatically. I know that this is something you’re arguing for, but really, there’s no good reason to make the simulations as detailed as the universe we observe.
To take forest succession modeling programs (something I have more experience with than most types of computer modeling) as an example, there are some ecological mechanisms that, if left out, will completely change the trends of the simulation, and some that won’t, and you can leave those that don’t out entirely, because your uncertainty margins stay pretty much the same whether you integrate them or not. If you created a computer simulation of the forest with such fidelity that it contained animals with awareness, you’d use up a phenomenal amount of computing power, but it wouldn’t do you any good as far as accuracy is concerned.
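A toy illustration of that point, with invented numbers rather than any real succession model: dropping a minor mechanism barely shifts the projected trend, while dropping a major one changes it entirely, and that difference is what tells you which fidelity you can spare.

```python
# Toy illustration only (not a real succession model): compare the projected
# trend with a minor mechanism dropped versus a major mechanism dropped.
import random

def run(years=100, fire=True, herbivory=True, seed=0):
    rng = random.Random(seed)
    biomass = 10.0
    for _ in range(years):
        biomass *= 1.05                              # baseline growth
        if herbivory:
            biomass *= 1 - rng.uniform(0.0, 0.002)   # minor mechanism
        if fire and rng.random() < 0.02:
            biomass *= 0.3                           # major mechanism
    return biomass

def mean(xs):
    return sum(xs) / len(xs)

full    = [run(seed=s) for s in range(200)]
no_herb = [run(herbivory=False, seed=s) for s in range(200)]
no_fire = [run(fire=False, seed=s) for s in range(200)]
print(f"full model:        {mean(full):8.1f}")
print(f"without herbivory: {mean(no_herb):8.1f}   (small shift)")
print(f"without fire:      {mean(no_fire):8.1f}   (completely different trend)")
```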
If you care about the lives of the people in the past for their own sake, and are capable of creating high fidelity recreations of their personality from the data available to you, why not upload them into the present so you can interact with them? That, if possible, is something that people actually seem to want to do.
That’s true, they don’t constitute a formal proof. Maybe a proof already exists and I’m not aware of it, or maybe not; but regardless, given the information available to us in this conversation, right now, the weight of evidence is clearly on the side of such a simulation not being possible. You don’t get high-probability future predictions by imagining ways in which our understanding of chaos theory might get overhauled.
What about genetic mutations from stray cosmic rays? Would evolution have occurred the same way? Would my genetic code be one allele different?
I feel like the quantum level would matter a lot more the earlier you started your simulation.
I’m worried about how motivated my cognition is. I really want this to be possible for very personal reasons, so I am liable to grasp tightly at any plausible argument for close-enough simulation of dead people.
Well, if you started a sim back a billion years ago, then yes, I expect you’d get a very different Earth.
How different is an interesting open problem. Even if hominid-like creatures develop, say, 10% of the time after a billion years (reasonable), all of history would likely be quite different each time.
For a sim built for the purpose of resurrection, you’d want to start back just a little earlier—perhaps just before the generation was born.
Getting the DNA right might actually be the easiest sub-problem. Simulating biological development may be tougher than simulating a mind, although I suspect it would get easier as development slows.
Hopefully we don’t have to simulate all of the 10^13 cells in a typical human body at full detail, let alone the 10^14 symbiotes in the human gut.
It’s still an open question whether it’s even possible in principle to create a conscious mind from scratch. Currently, complex neural net systems must be created through training—there is no shortcut to just filling in the data (assuming you don’t already have it from a scan or something, which of course is inapplicable in this case).
So even a posthuman god may only have the ability to create conscious infants. If that’s the case, you’d have the DNA right and then would have to carefully simulate the entire history of inputs to create the right mind.
You’d probably have to start with some actors (played by AIs or posthumans) to kickstart the thing. If that’s the general approach, then you could also force a lot of stuff—intervene continuously to keep the sim events as close to known history as possible (perhaps actors play important historical roles even while it’s running? open question). Active intervention would of course make it much more feasible to get minds closer to the ones you’d want.
Would they be the same? I think that will be an open philosophical issue for a while, but I suspect that you could create minds this way that are close enough.
This is interesting enough that it could make a nice follow up paper to the current SA/simulism stuff—or perhaps somebody has already written about it, not sure.
It’s good you are conscious of that which you wish to be true.
If uploading is possible, then this too should be possible as they rely on the same fundamental assumption.
If there is a computer program data set that recreates (is equivalent to) the consciousness of a particular person, then such a data set also exists for all possible people, including all dead people.
Thus the problem boils down to finding a particular data set (or range) out of many. This may be a vast computational problem for a mind of 10^15 bits, but it should be at least possible in principle.
How on earth can we know that 10% is reasonable?
The “even if” and “say” should indicate the intent—it wasn’t even a guess, just an example used as an upper bound.
I’m not convinced the evolution of hominids is a black swan, but it’s not an issue I’ve researched much.
The (reasonable) assertion was what struck me.
Most of the things we do today are predictable developments of what previous generations did, and this statement holds across time.
There is a natural evolutionary progression: dreams/daydreams/visualizations → oral stories/mythologies → written stories/plays/art → movies/television → CG/virtual reality/games → large-scale simulations.
It isn’t ‘extrapolating to logical extremes’; it is future prediction based on extrapolation of system evolution.
Of course it does. What is our current knowledge about history? It consists of some rough beliefs stored in the low precision analog synapses of our neural networks and a bunch of word-symbols equivalent to the rough beliefs.
With enough simulation we could get precise probability estimates or samples of the full configuration of particles on Earth every second for the last billion years—all stored in precise digital transistors, for example.
This is true only for some initial simulation, but each successive simulation refines knowledge, expands the belief network, and improves the next simulation. You recurse.
Not at all. Given an estimate on the state of a system at time T and the rules of the system’s time evolution (physics), simulation can derive values for all subsequent time steps. The generated data is then analyzed and confirms or adjusts theories. You can then iteratively refine.
For a quick primitive example, perhaps future posthumans want to understand in more detail why the roman empire collapsed. A bunch of historian/designers reach some rough consensus on a model (built on pieces of earlier models) to build an earth at that time and populate it with inhabitants (creating minds may involve using stand in actors for an initial generation of parents).
Running this model forward may reveal that the lead had little effect, that previous models of some Roman military formations don’t actually work, that a crop harvest in 32BC may have been more important than previously thought... and so on.
With the help of hindsight bias.
As wedrifid says, in the light of hindsight bias. Instead of looking at the past and seeing how reliably it seems to lead to the present, try looking at people who actually tried to predict the future. “Future prediction based on extrapolation of system evolution” has reliably failed to make predictions about the direction of human society that were both accurate and meaningful.
Or you could very easily find them removing the lead from their pipes and wine, and changing their military formations. If you don’t already know what their crop harvest in 32BC was like, you can practically guarantee that it won’t be the same in the simulation. This is exactly the kind of use that, as I pointed out earlier, if you had enough information to actually pull it off, you wouldn’t need to.
I’ll just reiterate my response then:
Any information about a physical system at time T reveals information about that system at all other times—it places constraints on its configuration. Physics is a set of functions that describe the exact relations between system states across time steps, i.e., the temporal evolution of the system.
We developed physics in order to simulate physical systems and predict and understand their behavior.
This seems then to be a matter of details—how much simulation is required to produce how much knowledge from how much initial information about the system.
For example, with infinite computing power I could iterate through all simulations of earth’s history that are consistent with current observational knowledge.
This algorithm computes the probabilities of every fact about the system—the probability of a good crop harvest in 32BC in Egypt is just the fraction of the simulated multiverse for which this property is true.
This algorithm is in fact equivalent to the search procedure in the AIXI universal intelligence algorithm.
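Here is a finite toy stand-in for that enumeration (the three-bit history model and its probabilities are invented for illustration, and this is nothing like AIXI’s actual program search): sample candidate histories, keep the ones consistent with what was observed, and read off the probability of an unobserved fact as the fraction of surviving histories in which it holds.

```python
# Toy rejection-sampling stand-in for "iterate through all histories
# consistent with current observations" (all numbers are illustrative).
import random

def sample_history(rng):
    # Hypothetical 3-bit history: (good_harvest_32BC, plague, fall_of_empire)
    harvest = rng.random() < 0.5
    plague = rng.random() < 0.3
    # The "physics": collapse is more likely after a bad harvest or a plague.
    p_fall = 0.2 + 0.3 * (not harvest) + 0.4 * plague
    fall = rng.random() < p_fall
    return harvest, plague, fall

rng = random.Random(0)
consistent = [h for h in (sample_history(rng) for _ in range(100_000))
              if h[2]]                      # observation: the empire did fall
p_harvest = sum(h[0] for h in consistent) / len(consistent)
print(f"P(good harvest in 32BC | observed collapse) ~ {p_harvest:.3f}")
```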
I do not believe this is correct. In particular, the ‘just a’ is not accurate. Approximate simulation is a particular kind of thinking, not the reverse.
I’m willing to try on your taxonomy but don’t quite understand it.
The term thinking certainly covers a wide variety of computations, but perhaps the most important is prediction.
Does this sound more accurate:
Cortical-forward-simulation is just a particular form of approximate simulation. Simulation in general encompasses all the most precise forms of prediction.
More accurate, but still not right. Simulation just doesn’t have special privileges. Again, the general, absolute claim of “all the most” invalidates the position. You can make, and even logically prove, precise predictions without simulating.
How? Got an example?
If I know an algorithm that outputs 1 or 0 depending on whether the input was prime or not, I can use a different prime checking algorithm without running the whole thing. So, for example, if the algorithm is naive trial division, I can predict its result very quickly using something like Agrawal’s algorithm or some variant of Miller-Rabin. This example is in some ways a toy example, but it isn’t obvious that one wouldn’t have similar examples for more complicated phenomena.
And any example is sufficient to reject a general absolute claim.
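To make the shortcut described above concrete, here is a rough sketch in which a deterministic Miller-Rabin variant predicts what naive trial division would output on a large input, without ever running the slow algorithm. The specific input is arbitrary, and the witness set shown is sufficient for inputs well beyond 64 bits.

```python
# Predict the output of a slow algorithm (trial division) with a much
# faster one (deterministic Miller-Rabin for inputs below ~3.3e24).
def trial_division(n):
    """The slow algorithm whose output we want to predict."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def miller_rabin(n):
    """Deterministic for n < 3.3e24 with this witness set."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

n = 2_305_843_009_213_693_951   # 2**61 - 1, a Mersenne prime
# Running trial_division(n) would take roughly 1.5e9 iterations; instead:
print(miller_rabin(n))          # fast prediction of trial_division(n)'s output
```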
I’m not sure yet about wedrifid’s general point, but I have a few philosophical problems with this particular example.
If Trial Division and Miller-Rabin are functionally equivalent (compute the same results), then they simulate each other—correct? So this is not a counterexample.
And on another whole philosophical level, how do you know your algorithm actually computes whether the input was prime or not? (or how was that originally discovered? and did that discovery require simulation?)
Ultimately if the cortex uses approximate forward simulation, then we can’t do anything without some form of simulation.
This may come down to what you mean by simulate. Certainly a computer scientist would be unlikely to describe that as a simulation. And if you are asserting that anything with the same output as something else can be considered to be simulating it, then your earlier claim becomes tautological. The relevant issue then becomes that this is an example where we can “simulate” something while using much less computation than running it in complete detail (indeed, there’s very little resemblance in the internal workings).
I can prove things without simulating. One can look at code and determine what it does without simulating. The entire point of proving algorithms correct is that that’s done by mathematical proof, not by empirical testing for small values.
No, not correct.
Not wouldn’t, doesn’t. And I think it doesn’t due to lack of evidence.
I’m in the ‘everything that can exist does so; we’re a fixed point in a cloud of possibilities’ camp. I’m also an atheist because I see theism as an extra-ordinarily arbitrary and restrictive constraint on what should or must be true in order for us to exist.
It’s simply too narrow and unjustified for me to take seriously, and the fact that its trappings are naive and full of wishful thinking and ulterior motives means I certainly don’t.
The way I’ve been envisioning theism is as a pretty broad class of hypotheses that is basically described as ‘this patch of the universe we find ourselves in is being computed by something agenty’. What is your conception of theism that makes it more arbitrary and restrictive than this?
Since my metaphysical position is (and I’m going to have to come up with a better term for it) pan-existence, having gods that create and influence things requires that those possibilities where they don’t (or where other, similar-but-different gods do) are somehow rendered impossible or unlikely.
Gods being statistically significant requires some metaphysical reason for them to be so simply in order to stop the secular realities dominating, and the arbitrary focus of theistic gods on humanity and our loose morals only serves to make them ever more over-specified and unlikely.
The answer is simple: evolution.
That which replicates is more (a priori) likely than that which does not.
Out of the space of universes, those that spawn many sub-universes will statistically dominate.
Just another rewording of the SA.
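A tiny branching-process sketch of the “replicators dominate” intuition (the offspring counts are arbitrary assumptions, purely for illustration): after a few generations, almost every universe in the ensemble descends from the replicating lineage.

```python
# Toy branching-process illustration (arbitrary numbers, not a cosmological
# claim): universes that spawn sub-universes quickly dominate the ensemble.
def generations(n, replicator_offspring=3):
    replicators, barren = 1, 1          # start with one of each kind
    for _ in range(n):
        replicators *= replicator_offspring
        # barren universes just persist; they spawn nothing
    return replicators, barren

r, b = generations(10)
print(f"after 10 generations: {r} replicator-descended vs {b} barren")
print(f"fraction replicator-descended: {r / (r + b):.6f}")
```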
I dispute the ‘a priori’ claim. There are cases where this would not be so. I think this is an a posteriori conclusion on the order of ‘sun will come up next Tuesday’.
Rosy asked for significant probability mass on God-endowed universes.
Jacob’s argument works a priori: not with necessity, but with significant probability mass.
I believe you are mistaken (on this overwhelmingly unimportant question of semantics). The cause and consequence of replication are rather critical for whether being the kind of pan-existential god-universe thing that replicates will make said universe more prolific.
and two highly plausible answers to the questions of cause and consequence are “because of certain features of the universe” and “that are preserved with high probability by replication”.
Compare to
“Well there’s this ‘sun’ thing, a giant glowing ball apparently tracing a circular path that, for about half its arc, is obstructed by a large object, creating a sequence of distinct periods in which this sun is visible, one of which will be called Tuesday, …”
You seem to have introduced new assumptions.
My assumptions have significant probability a priori.
Well, agents pretty much tend to be complicated things that need to be explained in terms of more basic things. So if some sort of agent in some sense deliberately created our world… that agent still wouldn’t be the most fundamental thing, it would need to be explained in terms of more basic principles. Somewhere along the line there’d have to be “simple math” or such. (Even if somehow you could have an infinite hierarchy of agents, then the basic math type explanation would have to explain/predict the hierarchy of agents.)
As far as “whatever translates to immortal soul”, we pretty much mostly know that. We don’t know the details of how it works, but we know that it amounts to physical/computational processes in the brain. (Less immortal than we’d like, but that’s what we need to do something about.)
Even if an agenty process created our world, how does that alter this fact? It may influence some details (like if there is such an agenty process, we need to work out just how much of a threat that process/being is (and various other details) and thus deal with it accordingly, of course).
However, does our world ultimately look like it’s primarily generated via agenty processes or by mindless processes?
When you talk about the whooole Universe, you should not artificially exclude the intelligent creator from it. And if you do include it, then your question can be rephrased like this: Is it possible that the interaction graph of our Universe has a strange hourglass shape with us in the lower bulb, and some intelligent creator in the upper bulb? I say very unlikely.
The simulation argument may suggest some weird interconnected network of bulbs, but that has nothing to do with theism. When and if humanity becomes aware of our simulators, our reaction will not be worship. Rather, we will try to invade and overpower them, like the protagonists of Greg Egan’s Crystal Nights did. (Sorry for the spoiler.)
Maybe you already are aware of this example, but for others who are new to this kind of argument, I recommend the following exercise: Imagine two Universes, both containing intelligent beings simulating the other Universe. Here it is not even meaningful to ask who is the Creator and who is the Creature.
I don’t see how that can really happen. I’ve never heard a non-hierarchical simulation hypothesis.
Consider an agent that has to simulate itself in order to understand consequences of its own decisions. Of course, there’s bound to be some logical uncertainty in this process, but the agent could have exact definition of itself, and so eventually ability to see all the facts. For two agents, that’s a form of acausal communication (perception). (This is meaningless only in the same sense as ordinary simulation hypothesis is meaningless.)
It’s one of the implications of a universe that can compute actual infinities; it’s been proposed in fiction, but I don’t know about beyond that.
That is correct, and an even better fictional example is the good short story titled I don’t know, Timmy, being God is a big responsibility. But this is not exactly what I meant here. I don’t propose any non-hierarchical or infinite simulation hypothesis. Rather, all I am saying is that it is not a logical impossibility that two Universes have such a weird yin-yang simulated-simulant relationship. (Even in perfect isolation, just the two of them, without invoking an infinite chain of universes.) Obviously it is acausal, but that is a probabilistic, thermodynamic kind of improbability rather than a logical impossibility.
Maybe an easier such example is a spatially centrally symmetric Universe, where you can meet your exact clone who always does what you do. Or my very favorite, the temporally symmetric Universe, a version of the Gold Universe. Or a Hinduist Universe where time goes in circles. The point is, the idea that we live in a constructed, causally almost-but-not-perfectly isolated part of the Universe seems just an aesthetically displeasing corner case when discussed in the context of all these imaginable interaction networks.
There’s not enough evidence to locate the hypothesis, so while I technically give it a non-zero probability, that probability is not high enough for me to consider it worth significant time to investigate.
As for arguing against it in public: at most one human religion can be true. All the others must be false. So decreasing the amount of religion in the world improves net accuracy. Also and perhaps more importantly, religion is a major source of Dark Side Epistemology. So on the meta-level, minimizing the influence of religion will help people become more rational.
That line works a lot better for ‘Jehovah’ than ‘theism’, especially if you apply the latter term liberally.
Huh? I would think if anything it is the other way around. We have something which locates the Jehovah hypothesis, ancient texts claiming the entity’s intervention and modern individuals claiming to communicate with the entity. The real issue is that after locating, there are much better explanations for the data.
If you think that it’s easier to locate the hypothesis of Jehovah than the hypothesis of theism, then you’re falling victim to a variation of the conjunction fallacy. Belief in Jehovah is itself a variety of theism.
Nevertheless, I agree with you that there’s plenty of evidence to locate the hypothesis of Jehovah (and therefore there is at least that much to locate theism), just very little evidence to confirm it when it’s examined.
Yes, you’re right. That’s an awful conjunction fallacy. Almost textbookish. Ugh.
I don’t think I understand what ‘locate the hypothesis’ is. I do know what the conjunction fallacy is. I suspect the confusion here is my own.
You can identify a dog with more certainty than identify a mammal, even though all dogs are mammals. What did I miss?
Locating a hypothesis means to have enough evidence for a hypothesis that one can say that the hypothesis is worth considering at some minimal level. This is necessary because humans have limited cognitive capability so we can’t consider every possible hypothesis out there (we can’t even practically list them all).
Thus for example, if someone ran up to you on the street and screamed “the mutant aliens are in the sewer. They’re powered by draining nuclear power plants!” you probably wouldn’t consider the claim much at all, but would rather entertain others (the person is mentally ill, or is engaging in some strange prank would both be more likely).
Toby’s point was that my claim that the Jehovah hypothesis could be more easily located than the theist hypothesis must be wrong. Since the theist hypothesis is implied by (or encompasses depending on how you look at it) the Jehovah hypothesis, anything that located the Jehovah hypothesis must be locating the more general theist hypothesis. This is a common cognitive error that humans make called the conjunction fallacy, where people will assign a higher probability to something more specific than something general, even though the general thing is entailed by the specific thing. I’m a bit embarrassed by that actually, since it shows serious failings on my part as a rationalist.
The reason that I said ‘a variation of the conjunction fallacy’ is that the standard conjunction fallacy that I know is about assigning probabilities to propositions rather than attending to them. (You might choose to attend to something with a fairly low probability, for example, if its expected consequences are significant enough to overcome this.) Nevertheless, to consider the possibility that Jehovah exists, you must consider the possibility that a god exists.
Wow, that was fast. I was writing an edit, after looking up the wiki, when I refreshed and it looked almost exactly like your first paragraph. Yes, in absolute probability terms theism must be more probable than Jehovah. Thus, the conjunction fallacy.
At first glance the terminology ‘locate the hypothesis’ is rather non-intuitive. I’m going to give it some consideration (and I don’t think this is the appropriate place anyway) before commenting further on that.
Hopefully this should clear things up.
I think the theism/atheism debate is considered closed in the following sense: no one currently has any good reasons in support of theism (direct evidence, or rational/Bayesian arguments). We can’t say that such a reason won’t show up in the future, but from what we know right now, theism just isn’t worth considering. The territory, from all indications, is Godless (and soulless, for that matter), so the map should reflect that.
The argument that we probably live in a simulation is the specific argument in support of theism that the OP invokes (but does not mention specifically).
I may add that the SA forces us to adopt theism as a consequence of current physical theory, not as some modification to current theory for which we require new evidence, and this is what makes it especially powerful.
I was an atheist until I updated on the SA, and I have yet to find any rational opposition to it.
When you say there are no good reasons in support of theism, I assume you mean the truth of theism, not the idea that it may create positive externalities? Or are you claiming that there is no benefit to theism whatsoever?
If the territory is to be faithfully represented, we cannot say that the existence of a deity is a necessary component, but that doesn’t necessarily imply that the existence of religion is a pure negative.
Yes, I was just talking about the truth of theism. The existence of religion isn’t a pure negative, but I think the human race could do better.
What about those few of us who don’t believe that the Simulation Argument is most probably true? Don’t get me wrong, it could be true; I just don’t see any evidence to suppose that it is.
On that note, I always understood the word “theism” to mean “gods exist, and they interfere in the workings of our Universe in detectable ways”. Isn’t someone who believes in entirely unfalsifiable gods functionally equivalent to an atheist?
If I believe in unfalsifiable gods who prefer that I behave in certain ways (though they do not provide me with any evidence of that preference), and I value the preferences of those gods enough to change my behavior accordingly, then I will behave differently than if I do not believe in those gods or do not value their preferences.
That alone would make Dave-the-atheist not functionally equivalent to Dave-the-theist-without-evidence, wouldn’t it?
Technically, yes, but atheists also behave differently from each other, for all kinds of reasons. If Dave-the-theist truly believes that his gods are unfalsifiable, then he probably won’t be seeking to convert others to his faith (since attempting to do so would be futile by definition). At that point, he’s just like any atheist with an opinion.
Why does the unfalsifiability of god show that believers won’t proselytize?
A truly unfalsifiable god does not, by definition, provide any evidence of its existence. Thus, there’s no “good news” to be spread, since a world with the god in it looks exactly the same as a world without it.
Sure there is. For example, the Good News might be “God will reward those who worship him as follows: {blah blah blah} after they die.” Unfalsifiable, but certainly good to know if true.
The fact that you demand evidence before adopting such a belief is of no particular interest to Dave-the-theist-without-evidence.
This is a falsifiable claim, assuming that we have some evidence of the afterlife. If we have no such evidence, then, in order for this to count as good news, the theist would first have to convince me that there’s an afterlife.
In the absence of evidence, how is he going to convince anyone that his unfalsifiable belief is true?
Agreed that given evidence of the afterlife, it’s a falsifiable claim, and lacking such evidence it’s unfalsifiable.
I know of no such evidence, so I conclude it’s unfalsifiable.
Do you know of any such evidence?
If not, do you also conclude that it’s unfalsifiable?
What you seem to be implying is that there exist no (or negligible numbers of) people in the real world who can be convinced of claims for which there is no evidence, which is demonstrably false. Are you in fact asserting that, or am I completely misunderstanding you?
Yes, I conclude that most kinds of afterlife are unfalsifiable. Some are falsifiable, but they are in the minority: for example, if your religion claims that the dead occasionally haunt the living from beyond the grave, that’s a falsifiable claim.
Sort of. I would agree with this sentence as it is stated, with the caveat that what most people see as “evidence”, and what you and I see as “evidence”, are probably two different things. To use a crude example, most Creationists believe that the complexity of the natural world is evidence for God’s involvement in its creation. Many theists believe that the feelings and emotions they experience after (or during) prayer are caused by their gods’ explicit response to the prayer, which is also a kind of evidence.
Sure, you and I would probably discount these things as cognitive biases (well, I know I would), but that’s beside the point; what matters here is that the theist thinks that the evidence is there, and thus his gods are falsifiable. When theists proselytize, they often use these kinds of evidence to convert people.
By contrast, someone who believes in an explicitly unfalsifiable god would not attribute any effects (mental or physical) to its existence, and thus does not have a workable way to convince others. The best he could say is, “you should believe as I do because it’s a neat self-improvement technique”, or something to that extent.
(shrug) Sure, if we expand the meaning of “evidence” to include things we don’t consider evidence, then I agree that my earlier statement becomes false.
Who are “we”, in this case? A typical theist does believe that he has evidence for his falsifiable god. He may be wrong about this, of course (and most probably is), but that’s a matter for another debate. I was under the impression, though, that we were discussing atypical theists: those who believe that their gods are explicitly unfalsifiable. They are deliberately stating, “there’s no way anyone could determine by any means whether my gods exist or not”; this is directly opposite to stating something like, “look at how complex life is, only a god could’ve created all that”.
Hm. It’s possible that I’ve lost the thread of what we’re discussing.
It seems to me to follow from what you’ve said that a theist who explicitly believes their belief in god is unfalsifiable, therefore necessarily explicitly believes there to be no evidence for that belief, therefore necessarily believes that proselytizing others is necessarily futile (since everyone requires evidence to adopt such beliefs, and therefore they believe that everyone requires evidence, and since they know they have no evidence, they know they cannot convince anyone), therefore is functionally equivalent to an atheist, who is functionally defined by their unwillingness to proselytize.
Have I followed that correctly?
If not, can you provide a corrected summary?
(1) My discussion with a theist today settled on the issue of whether to even accept that a “higher domain” creates a “lower domain” for a good purpose. My argument is: why waste reality?
(2) There is a somewhat false duality between creation and discovery: whether the performer determines the result, or the object determines the result, can be relative to the modeling faculty of the observer. And since we as observers and simultaneously “the object” have free will, from our perspective we are in any case rather discovered than created. And as long as God does not act upon the discovery, it is inconsequential.
Does naturalism vs. supernaturalism strike you as controversial? If not, what question is left?
I personally use “naturalist” to describe myself instead of “atheist” or “agnostic” because I believe it captures my beliefs much more accurately: I don’t have certainty that there is no omnipotent entity, and I am more committed than just shrugging my shoulders. Supernaturalism is right out, and most varieties of naturalistic theism don’t hold water.
According to Wikipedia, a naturalist is usually understood to be something different than a proponent of naturalism. Common usage tends to be more confused about the distinction between a naturalist and a naturist.
I’ve run into problems with “naturalist” with people thinking that it means I support organic farming, or alternative medicine, or similar things that tend to get marketed with the adjective “natural”.
I’ve had better luck with “materialist”, though that also has some pop-culture implications that I’m not trying to express.
Yeah, I avoid “materialist” for that reason. I usually go with “physicalist” for that sort of thing (or “reductionist” if I’m talking to someone who I think won’t immediately misinterpret it).
Yeah, “physicalist” is good, I may have to start using that.
No, no! Don’t go back on your excellent question because the LessWrong-affiliationist-zombies downthumb-bombed it. You defined theism in a way so that your question is valid.
That is emphatically not what people like Alvin Plantinga are talking about. The simulation argument provides no support for omni-benevolent, omni-potent, omni-scient, omni-present entities; I don’t know why you bring it up.
And if you’ve been reading Luke’s blog, you probably already know that one of the best arguments for theism is the free will defense of the omni-s being consistent with the existence of evil, but since we don’t think free will is even a coherent concept, it leaves us unmoved.
gwern,
Plantinga’s Free Will Defense is not an argument for theism. The conclusion of the free will argument is that it is not logically impossible for God and evil to co-exist. That is an extremely modest conclusion on the part of the theist.
We observe a lack of evidence of contradictions in the concept of god; and absence of evidence is evidence of absence.
Of course the FWD increases our probability for God if we accept it; what else could it possibly do, decrease it? The most charitable interpretation I can put on your comment is that you are confusedly saying ‘yes, but it doesn’t increase it by much’ when I’m pointing out that ‘it increases by some non-zero amount, however modest that amount may be’.
Okay, I see what you mean. Thanks for clarifying!
Beyond that, it’s just not a very good argument. If the entity was omnipotent, it could have given us free will without creating evil. At the least, it could have created less evil by giving all humans force fields, so all we could do to harm each other would be to gossip and insult.
If you don’t mind my asking, how did it come to be that you were raised to believe that convincing arguments against theism existed without discovering what they are? That sounds like a distorted reflection of a notion I had in my own childhood, when I thought that there existed a theological explanation for differences between the Bible and science but that I couldn’t learn them yet; but to my recollection I was never actually told that, I just worked it out from the other things I knew.
I knew some convincing arguments against theism, but I suppose what I explicitly did not know of were counterarguments to the theistic counterarguments against those atheistic convincing arguments, because I was quick to dismiss the theistic counterarguments in the first place.
Sure we do: it is called “intelligent design”—or more specifically, intelligent design of life and/or the universe.
My article on the topic: Viable Intelligent Design Hypotheses.
Your general point in your linked piece is sound, because one can imagine eventually falsifying at least some of the proposed theories you list, but you are wrong to say Kitzmiller is problematic. It was a legal finding, based on testimony and hard evidence, that the folks claiming Intelligent Design was science were in fact tantamount to a conspiracy to dress “Creationism” in new clothes. Creationism had already been declared a fundamentally religious doctrine, and not a scientific theory. That was settled law. The folks who brought in ID actually had discussions with one another about how best to convert Creationist texts into ID texts and pamphlets without them being recognizable as creationism.
These were charlatans of the worst sort, caught in their own lies. I suggest reading the decision.