“Given the nature of the multiverse, everything that can possibly happen will happen. This includes works of fiction: anything that can be imagined and written about, will be imagined and written about. If every story is being written, then someone, somewhere in the multiverse is writing your story. To them, you are a fictional character. What that means is that the barrier which separates the dimensions from each other is in fact the Fourth Wall.”
-- In Flight Gaiden: Playing with Tropes
(Conversely, many fictions are instantiated somewhere, in some infinitesimal measure. However, I deliberately included logical impossibilities in HPMOR, such as tiling a corridor in pentagons and having the objects in Dumbledore’s room change number without any being added or subtracted, to avoid the story being real anywhere.)
In the library of books of every possible string, close to “Harry Potter and the Methods of Rationality” and “Harry Potter and the Methods of Rationalitz” is “Harry Potter and the Methods of Rationality: Logically Consistent Edition.” Why is the reality of that book’s contents affected by your reluctance to manifest that book in our universe?
Absolutely; I hope he doesn’t think that writing a story about X increases the measure of X. But then why else would he introduce these “impossibilities”?
Because it’s funny?
It is a different story then, so the original HPMOR would still not be nonfiction in another universe. For all we know, the existence of a corridor tiled with pentagons is in fact an important plot point, and removing it would utterly destroy the structure of upcoming chapters.
Nnnot really. The Time-Turner, certainly, but that doesn’t make the story uninstantiable. Making a logical impossibility a basic plot premise… sounds like quite an interesting challenge, but that would be a different story.
A spell that lets you get a number of objects that is an integer larger than some other integer but smaller than its successor, used to hide something.
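(Spelling out why that is logically rather than merely physically impossible: over the integers, $a < b$ is equivalent to $a + 1 \le b$, so a count $n$ with $k < n < k + 1$ would have to satisfy

\[
k + 1 \;\le\; n \;\le\; k,
\]

whence $k + 1 \le k$, a contradiction.)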
This idea (the integer, not the spell) is the premise of the short story The Secret Number by Igor Teper.
And SCP-033. And related concepts in Dark Integers by Greg Egan. And probably a bunch of other places. I’m surprised I couldn’t find a TVtropes page on it.
Huh. And here I thought that space was just negatively curved in there, with the corridor shaped in such a way that it looks normal (not that hard to imagine), and just used this to tile the floor. Such disappointment...
This was part of a thing, too, in my head, where Harry (or, I guess, the reader) slowly realizes that Hogwarts, rather than having no geometry, has a highly local geometry. I was even starting to look for that as a thematic thing, perhaps an echo of some moral lesson, somehow.
And this isn’t even the sort of thing you can write fanfics about. :¬(
Could you explain why you did that?
As regards the pentagons, I kinda assumed the pentagons weren’t regular, equiangular pentagons—you could tile a floor in tiles that were shaped like a square with a triangle on top! Or the pentagons could be different sizes and shapes.
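(The geometry checks out, for what it’s worth: cap a square with an isosceles triangle of apex angle $\theta$ and the five interior angles are $90^\circ$, $90^\circ$, twice $90^\circ + \tfrac{180^\circ - \theta}{2}$, and $\theta$, which sum to

\[
90^\circ + 90^\circ + 2\Bigl(90^\circ + \tfrac{180^\circ - \theta}{2}\Bigr) + \theta \;=\; 540^\circ \;=\; (5 - 2)\cdot 180^\circ,
\]

as any pentagon’s angles must. And since the two base angles sum to $180^\circ$, such “house” pentagons tile the plane in strips of alternately upright and inverted copies, one of the classical Type 1 pentagonal tilings.)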
Because he doesn’t want to create Azkaban.
Also, possibly because there’s not a happy ending.
But if all mathematically possible universes exist anyway (or if they have a chance of existing), then the hypothetical “Azkaban from a universe without EY’s logical inconsistencies” exists, no matter whether he writes about it or not. I don’t see how writing about it could affect how real/not-real it is.
So by my understanding of how Eliezer explained it, he’s not creating Azkaban in the sense that writing about it causes it to exist; he’s describing it. (This is not to say that he’s not creating the fiction, but the way I see it, “create” is being used in two different ways.) Unless I’m missing some mechanism by which imagining something causes it to exist, but that seems very unlikely.
I seem to recall that he terminally cares about all mathematically possible universes, not just his own, to the point that he won’t bother having children because there’s some other universe where they exist anyway.
I think that violates the crap out of Egan’s Law (such an argument could potentially apply to lots of other things), but given that he seems to be otherwise relatively sane, I conclude that he just hasn’t fully thought it through (“decompartmentalized” in LW lingo) (probability 5%), that’s not his true rejection of the idea of having kids (30%), or I am missing something (65%).
That is not the reason or even a reason why I’m not having kids at the moment. And since I don’t particularly want to discourage other people from having children, I decline to discuss my own reasons publicly (or in the vicinity of anyone else who wants kids).
I feel that I should. It’s a politically inconvenient stance to take, since all human cultures are based on reproducing themselves; antinatal cultures literally die out.
But from a human perspective, this world is deeply flawed. To create a life is to gamble with the outcome of that life. And it seems to be a gratuitous gamble.
That sounds sufficiently ominous that I’m not quite sure I want kids any more.
Shouldn’t you be taking into account that I don’t want to discourage other people from having kids?
That might just be because you eat babies.
Unfortunately, that seems to be a malleable argument. Which way your statement (that you don’t want to disclose your reasons for not wanting kids) influences an audience will depend heavily on their priors: on how generally valid to any other person this reason might be, and on how self-motivated both the not-wanting-kids and the not-wanting-to-discourage-others could be.
Then again, I might be missing some key pieces of context. No offense intended, but I try to make it a point not to follow your actions and gobble up your words personally, even to the point of mind-imaging a computer-generated mental voice when reading the sequences. I’ve already been burned pretty hard by blindly reaching for a role-model I was too fond of.
But you’re afraid that if you state your reason, it will discourage others from having kids.
All that means is that he is aware of the halo effect. People who have enjoyed or learned from his work will give his reasons undue weight as a consequence, even if they don’t actually apply to them.
Obviously his reason is that he wants to devote his own time and resources to FAI research. Because not everyone is a seed AI programmer, this reason does not apply to almost everyone else. If Eliezer thinks FAI will probably take a few decades (which the evidence seems to indicate he does), then it may well be in the best interest of those rationalists who aren’t themselves FAI researchers to be having kids, so he wouldn’t want to discourage that. (Although I don’t see how just explaining this would discourage anybody whom you’d otherwise want to have kids.)
(I must have misremembered. Sorry)
OK, no prob!
(I do care about everything that exists. I am not particularly certain that all mathematically possible universes exist, or how much they exist if they do. I do expect that our own universe is spatially and in several other ways physically infinite or physically very big. I don’t see this as a good argument against the fun of having children. I do see it as a good counterargument to creating children for the sole purpose of making sure that mindspace is fully explored, or because larger populations of the universe are good qua good. This has nothing to do with the reason I’m not having kids right now.)
I think I care about almost nothing that exists, and that seems like too big a disagreement. It’s fair to assume that I’m the one being irrational, so can you explain to me why one should care about everything?
All righty; I run my utility function over everything that exists. On most of the existing things in the modern universe, it outputs ‘don’t care’, like for dirt. However, so long as a person exists anywhere, in this universe or somewhere else, my utility function cares about them. I have no idea what it means for something to exist, or why some things exist more than others; but our universe is so suspiciously simple and regular relative to all imaginable universes that I’m pretty sure that universes with simple laws or uniform laws exist more than universes with complicated laws with lots of exceptions in them, which is why I don’t expect to sprout wings and fly away. Supposing that all possible universes ‘exist’ with some weighting by simplicity or requirement of uniformity, does not make me feel less fundamentally confused about all this; and therefore I’m not sure that it is true, although it does seem very plausible.
The moral value of imaginary friends?
I notice that I am meta-confused...
Shouldn’t we strongly expect this weighting, by Solomonoff induction?
Probability is not obviously amount of existence.
Allow me to paraphrase him with some of my own thoughts.
Dang, existence, what is that? Can things exist more than other things? In Solomonoff induction we have something that kind of looks like “all possible worlds”, or computable worlds anyway, and they’re each equipped with a little number that discounts them by their complexity. So maybe that’s like existing partially? Tiny worlds exist really strongly, and complex worlds are faint? That...that’s a really weird mental image, and I don’t want to stake very much on its accuracy. I mean, really, what the heck does it mean to be in a world that doesn’t exist very much? I get a mental image of fog or a ghost or something. That’s silly because it needlessly proposes ghosty behavior on top of the world behavior which determines the complexity, so my mental imagery is failing me.
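(The “little number” being gestured at can be made concrete. Under the Solomonoff prior, a world described by a bit-sequence $x$ gets weight

\[
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|} \;\approx\; 2^{-K(x)},
\]

where the sum ranges over programs $p$ that make a universal prefix machine $U$ output something beginning with $x$, and $K(x)$ is the length of the shortest such program. Each extra bit of minimal description halves the weight; whether that number measures “existence”, rather than merely guiding prediction, is exactly what’s in dispute here.)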
So what does it mean for my world to exist less than yours? I know how that numerical discount plays into my decisions, how it lets me select among possible explanations; it’s a very nice and useful little principle. Or at least it’s useful in this world. But maybe I’m thinking that in multiple worlds, in some of which I’m about to find myself with negative six octarine tentacles. So Occam’s razor is useful in … some world. But the fact that it’s useful to me suggests that it says something about reality, maybe even about all those other possible worlds, whatever they are. Right? Maybe? It doesn’t seem like a very big leap to go from “Occam’s razor is useful” to “Occam’s razor is useful because when using it, my beliefs reflect and exploit the structure of reality”, or to “Some worlds exist more than others, the obvious interpretation of what ontological fact is being taken into consideration in the math of Solomonoff induction”.
Wei Dai suggested that maybe prior probabilities are just utilities: simpler universes don’t exist more, we just care about them more, or let our estimation of the consequences of our actions in those worlds steer our decisions more than consequences in other, complex, funny-looking worlds. That’s an almost satisfying explanation, and it would sweep away a lot of my confused questions, but it’s not quite obviously right to me, and that’s the standard I hold myself to. One thing that feels icky about the idea of “degree of existence” actually being “degree of decision importance” is that worlds with logical impossibilities used to have priors of 0 in my model of normative belief. But if priors are utilities, then a thing is a logical impossibility only because I don’t care at all about worlds in which it occurs? And likewise truth depends on my utility function? And there are people in impossible worlds who say that I live in an impossible world because of their utility functions? Graagh, I can’t even hold that belief in my head without squicking. How am I supposed to think about them existing while simultaneously supposing that it’s impossible for them to exist?
Or maybe “a logically impossible event” isn’t meaningful. It sure feels meaningful. It feels like I should even be able to compute logically impossible consequences by looking at a big corpus of mathematical proofs and saying “these two proofs have all the same statements, just in a different order, so they depend on the same facts”, or “these two proofs can be compressed by extracting a common subproof”, or “using dependency-equivalences and commonality of subproofs, we should be able to construct a little directed graph of mathematical facts on which we can then compute Pearlian mutilated-model counterfactuals, like what would be true if 2=3”, in a non-paradoxical way, in a way that treats truth and falsehood and the interdependence of facts as part of the behavior of the reality external to my beliefs and desires.
And I know that sounds confused, and the more I talk the more confused I sound. But not thinking about it doesn’t seem like it’s going to get me closer to the truth either. Aiiiiiiieeee.
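(The proof-graph proposal above can be made at least toy-concrete. A deliberately crude sketch, with a two-fact “corpus” and all names invented for illustration: treat derived facts as deterministic functions of their dependencies, and evaluate a counterfactual by Pearl-style surgery, i.e. pin the intervened fact, sever it from its parents, and recompute everything downstream.)

```python
deps = {                         # derived fact -> facts it depends on
    "2+2=4": ["peano_axioms"],
    "4=2*2": ["2+2=4", "peano_axioms"],
}
exogenous = {"peano_axioms": True}

def value(fact, intervention):
    if fact in intervention:     # surgery: cut edges from parents, pin value
        return intervention[fact]
    if fact in deps:             # derived fact: recompute from its parents
        return all(value(p, intervention) for p in deps[fact])
    return exogenous[fact]       # exogenous fact: read off directly

print(value("4=2*2", {}))                  # True, factually
print(value("4=2*2", {"2+2=4": False}))    # False, under the counterfactual
```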
(Assuming you mean “all imaginable universes with self-aware observers in them”.)
Not completely sure about that; even Conway’s Game of Life is Turing-complete, after all. (But then, it only generates self-aware observers under very complicated starting conditions. We should sum the complexity of the rules and the complexity of the starting conditions, and if we trust Penrose and Hawking on this, the starting conditions of this universe were terrifically simple.)
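(To illustrate how little the rules themselves weigh, a minimal sketch: the complete life-and-death law fits in a few lines, so nearly all the complexity of any observer-containing configuration has to live in the initial data.)

```python
from collections import Counter

def step(live):
    """One Game of Life step; `live` is a set of (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives iff it has 3 neighbours, or 2 and was already alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # five cells of pure "starting conditions"
print(step(glider))
```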
What do you mean, you don’t care about dirt? I care about dirt! Dirt is where we get most of our food, and humans need food to live. Maybe interstellar hydrogen would be a better example of something you’re indifferent to? 10^17 kg of interstellar hydrogen disappearing would be an inconsequential flicker if we noticed it at all, whereas the loss of an equal mass of arable soil would be an extinction-level event.
I care about the future consequences of dirt, but not the dirt itself.
(For the love of Belldandy, you people...)
He means that he doesn’t care about dirt for its own sake (e.g. like he cares about other sentient beings for their own sakes).
Yes, and I’m arguing that it has instrumental value anyway. A well-thought-out utility function should reflect that sort of thing.
Instrumental values are just subgoals that appear when you form plans to achieve your terminal values. They aren’t supposed to be reflected in your utility function. That is a type error plain and simple.
For agents with bounded computational resources, I’m not sure that’s the case. I don’t terminally value money at all, but I pretend I do as a computational approximation, because it’d be too expensive for me to run an expected-utility calculation over all the things I could possibly buy whenever I’m considering gaining or losing money in exchange for something else.
I thought that was what I just said...
An approximation is not necessarily a type error.
No, but mistaking your approximation for the thing you are approximating is.
That one is. Instrumental values do not go in the utility function. You use instrumental values to shortcut complex utility calculations, but a utility-calculating shortcut != a component of the utility function.
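(A toy sketch of the distinction, with all names and numbers invented for illustration: the utility function scores terminal outcomes only, while “money” appears as a cached heuristic inside plan search, never as a term of U.)

```python
def utility(world):
    # Terminal values only: how the people in `world` are doing.
    return sum(person["wellbeing"] for person in world["people"])

MONEY_HEURISTIC = 0.01  # cached utility-per-dollar estimate, re-derived occasionally

def plan_score(plan):
    # Search-time shortcut: approximates expected utility without pricing
    # every possible purchase. Mistaking this function for `utility` itself
    # is the "type error" described above.
    return plan["expected_wellbeing"] + MONEY_HEURISTIC * plan["money_left"]

print(plan_score({"expected_wellbeing": 3.0, "money_left": 200.0}))  # 5.0
```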
Try tabooing exist: you might find out that you actually disagree on fewer things than you expect. (I strongly suspect that the only real difference between the four possibilities in this is the labels—the way once in a while people come up with new solutions to Einstein’s field equations, only to later find out they were just already-known solutions in an unusual coordinate system.)
I’ve not yet found a good way to do that. Do you have one?
“Be in this universe”(1) vs “be mathematically possible” should cover most cases, though other times it might not quite match either of those and be much harder to explain.
(1) “This universe” being defined as everything that could interact with the speaker, or with something that could interact with the speaker, and so on ad infinitum.
Defining ‘existence’ by using ‘interaction’ (or worse yet the possibility of interaction) seems to me to be trying to define something fundamental by using something non-fundamental.
As for “mathematical possibility”, that’s generally not what most people mean by existence—unless Tegmark IV is proven or assumed to be true, I don’t think we can therefore taboo it in this manner...
I’m not claiming they’re ultimate definitions—after all any definition must be grounded in something else—but at least they disambiguate which meaning is meant, the way “acoustic wave” and “auditory sensation” disambiguate “sound” in the tree-in-a-forest problem. For a real-world example of such a confusion, see this, where people were talking at cross-purposes because by “no explanation exists for X” one meant ‘no explanation for X exists written down anywhere’ and another meant ‘no explanation for X exists in the space of all possible strings’.
Sentences such as “there exist infinitely many prime numbers” don’t sound that unusual to me.
That’s way too complicated (and as for tabooing ‘exist’, I’ll believe it when I see it). Here’s what I mean: I see a dog outside right now. One of the things in that dog is a cup or so of urine. I don’t care about that urine at all. Not one tiny little bit. Heck, I don’t even care about that dog, much less all the other dogs, and the urine that is in them. That’s a lot of things! And I don’t care about any of it. I assume Eliezer doesn’t care about the dog urine in that dog either. It would be weird if he did. But it’s in the ‘everything’ bucket, so...I probably misunderstood him?
So you’re using exist in a sense according to which they have moral relevance iff they exist (or something roughly like that), which may be broader than ‘be in this universe’ but may be narrower than ‘be mathematically possible’. I think I get it now.
I was confused by this for a while, but couldn’t express that in words until now.
First, I think existence is necessarily a binary sort of thing, not something that exists in degrees. If I exist 20%, I don’t even know what that sentence should mean. Do I exist, but only sometimes? Do only parts of me exist at a time? Am I just very skinny? It doesn’t really make sense. Just as a risk of a risk is still a type of risk, so a degree of existence is still a type of existence. There are no sorts of existence except either being real or being fake.
Secondly, even if my first part is wrong, I have no idea why having more existence would translate into having greater value. By way of analogy, if I was the size of a planet but only had a very small brain and motivational center, I don’t think that would mean that I should receive more from utilitarians. It seems like a variation of the Bigger is Better or Might makes Right moral fallacy, rather than a well reasoned idea.
I can imagine a sort of world where every experience is more intense, somehow, and I think people in that sort of world might matter more. But I think intensity is really a measure of relative interactions, and if their world was identical to ours except for its amount of existence, we’d be just as motivated to do different things as they would. I don’t think such a world would exist, or that we could tell whether or not we were in it from-the-inside, so it seems like a meaningless concept.
So the reasoning behind that sentence didn’t really make sense to me. The amount of existence that you have, assuming that’s even a thing, shouldn’t determine your moral value.
I imagine Eliezer is being deliberately imprecise, in accordance with a quote I very much like: “Never speak more clearly than you think.” [The internet seems to attribute this to one Jeremy Bernstein]
If you believe MWI, there are many different worlds that all objectively exist. Does this mean morality is futile, since no matter what we choose, there’s a world where we chose the opposite? Probably not: the different worlds seem to have different “degrees of existence”, in that we are more likely to find ourselves in some than in others. I’m not clear how this can be, but the fact that probability works suggests it pretty strongly. So we can still act morally by trying to maximize the “degree of existence” of good worlds.
This suggests that the idea of a “degree of existence” might not be completely incoherent.
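(For what it’s worth, the “degrees of existence” gestured at here have a standard candidate: Born weights. If the wavefunction assigns branch $i$ amplitude $\alpha_i$, observers find themselves in it with frequency

\[
w_i \;=\; |\alpha_i|^2, \qquad \sum_i w_i = 1,
\]

which is a perfectly well-behaved real number per branch; whether to call it “amount of existence” is the contested step.)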
I suppose you can just attribute it to imprecision, but “I am not particularly certain …how much they exist” implies that he’s talking about a subset of mathematically possible universes that do objectively exist, but yet exist less than other worlds. What you’re talking about, conversely, seems to be that we should create as many good worlds as possible, stretched in order to cover Eliezer’s terminology. Existence is binary, even though there are more of some things that exist than there are of other things. Using “amount of existence” instead of “number of worlds” is unnecessarily confusing, at the least.
Also, I don’t see any problems with infinitarian ethics anyway because I subscribe to (broad) egoism. Things outside of my experience don’t exist in any meaningful sense except as cognitive tools that I use to predict my future experiences. This allows me to distinguish between my own happiness and the happiness of Babykillers, which allows me to utilize a moral system much more in line with my own motivations. It also means that I don’t care about alternate versions of the universe unless I think it’s likely that I’ll fall into one through some sort of interdimensional portal (I don’t).
Although I’ll still err on the side of helping other universes if it does no damage to me, because I think Superrationality can function well in those sorts of situations and I’d like to receive benefits in return; in other scenarios I don’t really care at all.
Congratulations for having “I am missing something” at a high probability!
I was sure I had seen you talk about them in public (on BHTV, I believe), something like (possible misquote) “Lbh fubhyqa’g envfr puvyqera hayrff lbh pna ohvyq bar sebz fpengpu,” which sounded kinda weird, because it applies to literally every human on Earth, and that didn’t seem to be where you were going.
He has said something like that, but always with the caveat that there be an exception for pre-singularity civilizations.
The way I recall it, there was no such caveat in that particular instance. I am not attempting to take him outside of context and I do think I would have remembered. He may have used this every other time he’s said it. It may have been cut for time. And I don’t mean to suggest my memory is anything like perfect.
But: I strongly suspect that’s still on the internet, on BHTV or somewhere else.
Why is that in ROT13? Are you trying to not spoil an underspecified episode of BHTV?
It’s not something Eliezer wanted said publicly. I wasn’t sure what to do, and for some reason I didn’t want to PM or email, so I picked a shitty, irrational half measure instead of just doing the rational thing and PMing/emailing him, or keeping my mouth shut if it really wasn’t worth another ten seconds of thought. I do that sometimes, and I usually know when I’m doing it, like this time, but I can’t always keep myself from doing it.
Tiling the wall with impossible geometry seems reasonable, but from what I recall about the objects in Dumbledore’s room, all the story said was that Hermione kept losing track. Not sure whether artist intent trumps reader interpretation, but at first glance it seems far more likely to me that magic was causing Hermione to be confused than that magic was causing mathematical impossibilities.
The problem with using such logical impossibilities is that you have to make sure they’re really impossible. For example, tiling a corridor with pentagons is completely viable in non-Euclidean space. So, sorry to break it to you, but if there’s a multiverse, your story is real in it.
I’m curious though, is there anything in there that would even count as this level of logically impossible? Can anyone remember one?
“She heard Harry sigh, and after that they walked in silence for a while, passing through an archway of some reddish metal like copper, into a corridor that was just like the one they’d left except that it was tiled in pentagons instead of squares.”

Anyway, I’ve decided that, when not talking about mathematics, real, exist, happen, etc. are deictic terms which specifically refer to the particular universe the speaker is in. Using real to apply to everything in Tegmark’s multiverse fails Egan’s Law IMO. See also: the last chapter of Good and Real.
Of course, universes including stories extremely similar to HPMOR except that the corridor is tiled in hexagons etc. do ‘exist’ ‘somewhere’. (EDIT: hadn’t noticed the same point had already been made. OK, I’ll never again reply to comments in “Top Comments” without reading the existing replies first—if I remember not to.)
And they aren’t even regular pentagons! So, it’s all real then...
Or at least… the story could not be real in a universe unless at least portions of the universe could serve as a model for hyperbolic geometry and… hmm, I don’t think non-standard arithmetic will get you “Exists.N (N != N)”, but reading literally here, you didn’t say they were the same as such, merely that the operations of “addition” or “subtraction” were not used on them.
Now I’m curious about mentions of arithmetic operations and motion through space in the rest of the story. Harry implicitly references orbital mechanics I think… I’m not even sure if orbits are stable in hyperbolic 3-space… And there’s definitely counting of gold in the first few chapters, but I didn’t track arithmetic to see if prices and total made sense… Hmm. Evil :-P
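(On the “Exists.N (N != N)” point: that seems right, and it doesn’t depend on which arithmetic you pick. Reflexivity of equality is valid in every first-order structure, so

\[
\forall n.\; n = n \quad\text{holds in all models,}\qquad\text{hence}\qquad \exists N.\; N \neq N \ \text{is unsatisfiable,}
\]

in non-standard models of arithmetic just as in the standard one.)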
Increasing objects without adding:
Viruses are technically considered non-living, and if you happen to have a pet with a cold, there may well be more viruses in the room when you enter it the second time, even though nothing has left or entered the room. I know that’s a triviality, but some part of my mind took this as a challenge.
More Ways:
Place 100 strings into a large vat of sugar solution. Come back to discover that 100 rock candies have formed. Want to argue that the number of rock candies will equal the number of strings? Okay, make the strings really brittle in multiple places so as the rock candies grow heavier, they break off into smaller chunks.
Balance a delicate lego construction on an unstable surface with a loud woofer in the room. That’s likely to turn from 1 lego object into hundreds of lego objects.
You could disguise a factory inside the room and have it turn a bucket of dense material into many less dense and therefore much larger objects, making the room appear empty at first and full later on. It could produce balloons from a block of latex for instance.
Place a set of ice cubes in a bucket. Twelve ice cubes become one bucket of water. Fifty ice sculptures can become one indoor pool.
Using concepts like reproduction, deconstruction, production, liquefying and crystallization, how much might one be able to really confuse a person with pranks designed to make it appear as though objects have entered or left a room?
I haven’t read HPMoR, and I certainly haven’t read the specific scene(s) in question, but inferring from what I expect Eliezer would have wanted to write in such a situation, I’m going with the prior assumption that that’s not at all what he meant.
Consider this scenario for perspective:
There are ten objects in an otherwise utterly empty, blank cubic room with white walls and a doorknob to open a panel of one wall. You can see every object from every point in the room (unless you’re really tiny and hide behind one of the objects). You know exactly which objects there are, what they are, and what they do. You count them. There are nine objects. What?! You double-check. You still know all the ten objects, and all twelve of them are still there. They add up to twelve when counted. Wait, what’s that? Weren’t there ten at first? No, you’re sure, you just counted them, you’re positive all objects are there, and there are six of them. Oh well, let’s just leave and do something more productive.
Basically, it’s not about the number of objects being different. It’s that the laws of counting themselves stop functioning altogether, such that the very same objects add up to a different number of objects each time they are counted. It’s the ridiculous silliness of logical impossibility.
This is what was intended, but my first (and second and third) guess would be that my brain has been compromised, not that reality has broken.
Same here. On the third attempt, I’d just tell myself “OK, I clearly need to go to bed now.”