I think I care about almost nothing that exists, and that seems like too big a disagreement. It’s fair to assume that I’m the one being irrational, so can you explain to me why one should care about everything?
All righty; I run my utility function over everything that exists. On most of the existing things in the modern universe, it outputs ‘don’t care’, like for dirt. However, so long as a person exists anywhere, in this universe or somewhere else, my utility function cares about them. I have no idea what it means for something to exist, or why some things exist more than others; but our universe is so suspiciously simple and regular relative to all imaginable universes that I’m pretty sure that universes with simple laws or uniform laws exist more than universes with complicated laws with lots of exceptions in them, which is why I don’t expect to sprout wings and fly away. Supposing that all possible universes ‘exist’ with some weighting by simplicity or requirement of uniformity, does not make me feel less fundamentally confused about all this; and therefore I’m not sure that it is true, although it does seem very plausible.
The moral value of imaginary friends?
I notice that I am meta-confused...
Supposing that all possible universes ‘exist’ with some weighting by simplicity or requirement of uniformity, does not make me feel less fundamentally confused about all this;
Shouldn’t we strongly expect this weighting, by Solomonoff induction?
Probability is not obviously amount of existence.
Allow me to paraphrase him with some of my own thoughts.
Dang, existence, what is that? Can things exist more than other things? In Solomonoff induction we have something that kind of looks like “all possible worlds”, or computable worlds anyway, and they’re each equipped with a little number that discounts them by their complexity. So maybe that’s like existing partially? Tiny worlds exist really strongly, and complex worlds are faint? That...that’s a really weird mental image, and I don’t want to stake very much on its accuracy. I mean, really, what the heck does it mean to be in a world that doesn’t exist very much? I get a mental image of fog or a ghost or something. That’s silly because it needlessly proposes ghosty behavior on top of the world behavior which determines the complexity, so my mental imagery is failing me.
So what does it mean for my world to exist less than yours? I know how that numerical discount plays into my decisions, how it lets me select among possible explanations; it’s a very nice and useful little principle. Or at least it’s useful in this world. But maybe I’m thinking that in multiple worlds, in some of which I’m about to find myself with negative six octarine tentacles. So Occam’s razor is useful in … some world. But the fact that it’s useful to me suggests that it says something about reality, maybe even about all those other possible worlds, whatever they are. Right? Maybe? It doesn’t seem like a very big leap to go from “Occam’s razor is useful” to “Occam’s razor is useful because when using it, my beliefs reflect and exploit the structure of reality”, or to “Some worlds exist more than others, which is the obvious interpretation of what ontological fact is being taken into consideration in the math of Solomonoff induction”.
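(For concreteness, the “little number” under discussion is, roughly, the weight the Solomonoff prior gives a computable world; as a sketch, something like

$$M(x)\;=\;\sum_{p\,:\,U(p)\text{ outputs }x} 2^{-|p|},$$

where $U$ is a fixed universal machine, $p$ ranges over programs, i.e. candidate worlds, and $|p|$ is a program’s length in bits. A world whose shortest description is $k$ bits longer than another’s gets roughly $2^{-k}$ times the weight. Whether that number measures how much a world exists, how strongly to expect to find yourself in it, or how much to care about it is exactly what is being chewed on in this thread.)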
Wei Dai suggested that maybe prior probabilities are just utilities: simpler universes don’t exist more, we just care about them more, or let our estimation of the consequences of our actions in those worlds steer our decisions more than consequences in other, complex, funny-looking worlds. That’s an almost satisfying explanation; it would sweep away a lot of my confused questions, but it’s not quite obviously right to me, and that’s the standard I hold myself to. One thing that feels icky about the idea of “degree of existence” actually being “degree of decision importance” is that worlds with logical impossibilities used to have priors of 0 in my model of normative belief. But if priors are utilities, then a thing is a logical impossibility only because I don’t care at all about worlds in which it occurs? And likewise truth depends on my utility function? And there are people in impossible worlds who say that I live in an impossible world because of their utility functions? Graagh, I can’t even hold that belief in my head without squicking. How am I supposed to think about them existing while simultaneously supposing that it’s impossible for them to exist?
Or maybe “a logically impossible event” isn’t meaningful. It sure feels meaningful. It feels like I should even be able to compute logically impossible consequences by looking at a big corpus of mathematical proofs and saying “These two proofs have all the same statements, just in a different order, so they depend on the same facts”, or “these two proofs can be compressed by extracting a common subproof”, or “using dependency-equivalences and commonality of subproofs, we should be able to construct a little directed graph of mathematical facts on which we can then compute Pearlian mutilated-model counterfactuals, like what would be true if 2=3”, in a non-paradoxical way, in a way that treats truth and falsehood and the interdependence of facts as part of the behavior of the reality external to my beliefs and desires.
And I know that sounds confused, and the more I talk the more confused I sound. But not thinking about it doesn’t seem like it’s going to get me closer to the truth either. Aiiiiiiieeee.
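Purely as a sketch of what the directed-graph-of-facts idea above might look like mechanically (the facts, dependencies, and values below are all invented for illustration, not mined from any proof corpus), here is a tiny Pearl-style mutilated-model counterfactual: every fact’s truth value is a function of its parents, and an intervention fixes a fact’s value and severs its incoming edges before re-propagating downstream.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each "fact" node: (parent facts, function from parent values to this value).
# The dependency structure is a hand-written stand-in for what would, on the
# real proposal, be mined from a corpus of proofs.
FACTS = {
    "2=3":                    ([], lambda: False),
    "0=1":                    (["2=3"], lambda two_eq_three: two_eq_three),  # subtract 2 from both sides
    "arithmetic_consistent":  (["0=1"], lambda zero_eq_one: not zero_eq_one),
    "infinitely_many_primes": ([], lambda: True),
}

def evaluate(facts, interventions=None):
    """Compute every fact's value; an intervened fact ignores its parents
    (Pearl's 'mutilated model': incoming edges are cut)."""
    interventions = interventions or {}
    graph = {name: set(parents) for name, (parents, _) in facts.items()}
    values = {}
    for name in TopologicalSorter(graph).static_order():
        if name in interventions:
            values[name] = interventions[name]
        else:
            parents, fn = facts[name]
            values[name] = fn(*(values[p] for p in parents))
    return values

print(evaluate(FACTS))                 # the actual mathematical "world"
print(evaluate(FACTS, {"2=3": True}))  # counterfactual: what would follow if 2=3?
```

Whether a graph like this, mined from real proofs, would make “what would be true if 2=3” genuinely non-paradoxical is exactly the open question in the comment above; the sketch only shows the mechanics.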
our universe is so suspiciously simple and regular relative to all imaginable universes
(Assuming you mean “all imaginable universes with self-aware observers in them”.)
Not completely sure about that; even Conway’s Game of Life is Turing-complete, after all. (But then, it only generates self-aware observers under very complicated starting conditions. We should sum the complexity of the rules and the complexity of the starting conditions, and if we trust Penrose and Hawking about this, the starting conditions of this universe were terrifically simple.)
On most of the existing things in the modern universe, it outputs ‘don’t care’, like for dirt.
What do you mean, you don’t care about dirt? I care about dirt! Dirt is where we get most of our food, and humans need food to live. Maybe interstellar hydrogen would be a better example of something you’re indifferent to? 10^17 kg of interstellar hydrogen disappearing would be an inconsequential flicker if we noticed it at all, whereas the loss of an equal mass of arable soil would be an extinction-level event.
I care about the future consequences of dirt, but not the dirt itself.
(For the love of Belldandy, you people...)
He means that he doesn’t care about dirt for its own sake (e.g. like he cares about other sentient beings for their own sakes).
Yes, and I’m arguing that it has instrumental value anyway. A well-thought-out utility function should reflect that sort of thing.
Instrumental values are just subgoals that appear when you form plans to achieve your terminal values. They aren’t supposed to be reflected in your utility function. That is a type error plain and simple.
For agents with bounded computational resources, I’m not sure that’s the case. I don’t terminally value money at all, but I pretend I do as a computational approximation, because it’d be too expensive for me to run an expected utility calculation over all the things I could possibly buy whenever I’m considering gaining or losing money in exchange for something else.
I thought that was what I just said...
An approximation is not necessarily a type error.
No, but mistaking your approximation for the thing you are approximating is.
That one is. Instrumental values do not go in the utility function. You use instrumental values to shortcut complex utility calculations, but a utility-calculation shortcut != a component of the utility function.
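A toy sketch of the shortcut described a couple of comments up (the goods, prices, and utility numbers are all made up): a bounded agent calibrates a fixed “utility per dollar” once and reuses it, instead of re-running an expected-utility calculation over everything the money could buy, and the cached number approximates the full calculation without money ever appearing as a terminal value.

```python
# Hypothetical goods the agent terminally cares about, and their prices.
TERMINAL_UTILITY = {"meal": 10.0, "book": 15.0, "concert_ticket": 40.0}
PRICE = {"meal": 5.0, "book": 12.0, "concert_ticket": 50.0}

def full_utility_of_wealth(dollars: float) -> float:
    """The expensive calculation: value money only via the terminal goods it
    can buy (greedy by utility-per-dollar, good enough for this toy)."""
    utility, remaining = 0.0, dollars
    for good in sorted(PRICE, key=lambda g: TERMINAL_UTILITY[g] / PRICE[g], reverse=True):
        while remaining >= PRICE[good]:
            utility += TERMINAL_UTILITY[good]
            remaining -= PRICE[good]
    return utility

# The shortcut: calibrate an exchange rate once, then pretend dollars carry
# utility directly. Money is not terminally valued; this is a cached proxy.
UTILITY_PER_DOLLAR = full_utility_of_wealth(100.0) / 100.0

def shortcut_utility_of_wealth(dollars: float) -> float:
    return UTILITY_PER_DOLLAR * dollars

for wealth in (23.0, 50.0, 100.0):
    print(wealth, full_utility_of_wealth(wealth), shortcut_utility_of_wealth(wealth))
```

The shortcut is only an approximation (it overvalues amounts that don’t divide neatly into purchasable goods), which is the sense in which it is a cache over the real utility calculation rather than a component of the utility function.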
Try tabooing exist: you might find out that you actually disagree on fewer things than you expect. (I strongly suspect that the only real difference between the four possibilities in this is labels, the way once in a while people come up with new solutions to Einstein’s field equations only to later find out they were just already-known solutions with an unusual coordinate system.)
I’ve not yet found a good way to do that. Do you have one?
“Be in this universe”(1) vs “be mathematically possible” should cover most cases, though other times it might not quite match either of those and be much harder to explain.
(1) “This universe” being defined as everything that could interact with the speaker, or with something that could interact with the speaker, etc. ad infinitum.
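(Spelled out a touch more formally, as a sketch of the footnote above: write $x \mathrel{R} y$ for “$x$ could interact with $y$”; then $\mathrm{ThisUniverse}(s) = \{\, x : s \mathrel{R^{*}} x \,\}$, the set of everything reachable from the speaker $s$ under the reflexive-transitive closure $R^{*}$ of the interaction relation.)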
Defining ‘existence’ by using ‘interaction’ (or worse yet the possibility of interaction) seems to me to be trying to define something fundamental by using something non-fundamental.
As for “mathematical possibility”, that’s generally not what most people mean by existence—unless Tegmark IV is proven or assumed to be true, I don’t think we can therefore taboo it in this manner...
I’m not claiming they’re ultimate definitions—after all any definition must be grounded in something else—but at least they disambiguate which meaning is meant, the way “acoustic wave” and “auditory sensation” disambiguate “sound” in the tree-in-a-forest problem. For a real-world example of such a confusion, see this, where people were talking at cross-purposes because by “no explanation exists for X” one meant ‘no explanation for X exists written down anywhere’ and another meant ‘no explanation for X exists in the space of all possible strings’.
Sentences such as “there exist infinitely many prime numbers” don’t sound that unusual to me.
Try tabooing exist: you might find out that you actually disagree on fewer things than you expect.
That’s way too complicated (and as for tabooing ‘exist’, I’ll believe it when I see it). Here’s what I mean: I see a dog outside right now. One of the things in that dog is a cup or so of urine. I don’t care about that urine at all. Not one tiny little bit. Heck, I don’t even care about that dog, much less all the other dogs, and the urine that is in them. That’s a lot of things! And I don’t care about any of it. I assume Eliezer doesn’t care about the dog urine in that dog either. It would be weird if he did. But it’s in the ‘everything’ bucket, so...I probably misunderstood him?