how do we reconcile the ideas that 1) most imaginable expected utility maximizers would drive humanity to extinction, and 2) we humans are still alive, even though every single physical system in existence is mathematically equivalent to some class of flawless expected utility maximizers?
First of all, if everything is mathematically equivalent to an EU maximizer, then saying that something is an EU maximizer no longer represents meaningful knowledge, since it no longer distinguishes between fiction and reality. As Eliezer once beautifully put it:
Your strength as a rationalist is your ability to be more confused by fiction than by reality; if you are equally good at explaining any outcome you have zero knowledge. The strength of a model is not what it can explain, but what it can’t, for only prohibitions constrain anticipation. If you don’t notice when your model makes the evidence unlikely, you might as well have no model, and also you might as well have no evidence; no brain and no eyes.
So if you are capable of taking any self-consistent physical description of a system, whether it actually conforms to reality or not, and saying that this can be modeled as an EU maximizer set-up, then the fact that something is an EU maximizer no longer constrains expectations by prohibiting certain world-states. Since we do actually have certain expectations related to EU maximizers, this suggests that the discussion you are starting here is primarily semantic and not substantive (as you are using the concept of EU maximizers in a manner that John Wentworth has described as “confused”).
With this in mind, the resolution to the apparent tension between the two statements you included in the section I quoted at the top of this comment is that it[1] implicitly relies on the use of a counting argument without properly considering what compact manifold it arises out of. Put differently, we already know that humans are still alive, which screens off other considerations that are not as narrowly tailored to this fact. When you condition on any piece of knowledge, you explicitly change the probabilities of all other events in a manner that makes them compatible with what you are conditioning. It could not have been any other way. So the space of “imaginable EU maximizers”, after you condition, does not get a uniform subjective probability distribution or anything close to it, but rather one that puts mass only (or, at least, primarily) on those hypotheses that ensure humans would survive until now (since we know that we, in fact, have).
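As a minimal sketch of this conditioning effect (all hypothesis names and numbers here are invented purely for illustration, not taken from the post):

```python
# Toy Bayesian update: condition a prior over "imaginable EU maximizers"
# on the observation that humans are still alive. All numbers invented
# for illustration.

priors = {
    "hostile_to_humans":   0.45,
    "indifferent":         0.45,
    "survival_compatible": 0.10,
}

# P(humans alive today | hypothesis) -- illustrative likelihoods.
likelihoods = {
    "hostile_to_humans":   0.001,
    "indifferent":         0.01,
    "survival_compatible": 0.99,
}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

# Nearly all posterior mass lands on the survival-compatible hypotheses,
# even though the prior put only 10% there. It could not have been any
# other way: conditioning reshapes every other probability to fit.
print(posteriors)
```

[1] By which I am referring to “the feeling that the statements are contradictory,” not “your post.”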
First of all, if everything is mathematically equivalent to an EU maximizer, then saying that something is an EU maximizer no longer represents meaningful knowledge, since it no longer distinguishes between fiction and reality.
I’m confused about your claim. For example, I can model (nearly?) everything with quantum mechanics, so then does calling something a quantum mechanical system not confer meaningful knowledge?
There are physical models which are not based on Quantum Mechanics, and are in fact incompatible with it. For example, to a physicist in the 19th century, a world that functioned on the basis of (very slight modifications of) Newtonian Mechanics and Classical E&M would have seemed very plausible.
The fact that reality turned out not to be this way does not imply the physical theory was internally inconsistent, but rather that it was incompatible with the empirical observations that eventually led to the creation of QM. So the point is that you cannot actually model nearly everything in conceptspace with QM; it’s just that reality turns out to be well-approximated by it, while (realistic) fiction like Newtonian Mechanics is not (for example, at the atomic and subatomic levels).
This is what makes calling something a QM system an example of meaningful knowledge: the theory approximates reality better than it approximates things that are not real, which is exactly the point of Your Strength as a Rationalist. By contrast, whatever story I give you, true or not, can be viewed as flowing from the Texas Sharpshooter Utility Function in exactly the same way that you said reality does:
All you have to do is define a utility function which, at time T, takes in all the relevant context within and around a given physical system, and assigns the highest expected utility to whatever actions that system actually takes to produce its state at time T+1.
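As a minimal sketch of that construction (all names here are purely illustrative), the “utility function” can only ever be written down after the behavior is observed:

```python
# The Texas Sharpshooter construction: observe what the system did at
# time T, then define a utility function that rates exactly that action
# as maximal. Purely illustrative code.

def paint_target(observed_action):
    """Return a 'utility function' drawn around the observed action."""
    def utility(action):
        return 1.0 if action == observed_action else 0.0
    return utility

# Any behavior whatsoever comes out as "expected utility maximization":
for observed in ["turn_left", "emit_photon", "do_nothing"]:
    u = paint_target(observed)
    assert u(observed) == max(map(u, ["turn_left", "emit_photon", "do_nothing"]))

# The catch: u exists only after the fact, so it prohibits no world-state
# and predicts nothing about time T+1.
```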
So the fact that you “know” something is an EU maximizer, under OP’s definition of that term (which, as I mentioned above, is confused and confusing), does not constrain your expectations in any meaningful way, because it does not rule out any future world-states: both true and false predictions are equally compatible with the EU process, as described.
By contrast, knowing something follows QM principles does constrain expectations significantly, as we can design self-consistent models and imagined future world-states which do not follow it (as I mentioned above). For example, the quantization of energy levels, the photoelectric effect, quantum tunneling, quantum entanglement, the anomalous magnetic moment of the electron, specific predictions about the spectra of atoms and molecules, etc., are all predictions given directly by QM; as such, the theory invalidates world-states in which we design proper experiments that do not find all of these.
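To make one such prohibition concrete (this is a standard textbook result, not anything specific to this thread), the particle-in-a-box spectrum permits only discrete energies:

```python
# Particle in a 1-D box: QM allows only E_n = n^2 h^2 / (8 m L^2).
# A proper experiment finding an energy strictly between E_1 and E_2
# would be a world-state the theory rules out.

H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg

def box_energy(n, box_length):
    """Energy of level n for an electron in a 1-D box of the given length (m)."""
    return n**2 * H**2 / (8 * M_E * box_length**2)

levels = [box_energy(n, 1e-9) for n in (1, 2, 3)]  # a 1 nm box
print(levels)  # discrete levels ~1e-19 J apart; everything in between is forbidden
```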
But there is no articulable future world-state which is ruled out by OP’s conception of EU maximization.
For example, I can model (nearly?) everything with quantum mechanics, so then does calling something a quantum mechanical system not confer meaningful knowledge?
Actually no. Quantum mechanics is pretty well established, and we may suppose that it describes everything (at least, in low gravitational fields). Given that, pointing at a thing and saying “quantum mechanics!” adds no new information. That is not a model. An actual model would allow making predictions about the thing, or at least calculating (not merely fitting to) known properties. There aren’t all that many systems we can do that for. The successes of quantum mechanics, which are many, are found in the systems simple enough that we can.
Quantum mechanics is pretty well established, and we may suppose that it describes everything (at least, in low gravitational fields). Given that, pointing at a thing and saying “quantum mechanics!” adds no new information.
Are you making this argument?
P1: Quantum mechanics is well established.
P2: Quantum mechanics describes everything in low gravitational fields.
C1: So, calling a thing a “quantum system” doesn’t convey any information.
I wouldn’t state P1 and P2 as dogmatically as that, but rounding the uncertainty off to zero, yes. If everything is known to be described by quantum mechanics, pointing at something and saying “this is described by quantum mechanics” adds no new information.
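In information-theoretic terms (a trivial sketch): a statement that holds with probability 1 carries zero bits.

```python
import math

# Information content (surprisal) of learning a claim with prior probability p.
def surprisal_bits(p):
    return 0.0 if p == 1.0 else -math.log2(p)

print(surprisal_bits(1.0))   # 0.0 bits: "this thing obeys QM", given P1 and P2
print(surprisal_bits(0.25))  # 2.0 bits: a claim that actually ruled things out
```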
(I endorse sunwillrise’s comment as a general response to this post; it’s an unusually excellent comment. This comment is just me harping on a pet peeve of mine.)
So, within the ratosphere, it’s well-known that every physical object or set of objects is mathematically equivalent to some expected utility maximizer
This is a wildly misleading idea which refuses to die.
As a meme within the ratosphere, the usual source cited is this old post by Rohin, which has a section titled “All behavior can be rationalized as EU maximization”. When I complained to Rohin that “All behavior can be rationalized as EU maximization” was wildly misleading, he replied:
I tried to be clear that my argument was “you need more assumptions beyond just coherence arguments on universe-histories; if you have literally no other assumptions then all behavior can be rationalized as EU maximization”. I think the phrase “all behavior can be rationalized as EU maximization” or something like it was basically necessary to get across the argument that I was making. I agree that taken in isolation it is misleading; I don’t really see what I could have done differently to prevent there from being something that in isolation was misleading, while still being able to point out the-thing-that-I-believe-is-fallacious. Nuance is hard.
Point is: even the guy who’s usually cited on this (at least on LW) agrees it’s misleading.
Why is it misleading? Because coherence arguments do, in fact, involve a notion of “utility maximization” narrower than just a system’s behavior maximizing some function of universe-trajectory. There are substantive notions of “utility maximizer”, those notions are a decent match to our intuitions in many ways, and they involve more than just behavior maximizing some function of universe-trajectory. When we talk about “utility maximizers” in a substantive sense, we’re talking about a phenomenon which is narrower than behavior maximizing some function of universe-trajectory.
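One toy way to see the narrowness (my illustration, not the construction from the linked post): a substantive utility maximizer must at least have acyclic preferences over outcomes, and not all behavior passes even that test.

```python
from collections import defaultdict

# A substantive "utility maximizer" needs, at minimum, a consistent
# ordering over outcomes. Revealed preferences containing a cycle admit
# no utility function over outcomes -- even though the agent's trajectory
# still trivially "maximizes" some function of universe-histories.

def has_preference_cycle(pairs):
    """pairs: iterable of (preferred, dispreferred) outcome pairs."""
    graph = defaultdict(list)
    nodes = set()
    for better, worse in pairs:
        graph[better].append(worse)
        nodes.update((better, worse))

    visiting, done = set(), set()

    def dfs(node):
        visiting.add(node)
        for nxt in graph[node]:
            if nxt in visiting or (nxt not in done and dfs(nxt)):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(n) for n in nodes if n not in done)

# Money-pumpable: A > B, B > C, C > A -- no utility function fits.
print(has_preference_cycle([("A", "B"), ("B", "C"), ("C", "A")]))  # True
# Coherent: u(A) > u(B) > u(C) works.
print(has_preference_cycle([("A", "B"), ("B", "C"), ("A", "C")]))  # False
```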
If you want to see a notion of “utility maximizer” which is nontrivial, Coherence of Caches and Agents gives IMO a pretty illustrative and simple example.
This idea, the Texas Sharpshooter Utility Function, which looks at what happens and then paints the value 1 on it, is a surprisingly recurrent one on LW. But it does not work. It does not allow of any predictions. First you must see what happens; only then can you paint the target. Its present is uncomputable from its past.
The vast majority of planet-sized configurations of atoms are inimical to life, but here we are. Threats such as climate change, colliding asteroids, supervolcanoes, and AGI are not to be assessed by speculating on the generality of planet-sized configurations, but by considering the actually possible futures of the configuration we find ourselves in.
While I do agree with the general sentiment behind what you are saying (we ought to take to heart the virtue of narrowness), your comment here gives me the impression that you do not think very highly of the relevance (to P(doom), for example) of considerations based on anthropics, the doomsday argument, and other related ideas. Is this correct, or am I misreading you?
Those concepts had not occurred to me in the present context, but no, in general I don’t take anthropics or the doomsday argument seriously. Don’t expect an argument from me to that effect: they just feel obviously wrong and I find them irritating. I’ve read some of the arguments around them, and it is clear that there is not currently a consensus, so I ignore them.
I agree with sunwillrise’s comment.