Bayesian Probability is for things that are Space-like Separated from You
First, I should explain what I mean by space-like separated from you. Imagine a world that looks like a Bayesian network, and imagine that you are a node in that Bayesian network. If there is a path from you to another node following edges in the network, I will say that node is time-like separated from you, and in your future. If there is a path from another node to you, I will say that node is time-like separated from you, and in your past. Otherwise, I will say that the node is space-like separated from you.
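To make the definition concrete, here is a minimal sketch (my own addition, not from the post) that classifies the nodes of a toy directed graph as logical past, logical future, or space-like separated from a designated "you" node using ordinary graph reachability. The node names and the use of networkx are illustrative choices, not anything the post specifies.

```python
# Classify nodes of a toy "logical-time" graph relative to "you".
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("upbringing", "you"),               # has a path into you: time-like, in your past
    ("you", "tomorrow's essay"),         # you have a path into it: time-like, in your future
    ("alien's lunch", "alien's mood"),   # no path either way: space-like separated
])

past = nx.ancestors(g, "you")            # nodes with a directed path to "you"
future = nx.descendants(g, "you")        # nodes reachable from "you"
spacelike = set(g.nodes) - past - future - {"you"}

print(past)       # {'upbringing'}
print(future)     # {"tomorrow's essay"}
print(spacelike)  # {"alien's lunch", "alien's mood"}
```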
Nodes in your past can be thought of as things that you observe. When you think about physics, it sure does seem like there are a lot of things in your past that you do not observe, but I am not thinking about physics-time, I am thinking about logical-time. If something is in your past, but has no effect on what algorithm you are running on what observations you get, then it might as well be considered as space-like separated from you. If you compute how everything in the universe evaluates, the space-like separated things are the things that can be evaluated either before or after you, since their output does not change yours or vice-versa. If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn’t, and say that the part you observed is in your past, while the part you didn’t observe is space-like separated from you. (Whether or not you actually can decompose things like this is complicated, and related to whether or not you can use the tickle defense in the smoking lesion problem.)
Nodes in your future can be thought of as things that you control. These are not always things that you want to control. For example, you control the output of “You assign probability less than 1⁄2 to this sentence,” but perhaps you wish you didn’t. Again, if you partially control a fact, I want to say that (maybe) you can break that fact into multiple nodes, some of which you control, and some of which you don’t.
So, you know the things in your past, so there is no need for probability there. You don’t know the things in your future, or things that are space-like separated from you. (Maybe. I’m not sure that talking about knowing things you control is not just a type error.) You may have cached that you should use Bayesian probability to deal with things you are uncertain about. You may have justified this with the fact that if you don’t use Bayesian probability, there is a Pareto improvement that will cause you to predict better in all worlds. The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them! Therefore, our reasons for liking Bayesian probability do not apply to our uncertainty about the things that are in our future! Note that many things in our future (like our future observations) are also in the future of things that are space-like separated from us, so we want to use Bayes to reason about those things in order to have better beliefs about our observations.
I claim that logical inductors do not feel entirely Bayesian, and this might be why. They can’t be, if they are able to think about sentences like “You assign probability less than 1⁄2 to this sentence.”
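To see concretely why the usual justifications break for such sentences, here is a small illustration I am adding (not Garrabrant's; the threshold check is just the sentence taken literally): the truth value of the sentence is a function of your own reported probability, and no report is consistent with the truth value it induces.

```python
# "You assign probability less than 1/2 to this sentence": the truth value is a
# function of your own report, so the usual "just report your credence" story
# has nothing to latch onto -- no report agrees with the truth it brings about.

def truth_given_report(p: float) -> float:
    """Truth value of the sentence, given that you report probability p."""
    return 1.0 if p < 0.5 else 0.0

# Search a grid for a self-consistent report p == truth_given_report(p).
consistent = [p / 100 for p in range(101) if p / 100 == truth_given_report(p / 100)]
print(consistent)  # [] -- there is no self-consistent probability to report
```

For an ordinary fact, by contrast, the truth value does not depend on the report at all, and reporting your actual credence is trivially self-consistent; that independence is the hidden assumption in the standard arguments.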
I want to point out that this is not an esoteric abstract problem but a concrete issue that actual humans face all the time. There’s a large class of propositions whose truth value is heavily affected by how much you believe (and by “believe” I mean “alieve”) them—e.g. propositions about yourself like “I am confident” or even “I am attractive”—and I think the LW zeitgeist doesn’t really engage with this. Your beliefs about yourself express themselves in muscle tension which has real effects on your body, and from there leak out in your body language to affect how other people treat you; you are almost always in the state Harry describes in HPMoR of having your cognition constrained by the direct effects of believing things on the world as opposed to just by the effects of actions you take on the basis of your beliefs.
There’s an amusing tie-in here to one of the standard ways to break the prediction market game we used to play at CFAR workshops. At the beginning we claim “the best strategy is to always write down your true probability at any time,” but the argument that’s supposed to establish this has a hidden assumption that the act of doing so doesn’t affect the situation the prediction market is about, and it’s easy to write down prediction markets violating this assumption, e.g. “the last bet on this prediction market will be under 50%.”
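A toy version of that failure (mine, with made-up numbers): suppose each hypothetical bettor writes down their honest expectation of how the market would resolve given the bet currently in last place. Because each new bet changes the fact the market is about, the "honest" price never settles.

```python
# Market: "the last bet on this prediction market will be under 50%".
# Each bettor honestly reports how the market would resolve given the bet
# currently in last place -- and thereby becomes the new last bet.

last_bet = 0.9                            # arbitrary opening bet
for _ in range(6):
    resolves_true = last_bet < 0.5        # how the market would resolve right now
    honest_next_bet = 1.0 if resolves_true else 0.0
    print(last_bet, "->", honest_next_bet)
    last_bet = honest_next_bet
# 0.9 -> 0.0, 0.0 -> 1.0, 1.0 -> 0.0, ... the honest report keeps flipping,
# because writing it down changes the thing being predicted.
```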
Really? I feel quite the opposite, unless you’re saying we could do still more. I think LW is actually one of the few communities that take this sort of non-dualism/naturalism in arriving at a probabilistic judgement (and all its meta levels) seriously. We’ve been repeatedly exposed to the idea that Newcomblike problems are everywhere, starting a long time ago and then again relatively recently with Simler’s wonderful post on crony beliefs (and now his even more delightful book with Hanson, of course).
ETA: I’m missing quite a few posts that were even older (Wei Dai’s? Drescher’s? yvain had something too IIRC); it’d be nice if someone else who does remember posted them here.
I think your links are a good indication of the way that LW has engaged with a relatively narrow aspect of this, and in a somewhat biased manner. “Crony beliefs” is a good example—starting right from the title, it sets up a dichotomy of “merit beliefs” versus “crony beliefs”, with the not-particularly-subtle connotation of “merit beliefs are this great thing that models reality and in an ideal world we’d only have merit beliefs, but in the real world, we also have to deal with the fact that it’s useful to have crony beliefs for the purpose of manipulating others and securing social alliances”.
Which… yes, that is one aspect of this. But the more general point of the original post is that there are a wide variety of beliefs which are underdetermined by external reality. It’s not that you intentionally hold fake beliefs which are out of alignment with the world; it’s that some beliefs are to some extent self-fulfilling, and their truth value just is whatever you decide to believe in. If your deep-level alief is that “I am confident”, then you will be confident; if your deep-level alief is that “I am unconfident”, then you will be unconfident.
Another way of putting it: what is the truth value of the belief “I will go to the beach this evening”? Well, if I go to the beach this evening, then it is true; if I don’t go to the beach this evening, it’s false. Its truth is determined by the actions of the agent, rather than the environment.
The predictive processing framework could be said to take this even further: it hypothesizes that all action is caused by these kinds of self-fulfilling beliefs; on some level our brain believes that we’ll take an action, and then it ends up fulfilling that prediction.
Now, I’ve mostly been talking about cases where the truth of a belief is determined purely by our choices. But as the OP suggests, there are often complex interplays between the agent and the environment. For instance, if you believe that “I will be admitted to Example University if I study hard enough to get in”, then that belief may become self-fulfilling in that it causes you to study hard enough to get in. But at the same time, you may simply not be good enough, so the truth value of this belief is determined both by whether you believe in it, and by whether you actually are good enough.
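As a toy illustration of that interplay (my own sketch, with made-up numbers): suppose belief drives effort, but the environment contributes a component the belief cannot move. The only calibrated belief is a fixed point of the resulting map, and, outcome-wise, it can be strictly worse than blind confidence.

```python
# Toy model: P(admitted) depends partly on how much you believe you'll get in
# (belief -> studying) and partly on factors outside the belief's control.

def admission_prob(belief: float) -> float:
    base, gain = 0.2, 0.6            # made-up environment parameters
    return base + gain * belief      # belief helps, but only so much

# The only calibrated belief is the fixed point p = admission_prob(p).
p = 0.0                              # start pessimistic and update toward consistency
for _ in range(50):
    p = admission_prob(p)
print(round(p, 3))                   # 0.5 -- believing "50%" makes it 50%

print(round(admission_prob(1.0), 3)) # 0.8 -- blind confidence outperforms any
                                     # calibrated belief in this toy setup
```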
With regard to the thing about confidence; people usually aren’t just confident in general, they are confident about something in particular. I’m much more confident in my ability to write on a keyboard, than I am in my ability to do brain surgery. You could say that my confidence in my ability to do X, is the probability that I assign to doing X successfully.
And it’s often important that I’m not overconfident. Yes, if I’m really confident in my ability to do something, then other people will give me more respect. But the reason why they do that, is that confidence is actually a bit of a costly signal. So far I’ve said that an agent’s decisions determine the truth-values of many beliefs, but it’s also the other way around: the agent’s beliefs determine the agent’s actions. If I believe myself to be really good at brain surgery even when I’m not, I may be able to talk myself into a situation where I’m allowed to do brain surgery, but the result will be a dead patient. And it’s not going to take many dead patients before people realize I’m a fraud and put me in prison. But if I’m completely deluded and firmly believe myself to be a master brain surgeon, that belief will cause me to continue carrying out brain surgeries, even when it would be better from a self-interested perspective to stop doing that.
So there’s a complicated thing where beliefs have several effects: they determine your predictions about the world and they determine your future actions and they determine the subconscious signals that you send to others. You have an interest in being overconfident for the sake of persuading others, and for the sake of getting yourself to do things, but also in being just-appropriately-confident for the sake of being able to predict the consequences of your own future actions better.
An important framing here is “your beliefs determine your actions, so how do you get the beliefs which cause the best actions”. There have been some posts offering tools for belief-modification which had the goal of causing change, but this mostly hasn’t been stated explicitly, and even some of the posts which have offered tools for this (e.g. Nate’s “Dark Arts of Rationality”) have still talked about it being a “Dark Art” thing which is kinda dirty to engage in. Which I think is dangerous, because getting an epistemically correct map is only half of what you need for success, with the “have beliefs which cause you to take the actions that you need to succeed” being the other half that’s just as important to get right. (Except, as noted, they are not two independent things but intertwined with each other in complicated ways.)
Yes, this.
There’s a thing MIRI people talk about, about the distinction between “cartesian” and “naturalized” agents: a cartesian agent is something like AIXI that has a “cartesian boundary” separating itself from the environment, so it can try to have accurate beliefs about the environment, then try to take the best actions on the environment given those beliefs. But a naturalized agent, which is what we actually are and what any AI we build actually is, is part of the environment; there is no cartesian boundary. Among other things this means that the environment is too big to fully model, and it’s much less clear what it even means for the agent to contemplate taking different actions. Scott Garrabrant has said that he does not understand what naturalized agency means; among other things this means we don’t have a toy model that deserves to be called “naturalized AIXI.”
There’s a way in which I think the LW zeitgeist treats humans as cartesian agents, and I think fully internalizing that you’re a naturalized agent looks very different, although my concepts and words around this are still relatively nebulous.
I’m confused by this. Sure, your body has involuntary mechanisms that truthfully signal your beliefs to others. But the only reason these mechanisms could exist is to help your genes! Yours specifically! That means you shouldn’t try to override them when your interests coincide with those of your genes. In particular, you shouldn’t force yourself to believe that you’re attractive. Am I missing something?
And I never said this.
But there’s a thing that can happen when someone else gaslights you into believing that you’re unattractive, which makes it true, and you might be interested in undoing that damage, for example.
It seems pretty easy for such mechanisms to be adapted for maximizing reproduction in the ancestral environment but maladapted for maximizing your preferences in the modern environment.
I think I agree that your point is generally under-considered, especially by the sort of people who compulsively tear down Chesterton’s fences.
What rossry said, but also, why do you expect to be “winning” all arms races here? Genes in other people may have led to the development of meme-hacks that you don’t know are actually giving someone else an edge in a zero-sum game.
In particular, they might call you fat or stupid or incompetent and you might end up believing it.
I’m not trying to be mean here, but this post is completely wrong at all levels. No, Bayesian probability is not just for things that are space-like separated. None of the theorems from which it is derived even refer to time.
This simply is not true. There would be no need of detectives or historical researchers if it were true.
You can say it, but it’s not even approximately true. If someone flips a coin in front of me but covers it up just before it hits the table, I observe that a coin flip has occurred, but not whether it was heads or tails—and that second event is definitely within my past light-cone.
No, I cached nothing. I first spent a considerable amount of time understanding Cox’s Theorem in detail, which derives probability theory as the uniquely determined extension of classical propositional logic to a logic that handles uncertainty. There is some controversy about some of its assumptions, so I later proved and published my own theorem that arrives at the same conclusion (and more) using purely logical assumptions/requirements, all of the form, “our extended logic should retain this existing property of classical propositional logic.”
1) It’s not clear this is really true. It seems to me that any situation that is affected by an agent’s beliefs can be handled within Bayesian probability theory by modeling the agent.
2) So what?
This is a complete non sequitur. Even if I grant your premise, most things in my future are unaffected by my beliefs. The date on which the Sun will expand and engulf the Earth is in no way affected by any of my beliefs. Whether you will get lucky with that woman at the bar next Friday is in no way affected by any of my beliefs. And so on.
I think you are correct that I cannot cleanly separate the things that are in my past that I know and the things that are in my past that I do not know. For example, suppose a probability is chosen uniformly at random from the unit interval, a coin with that probability is then flipped a large number of times, and I see some of the results. I do not know the true probability, but the coin flips that I see really should come after the thing that determines the probability in my Bayes’ net.
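For concreteness, here is that setup in runnable form (a sketch I am adding, not Scott's): with a uniform prior on the bias, the posterior after seeing part of the flip sequence is a Beta distribution whose mean tracks the hidden bias, even though the bias itself sits "before" the observed flips in the network.

```python
# A bias is drawn uniformly from [0, 1], a coin with that bias is flipped many
# times, and only part of the output is observed. With a uniform (Beta(1, 1))
# prior, the posterior over the unseen bias is Beta(1 + heads, 1 + tails).
import random

true_bias = random.random()                      # the hidden node "before" you
flips = [random.random() < true_bias for _ in range(1000)]
seen = flips[:100]                               # you only observe some results

heads = sum(seen)
tails = len(seen) - heads
alpha, beta = 1 + heads, 1 + tails               # Beta posterior parameters
posterior_mean = alpha / (alpha + beta)

print(round(true_bias, 3), round(posterior_mean, 3))  # the mean tracks the hidden bias
```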
[META] As a general heuristic, when you encounter a post from someone otherwise reputable that seems completely nonsensical to you, it may be worth attempting to find some reframing of it that causes it to make sense—or at the very least, make more sense than before—instead of addressing your remarks to the current (nonsensical-seeming) interpretation. The probability that the writer of the post in question managed to completely lose their mind while writing said post is significantly lower than both the probability that you have misinterpreted what they are saying, and the probability that they are saying something non-obvious which requires interpretive effort to be understood. To maximize your chances of getting something useful out of the post, therefore, it is advisable to condition on the possibility that the post is not saying something trivially incorrect, and see where that leads you. This tends to be how mutual understanding is built, and is a good model for how charitable communication works. Your comment, to say the least, was neither.
This is the first thing I’ve read from Scott Garrabrant, so “otherwise reputable” doesn’t apply here. And I have frequently seen things written on LessWrong that display pretty significant misunderstandings of the philosophical basis of Bayesian probability, so that gives me a high prior to expect more of them.
The “nodes in the future” part of this is in part the point I keep trying to make with the rigging/bias and influence posts: https://www.lesswrong.com/posts/b8HauRWrjBdnKEwM5/rigging-is-a-form-of-wireheading
Non-central nit: “So, you know the things in your past, so there is no need for probability there.” Doesn’t seem true.
I suppose you mean the fallibility of memory. I think Garrabrant meant it tautologically though (i.e., as the definition of “past”).
Pretty confident they meant it that way.
One way I might begin to write a similar concept formally is something like this:
An agent’s probability on a topic is “P(V|C)”, where V is some proposition and C represents all conditionals.
There are cases where one of these conditionals will include a statement such as “P(V|C) = f(n)”; that is, one must condition on the output of one’s own total estimate. If this “recursive” conditional influences P(V|C), then the probabilistic assessment is not “space-like separated.”
I generally agree with the main message, and am happy to see it written up, but I see this less as a failure of Bayesian theory than as a rejection of a common misuse of Bayesian theory. I believe I’ve heard a similar argument a few times before and have found it a bit frustrating for this reason. (Of course, I could be factually wrong in my understanding.)
If one were to apply something other than a direct Bayesian update, as could make sense in a more complicated setting, they may as well do so within a process that includes other kinds of Bayesian updates. And the decision process they use to determine the method of updating in these circumstances may well involve Bayesian updates.
I’m not sure how to solve such an equation, though doing it for simple cases seems simple enough. I’ll admit I don’t understand logical induction nearly as well as I would like, and mean to remedy that at some point.
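For what it's worth, here is what doing it for simple cases might look like (my own sketch, with illustrative choices of f): search for estimates p that are consistent with the recursive constraint p = f(p), where f describes how the conditioned-on fact depends on your own estimate.

```python
# Look for self-consistent estimates p = f(p) by brute-force grid search.

def consistent_estimates(f, grid=10_000, tol=1e-6):
    return [p / grid for p in range(grid + 1) if abs(p / grid - f(p / grid)) < tol]

# A "soft" self-reference: the fact gets less likely the more you believe it.
print(consistent_estimates(lambda p: 1 - p))                    # [0.5]

# The post's sentence: true exactly when your estimate is below 1/2.
print(consistent_estimates(lambda p: 1.0 if p < 0.5 else 0.0))  # [] -- no consistent value
```

A logical inductor has to do something more sophisticated with the second kind of case, which I won't attempt here.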
Of course, no actual individual or program is a pure Bayesian. Pure Bayesian updating presumes logical omniscience, after all. Rather, when we talk about Bayesian reasoning we idealize individuals as abstract agents whose choices (potentially none) have a certain probabilistic effect on the world, i.e., we basically idealize the situation as a one-person game.
You basically raise the question of what happens in Newcomb-like cases where we allow the agent’s internal deliberative state to affect outcomes independently of explicit choices made. But the whole model breaks down the moment you do this. It no longer even makes sense to idealize a human as this kind of agent and ask what should be done, because the moment you bring the agent’s internal deliberative state into play it no longer makes sense to idealize the situation as one in which there is a choice to be made. At that point you might as well just shrug and say ‘you’ll choose whatever the laws of physics say you’ll choose.’
Now, one can work around this problem by instead posing a question for a different agent who might idealize a past self: e.g., if I imagine I have a free choice about which belief to commit to having in these sorts of situations, which belief (or belief function) should I presume?
As an aside, I would argue that, while the math is perfectly valid, there is something wrong in advocating for timeless decision theory or any other particular decision theory as the correct way to make choices in these Newcomb-type scenarios. The model of choice-making doesn’t even really make sense in such situations, so any argument over which is the true/correct decision theory must ultimately be a pragmatic one (when we suggest actual people use X versus Y, they do better with X), but that’s never the sense of correctness that is being claimed.
What makes statements you control important?
Why would you wish to assign a different probability to this statement?
It’s a variant of the liar’s paradox. If you say the statement is unlikely, you’re making what it says true while calling it unlikely. If you agree with it, then you clearly don’t assign it probability less than 1⁄2, so it’s false.