Hey, thanks for the response! Yes, I’ve also read about Bayes’ Theorem. However, I’m unconvinced that it is applicable in all the circumstances that I care about. For example, suppose I’m interested in the question “Should I kill lanternflies whenever I can?” That’s not really an objective question about the universe that you could, for example, put on a prediction market. There doesn’t exist a natural function from (states of the universe) to (answers to that question). There’s interpretation involved. Let’s even say that we get some new evidence (my post wasn’t really centered on that context, but still). Suppose I see the news headline “Arkansas Department of Stuff says that you should kill lanternflies whenever you can.” How am I supposed to apply Bayes’ rule in this context? How do I estimate P(I should kill lanternflies whenever I can | Arkansas Department of Stuff says I should kill lanternflies whenever I can)? It would be nice to be able to dismiss these kinds of questions as ill-posed, but in practice I spend a sizeable fraction of my time thinking about them. Am I incorrect here? Is Bayes’ theorem more powerful than I’m realizing?
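Just to illustrate where I get stuck: if I force that update through Bayes’ rule, the arithmetic is trivial; the problem is that every input below (the prior and both likelihoods) is a number I invented out of thin air, and I don’t see what would ground them.

```python
# Toy Bayes update with invented inputs -- the arithmetic is easy;
# the inputs are the part I don't know how to ground.
prior = 0.5                  # P(I should kill lanternflies whenever I can) -- made up
p_headline_if_should = 0.6   # P(Dept. says so | I should)                  -- made up
p_headline_if_not = 0.3      # P(Dept. says so | I shouldn't)               -- made up

# Law of total probability: P(Dept. says so)
p_headline = p_headline_if_should * prior + p_headline_if_not * (1 - prior)

# Bayes' rule: P(I should | Dept. says so)
posterior = p_headline_if_should * prior / p_headline
print(posterior)  # ~0.667 with these made-up numbers
```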
(1) Yeah, I’m intentionally inserting a requirement that’s trivially true. Some claims will make object-level statements that don’t directly impose restrictions on other claims. Since those claims don’t constrain the structure of the argument, they induce trivial clauses in the formula.
(2) Absolutely, you can’t provide concrete predictions about how beliefs will evolve over time. But I think you can still reason statistically. For example, I think it’s valid to ask “You put ten philosophers in a room and ask them whether God exists. At the start, you present them with five questions related to the existence of God and ask them to assign probabilities to combinations of answers to these questions. After seven years, you let the philosophers out and again ask them to assign probabilities to combinations of answers. What is the expected value of the shift (say, the KL divergence) between the original probabilities and the final probabilities?” I obviously cannot hope to predict which direction the beliefs will evolve, but predicting the degree to which we expect them to evolve seems more doable. Even if we’ve updated so that our current probabilities equal the expected value of our future probabilities, we can still ask about the variance of our future probabilities. Is that correct or am I misunderstanding something?
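Here’s a minimal sketch of the kind of statistic I have in mind, with a completely made-up model of how the evidence arrives (a single binary signal on a single yes/no question; the numbers and the setup are assumptions for illustration only):

```python
import math

def bernoulli_kl(p, q):
    """KL(Bernoulli(p) || Bernoulli(q))."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

p = 0.7   # current probability on the question (made up)
a = 0.8   # chance the future evidence points the right way (made up)

# The two possible future posteriors, by Bayes' rule on a binary signal
q_up   = a * p / (a * p + (1 - a) * (1 - p))          # signal agrees
q_down = (1 - a) * p / ((1 - a) * p + a * (1 - p))    # signal disagrees

# Probability of each signal under the current beliefs
pr_up   = a * p + (1 - a) * (1 - p)
pr_down = 1 - pr_up

# Conservation of expected evidence: mean future belief equals current belief
assert abs(pr_up * q_up + pr_down * q_down - p) < 1e-9

# Expected shift (KL from future beliefs to current ones) and variance of future belief
expected_kl = pr_up * bernoulli_kl(q_up, p) + pr_down * bernoulli_kl(q_down, p)
var_future  = pr_up * (q_up - p) ** 2 + pr_down * (q_down - p) ** 2
print(expected_kl, var_future)
```

The assert is exactly the “current probabilities equal the expected value of our future probabilities” condition, and the expected KL and the variance are still perfectly well-defined and nonzero even when it holds.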
Thanks again, by the way!
One way of framing the difficulty with the lanternflies thing is that the question straddles the is-ought gap. It decomposes pretty cleanly into two questions: “What states of the universe are likely to result from me killing vs not killing lanternflies?” (about which Bayes’ Rule fully applies and is enormously useful), and “Which states of the universe do I prefer?”, where the only evidence you have will come from things like introspection about your own moral intuitions and values. Your values are also a fact about the universe, because you are part of the universe, so Bayes still applies I guess, but it’s quite a different question to think about.
If you have well-defined values, for example some function from states (or histories) of the universe to real numbers, such that larger numbers represent universe states that you would always prefer over smaller numbers, then every “should I do X or Y?” question has an answer in terms of those values. In practice we’ll never have that, but it’s still worth thinking separately about “What are the expected consequences of the proposed policy?” and “What consequences do I want?”, which a ‘should’ question implicitly mixes together.
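To make “an answer in terms of those values” concrete, here’s a toy sketch under the unrealistic assumption that you actually had such a value function plus beliefs about consequences; the states, probabilities, and utilities are all invented for illustration:

```python
# Toy decision: given a value function over states and beliefs about which
# states each action leads to, "should I do X or Y?" reduces to comparing
# expected values. All numbers here are invented for illustration.
value = {            # hypothetical function from universe-states to reals
    "fewer_lanternflies": 3.0,
    "status_quo": 1.0,
    "wasted_afternoon": 0.0,
}

outcomes = {         # hypothetical P(state | action)
    "kill":      {"fewer_lanternflies": 0.4, "status_quo": 0.3, "wasted_afternoon": 0.3},
    "dont_kill": {"fewer_lanternflies": 0.0, "status_quo": 0.9, "wasted_afternoon": 0.1},
}

def expected_value(action):
    return sum(p * value[state] for state, p in outcomes[action].items())

best = max(outcomes, key=expected_value)
print({a: expected_value(a) for a in outcomes}, "->", best)
```

The Bayesian part lives entirely in the `outcomes` table; the `value` table is the part you only get from introspection.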
You raise an excellent point! In hindsight I’m realizing that I should have chosen a different example, but I’ll stick with it for now. Yes, I agree that “What states of the universe are likely to result from me killing vs not killing lanternflies?” and “Which states of the universe do I prefer?” are both questions grounded in the state of the universe where Bayes’ rule applies very well. However, I feel like there’s a third question floating around in the background: “Which states of the universe ‘should’ I prefer?” Based on my inner experiences, I feel that I can change my values at will. I specifically remember a moment after high school when I first formalized an objective function over states of the world, and this was a conscious thing I had to do. It didn’t come by default. You could argue that “Which states of the universe would I decide I should prefer after thinking about it for 10 years?” is a question that’s grounded in the state of the universe, so that Bayes’ Rule makes sense. However, trying to answer this question basically reduces to thinking about my values for 10 years; I don’t know of a way to short-circuit that computation. I’m reminded of the problem of how an agent can reason about a world it’s embedded inside, where its own thought processes could change the answers it seeks.
If I may propose another example and take this conversation to the meta-level, consider the question “Can Bayes’ Rule alone answer the question ‘Should I kill lanternflies?’?” When I think about this meta-question, it seems to me that you need a little more than just Bayes’ Rule to reason about it. You could start by trying to estimate P(Bayes’ Rule alone solves the lanternfly question), P(Bayes’ Rule alone solves the lanternfly question | the lanternfly question can be decomposed into two separate questions), etc. The problem is that I don’t see how to ground these probabilities in the real world. How can you go outside and collect data and arrive at the conclusion “P(Bayes’ Rule alone solves the lanternfly question | the lanternfly question can be decomposed into two separate questions) = 0.734”?
In fact, that’s basically the issue that my post is trying to address! I love Bayes’ rule! I love it so much that the punchline of my post, the dismissive growth-consistent ideology weighting, is my attempt to throw probability theory at abstract arguments that really didn’t ask for probability theory to be thrown at them. “Growth-consistency” is a fancy word I made up that basically means “you can apply probability theory (including Bayes’ Rule) in the way you expect.” I want to be able to reason with probability theory in places where we don’t get “real probabilities” inherited from the world around us.