A few questions.
What about objective priors?
What would a frequentist analysis of the developing war look like?
Priors aren’t for believing. They’re for saying “if you start here, then after the evidence you end up there.” Where does a frequentist start?
I can tell my beliefs from my values. They both exist objectively, both observable by me. I can communicate them to other people. Cannot everyone? Extreme situations like Room 101 are not the usual way of things. From that example you could as well conclude that all communication is impossible.
With objective priors one can always ask “so what?” If it’s not my subjective prior, then its posterior will not equal my subjective posterior. There isn’t an obvious way to bound the difference between my subjective prior and the objective prior.
With frequentist methods it’s possible to get guarantees like “no matter what prior over θ you start with, if you run this method, you’ll correctly estimate θ to within ϵ with at least 1−δ probability”. It’s clear that a subjective Bayesian (with imperfect knowledge of their prior) might care about this sort of guarantee.
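To make that kind of guarantee concrete, here is a minimal sketch, assuming a simple coin-flip setup and illustrative values of ϵ and δ (none of which are specified in the thread): Hoeffding's inequality fixes a sample size n from ϵ and δ alone, and the resulting coverage holds for every θ, hence for every prior over θ.

```python
import math
import random

# Hoeffding: P(|p_hat - theta| >= eps) <= 2*exp(-2*n*eps^2), so choosing
# n >= ln(2/delta) / (2*eps^2) gives |p_hat - theta| < eps with probability
# at least 1 - delta, regardless of theta (and hence of any prior over theta).
eps, delta = 0.05, 0.05
n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))

def estimate_bias(theta, n, rng):
    """Estimate a coin's bias theta by the sample mean of n flips."""
    return sum(rng.random() < theta for _ in range(n)) / n

rng = random.Random(0)
trials = 2000
for theta in (0.1, 0.5, 0.9):  # stand-ins for "whatever your prior happened to produce"
    hits = sum(abs(estimate_bias(theta, n, rng) - theta) < eps for _ in range(trials))
    print(f"theta={theta}: |error| < {eps} in {hits / trials:.3f} of runs (guaranteed >= {1 - delta})")
```

Each simulated coverage figure should land at or above 1−δ, which is the sense in which the guarantee does not depend on a prior.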
Material phenomena must be defined in material terms. My argument is that priors, beliefs and value functions are not ultimately defined in material terms. Immaterial phenomena are beyond the realm of scientific analysis.
What would a frequentist analysis of the developing war look like?
Exactly the same.
Priors aren’t for believing. They’re for saying “if you start here, then after the evidence you end up there.” Where does a frequentist start?
A frequentist starts by saying we have observed n independent trials of evidence {Bi}; therefore our uncertainty about the probability P is bounded by some ϵ>0.
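A standard way to cash that out, assuming the trials are Bernoulli with mean p (an assumption, since the thread leaves the bound unspecified), is Hoeffding's inequality for the sample mean:

```latex
\Pr\bigl(\lvert \hat{p} - p \rvert \ge \epsilon\bigr) \le 2\exp\!\bigl(-2 n \epsilon^{2}\bigr),
\qquad\text{so}\qquad
n \ge \frac{\ln(2/\delta)}{2\epsilon^{2}}
\;\Longrightarrow\;
\lvert \hat{p} - p \rvert < \epsilon \ \text{with probability at least } 1-\delta.
```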
I can tell my beliefs from my values. They both exist objectively, both observable by me. I can communicate them to other people. Cannot everyone? Extreme situations like Room 101 are not the usual way of things. From that example you could as well conclude that all communication is impossible.
Eliezer wrote an entire sequence about the phenomenon where the beliefs someone purports to hold do not necessarily equal their true beliefs.
My priors, my beliefs and my values constitute neither material phenomena nor sensory (including thoughts[1]) qualia. Therefore they are not observable to me.
[1] I have thoughts about beliefs, but those thoughts are not, themselves, beliefs.
What would a frequentist analysis of the developing war look like?
Exactly the same.
I’m confused by this claim? I thought the whole thing where you state your priors and conditional probabilities and perform updates to arrive at a posterior was… not frequentism?
Material phenomena must be defined in material terms. My argument is that priors, beliefs and value functions are not ultimately defined in material terms. Immaterial phenomena are beyond the realm of scientific analysis.
I don’t even know what this means. It sounds like a ‘separate magisteria’ type argument.
Here’s my response, arguing for objective priors as a solution to some of the problems you raise.
(I haven’t read all the other responding comments so I may be repeating stuff.)
I will attempt to refrain from “jumping ahead” and guessing your replies to these points, because that would require me to guess your motivation (IE, why you think the four ‘problems’ are indeed problematic). I will instead take your statements at face value, as if you think these things are problems-in-themselves (which, if addressed, cease to be problems).
I do this in the spirit of hoping to draw out better statements of what you think the real problems are (things which, if addressed, would actually change your mind, as opposed to just changing your argument).
Bayesian Probability has the following problems.
The answer to “Why do you believe x?” is always reducible to priors, which are non-falsifiable. Evidence has no effect on priors.
With description-length priors, claims about prior value are verifiable. Scientists can objectively demonstrate the ‘elegance’ of their theory by displaying a short description. (IE, elegance has been operationalized.)
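A minimal sketch of that operationalization, with made-up hypothesis names and stand-in encodings (nothing here comes from the thread): a description-length prior gives each theory weight 2 to the minus its description length, so exhibiting a shorter description is a checkable claim of higher prior value.

```python
# Toy description-length prior over a finite hypothesis set.
# The "descriptions" are illustrative stand-ins for each theory's shortest known encoding.
descriptions = {
    "elegant_theory": "y = a*x + b",
    "baroque_theory": "y = piecewise(17 hand-tuned special cases ...)",
}

weights = {name: 2.0 ** -len(code) for name, code in descriptions.items()}
total = sum(weights.values())
prior = {name: w / total for name, w in weights.items()}

for name, p in sorted(prior.items(), key=lambda kv: -kv[1]):
    print(f"{name}: description length {len(descriptions[name])}, prior weight {p:.6g}")
```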
Rational agents with wildly differing priors are (usually) unable to come into even approximate agreement when provided with scarce evidence.
Selecting a shared prior obviously addresses this.
Rational agents who disagree about unconditional priors P(A) but who agree about evidence likelihood P(Bi) and conditional priors P(A|Bi) should be able to come into agreement. Instead, Bayesians who disagree about unconditional priors P(A) while agreeing about evidence likelihood P(Bi) and conditional priors P(A|Bi) are provably unable to ever reach exact agreement if they use Bayes’ Theorem. This is the opposite of how empiricism should work.
Selecting a shared prior obviously addresses this.
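A minimal sketch of the gap being discussed, with illustrative likelihood values (0.8 and 0.3) that are not from the thread: two agents share P(B|A) and P(B|¬A) but start from different P(A); their posteriors draw closer as evidence accumulates yet never coincide exactly after finitely many observations.

```python
def posterior(prior_a, n_obs, p_b_given_a=0.8, p_b_given_not_a=0.3):
    """P(A) after n_obs independent observations of B, using the shared likelihood ratio."""
    odds = prior_a / (1.0 - prior_a)
    odds *= (p_b_given_a / p_b_given_not_a) ** n_obs
    return odds / (1.0 + odds)

# Same likelihoods, different starting priors P(A) = 0.9 vs 0.1.
for n in (0, 5, 10, 20):
    p1, p2 = posterior(0.9, n), posterior(0.1, n)
    print(f"n={n:>2}: agent1={p1:.8f}  agent2={p2:.8f}  gap={p1 - p2:.2e}")
```

Starting both agents from the same prior makes the two columns identical at every n, which is the point of the reply above.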
Identifying someone else’s beliefs requires you to separate a person’s value function from their beliefs, which is impossible.
Selecting a shared prior doesn’t address this fully, but does allow one to infer beliefs by combining the (agreed-upon) prior with the evidence which that person has encountered.
(I haven’t read all the other responding comments so I may be repeating stuff.)
I do not think you are repeating stuff. If you are repeating stuff, you are not doing so in an annoying way. Your comment is unequivocally constructive.
Your answers raise so many more questions that I have to wonder whether you are only role-playing a frequentist, for want of any real ones stepping up to Eliezer’s challenge. But I’ll play along.
Material phenomena must be defined in material terms. My argument is that priors, beliefs and value functions are not ultimately defined in material terms. Immaterial phenomena are beyond the realm of scientific analysis.
That rules out mathematics and all mental phenomena, except so far as people’s talk about the latter has been explained in terms of physical phenomena in the brain. Radical behaviourism and positivism, that is. Or am I misunderstanding you?
What would a frequentist analysis of the developing war look like?
Exactly the same.
Complete with the Bayesian reasoning that you set out, including your prior probabilities? What do you understand by a frequentist analysis? You say this:
A frequentist starts by saying we have observed n independent trials of evidence {Bi}; therefore our uncertainty about the probability P is bounded by some ϵ>0.
but I see none of this in your Bayesian (“exactly the same as the frequentist”) analysis of the war.
Eliezer wrote an entire sequence about the phenomenon where the beliefs someone purports to hold do not necessarily equal their true beliefs.
He has also written about how to better arrive at true beliefs. This is the fundamental theme of all of the Sequences, and of LessWrong. You seem to have taken the negative part of his message (“Woe! Woe! Nothing is true, all is a lie! Woe!”) as the whole.
My priors, my beliefs and my values constitute neither material phenomena nor sensory (including thoughts) qualia. Therefore they are not observable to me.
I suppose that is what it is like, to be a radical behaviourist. Perhaps that accounts for the unusual style of your fictional writings. I have enjoyed them, and hope to see more, but I have wondered what sort of a mind writes them.
Thank you for the thoughtful feedback. You are not the only person who questions my Frequentist leanings. I am flattered by your accusations.
I have yet to finish composing my response to Eliezer. In the meantime, I will do my best to answer each of your questions.
That rules out mathematics and all mental phenomena, except so far as people’s talk about the latter has been explained in terms of physical phenomena in the brain. Radical behaviourism and positivism, that is. Or am I misunderstanding you?
It does rule out mathematics. Science and mathematics are separate epistemologies. Science is an uncertain[1] empirical art based on evidence. Mathematics is a certain[2] system of formal logic based on theorems and axioms. Mathematics is a valuable source of useful truth (I have a university degree in mathematics) but the domain of math is carefully circumscribed. Math and science intersect like the circles of a Venn Diagram.
I think we need a third category for qualia. But I don’t think qualia are relevant to the current discussion, and I would prefer to set them aside; discussing qualia would lead us down a different rabbit hole and distract us from the question at hand.
I am unfamiliar with the term “radical Behaviorism”. The way I understand the history of psychology, “Behaviorism” is a political agenda that emerged in response to Freudianism. My biggest qualm with historical Behaviorism is that it did not stop at throwing out Freudianism: by treating behavior as the only psychological observable, it threw out valuable sources of knowledge too.
I would rather avoid discussing Behaviorism [political agenda] because politics is the mind killer and because political labels are often inconsistently defined. Is there a way we can discuss Behaviorism [philosophy] while tabooing the word “Behaviorism” itself?
I am less familiar with positivism. I am a big fan of meditation as a source of metaphysical insight, which (I think?) contradicts positivism. And I do not deny that drugs like LSD provide genuine knowledge. (I have never used LSD, but the evidence for its benefits seems extremely strong.) But (as is the case with meditation) I would rather not derail this conversation into the subject of altered states of consciousness.
What would a frequentist analysis of the developing war look like?
Exactly the same.
Complete with the Bayesian reasoning that you set out, including your prior probabilities? What do you understand by a frequentist analysis? You say this:
A frequentist starts by saying we have observed n independent trials of evidence {Bi}; therefore our uncertainty about the probability P is bounded by some ϵ>0.
but I see none of this in your Bayesian (“exactly the same as the frequentist”) analysis of the war.
I think I misinterpreted your question. The question I answered was “What would a Frequentist’s analysis of the developing war look like?” That is not the question you asked. I apologize.
The Bayesian analysis I wrote down is not what actually went through my head.
Bayesianism is one of many frameworks for making sense of the world, e.g. Marxism, Christianity, Frequentism, Daoism and Shinto. What I wrote was a retroactive Bayesian confabulation. I could just as easily have written a Marxist confabulation. “Putin is not a true Marxist and today’s Russia is an undeserving usurper of the Soviet Empire. Therefore Putin’s Russia will inevitably….” Or a Christian confabulation. “The march to Justice passes through the Valley of Death. The ultimate outcome of a mass mobilization rests on the righteousness of each side. But in the short term….”
I did not use Bayesian logic (because Bayesian logic passes the buck from hard questions to priors).
I did not use Frequentist or scientific analysis either. Frequentism is the foundation of science. Military-political analysis is (for practical purposes) mostly beyond the domain of science. Political “science” and military “science” have “science” in their names because they are not real sciences.
If I had to answer the question “What epistemic framework did you use?” the honest answer would be “Daoist” or “none at all” (which, perhaps ironically, sounds like something a Daoist would say). But, as is the case with meditation, I would rather not open the Daoist can of worms because it involves concepts that are alien to readers of this blog.
Science takes time. My Frequentist analysis occurred later. “As the war calcified, I finally had time to research current weapons technology and build my own model of the war from a tactics-level foundation.”
[Eliezer] has also written about how to better arrive at true beliefs. This is the fundamental theme of all of the Sequences, and of LessWrong. You appear to have taken the negative part of his message (“Woe! Woe! Nothing is true, all is a lie! Woe!”) as the whole.
Observations are true. Math is true. Those are our primitive elements. We can derive arbitrarily reliable abstractions (such as fundamental physics) from them via a series of checksums.
My priors, my beliefs and my values constitute neither material phenomena nor sensory (including thoughts) qualia. Therefore they are not observable to me.
I suppose that is what it is like, to be a radical behaviourist. Perhaps that accounts for the unusual style of your fictional writings. I have enjoyed them, and hope to see more, but I have wondered what sort of a mind writes them.
Thank you. I do not know if I think differently from other people. But the way I describe how I think is different from how others describe how they think. It is fun to add to my collection of these inconsistencies.
[1] In theory. In practice, scientific conclusions are often very certain.
[2] In theory. In practice, mathematical conclusions are often very uncertain.
I am unfamiliar with the term “radical Behaviorism”. The way I understand the history of psychology, “Behaviorism” is a political agenda that emerged in response to Freudianism. My biggest qualm with historical Behaviorism is that it did not stop at throwing out Freudianism: by treating behavior as the only psychological observable, it threw out valuable sources of knowledge too.
The paradigmatic Radical Behaviourist is John B. Watson. In his paradigmatic work, “Behaviorism”, he asserted that there is no such thing as a mind, and that, for example, a dress designer cannot have any image in his mind of the dress he intends to create. (“He has not, or he would not waste his time making it up; he would make a rough sketch of it or he would tell his assistant how to make it.”) There are some who would defend him against the charge of believing something so absurd, but here is a radical behaviourist of the present day emphatically upholding this view. I am inclined to take Watson at his word, and surmise that he did not believe in minds because he was unaware of his own: he had no subjective experience of his own self. Only such a person, it seems to me, could have written what he did.
I don’t know how behaviourism vs. Freudianism aligns with any political division (or where all the other schools of psychology would fit). However, behaviourism would obviously serve the agenda of someone who would agree with Number 2: “The whole world, as this Village?” “That is my dream.”
I have noticed a political aspect to Bayes vs. frequentism: right-wing and left-wing respectively. As someone right-leaning who thinks that the correct union of the two, choosing the right tool for the job, is all of the former and none of the latter, I would say the reason for that alignment is that Bayesian reasoning requires you to know what you know and use it, while frequentist reasoning requires that you pretend not to know what you know, and on no account use it. But an actual frequentist, if one can be found, might differ.
ETA: I had thought that behaviorism arose in reaction to introspectionism, which was collapsing due to the failure of the introspectionists to agree about the basic facts of their introspections.
Thank you for the description. I’m definitely not a “radical Behaviorist”, since I do believe there is a mind. I observe my own mind and the downstream effects of others’ minds. I do have subjective experience, but to use the phrase “my own self” would distract us into metaphysical territory I’d rather avoid.
Behaviorism has lots of political implications. I read somewhere that it has historically been used to rationalize (in the confabulation/retcon/propaganda/justification sense) authoritarian dehumanizing systems.
while frequentist reasoning requires that you pretend not to know what you know, and on no account use it
I like this argument. It’s healthy food for thought.
ETA: I had thought that behaviorism arose in reaction to introspectionism, which was collapsing due to the failure of the introspectionists to agree about the basic facts of their introspections.
I wouldn’t say you’re wrong. To prevent possible miscommunication, I would like to note that Behaviorism arose in response to Freudian introspection. Mystical introspection is a different thing that wasn’t even on Western psychology’s radar at the time.
What are beliefs if not thoughts?
What is phlogiston if not fire?