Thank you for addressing this! I also had a feeling that there was some fundamental misunderstanding here, but I couldn’t express it clearly.
From reading David Chapman’s blog, my impression is that he presents his philosophy of meaning as a solution to a dilemma between two opposing extremes. One extreme is nihilism, which refuses to debate meaning because it insists there is simply no such thing. The other extreme is people who believe in a simplistic solution; two archetypes in this category are the religious fanatic and the Vulcan rationalist. (I am using my own words here.)
The proposed solution is something I would call “reflective compartmentalization”, i.e. considering various aspects of your life separately and finding a local meaning in each of them; being aware that there is no overarching story; and being okay with that. In other words, there is no global meaning, but there are local meanings; the true meaning of your life consists of the local meanings.
Then this somehow turns into an argument about epistemology—as if finding separate meanings in multiple separate contexts requires using multiple maps, and insisting that there is one territory implies that there is one objective global meaning of life. I am not sure that I understand this part; this is simply how it sounds to me.
Then, Chapman notices an analogy between his model and Kegan’s model of moral development. The Vulcan rationalist (or the religious fanatic) corresponds to level 4, which is about having a strong identity. And Chapman’s philosophy of “many local meanings” and “many maps” corresponds to level 5, which Kegan describes as “interpenetration of systems”; in both views this is also the highest place in the hierarchy. This makes me a bit worried about affective spirals: the best way to find meaning in life happens to be the best epistemology, and also happens to make you most moral and capable of genuine love. (On the other hand, one could easily make a similar accusation against LW.) Oh, and it’s also somehow connected with the best religion, i.e. Buddhism.
Now, maybe I just missed something, but I don’t remember reading David Chapman mentioning Less Wrong specifically. So I don’t understand his opinions per se to be attacks against rationality as defined by LW. (I think it’s more about those of his fans who also happen to be familiar with LW jumping to the conclusion: “Chapman totally pwned LW, rationality is debunked, all the cool kids are meta-rationalists now!”) He seems to be familiar with Vulcan rationality, which is a popular trope in our culture, and let’s admit honestly that the trope is based on the real behavior of some real people. So I don’t blame him for using Vulcan rationality as the prototype of “rationality”. I imagine (perhaps incorrectly) that he would agree with some parts of LW common knowledge—such as complexity of human value—and consider them an improvement over the Vulcan rationality. He just seems to insist that the true meaning of “rationality” is the Vulcan rationality, and frankly most of the world would probably agree with him, and this is the opponent he really debates.
The crux of the disagreement seems to be whether the belief in a territory is incompatible with having multiple maps and finding them useful, and whether trying to be rational (in the LW sense) is just another irrational and limiting identity. (And the related dictionary debate about whether the true meaning of “rationality” is ‘winning at life’, or the true meaning of “rationality” is ‘Vulcan rationality’ and the true meaning of “meta-rationality” is ‘winning at life’.)
My opinion is that when debating LW, Chapman’s perspective is partially kicking at an open door (“being a Vulcan rationalist is stupid” “thanks, we know already”; “humans are complex” “indeed, I bet there is a lesson about it somewhere in the Sequences”) and partially… what was addressed in this article.
EDIT: About Kegan… I didn’t think about his model deeply, but I would also guess he was addressing the Vulcan rationality. (And the idea of there being one territory seems to be generally unwelcome in social sciences.)
Now, maybe I just missed something, but I don’t remember reading David Chapman mentioning Less Wrong specifically. So I don’t understand his opinions per se to be attacks against rationality as defined by LW.
[...]
I imagine (perhaps incorrectly) that he would agree with some parts of LW common knowledge
[...]
He just seems to insist that the true meaning of “rationality” is the Vulcan rationality
Your understanding seems to match what he says in these tweets:

https://twitter.com/Meaningness/status/993623171411529728
https://twitter.com/Meaningness/status/993623388806496256
Important: by “rationalists,” I do NOT primarily mean the LW-derived community. I’m pointing to a whole history going back to the Ancient Greeks, and whose most prototypical example is early-20th-century logical positivism.
I think that much of the best work of the LW-derived community is “meta-rational” as I define that. The book is supposed to explain why that is a good thing.
While David Chapman wasn’t one of the main LessWrong contributors, he has 432 LessWrong karma. The first longer post of his engaging with the LessWrong philosophy is https://meaningness.com/metablog/bayesianism-updating which starts by referencing a video by Julia Galef.

If you read the comment thread of that post, you will find many familiar LessWrong names, and Scott wrote an article on his blog in response.

Later Chapman, who has an MIT AI PhD, wrote the more technical Probability theory does not extend logic, in which he shows how the core claim made in the Sequences, that probability theory is an extension of logic, is wrong.

If we step back, it’s worth noting that you find the term Bayesianism a lot less on LessWrong today than five years ago, when Chapman wrote the above posts. CFAR dropped its class that teaches Bayes’ rule (against Eliezer’s wishes) and instead teaches double crux, which often doesn’t involve any thinking about probabilities.

Valentine, who is at the head of CFAR curriculum development, was more influential on how the “LessWrong ideology” developed in the last five years than Eliezer was.

I think there’s a good chance that Julia Galef would cringe a bit when she looks back today on that Bayes’ rule video.
The crux of the disagreement seems to be whether the belief in a territory is incompatible with having multiple maps and finding them useful, and whether trying to be rational (in the LW sense) is just another irrational and limiting identity.
That doesn’t sound to me like you pass the Ideological Turing Test. I’m not even sure whether Eliezer would argue that probability is an inherent feature of the territory.
Later Chapman, who has an MIT AI PhD, wrote the more technical Probability theory does not extend logic, in which he shows how the core claim made in the Sequences, that probability theory is an extension of logic, is wrong.
As far as I can tell, his piece is mistaken. I’m going to copypaste what I’ve written about it elsewhere:
So I looked at Chapman’s “Probability theory does not extend logic” and some things aren’t making sense. He claims that probability theory does extend propositional logic, but not predicate logic.
But if we assume a countable universe, probability will work just as well with universals and existentials as it will with conjunctions and disjunctions. Even without that assumption, well, a universal is essentially an infinite conjunction, and an existential statement is essentially an infinite disjunction. It would be strange that this case should fail.
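To spell out the sense I have in mind (a standard fact about probability measures, in my own notation, not something from Chapman’s post): in a countable universe \(\{a_1, a_2, \ldots\}\), countable additivity gives

\[ P\big(\forall x\, \varphi(x)\big) \;=\; P\Big(\bigcap_{i=1}^{\infty} \varphi(a_i)\Big) \;=\; \lim_{n\to\infty} P\Big(\bigcap_{i=1}^{n} \varphi(a_i)\Big) \]

by continuity from above, and dually \( P\big(\exists x\, \varphi(x)\big) = \lim_{n\to\infty} P\big(\bigcup_{i=1}^{n} \varphi(a_i)\big) \) by continuity from below. So quantified statements get well-defined probabilities as limits of their finite propositional approximations.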
His more specific example is: Say, for some x, we gain evidence for “There exist distinct y and y’ with R(x,y)”, and update its probability accordingly; how should we update our probability for “For all x, there exists a unique y with R(x,y)”? Probability theory doesn’t say, he says. But OK — let’s take this to a finite universe with known elements. Now all those universals and existentials can be rewritten as finite conjunctions and disjunctions. And probability theory does handle this case?
I mean… I don’t think it does. If you have events A and B and you learn C, well, you update P(A) to P(A|C), and you update P(A∩B) to P(A∩B|C)… but the magnitude of the first update doesn’t determine the magnitude of the second. Why should it when the conjunction becomes infinite? I think that Chapman’s claim about a way in which probability theory does not extend predicate logic is equally a claim about a way in which it does not extend propositional logic. As best I can tell, it extends both equally well.
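To make this concrete, here is a minimal sketch (my own toy construction, not an example from Chapman’s post): two joint distributions that agree on the priors P(A) and P(A∩B), on P(C), and on the updated value P(A|C), yet disagree on P(A∩B|C).

```python
# Two joint distributions over binary variables (A, B, C), given as
# probabilities of the atoms (a, b, c). The numbers are made up for
# illustration.
dist1 = {(1, 1, 1): 0.25, (0, 0, 1): 0.25, (1, 0, 0): 0.25, (0, 0, 0): 0.25}
dist2 = {(1, 0, 1): 0.25, (0, 0, 1): 0.25, (1, 1, 0): 0.25, (0, 0, 0): 0.25}

def prob(dist, event):
    """P(event), where event is a predicate on atoms (a, b, c)."""
    return sum(p for atom, p in dist.items() if event(atom))

def cond(dist, event, given):
    """Conditional probability P(event | given)."""
    return prob(dist, lambda w: event(w) and given(w)) / prob(dist, given)

A = lambda w: w[0] == 1
B = lambda w: w[1] == 1
C = lambda w: w[2] == 1
AB = lambda w: A(w) and B(w)

for name, d in [("dist1", dist1), ("dist2", dist2)]:
    print(name, prob(d, A), prob(d, AB), cond(d, A, C), cond(d, AB, C))
# dist1 0.5 0.25 0.5 0.5
# dist2 0.5 0.25 0.5 0.0
```

Both distributions assign P(A)=0.5, P(A∩B)=0.25, and P(A|C)=0.5, but P(A∩B|C) is 0.5 in one and 0.0 in the other; so the update to P(A) cannot determine the update to P(A∩B), even in the finite propositional case.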
(Also, here is a link to a place where I posted this and got into an argument with Chapman about it, which people might find helpful.)
But if we assume a countable universe, probability will work just as well with universals and existentials as it will with conjunctions and disjunctions.
If you regard probability as a tool for thinking, which is pretty reasonable, it’s not going to work, in the sense of being usable, if it contains countable infinities or very large finite numbers.
Also, it is not a good idea to build assumptions about how the world works into the tools you are using to figure out how the world works.
But the question wasn’t about whether it’s usable. The question was about whether there is some sense in which probability extends propositional logic but not predicate logic.
But OK — let’s take this to a finite universe with known elements.
If everything is known, you don’t need probability theory in the first place. You just know what happens. See Probability is in the Mind.

Most of the factors that we encounter are not known; good decision making is about dealing with the unknown, and part of the promise of Bayesianism is that it helps you deal with the unknown.
So, I must point out that a finite universe with known elements isn’t actually one where everything is known, although it certainly is one where we know way more than we ever do in the real world. But this is irrelevant. I don’t see how anything you’re saying relates to the claim that probability theory extends propositional logic but not predicate logic.
Why is it irrelevant when you assume a world where the agent who has to make the decision knows more than they actually do?

Decision theory is about making decisions based on the information that is known.
I don’t see how anything you’re saying relates to the claim that probability theory extends propositional logic but not predicate logic.
I haven’t studied the surrounding math, but as far as I understand, according to Cox’s Theorem probability theory does extend propositional calculus without having to make additional assumptions about a finite universe or certain things being known.
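Roughly, as I understand the theorem (my paraphrase, not a precise statement): if a real-valued plausibility assignment \(\mathrm{pl}(A \mid C)\) over propositions satisfies desiderata like

\[ \mathrm{pl}(A \wedge B \mid C) = F\big(\mathrm{pl}(A \mid C),\, \mathrm{pl}(B \mid A \wedge C)\big), \qquad \mathrm{pl}(\neg A \mid C) = S\big(\mathrm{pl}(A \mid C)\big) \]

for suitably well-behaved functions \(F\) and \(S\), then there is a monotone rescaling \(p\) of \(\mathrm{pl}\) satisfying the product and sum rules

\[ p(A \wedge B \mid C) = p(A \mid C)\, p(B \mid A \wedge C), \qquad p(A \mid C) + p(\neg A \mid C) = 1, \]

i.e. the rescaled plausibilities obey the laws of probability. Note that the propositions here come from a propositional language; the desiderata say nothing about quantifiers.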
Why is it irrelevant when you assume a world where the agent who has to make the decision knows more than they actually do? Decision theory is about making decisions based on the information that is known.
I think you’ve lost the chain a bit here. We’re just discussing to what extent probability theory does or does not extend various forms of logic. The actual conditions in the real world do not affect that. Now obviously if it only extends it in conditions that do not hold in the real world, then that is important to know; but if that were the case then “probability theory extends logic” would be a way too general statement anyhow and I hope nobody would be claiming that!
(And actually if you read the argument with Chapman that I linked, I agree that “probability theory extends logic” is a misleading claim, and that it indeed mostly does not extend logic. The question isn’t whether it extends logic, the question is whether propositional and predicate logic behave differently here.)
But again, all of this is irrelevant, because nobody is claiming anything like that! I mentioned a finite universe, where predicate logic essentially becomes propositional logic, to illustrate a particular point—that probability theory does not extend propositional logic in the sense Chapman claims it does. I didn’t bring it up to say “Oho well in a finite universe it does extend predicate logic, therefore it’s correct to say that probability theory extends predicate logic”; I did the opposite of that! At no point did I make any actual-rather-than-illustrative assumption to the effect that the real world is or is like a finite universe. So objecting that it isn’t has no relevance.
I haven’t studied the surrounding math, but as far as I understand, according to Cox’s Theorem probability theory does extend propositional calculus without having to make additional assumptions about a finite universe or certain things being known.
Cox’s theorem actually requires a “big world” assumption, which IINM is incompatible with a finite universe!
I think this is getting off-track a little. To review: Chapman claimed that, in a certain sense, probability theory extends propositional but not predicate logic. I claimed that, in that particular sense, it actually extends both of them equally well. (Which is not to say that it truly does extend both of them, to be clear—if you read the argument with Chapman that I linked, I actually agree that “probability theory extends logic” is a misleading claim, and that it mostly doesn’t.)
So now the question here is, what are you arguing for? If you’re arguing for Chapman’s original claim, the relevance of your statement of Cox’s theorem is unclear, as it’s not clear that this relates to the particular sense he was talking about.
If you’re arguing for a broader version of Chapman’s claim—broadening the scope to allow any sense rather than the particular one he claimed—then you need to exhibit a sense in which probability theory extends propositional logic but not predicate logic. I can buy the claim that Cox’s theorem provides a certain sense in which probability theory extends propositional logic. And, though you haven’t argued for it, I can even buy the claim that this is a sense in which it does not extend predicate logic [edit: at least, in an uncountable universe]. But, well, the problem is that regardless of whether it’s true, this broader claim—or this particular version of it, anyway—just doesn’t seem to have much to do with his original one.
Probability is in the Mind

if you want to prove that there is no probability in the territory, you need to examine the territory.

Yeah, I was referencing Eliezer’s views on the topic rather than stating my own. Personally I think it does make sense to think of the Born probabilities as some sort of propensity, which it might be fair to describe as “probability in the territory”. Other than that, I am not sure what it would mean to talk about “probability in the territory”.
David Chapman directly discusses his opinion about LessWrong here.

His description of LW there is: “LW suggests (sometimes, not always) that Bayesian probability is the main tool for effective, accurate thinking. I think it is only a small part of what you need.”
This seems to reflect the toolbox vs. law misunderstanding that Eliezer describes in the OP. Chapman is using a toolbox frame and presuming that, when LWers go on about Bayes, they are using a similar frame and thinking that it’s the “main tool” in the toolbox.
In the rest of the post it looks like Chapman thinks that what he’s saying is contrary to the LW ethos, but it seems to me like his ideas would fit in fine here. For example, Scott has also discussed how a robot can use simple rules which outsource much of its cognition to the environment instead of constructing an internal representation and applying Bayes & expected utility maximization.
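To illustrate the contrast (a toy sketch of my own, not Scott’s actual example; names and numbers are made up): a purely reactive controller that lets the environment store the information, next to an agent that maintains an internal probabilistic map and updates it with Bayes’ rule.

```python
def reactive_step(wall_ahead: bool) -> str:
    """Reactive rule: no internal state; the wall itself carries the information."""
    return "turn_left" if wall_ahead else "go_forward"

class ModelBasedAgent:
    """Keeps an internal map: P(wall) per cell, updated by Bayes' rule."""

    def __init__(self, n_cells: int):
        self.p_wall = [0.5] * n_cells  # uninformative prior for each cell

    def observe(self, cell: int, saw_wall: bool,
                hit_rate: float = 0.9, false_alarm: float = 0.1) -> None:
        prior = self.p_wall[cell]
        # Likelihood of the observation under "wall" and "no wall".
        l_wall = hit_rate if saw_wall else 1.0 - hit_rate
        l_free = false_alarm if saw_wall else 1.0 - false_alarm
        # Bayes' rule: posterior is proportional to likelihood times prior.
        self.p_wall[cell] = l_wall * prior / (l_wall * prior + l_free * (1.0 - prior))

agent = ModelBasedAgent(n_cells=4)
agent.observe(cell=2, saw_wall=True)
print(agent.p_wall[2])                 # 0.9: one noisy sighting raises P(wall) from 0.5
print(reactive_step(wall_ahead=True))  # "turn_left", with zero stored state
```

The point of the contrast is that both agents can get the job done; the reactive one simply offloads the bookkeeping onto the world, which is exactly the kind of move that fits fine within the LW frame as I understand it.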
I think this is a good summary; see also my comment below.