I feel that I need to make an extreme mental effort to stay “sane” here. Maybe I should refrain from commenting. It’s a pity, because I’m generally very interested in the topics discussed here, but the tone and the underlying ideology are pushing me away.
I would be very interested in hearing elaboration on this topic, either publicly or privately.
I prefer public discussions. First, some background: I’m a computer science student who took courses in machine learning and AI and wrote theses in these areas (nothing exceptional), and I enjoy books like Thinking, Fast and Slow, The Black Swan, and authors such as Pinker, Dawkins, Dennett, Ramachandran, etc. So the topics discussed here interest me too. But the atmosphere seems quite closed and turned inwards.
I feel similarities to reddit’s Red Pill community. Previously “ignorant” people feel the community has opened a new world to them: they lived in darkness before, but now they have found the “Way” (“Bayescraft”), and all this stuff is becoming an identity for them.
Sorry if it’s offensive, but I feel as if many people here had no success in “real world” matters and invented a fiction in which they are the heroes for having joined some great organization far above the general public, who are just irrational automata still living in the dark.
I dislike the heavy use of insider terminology that makes communication with “outsiders” about these ideas quite hard: you get used to referring to these things by the in-group terms, so you become somewhat isolated from your real-life friends, feeling “they won’t understand, they’d have to read so much”, when in fact many of the concepts are not all that new and could be phrased in a way that the “uninitiated” can also understand.
There are too many cross-references in posts, and it keeps you busy with the site longer than necessary. It seems that people try to prove they know some concept by using the jargon and including links to them. Instead, I’d prefer authors who actively try to minimize the need for links and jargon.
I also find the posts quite redundant. They seem to be reiterations of the same patterns in very long prose, with people’s stories intertwined with the ideas, instead of striving for clarity and conciseness. Much of it feels like self-help for people with derailed lives who are trying to engineer their life (back) to success. I may be wrong, but I get a depressed vibe from reading the site for too long. It may also be because there is no lighthearted humor, no in-jokes, no “fun” or self-irony at all. Maybe the members are just like that in general (perhaps due to mental differences, such as being on the autism spectrum; I’m not a psychiatrist).
I can see that people here are really smart and the comments are often very reasonable. And it makes me wonder why they’d hold a single person such as Yudkowsky in such high esteem compared to established book authors, academics, or industry people in these areas. I know there has been much discussion about cultishness, and I think it goes a lot deeper than surface issues. LessWrong seems quite isolated and distrustful of the mainstream. Many people seem to have read this material first from Yudkowsky, who often does not reference earlier works that state essentially the same things, so people get the impression that all or most of the ideas in “The Sequences” come from him. I was quite disappointed several times when I found the same ideas in mainstream books. The Sequences also often depict the outside world as dumber than it is (straw-man tactics, etc.).
Another thing is that discussion is often too meta (or meta-meta). There is discussion of Bayes’ theorem and mathematical principles, but little detailed, worked-out material, and very little actual programming, for example. I’d expect people to create GitHub projects and IPython notebooks to show examples of what they are talking about. Much of the meta-meta discussion is very opinion-based because there is no immediate feedback about whether someone is wrong or right; it’s hard to test such hypotheses. For example, in this post I would have expected an example dataset and a demonstration of how PCA can uncover something surprising in it. Otherwise the claim is just floating out there, even though it matches nicely with the pattern “some math concept gave me insight that refined my rationality”. I’m not sure; maybe these “rationality improvements” are sometimes illusions.
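To make this concrete, here is the sort of minimal sketch I have in mind (Python with scikit-learn, using the standard iris dataset purely as a stand-in for “an example dataset”; nothing here is tied to any particular post):

```python
# A toy illustration of the kind of worked example I'd like to see:
# run PCA on a small public dataset and look at what the first two
# components actually capture, instead of discussing PCA only in the abstract.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data          # 150 flowers, 4 measured features
pca = PCA(n_components=2)
X2 = pca.fit_transform(X)     # project onto the two strongest directions

print("explained variance ratios:", pca.explained_variance_ratio_)
# Roughly 0.92 and 0.05: almost all the variation in the four measurements
# lies along a single direction. That is the kind of concrete, checkable
# observation a post could actually show and readers could argue with.
```

Even something that small gives readers a number they can check, which is exactly the feedback the meta-discussion lacks.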
I also don’t get why the rationality stuff is intermixed with friendly AI and cryonics and transhumanism. I just don’t see why these belong together so tightly. I find them too speculative and detached from the “real world” to be the central ideas. I realize they are important, but their prevalence could also be explained as “escapism”, and it promotes the kind of untestable meta discussion I mentioned above, without ever having to face reality. There is much talk about what evidence is, but not much talk that actually presents evidence.
I needed to develop a sort of immunity against topics like acausal trade: I can’t fully specify how they are wrong, but they feel wrong, they are hard to translate into practical, testable statements, and they just mess with my head in the wrong way.
And of course there is also the secrecy around, and hiding of, “certain things”.
That’s it. This place may just not be for me, which is fine. People can have their communities in the way they want. You just asked for elaboration.
Thanks for the detailed response! I’ll respond to a handful of points:
Previously “ignorant” people feel the community has opened a new world to them: they lived in darkness before, but now they have found the “Way” (“Bayescraft”), and all this stuff is becoming an identity for them.
I certainly agree that there are people here who match that description, but it’s also worth pointing out that there are actual experts too.
the general public, who are just irrational automata still living in the dark.
One of the things I find most charming about LW, compared to places like RationalWiki, is how much emphasis there is on self-improvement and on your own mistakes, rather than on the mistakes other people make because they’re dumb.
It seems that people try to prove they know some concept by using the jargon and including links to them. Instead, I’d prefer authors who actively try to minimize the need for links and jargon.
I’m not sure this is avoidable, and in full irony I’ll link to the wiki page that explains why.
In general, there are lots of concepts that seem useful, but the only way we have to refer to concepts is either to refer to a label or to explain the concept. A number of people read through the sequences and say “but the conclusions are just common sense!”, to which the response is, “yes, but how easy is it to communicate common sense?” It’s one thing to be able to recognize that there’s some vague problem, and another thing to be able to say “the problem here is inferential distance; knowledge takes many steps to explain, and attempts to explain it in fewer steps simply won’t work, and the justification for this potentially surprising claim is in Appendix A.” It is one thing to be able to recognize a concept as worthwhile; it is another thing to be able to recreate that concept when a need arises.
Now, I agree with you that having different labels to refer to the same concept, or conceptual boundaries or definitions that are drawn slightly differently, is a giant pain. When possible, I try to bring the wider community’s terminology to LW, but this requires being in both communities, which limits how much any individual person can do.
I also don’t get why the rationality stuff is intermixed with friendly AI and cryonics and transhumanism.
Part of that is just seeding effects—if you start a rationality site with a bunch of people interested in transhumanism, the site will remain disproportionately linked to transhumanism because people who aren’t transhumanists will be more likely to leave and people who are transhumanists will be more likely to find and join the site.
Part of it is that those are the cluster of ideas that seem weird but ‘hold up’ under investigation—most of the reasons to believe that the economy of fifty years from now will look like the economy of today are just confused, and if a community has good tools for dissolving confusions you should expect them to converge on the un-confused answer.
A final part seems to be availability; people who are convinced by the case for cryonics tend to be louder than the people who are unconvinced. The annual surveys show that the perception of LW one gets from just reading posts (or posts and comments) is skewed relative to the picture one gets from the survey results.
One of the things I find most charming about LW, compared to places like RationalWiki, is how much emphasis there is on self-improvement and on your own mistakes, rather than on the mistakes other people make because they’re dumb.
I agree that LW is much better than RationalWiki, but I still think that the norms for discussion are much too far in the direction of focus on how other commenters are wrong as opposed to how one might oneself be wrong.
I know that there’s a selection effect (with respect to the more frustrating interactions standing out). But it happens not infrequently that people mistakenly believe, with very high confidence, that I’m wrong about things I know much more about than they do, and in such instances I find the connotations that I’m unsound exasperating.
I don’t think that this is just a problem for me rather than a problem for the community in general: I know a number of very high-quality thinkers in real life who are uninterested in participating on LW explicitly because they don’t want to engage with commenters who are highly confident that these thinkers’ positions are incorrect. There’s another selection effect here: such people aren’t salient because they’re invisible to the online community.
I know that there’s a selection effect (with respect to the more frustrating interactions standing out).
I agree that those frustrating interactions both happen and are frustrating, and that it leads to a general acidification of the discussion as people who don’t want to deal with it leave. Reversing that process in a sustainable way is probably the most valuable way to improve LW in the medium term.
There’s also the whole LessWrong-is-dying thing that might be contributing to the vibe you’re getting. I’ve been reading the forum for years and it hasn’t felt very healthy for a while now. A lot of the impressive people from earlier have moved on, we don’t seem to be getting that many new impressive people coming in, and hanging out a lot on the forum turns out not to make you that much more impressive. What’s left is turning increasingly into a weird sort of cargo cult of a forum for impressive people.
Actually, I think that LessWrong used to be worse when the “impressive people” were posting about cryonics, FAI, the many-worlds interpretation of quantum mechanics, and so on.
It has seemed to me that a lot of the commenters who come with their own solid competency are also less likely to get unquestioningly swept away following EY’s particular hobbyhorses.
I needed to develop a sort of immunity against topics like acausal trade: I can’t fully specify how they are wrong, but they feel wrong, they are hard to translate into practical, testable statements, and they just mess with my head in the wrong way.
The applicable word is metaphysics. Acausal trade is dabbling in metaphysics to “solve” a question in decision theory, which is itself mere philosophizing, and thus one has to wonder: what does Nature care for philosophies?
By the way, for the rest of your post I was going, “OH MY GOD I KNOW YOUR FEELS, MAN!” So it’s not as though nobody ever thinks these things. Those of us who do just tend to, in perfect evaporative cooling fashion, go get on with our lives outside this website, being relatively ordinary science nerds.
Sorry, avoiding metaphysics doesn’t work. You just end up either reinventing it (badly) or using a bad fifth-hand version of some old philosopher’s metaphysics. Incidentally, Eliezer also tried avoiding metaphysics and wound up doing the former.
I don’t like Eliezer’s apparent mathematical/computational Platonism myself, but most working scientists manage to avoid metaphysical buggery by simply dealing only with those things with which they can actually causally interact. I recall an Eliezer post on “Explain/Worship/Ignore”, and would add myself that while “Explain” eventually bottoms out in the limits of our current knowledge, the correct response is to hit “Ignore” at that stage, not to drop to one’s knees in Worship of a Sacred Mystery that is in fact just a limit to current evidence.
EDIT: This is also one of the reasons I enjoy being in this community: even when I disagree with someone’s view (eg: Eliezer’s), people here (including him) are often more productive and fun to talk to than someone who hits the limits of their scientific knowledge and just throws their hands up to the tune of “METAPHYSICS, SON!”, and then joins the bloody Catholic Church, as if that solved anything.
I don’t like Eliezer’s apparent mathematical/computational Platonism myself, but most working scientists manage to avoid metaphysical buggery by simply dealing only with those things with which they can actually causally interact.
That works up until the point where you actually have to think about what it means to “causally interact” with something. Also questions like “does something that falls into a black hole cease to exist since it’s no longer possible to interact with it”?
Also questions like “does something that falls into a black hole cease to exist since it’s no longer possible to interact with it”?
But there are trivially easy answers to questions like that. Basically you have to ask “cease to exist for whom?”, i.e. it obviously ceases to exist for you. You just have to taboo words like “really” here, as in “does it really cease to exist”, because they are meaningless: they don’t lead to predictions. What people often consider “really” reality is the perception of a perfect, god-like, omniscient observer, but there is no such thing.
Essentially there are just two extremes to avoid, the po-mo “nothing is real, everything is mere perception” and the traditional, classical “but how are things really, really, REALLY?”, and the middle way here is “reality is the sum of what could be perceived in principle”. A perception is right or wrong based on how much it meshes with all the other things that can in principle be perceived. Everything that cannot even be perceived in theory is not part of reality. There is no way things “really” are; the closest we have to that is the sum of all potential, possible perceivables about a thing.
I picked up this approach from Eric S. Raymond, I think he worked it out decades before Eliezer did, possibly both working from Peirce.
This is basically anti-metaphysics.
Does this imply that only things that exist in my past light cone are real for me at any given moment?
I don’t know what real-for-me means here. Everything that in principle, in theory, could be observed is real. Most of those things you haven’t observed; that does not make them any less real.
I meant the “for whom?” not in the sense of me, you, or the barkeeper down the street. I meant it in the sense of normal beings who know only things that are in principle knowable, vs. some godlike being who can know how things really “are” regardless of whether they are knowable or not.
Well, that’s where it starts to break down; because what you can, in theory, observe is different from what I can, in theory, observe.
This is because, as far as anyone can tell, observations are limited by the speed of light, and Alpha Centauri is roughly 4.4 light-years away. I cannot, even in principle, observe the 2015 Alpha Centauri until at least 2019 (if I observe it now, I am seeing light that left it around 2011). If Alpha Centauri had suddenly exploded in 2013, I would have no way of observing that until around 2017 - even in principle.
So if the barkeeper, instead of being down the street, is rather living on a planet orbiting Alpha Centauri, then the set of what he can observe in principle is not the same as the set of what I can observe in principle.
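A quick back-of-the-envelope sketch of that arithmetic (Python; the only input is the roughly 4.37 light-year distance to Alpha Centauri, and the function is just my own illustration):

```python
# Light-cone arithmetic: an event at Alpha Centauri only becomes observable
# here after the light-travel delay of about 4.37 years.
DISTANCE_LY = 4.37  # approximate distance to Alpha Centauri in light-years

def earliest_observation_year(event_year):
    """Year in which light from an event at Alpha Centauri first reaches us."""
    return event_year + DISTANCE_LY

print(earliest_observation_year(2013))  # ~2017.4: a 2013 explosion stays unobservable until then
print(earliest_observation_year(2015))  # ~2019.4: the 2015 Alpha Centauri is unobservable before ~2019
```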
I’d like to congratulate you on developing your own “makes you sound insane to the man in the street” theory of metaphysics.
Man on the street needs to learn what counterfactual definiteness is.
Ilya, can you give me a definition of “counterfactual definiteness” please?
Physicists are not very precise about it; may I suggest looking into “potential outcomes” (the language some statisticians use to talk about counterfactuals):
https://en.wikipedia.org/wiki/Rubin_causal_model
https://en.wikipedia.org/wiki/Counterfactual_definiteness
Potential outcomes let you think about a model that contains a random variable for what happens to Fred if we give Fred aspirin, and a random variable for what happens to Fred if we give Fred a placebo, even though in reality we only gave Fred aspirin. This is “counterfactual definiteness” in statistics.
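A minimal sketch of that bookkeeping in code (the outcome numbers are made up, chosen only to make the structure explicit):

```python
# A toy model of potential outcomes for a single unit ("Fred").
# Counterfactual definiteness: both Y("aspirin") and Y("placebo") are treated
# as well-defined quantities, even though only one can ever be observed.
import random

# Hypothetical potential outcomes for Fred (say, headache severity after an hour).
Y = {"aspirin": 2.0, "placebo": 7.0}       # both exist in the model...

A = random.choice(["aspirin", "placebo"])  # ...but only one treatment is assigned
Y_observed = Y[A]                          # consistency: the observed outcome is Y(A)

print("treatment given:", A)
print("observed outcome:", Y_observed)
# The unobserved entry of Y is the counterfactual. The model talks about
# Y("aspirin") and Y("placebo") together, which is the "definiteness" part;
# linking Y_observed to Y[A] is the "consistency" part discussed below.
```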
This paper uses potential outcomes to talk about outcomes of physics experiments (so there is an exact isomorphism between counterfactuals in physics and potential outcomes):
http://arxiv.org/pdf/1207.4913.pdf
Sounds like this is perhaps related to the counterfactual-consistency statement? In its simple form, that the counterfactual or potential outcome under policy “a” equals the factual observed outcome when you in fact undertake policy “a”, or formally, Y^a = Y when A = a.
Pearl has a nice (easy) discussion in the journal Epidemiology (http://www.ncbi.nlm.nih.gov/pubmed/20864888).
Is this what you are getting at, or am I missing the point?
No, not quite. Counterfactual consistency is what allows you to link observed and hypothetical data (so it is also extremely important). Counterfactual definiteness is even more basic than that. It basically sets the size of your ontology by allowing you to talk about Y(a) and Y(a’) together, even if we only observe Y under one value of A.
edit: Stephen, I think I realized who you are; please accept my apologies if I seemed to be talking down to you re: potential outcomes, that was not my intention. My prior is that people do not know what potential outcomes are.
edit 2: Good talks by Richard Gill and Jamie Robins at JSM on this:
http://www.amstat.org/meetings/jsm/2015/onlineprogram/ActivityDetails.cfm?SessionID=211222
No offense taken. I am sorry I did not get to see Gill & Robins at JSM. Jamie also talks about some of these issues online back in 2013 at https://www.youtube.com/watch?v=rjcoJ0gC_po
Well, this whole thread started because minusdash and eli_sennesh objected to the concept of acausal trade for being too metaphysical.
I just need to translate that for him into street lingo.
“There is shit we know, shit we could know, and shit we could not know no matter how good tech we had - we could not even know the effects it has on other stuff. So why should we say this last kind of stuff exists? Or why should we say it does not exist? We cannot prove either.”
My serious point is that one cannot avoid metaphysics, and that way too many people start out from “all this metaphysics stuff is BS, I’ll just use common sense” and end up with their own (bad) counter-intuitive metaphysical theory that they insist is “not metaphysics”.
You could charitably understand everything that such people (who assert that metaphysics is BS) say with a silent “up to empirical equivalence”. Doesn’t the problem disappear then?
No because you need a theory of metaphysics to explain what “empirical equivalence” means.
To be honest, I don’t see that at all.
So how would you define “empirical equivalence”?
It’s insufficiently appreciated that physicalism is metaphysics too.