So, see my conversation with Ben Levinstein and my reply to adrusi for some of my reply. An example of what I have in mind by ‘LWers outperforming’ is the 2009 PhilPapers survey: I’d expect a survey of LW users with 200+ karma to...
… have fewer than 9.1% of respondents endorse “skepticism” or “idealism” about the external world.
… have fewer than 13.7% endorse “libertarianism” about free will (roughly defined as the view “(1) that we do have free will, (2) that free will is not compatible with determinism, and (3) that determinism is therefore false”).
… have fewer than 14.6% endorse “theism”.
… have fewer than 27.1% endorse “non-physicalism” about minds.
… have fewer than 59.6% endorse “two boxes” in Newcomb’s problem, out of the people who gave a non-“Other” answer.
… have fewer than 44% endorse “deontology” or “virtue ethics”.
… have fewer than 12.2% endorse the “further-fact view” of personal identity (roughly defined as “the facts about persons and personal identity consist in some further [irreducible, non-physical] fact, typically a fact about Cartesian egos or souls”).
… have fewer than 16.9% endorse the “biological view” of personal identity (which says that, e.g., if my brain were put in a new body, I should worry about the welfare of my old brainless body, not about the welfare of my mind or brain).
… have fewer than 31.1% endorse “death” as the thing that happens in “teletransporter (new matter)” thought experiments.
… have fewer than 37% endorse the “A-theory” of time (which rejects the idea of “spacetime as a spread-out manifold with events occurring at different locations in the manifold”), out of the people who gave a non-“Other” answer.
… have fewer than 6.9% endorse an “epistemic” theory of truth (i.e., a view that what’s true is what’s knowable, or known, or verifiable, or something to that effect).
This is in no way a perfect or complete operationalization, but it at least gestures at the kind of thing I have in mind.
Well, it looks like you declare “outperforming” by your own metric, not by anything generally accepted.
(Also, I take issue with the last two. The philosophical ideas about time are generally not about time, but about “time”, i.e. about how humans perceive and understand the passage of time. So distinguishing between A and B is about humans, not about time, unlike, say, Special and General Relativity, which provide a useful model of time and spacetime.
A non-epistemic theory of truth (e.g. there is an objective truth we try to learn) is detrimental in general, because it inevitably deteriorates into debates about untestables, like other branches of a hypothetical multiverse and how to behave morally in an infinite universe.)
Also, most people here, while giving lip service to non-libertarian views of free will, sneak it in anyway, as evidenced by relying on “free choice” in nearly all decision theory discussions.
Well, it looks like you declare “outperforming” by your own metric, not by anything generally accepted.
I am indeed basing my view that philosophers are wrong about stuff on investigating the specific claims philosophers make.
If there were a (short) proof that philosophers were wrong about X that philosophers already accepted, I assume they would just stop believing X and the problem would be solved.
The philosophical ideas about time are generally not about time, but about “time”, i.e. about how humans perceive and understand the passage of time.
Nope, the 20th-century philosophical literature discussing time is about time itself, not about (e.g.) human psychological or cultural perceptions of time.
There is also discussion of humans’ perception and construction of time—e.g., in Kant—but that’s not the context in which A-theory and B-theory are debated.
The A-theory and B-theory were introduced in 1908, before many philosophers (or even physicists) had heard of special relativity; and ‘this view seems unbelievably crazy given special relativity’ is in fact one of the main arguments that gets cited in the literature against the A-theory of time.
A non-epistemic theory of truth (e.g. there is an objective truth we try to learn) is detrimental in general, because it inevitably deteriorates into debates about untestables, like other branches of a hypothetical multiverse and how to behave morally in an infinite universe.)
“It’s raining” is true even if you can’t check. Also, what’s testable for one person is different from what’s testable for another person. Rather than saying that different things are ‘true’ or ‘false’ or ‘neither true nor false’ depending on which person you are, simpler to just say that “snow is white” is true iff snow is white.
It’s not like there’s any difficulty in defining a predicate that satisfies the correspondence theory of truth, and this predicate is much closer to what people ordinarily mean by “true” than any epistemic theory of truth’s “true” is. So demanding that we abandon the ordinary thing people mean by “truth” just seems confusing and unnecessary.
Doubly so when there’s uncertainty or flux about which things are testable. Who can possibly keep track of which things are true vs. false vs. meaningless, when the limits of testability are always changing? Seems exhausting.
Also, most people here, while giving lip service to non-libertarian views of free will, sneak it in anyway, as evidenced by relying on “free choice” in nearly all decision theory discussions.
This is a very bad argument. Using the phrase “free choice” doesn’t imply that you endorse libertarian free will.
Well, we may have had this argument before, likely more than once, so probably no point rehashing it. I appreciate you expressing your views succinctly though.