I’m excited by the prediction that you’ll do this.
One quibble:
LW has built a huge positivized reductionist metaphysics, and a Bayesian epistemology which can almost be read as a self-improvement manual. These are unprecedented...
Positivized reductionist metaphysics is quite old, as is Bayesian epistemology. I suppose “Bayesian epistemology as self-improvement” might be kind of a new twist, but it’s implicit in Savage (1954) and other works.
They’re not quite original to LW. I know of plenty of texts on Bayesian epistemology I would suggest to someone that are not LWish. And G.E.B. says most things that LW would say about reductionism, though it doesn’t go looking for hard problems to reduce the way LW does.
The reason I think LW has an advantage over formal epistemologists and contemporary philosophers is how clear, entertaining, useful, diverse, groundbreaking, scrutinized, precise, narrow, etc. our sequences and terminology in general are. And because of our practices of constant focused argument and karma selection to select among positions, instead of the usual trend-method of philosophy. It also can’t hurt that LW makes a value out of optimally doing stuff so as to win, so when we find a reason not to hold a hypothesis, we drop it without a single tear.
And because of our practices of constant focused argument and karma selection to select among positions, instead of the usual trend-method of philosophy.
I don’t understand this. Are you saying that a casual voting system by a group of amateurs on a website consisting of informal blog posts is superior to rigorous peer-review by experts of literature-aware arguments?
Yes, that is exactly what I am saying, if by “amateur” we mean non-professional. On the other hand, if by “amateur” we mean only slightly more competent than average, I would disagree that LWers are amateurs.
I guess I can’t really imagine how you came to that conclusion. You seem to be going preposterously overboard with your enthusiasm for LW here. I don’t mean to offend, but that’s the only way I know how to express the extent of my incredulity. Can you imagine a message board of dabblers in molecular biology congratulating each other over the advantages their board’s upvoting system has over peer review?
I know it sounds crazy; that is why I want to test it. My probability that what I am saying is true is probably too high, I agree, and I suck for not being able to correct it right now. But whether it is or isn’t, I should have a better idea after these posts.
If I didn’t have the stark contrast between the friendly arguments I have with my classmates and professors and the arguments I have here on LW, I would react to someone else saying what I am saying roughly as you are reacting.
But let us not forget that comparing molecular biology and philosophy is like comparing physics and self-help. We should not be as surprised if a bunch of clever enthusiasts make better self-help than the professionals as we would be if a bunch of clever enthusiasts made better physics than the physicists. This is because physicists are better at physics than self-help writers are at self-help, and the same is true of biologists and philosophers respectively.
But let us not forget that comparing molecular biology and philosophy is like comparing physics and self-help.
I’m comparing the review processes of molecular biology and philosophy. In both cases, experts with a deep grasp of most/all the relevant pitfalls provide extensive, specific, technical feedback regarding likely sources of error, failure to address existing objections and important points of clarification. That this is superior to a glorified Facebook “Like” button used by individuals with often highly limited familiarity with the subject matter—often consisting of having read a few blog posts by the same individual who himself has highly limited familiarity with the subject matter—should go without saying, right?
The problem with self-help writers is that, in general, they are insufficiently critical. It has never been seriously alleged that philosophers are insufficiently critical, whatever their other faults. Philosophers are virtually dying to bury each other’s arguments, and spend their entire careers successfully honing their abilities to do so. Therefore, surviving the gauntlet of their reviews is a better system of natural selection than having a few casually interested and generally like-minded individuals agree that they like your non-technical idea.
If they really honed their skills in crushing their opponents’ arguments, and could transmit this skill to others successfully, then we wouldn’t have so many open questions in philosophy, and we would notice the sort of exponential growth in the power of our methods that we see in molecular biology.
I think philosophers are critical, but they still argue far too often about things they do not know how to settle. When biologists or physicists argue, nine times out of ten they can work on settling the question right away, instead of first spending time figuring out what procedure could decide it. This can make it as if philosophers aren’t critical at all: if I don’t know how to figure out which one of us is right, then your critique gives me no reason to change my position, since I don’t know whether what you just said is independent of my position. What’s worse, sometimes we argue without even trying to find a procedure that would decide among solutions. These problems are not as rampant in philosophy as they are in self-help, but those are the issues I was trying to get at.
That this is superior to a glorified Facebook “Like” button used by individuals with often highly limited familiarity with the subject matter—often consisting of having read a few blog posts by the same individual who himself has highly limited familiarity with the subject matter—should go without saying, right?
Well, there is more: do not forget that most LWers are heavily sequenced, and that is nothing to disregard. Part of my hypothesis predicting that LW will do better than Analytics is that being trained in the history of philosophy, and learning philosophical concepts through their history, inevitably makes those concepts confusing; and that is the common practice in academic philosophy. Might you say that someone could have a better understanding of quantum physics after reading the sequence than after reading and completing a quantum physics textbook for a university class? They are at least not too far off. And many friends of mine who are qualified have told me that the quantum physics sequence helped them understand quantum physics more than any class they have taken.
But either way, these posts should help us decide how far off my optimism is, and how far off your realism is. Can’t wait to argue about the results. Do you want to make any suggestions about my methods of comparison and sampling? All ears.
If they really honed their skills in crushing their opponents’ arguments, and could transmit this skill to others successfully, then we wouldn’t have so many open questions in philosophy
What is your basis for concluding this? “Philosophers are really good at demolishing unsound arguments” is compatible with “Philosophers are really bad at coming to agreement.” The primary difference between philosophy and biology that explains the ideological diversity of the former and the consensus of the latter is not that philosophers are worse critical thinkers. It is that, unlike in biology, virtually all of the evidence in philosophy is itself subject to controversy.
But either way, these posts should help us decide how far off my optimism is, and how far off your realism is. Can’t wait to argue about the results. Do you want to make any suggestions about my methods of comparison and sampling? All ears.
I’m not sure that your experiment makes any sense. What exactly are you going to be comparing? Most analytic philosophers in most articles don’t take themselves to be offering “solutions” to any problems. They take themselves to be offering detailed, specific lines of argumentation which suggest a certain conclusion, while accommodating or defusing rival lines of argumentation that have appeared in the literature. That someone here may come up with a vaguely similar position to philosopher X’s on issue Y tells us very little and ignores the meat of X’s contribution.
I am going to look for problems that Analytics say have not been solved, let LW work on them, and then ask Analytics whether they think LWers solved them. I’ll be looking for problems that have not been settled in modern philosophy with two-thirds agreement, and seeing if we can reach two-thirds agreement here.
I’ll compare all of our solutions to analytic solutions of various kinds. I’ll try to randomize the Analytics I sample as much as possible.
I predict that LWers will not be stumped by many of the problems that are considered hard in analytic philosophy, and that they will be able to reach two-thirds consensus a few orders of magnitude faster than analytic philosophers. I also predict that the Analytics, if they ever do reach two-thirds consensus, will eventually end up agreeing with us; it just takes them longer.
and then ask Analytics whether they think LWers solved them
This is the step that will probably fail. Our solutions will most likely utilize techniques like Dissolving the Question and cognitive science in general, but communicating these techniques is not easy.
Language is a positive factor in some cases. For example, lukeprog is clearer to me than Eliezer, but I learn more rapidly reading the sequences.
Another aspect: the method of dialogues helps in understanding the point, but if your objective is to understand the resolution of the problem, you have to learn the math.
What about the mixture of positivized reductionist metaphysics with Bayesian epistemology, decision theory, and correspondence theory, together practiced as self/system optimization and as philosophical method? How original is that? I’m asking.
Those all seem to be things that philosophicalish LWers often have in common.
“No, of course they were not in this new reference class which you have just now constructed in such a way as to contain only yourself.” -Severus Snape, HPMOR
Actually, it was the Sorting Hat.
Not sure.
But I’m not sure LWers share the correspondence theory much. Eliezer’s position seems to be more Peircean.
Eliezer considers himself to hold the correspondence theory (and for what it’s worth I agree, though I’m not familiar with Peirce’s position):

[A]fterward I quickly walked over to Ms. Egan and explained the correspondence theory of truth: “The sentence ‘snow is white’ is true if and only if snow is white”; if you’re using a bucket of pebbles to count sheep then an empty bucket is true if and only if the pastures are empty.
Thanks! That does seem to be his position, then. I’m tempted to write a post of objections to the correspondence theory, but… my post queue is already so long! And I’m not sure how much Eliezer and I disagree on the issue in practice.
I would love to read your critique of the correspondence theory. I’ve never been able to really imagine a non-correspondence view of truth for more than ten minutes. Every now and again I can use pragmatist goggles, but then I ask why true sentences are more useful to believe than false ones, and I come right back to correspondence.
As I understand things, EY is a hardcore correspondence dude, and so is most of the rest of LW. I’d be very interested to hear what problems you have with correspondence. I just can’t imagine a belief being rational for any reason other than its modeling reality, and I can’t imagine what it means for one statement to be truer than another without imagining that one models reality more accurately than the other. I’d love to know how you do this, if you do. Or if you could recommend a couple of texts on correspondence you sympathize with.
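Eliezer’s bucket-of-pebbles image can be turned into a toy sketch of the correspondence idea. This is purely my own illustration (the names and numbers here are not from the thread): the bucket’s claim counts as true exactly when it matches the actual state of the pasture.

```python
# Toy model of the bucket-of-pebbles analogy: the shepherd drops one pebble
# into the bucket for each sheep that leaves the pasture, and removes one
# as each sheep returns.

def bucket_claims_pasture_empty(pebbles):
    """The 'map': an empty bucket asserts that no sheep are still out."""
    return len(pebbles) == 0

sheep_out = 0   # the 'territory': every sheep has come home
pebbles = []    # the shepherd removed a pebble per returning sheep

belief = bucket_claims_pasture_empty(pebbles)  # what the bucket says
fact = (sheep_out == 0)                        # what is actually so

# Correspondence theory of truth: the belief is true iff it matches the fact.
assert belief == fact
```

If the shepherd forgets a pebble, the bucket still “says” something, but what it says no longer corresponds to the pasture, and is false in exactly the correspondence-theory sense.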
Perhaps this can be the subject of your first post in the series on philosophical questions for LW discussion.
I was thinking I would try to stay away from things already on LW. But correspondence is a big topic. I’ll include it, though I’ve already started working on something else for the first question.
Peircean as in pragmatist? I always thought EY was a correspondence theorist with a hint of redundancy theory.
He’s written stuff like:

To understand whether a belief is true, we need (only) to understand what possible states of the world would make it true or false, and then ask directly about the world.

suggesting that this got to the heart of Truth. Without any explicit mention of either correspondence or facts, I’ve always thought of this as a deflationary theory.
That is from the LW entry on truth. Pretty clearly correspondence if you ask me.