I don't know what percentage of writing that gets called "philosophy" is worthwhile, but it isn't that hard to narrow your reading down to relevant and worthwhile texts. It's really weird to see comments like this here, because so much of what I've found on Less Wrong are ideas I'd already seen in philosophy I've read. Moreover, I earned a large fraction of my karma just by repeating or synthesizing things I learned doing philosophy, and I'm not the only one who's gotten karma this way.
I find it particularly perplexing that you think it's a good idea to read only pre-WWII philosophers, since their ideas are almost always better said by contemporary authors. One of my major problems with the discipline is that it is mostly taught as history of philosophy: forcing students to struggle with the prose of a Plato translation and distill the philosophy from the mysticism, instead of just reading Bertrand Russell on universals.
so much of what I've found on Less Wrong are ideas I'd already seen in philosophy I've read. Moreover, I earned a large fraction of my karma just by repeating or synthesizing things I learned doing philosophy, and I'm not the only one who's gotten karma this way.
Examples would probably make your point much more persuasive (not that I’m saying that it’s unpersuasive, just that it feels a bit abstract right now).
Agreed. I was just being lazy.
I already didn’t believe in the Copenhagen Interpretation because of a Philosophy of Physics course where my professor took Copenhagen to be the problem statement instead of a possible solution. That whole sequence is more or less something one could find in a philosophy of physics book- though I don’t myself think it is Eliezer’s best series.
Before coming here my metaethics were already subjectivist/anti-realist. There's about a century's worth of conceptual distinctions that would make the Metaethics Sequence clearer, a few of which I've made in comments leading to constructive discussion. I feel like I'm constantly paraphrasing Hume in these discussions where people try to reason their way to a terminal value.
There is Philosophy of Math, where there was a +12 comment suggesting the discussion be better tied to academic work on the subject. My comments were well upvoted, and I was mostly just prodding Silas with the standard Platonist line plus a little Quine.
History and Philosophy of Science comes up. That discussion was basically a combination of Kuhn and Quine (plus a bunch of less recognizable names who talk about the same things).
Bayesian epistemology is itself a subfield of philosophy, but people here seem mostly unfamiliar with what academics consider to be its open problems. Multiple times I've seen comments take a couple of paragraphs to hint at the fact that logical fallibility is an open problem for Bayesian epistemology, which suggests the author hadn't even read the SEP entry on the subject. The Dutch Book post I made recently (which I admittedly didn't motivate very well) was all philosophy.
Eliezer's posts on subjective probability are all Philosophy of Probability. Eliezer and others have written more generally about epistemology, and in these cases they've almost always been repeating or synthesizing things said by people like Popper and Carnap.
On the subject of personal identity, much of what I've said comes from a few papers I wrote on the subject, and many times I've thought the discussion here would be clearer if supplemented by the concepts invented by people like Nozick and Parfit. In any case, this is a well-developed subfield.
The decision theory stuff on this site, were it to be published, would almost certainly be published in a philosophy journal.
Causality hasn't been discussed here much, except for people telling other people to read Judea Pearl (and sometimes trying to summarize him, though often poorly). I heard about Pearl's book because it argues much the same thing as Making Things Happen, which I read for a philosophy class. Woodward's book is a bit less mathy and more concerned with philosophical and conceptual issues. Nonetheless, both are fairly categorized as contemporary philosophy. Pearl may not hold a teaching position in philosophy, but he's widely cited as a philosopher, and Causality won numerous awards from philosophical institutions.
The creator of the Sleeping Beauty Problem is a philosopher.
I’m certain I’ll think of more examples after I publish this.
I would not underestimate the value of synthesizing the correct parts of philosophy vs. being exposed to a lot of philosophy.
The Bayesian epistemology stuff looks like something I should look into. The central logic of Hume was intuitively obvious to me; philosophy of math doesn't strike me as important once you convince yourself that you're allowed to do math; philosophy of science isn't important once you understand epistemology; personal identity isn't important except as it plays into ethics, which is too hard.
I’m interested in the fact that you seem to suggest that the decision theory stuff is cutting-edge level. Since that is the part I spend the most time thinking and talking about, is my activity relatively less susceptible to the scholastic critique? Is there academic philosophy that has things to say to TDT, UDT, and so on?
Since that is the part I spend the most time thinking and talking about, is my activity relatively less susceptible to the scholastic critique?
No, it makes you more susceptible: if you're actually working on a problem in the field, that's all the more reason to know the scholarly work.
Is there academic philosophy that has things to say to TDT, UDT, and so on?
Obviously, since TDT and UDT were invented like two years ago and haven't been published, academic philosophy says nothing directly about them. But there is a pretty robust literature on Causal vs. Evidential Decision Theory and Newcomb's problem. You've read Eliezer's paper, haven't you? He has a bibliography. Where did you think the issue came from? The whole thing is a philosophy problem. Also see the SEP.
"To say to" means something different from "to talk about." For example, if someone makes epistemological claim XYZ, even if no Bayesian epistemologist has refuted that exact claim, their general arguments can be used in evaluating it.
If mainstream philosophers had come up with a decision theory better than evidential and causal (which are both wrong), then people who had already surpassed EDT and CDT would be forced to read them. But if they haven’t, then lesswrong has already surpassed the limit of the philosophical literature. That’s what I’m asking.
I will look at the SEP when I next have time.
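For concreteness, the standard calculation on which EDT and CDT come apart is Newcomb's problem. A minimal sketch, using the usual $1,000,000/$1,000 payoffs and an illustrative (assumed) predictor accuracy:

```python
# Newcomb's problem: the predictor puts $1,000,000 in the opaque box
# iff it predicts you will one-box; the transparent box holds $1,000.
ACCURACY = 0.99  # assumed predictor accuracy (illustrative)

def edt_value(action):
    # EDT treats the chosen action as evidence about the prediction.
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    extra = 1_000 if action == "two-box" else 0
    return extra + p_full * 1_000_000

def cdt_value(action, p_full):
    # CDT holds the already-made prediction fixed: taking the second
    # box causally adds $1,000 whatever the opaque box contains.
    extra = 1_000 if action == "two-box" else 0
    return extra + p_full * 1_000_000

# EDT recommends one-boxing...
assert edt_value("one-box") > edt_value("two-box")
# ...while for CDT two-boxing dominates at every credence in a full box.
for p in (0.0, 0.5, 1.0):
    assert cdt_value("two-box", p) > cdt_value("one-box", p)
```

Each theory's verdict looks compelling from inside its own formalism, which is why the two-literature stalemate exists at all.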
No, it makes you more susceptible: if you're actually working on a problem in the field, that's all the more reason to know the scholarly work.
You think that the one who ignores the literature while working on a problem that is unsolved in the literature is more blameworthy than one who ignores the literature while working on a problem that is solved in the literature?
I suppose it is about the same. I think anyone working on a problem while not knowing whether it has been solved, partly solved, or not solved at all in the literature is very blameworthy.
For example, if someone makes epistemological claim XYZ, even if no Bayesian epistemologist has refuted that exact claim, their general arguments can be used in evaluating the claim.
Right, I don’t know the field nearly well enough to answer this question. I would be surprised if nothing in the literature was a generalizable concern that TDT/UDT should deal with.
If mainstream philosophers had come up with a decision theory better than evidential and causal (which are both wrong), then people who had already surpassed EDT and CDT would be forced to read them. But if they haven’t, then lesswrong has already surpassed the limit of the philosophical literature. That’s what I’m asking.
There have been lots of attempts to solve Newcomb's problem: by amending EDT or CDT, or by inventing a new decision theory. Many, perhaps most, of these use concepts related to TDT/UDT, such as possible worlds, counterfactuals, and Jeffrey's notion of ratifiability (all three are mentioned in Eliezer's paper). Again, I don't know the details of the major proposals, though skimming the literature it looks like none have been conclusive or totally convincing. But it seems plausible that the arguments which sink those theories might also sink the ones developed on Less Wrong. It also seems very plausible that the theoretical innovations involved in those theories would be fruitful things for LW decision theorists to consider.
There have also been lots of things written about Newcomb's problem: papers that don't claim to solve anything but claim to point out interesting features of the problem.
I don’t really understand the resistance to reading the literature. Why would you think insight in this subject area would be restricted to a cloistered little internet community (wonderful though we are)?
I suppose it is about the same. I think anyone working on a problem while not knowing whether it has been solved, partly solved, or not solved at all in the literature is very blameworthy.
I was previously aware that Newcomb’s problem was somewhere between partly solved and not solved at all, which is at least something. With the critique brought to my attention, I attempted cheap ways of figuring it out, first asking you and then reading the SEP article on your recommendation.
Right, I don’t know the field nearly well enough to answer this question. I would be surprised if nothing in the literature was a generalizable concern that TDT/UDT should deal with.
That is a point.
I also didn't say what I really wanted to say, which is: if I read someone advocating a non-Bayesian epistemology, I react, "This is gibberish. Come back to me once you've understood Bayesian epistemology and either adopted it or come up with a good counterargument." The same is true of the is-ought distinction: an insight which is obviously fundamental to further analysis in its field.
Reflective consistency, the question of why you build an agent with a Could-Should Architecture, Updateless decision theory—these seem like those kinds of insights in decision theory. Nothing on the SEP page (most of which I'd seen before, in the TDT paper or Wikipedia or wherever) seemed like that. I presume that if philosophers had insights like that, they would put them on the page.
I conclude (with two pretty big ifs) that while philosophers have insights, they don’t have very good insights.
I don’t really understand the resistance to reading the literature. Why would you think insight in this subject area would be restricted to a cloistered little internet community (wonderful though we are)?
I freely admit to some motivated cognition here. Reading papers is not fun, or, at least, less fun than thinking about problems, while believing that insight is restricted to a cloistered community is fun.
You make claim X; I see possible counterargument Y; responding argumentatively with Y is a good way to see whether you have any data on Y that sheds light on the specifics of X.
Knowing what I know about academic philosophy and the minds behind Less Wrong's take on decision theory, that strikes me as totally possible.
Reflective consistency, the question of why you build an agent with a Could-Should Architecture, Updateless decision theory—these seem like those kinds of insights in decision theory. Nothing on the SEP page (most of which I'd seen before, in the TDT paper or Wikipedia or wherever) seemed like that. I presume that if philosophers had insights like that, they would put them on the page.
Well, presumably you find Nozick's work formulating Newcomb's and Solomon's problems insightful. Less Wrong's decision theory work isn't sui generis. I suspect a number of things on that page are insightful solutions to problems you hadn't considered. That some of them are made in the context of CDT might make them less useful to those seeking to break from CDT, but it doesn't make them less insightful. Keep in mind, this is the SEP page on Causal Decision Theory, not on Newcomb's problem or any other decision theory problem; it's going to be a lot of people defending two-boxing. And it's an encyclopedia article, which means there isn't a lot of room to motivate or explain the proposals in detail. To see Eliezer's insights into decision theory it really helps to read his paper, not just his blog posts. The same goes for other philosophers. I just linked to the SEP because it was convenient and I was trying to show that, yes, philosophers do have things to say about this. If you want more targeted material you're gonna have to get access to an article database and do a few searches.
Also, keep in mind that if you don't care about AI, decision theory is a pretty parochial concern. If Eliezer published his TDT paper it wouldn't make him famous or anything. Expecting all the insights on a subject to show up in an online encyclopedia article about an adjacent subject is unrealistic.
From what I see on the SEP page, ratification in particular seems insightful and capable of doing some of the same things TDT does. The Death in Damascus/decision instability problem is something for TDT/UDT to address.
In general, I'm not at all equipped to give you a guided tour of the philosophical literature. I know only the vaguest outline of the subfield. All I know is that if I were really interested in a problem and someone told me, "Look, over here there's a bunch of papers written by people from the moderately intelligent to the genius on your subject and closely related subjects," I'd be like "AWESOME! OUT OF MY WAY!" Even if you don't find any solutions to your problems, the way other people formulate the problems is likely to provoke insights.
I conclude (with two pretty big ifs) that while philosophers have insights, they don’t have very good insights.
Concluding anything about philosophers' insights when you haven't read any papers and two days ago you weren't aware there were any papers is a bit absurd.
Knowing what I know about academic philosophy and the minds behind Less Wrong's take on decision theory, that strikes me as totally possible.
As far as I can tell you don’t know much at all about academic philosophy. As for the minds behind the LW take on decision theory, I’m not sure what it is they’ve accomplished besides writing some insightful things about decision theory.
I’m not really interested in decision theory. It is one of several fun things I like to think about. To demonstrate an extreme version of this attitude, I am thinking about a math problem right now. I know that there is a solution in the literature—someone told me. I do not plan to find that solution in the literature.
Now, I am more interested in getting the correct answer vs. finding the answer myself in decision theory than that. But the primary reason I think about decision theory is not because I want to know the answer. So if someone was like, “here’s a paper that I think contains important insights on this problem,” I’d read it, but if they were like, “here’s a bunch of papers written by a community whose biases you find personally annoying and do not think are conducive to solving this particular problem, some of which probably contain some insights,” I’ll be more wary.
It should be noted that I do agree with your point to some extent, which is why we are having this discussion.
Well, presumably you find Nozick's work formulating Newcomb's and Solomon's problems insightful.
Indeed.
I suspect a number of things on that page are insightful solutions to problems you hadn’t considered.
That did not appear to be the case when I looked.
Keep in mind- this is the SEP page on Causal Decision Theory, not Newcomb’s problem or any other decision theory problem.
which you linked to because, AFAICT, it is one of only three SEP pages that mentions Newcomb’s Problem, two of which I have read the relevant parts of and one of which I will soon.
To see Eliezer’s insights into decision theory it really helps to read his paper, not just his blog posts.
To see that he has insights, you just need to read his blog posts, although to be fair many of the ideas get less than a lesswrong-length post of explanation.
Expecting all the insights on a subject to show up in an online encyclopedia article about an adjacent subject is unrealistic.
I’d expect the best ones to.
In general, I’m not at all equipped to give you a guided tour of the philosophical literature.
It seems like, once I exhaust your limited but easily-accessible knowledge, which seems like about now, I should look up philosophical decision theory papers at the same leisurely pace I think about decision theory. My university should have some sort of database.
From what I see on the SEP page, ratification in particular seems insightful and capable of doing some of the same things TDT does.
It seems like it does just the wrong thing to me. For example, it two-boxes on Newcomb’s problem.
However, the amount of sense it seems to make leads me to suspect that I don’t understand it. When I have time, I will read the appropriate paper(s?) until I’m certain I understand what he means.
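For what it's worth, the sense in which ratification endorses two-boxing can be sketched directly. This is only a toy rendering of Jeffrey's idea, with an assumed, illustrative predictor accuracy:

```python
ACCURACY = 0.99  # assumed predictor accuracy (illustrative)

def payoff(action, predicted):
    # Opaque box is full iff the predictor foresaw one-boxing.
    full = 1_000_000 if predicted == "one-box" else 0
    extra = 1_000 if action == "two-box" else 0
    return full + extra

def ratifiable(decision):
    # Jeffrey: a decision is ratifiable if, given the news that you have
    # made it (so the predictor probably foresaw it), no alternative act
    # has higher expected payoff.
    p_onebox_predicted = ACCURACY if decision == "one-box" else 1 - ACCURACY
    def ev(action):
        return (p_onebox_predicted * payoff(action, "one-box")
                + (1 - p_onebox_predicted) * payoff(action, "two-box"))
    return all(ev(decision) >= ev(a) for a in ("one-box", "two-box"))

# Given you've decided to one-box, the box is probably full, and grabbing
# both boxes would pay $1,000 more: one-boxing fails to ratify itself.
assert not ratifiable("one-box")
assert ratifiable("two-box")
```

So on this toy model only two-boxing is ratifiable, which is exactly the behavior objected to above.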
The Death in Damascus/decision instability problem is something for TDT/UDT to address.
TDT and UDT as currently formulated would make the correct counterfactual prediction:
“If I go to Damascus, I’ll die, if I go to Aleppo, I die, if I use a source of bits that Death doesn’t have access to, I’ll live with probability 1⁄2.”
which avoids decision instability; in general, TDT and UDT don't let you consider your decisions in view of your decisions.
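A toy simulation of that counterfactual claim, assuming Death predicts any deterministic choice perfectly but cannot read an independent coin (the `coin` policy name is my own label):

```python
import random

def survives(policy, rng):
    # Death waits wherever it predicts you will be. It predicts any
    # deterministic policy perfectly; a coin it has no access to, it
    # can at best match with an independent 50/50 guess.
    if policy in ("damascus", "aleppo"):
        return False  # perfect prediction: Death is where you go
    my_city = rng.choice(["damascus", "aleppo"])
    death_city = rng.choice(["damascus", "aleppo"])
    return my_city != death_city

rng = random.Random(0)
trials = 100_000
rate = sum(survives("coin", rng) for _ in range(trials)) / trials
assert not survives("damascus", rng) and not survives("aleppo", rng)
assert abs(rate - 0.5) < 0.01  # the coin-flipper lives about half the time
```

Both deterministic policies die with certainty, while the unpredictable mixture survives with probability about 1/2, matching the prediction quoted above.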
Concluding anything about philosophers' insights when you haven't read any papers and two days ago you weren't aware there were any papers is a bit absurd.
I was aware of the existence of papers, and I knew some of the main ideas that were contained in them.
As far as I can tell you don’t know much at all about academic philosophy.
There is something about academic philosophy that is not conducive to coming to conclusions about problems and then moving on to other, harder problems, at nearly the rate many other academic disciplines do. Clearly some of this is because philosophy is hard, but some of it is also the collective irrationality of philosophers.
I don’t know as much as I should. I know some.
As for the minds behind the LW take on decision theory, I’m not sure what it is they’ve accomplished besides writing some insightful things about decision theory.
Writing up a large collection of true statements of philosophy that contains very few false statements of philosophy, while not much of an achievement, is an indicator of what I think is the right attitude, especially for problems like decision theory.
AI theory is also an enormous intuition pump for this type of problem.
I mean, Christ, consider the outside view!
Considering the outside view leads me to two conclusions:
1. You're right.
2. The best way to make progress on DT is, if possible, to get our ideas published, allowing TDT and academic philosophy's ideas to mingle and recombine into superior ideas in the minds of more than O(5) people. Alternately, if TDT sucks, then attempting to publish will lead academic philosophers to produce strong arguments for why TDT sucks, which will also help figure out the problem.
I believe my current planned actions WRT reading philosophy papers are sufficient to cover the outside and inside evidence for 1, and I'm trying to figure out whether there are better strategies than Eliezer's current one for 2, and what the costs are.
I don’t know what percentage of writing that gets called “philosophy” is worthwhile but it isn’t that hard to narrow your reading material down to relevant and worthwhile texts. It’s really weird to see comments like this here because so much of what I’ve found on Less Wrong are ideas I’ve seen previously in philosophy I’ve read. Moreover, a large fraction of my karma I got just by repeating or synthesizing things I learned doing philosophy- and I’m not the only one whose gotten karma this way.
I find it particularly perplexing that you think it’s a good idea to only read pre-WWII philosophers as their ideas are almost always better said by contemporary authors. One of my major problems with the discipline is that it is mostly taught by doing history of philosophy- forcing students to struggle with the prose of a Plato translation and distilling the philosophy from the mysticism instead of just reading Bertrand Russell on universals.
Examples would probably make your point much more persuasive (not that I’m saying that it’s unpersuasive, just that it feels a bit abstract right now).
Agreed. I was just being lazy.
I already didn’t believe in the Copenhagen Interpretation because of a Philosophy of Physics course where my professor took Copenhagen to be the problem statement instead of a possible solution. That whole sequence is more or less something one could find in a philosophy of physics book- though I don’t myself think it is Eliezer’s best series.
Before coming here my metaethics were already subjectivist/anti-realist. There’s about a century’s worth of conceptual distinctions that would make the Metaethics Sequence clearer- a few of which I’ve made in comments leading to constructive discussion. I feel like I’m constantly paraphrasing Hume in these discussions where people try to reason their way to a terminal value.
There is Philosophy of Math, where there was a +12 comment suggesting the suggestion be better tied to academic work on the subject. My comments were well upvoted and I was mostly just prodding Silas with the standard Platonist line plus a little Quine.
History and Philosophy of Science comes up. That discussion was basically a combination of Kuhn and Quine (plus a bunch of less recognizable names who talk about the same things).
Bayesian epistemology is, itself, a subfield of philosophy but people here seem mostly unfamiliar with the things academics consider to be open problems. Multiple times I’ve seen comments that take a couple paragraphs to hint at the fact that logical fallibility is an open problem for Bayesian epistemology- which suggests the author hadn’t even read the SEP entry on the subject. The Dutch Book post I made recently (which I admittedly didn’t motivate very well) was all philosophy.
Eliezer’s posts on subjective probability are all Philosophy of Probability. Eliezer and other have written more generally about epistemology and in these cases they’ve almost always been repeating or synthesizing things said by people like Popper and Carnap.
On the subject of personal identity much of what I’ve said comes from a few papers I wrote on the subject and many times I’ve thought the discussion here would be clearer if supplemented by the concepts invented by people like Nozick and Parfit. In any case, this is a well developed subfield.
The decision theory stuff on this site, were it to be published, would almost certainly be published in a philosophy journal.
Causality hasn’t been discussed here much except for people telling other people to read Judea Pearl (and sometimes people trying to summarize him, though often poorly). I heard about Pearl’s book because it argues much the same thing as Making Things Happen which I read for a philosophy class. Woodward’s book is a bit less mathy and more concerned with philosophical and conceptual issues. Nonetheless, both are fairly categorized as contemporary philosophy. Pearl may not hold a teaching position in philosophy- but he’s widely cited as one and Causality won numerous awards from philosophical institutions.
The creator of the Sleeping Beauty Problem is a philosopher.
I’m certain I’ll think of more examples after I publish this.
I would not underestimate the value of synthesizing the correct parts of philosophy vs. being exposed to a lot of philosophy.
The Bayesian epistemology stuff looks like something I should look into. The central logic of Hume was intuitively obvious to me, philosophy of math doesn’t strike me as important once you convince yourself that you’re allowed to do math, philosophy of science isn’t important once you understand epistemology, personal identity isn’t important except as it plays into ethics, which is too hard.
I’m interested in the fact that you seem to suggest that the decision theory stuff is cutting-edge level. Since that is the part I spend the most time thinking and talking about, is my activity relatively less susceptible to the scholastic critique? Is there academic philosophy that has things to say to TDT, UDT, and so on?
No, it makes you more susceptible- if you’re actually working on a problem in the field that’s all the more reason to know the scholarly work.
Obviously, since TDT and UDT were invented like two years ago and haven’t been published, academic philosophy says nothing directly about them. But there is a pretty robust literature on Causal vs. Evidential Decision theory and Newcomb’s problem. You’ve read Eliezer’s paper haven’t you? He has a bibliography. Where did you think the issue came from? The whole thing is a philosophy problem. Also see the SEP.
“To say to” means something different then “to talk about”. For example, if someone makes epistemological claim XYZ, even if no Bayesian epistemologist has refuted that exact claim, their general arguments can be used in evaluating the claim.
If mainstream philosophers had come up with a decision theory better than evidential and causal (which are both wrong), then people who had already surpassed EDT and CDT would be forced to read them. But if they haven’t, then lesswrong has already surpassed the limit of the philosophical literature. That’s what I’m asking.
I will look at the SEP when I next have time.
You think that the one who ignores the literature while working on a problem that is unsolved in the literature is more blameworthy than one who ignores the literature while working on a problem that is solved in the literature?
I suppose it is about the same. I think anyone working on a problem while not knowing if it has been solved, partly solved or not solved at all in the literature is very blameworthy.
Right, I don’t know the field nearly well enough to answer this question. I would be surprised if nothing in the literature was a generalizable concern that TDT/UDT should deal with.
There have been lots of attempts to solve Newcomb’s problem-by amending EDT or CDT, or inventing a new decision theory. Many, perhaps most of these, use concepts related to TDT/UDT- possible worlds, counterfactuals, and Jeffrey’s notion of ratifiability (all three of these concepts are mentioned in Eliezer’s paper). Again, I don’t know the details of the major proposals, though skimming the literature it looks like none have been conclusive or totally convincing. But it seems plausible that the arguments which sink those theories might also sink the Less Wrong developed ones. It also seems very plausible that the theoretical innovations involved in those theories might be fruitful things for LW decision theorists to consider.
There have also been lots of things written about Newcomb’s problem- papers that don’t claim to solve anything but which claim to point out interesting features of this problem.
I don’t really understand the resistance to reading the literature. Why would you think insight in this subject area would be restricted to a cloistered little internet community (wonderful though we are)?
I was previously aware that Newcomb’s problem was somewhere between partly solved and not solved at all, which is at least something. With the critique brought to my attention, I attempted cheap ways of figuring it out, first asking you and then reading the SEP article on your recommendation.
That is a point.
I also didn’t say what I think I really wanted to say, which is that: If I read someone advocating a non-Bayesian epistemology, I react: “This is gibberish. Come back to me once you’ve understood Bayesian epistemology and adopted it or come up with a good counterargument.” The same thing is true of the is-ought distinction: An insight which is obviously fundamental to further analysis in its field.
Reflective consistency, the question of why you build an agent with a Could-Should Architecture, Updateless decision theory—these seem like those kinds of insights in decision theory. Nothing on the SEP page (most of which I’d seen before, in the TDT paper or wikipedia or whatever), seemed like that. I presume that if philosophers had insights like that, they would put them on the page.
I conclude (with two pretty big ifs) that while philosophers have insights, they don’t have very good insights.
I freely admit to some motivated cognition here. Reading papers is not fun, or, at least, less fun than thinking about problems, while believing that insight is restricted to a cloistered community is fun.
You make claim X, I see possible counterargument Y, responding argumentatively with Y is a good way to see whether you have any data on Y that sheds light on the specifics of X.
Knowning what I know about academic philosophy and the minds behind lesswrong’s take on decision theory, that strikes me as totally possible.
Well presumably you find Nozick’s work, formulating Newcomb’s and Solomon’s problems insightful. Less Wrong’s decision theory work isn’t sui generis. I suspect a number of things on that page are insightful solutions to problems you hadn’t considered. That some of them are made in the context of CDT might make them less useful to those seeking to break from CDT, but it doesn’t make them less insightful. Keep in mind- this is the SEP page on Causal Decision Theory, not Newcomb’s problem or any other decision theory problem. It’s going to be a lot of people defending two-boxing. And it’s an encyclopedia article, which means there isn’t a lot of room to motivate or explain in detail the proposals. To see Eliezer’s insights into decision theory it really helps to read his paper, not just his blog posts. Same goes for other philosophers. I just linked to to the SEP because it was convenient and I was trying to show that yes, philosophers do have things to say about this. If you want more targeted material you’re gonna have to get access to an article database and do a few searches.
Also, keep in mind that if you don’t care about AI decision theory is a pretty parochial concern. If Eliezer published his TDT paper it wouldn’t make him famous or anything. Expecting all the insights on a subject to show up in an online encyclopedia article about an adjacent subject is unrealistic.
From what I see on the SEP page ratification, in particular seems insightful and capable of doing some of the same things TDT does. The Death in Damascus/decision instability problem is something for TDT/UDT to address.
In general, I’m not at all equipped to give you a guided tour of the philosophical literature. I know only the vaguest outline of the subfield. All I know is that if I was really interested in a problem and someone told me “Look, over here there’s a bunch of papers written by people from the moderately intelligent to the genius on your subject and closely related subjects” I’d be like “AWESOME! OUT OF MY WAY!”. Even if you don’t find any solutions to your problems, the way other people formulate the problems is likely to provoke insights.
Concluding anything about philosophers’ insights when you haven’t read any papers, and two days ago weren’t aware there were any papers, is a bit absurd.
As far as I can tell you don’t know much at all about academic philosophy. As for the minds behind the LW take on decision theory, I’m not sure what it is they’ve accomplished besides writing some insightful things about decision theory.
I mean, Christ, consider the outside view!
I’m not really interested in decision theory. It is one of several fun things I like to think about. To demonstrate an extreme version of this attitude, I am thinking about a math problem right now. I know that there is a solution in the literature—someone told me. I do not plan to find that solution in the literature.
Now, in decision theory I am more interested in getting the correct answer, as opposed to finding the answer myself, than I am with that math problem. But the primary reason I think about decision theory is not that I want to know the answer. So if someone said, “here’s a paper that I think contains important insights on this problem,” I’d read it; but if they said, “here’s a bunch of papers written by a community whose biases you find personally annoying and do not think are conducive to solving this particular problem, some of which probably contain some insights,” I’d be more wary.
It should be noted that I do agree with your point to some extent, which is why we are having this discussion.
Indeed.
That did not appear to be the case when I looked.
which you linked to because, AFAICT, it is one of only three SEP pages that mention Newcomb’s Problem, two of which I have read the relevant parts of and one of which I will soon.
To see that he has insights, you just need to read his blog posts, although to be fair many of the ideas get less than a LessWrong-length post of explanation.
I’d expect the best ones to.
It seems like, once I exhaust your limited but easily-accessible knowledge, which seems like about now, I should look up philosophical decision theory papers at the same leisurely pace I think about decision theory. My university should have some sort of database.
It seems to me to do just the wrong thing. For example, it two-boxes on Newcomb’s problem.
However, the amount of sense it seems to make leads me to suspect that I don’t understand it. When I have time, I will read the appropriate paper(s?) until I’m certain I understand what he means.
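To make the two-boxing complaint concrete, here is a toy expected-value sketch of Newcomb’s problem. The payoffs ($1M opaque box, $1k transparent box) are the standard hypothetical numbers, and the predictor accuracy is an assumed parameter, not anything from this thread:

```python
# Toy illustration: expected payoff in Newcomb's problem, assuming the
# standard payoffs and a predictor that guesses your choice with
# probability `accuracy`. (All numbers are the usual hypotheticals.)

def expected_payoff(one_box: bool, accuracy: float = 0.99) -> float:
    """Expected dollars for a one-boxer or two-boxer."""
    if one_box:
        # The opaque box contains $1M iff the predictor foresaw one-boxing.
        return accuracy * 1_000_000
    else:
        # You always get the visible $1k; the $1M is there only if
        # the predictor mistakenly expected you to one-box.
        return 1_000 + (1 - accuracy) * 1_000_000

print(expected_payoff(True))   # one-boxing:  ≈ 990,000
print(expected_payoff(False))  # two-boxing:  ≈ 11,000
```

At any accuracy above about 50.05%, the one-boxer's expectation dominates, which is the intuition behind objecting to a theory that two-boxes.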
TDT and UDT as currently formulated would make the correct counterfactual prediction:
“If I go to Damascus, I’ll die; if I go to Aleppo, I’ll die; if I use a source of bits that Death doesn’t have access to, I’ll live with probability 1/2.”
which avoids decision instability; in general, these theories don’t let you evaluate your decision in light of that same decision.
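The 1/2 figure can be checked with a minimal simulation. The model is an assumption for illustration: Death perfectly predicts any deterministic choice, but cannot predict a fair coin flipped at decision time:

```python
import random

# Sketch of Death in Damascus. Assumption: Death perfectly predicts any
# deterministic rule the agent uses, but has no access to the agent's
# coin, so against a coin-flipper Death can only pick a city blindly.

def survives(strategy: str, rng: random.Random) -> bool:
    if strategy == "deterministic":
        choice = "Damascus"      # any fixed rule will do;
        death_waits_at = choice  # Death predicts it perfectly
    else:  # "coin": Death's guess and the agent's flip are independent
        death_waits_at = rng.choice(["Damascus", "Aleppo"])
        choice = rng.choice(["Damascus", "Aleppo"])
    return choice != death_waits_at

rng = random.Random(0)
trials = 10_000
det_rate = sum(survives("deterministic", rng) for _ in range(trials)) / trials
coin_rate = sum(survives("coin", rng) for _ in range(trials)) / trials
print(det_rate)   # 0.0 — Death always finds you
print(coin_rate)  # ≈ 0.5 — independent guesses match half the time
```

The deterministic agent dies every time, while the randomizing agent survives about half the time, matching the counterfactual prediction quoted above.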
I was aware of the existence of papers, and I knew some of the main ideas that were contained in them.
There is something about academic philosophy that is not conducive to coming to conclusions about problems, and then moving on to other, harder problems, at nearly the rate many other academic disciplines do. Clearly some of this stems from philosophy being hard, but some of it also stems from the collective irrationality of philosophers.
I don’t know as much as I should. I know some.
Writing up a large collection of true philosophical statements that contains very few false ones, while not much of an achievement, is an indicator of what I think is the right attitude, especially for problems like decision theory.
AI theory is also an enormous intuition pump for this type of problem.
Considering the outside view leads me to two conclusions:
1. You’re right.
2. The best way to make progress on DT is, if possible, to get our ideas published, thus allowing TDT and academic philosophy’s ideas to mingle and recombine into superior ideas in the minds of more than O(5) people. Alternatively, if TDT sucks, then attempting to do this will lead academic philosophers to produce strong arguments for why TDT sucks, which will also help figure out the problem.
I believe my current planned actions WRT reading philosophy papers are sufficient to cover the outside and inside evidence for 1, and I’m trying to figure out whether there are better strategies than Eliezer’s current one for 2, and what the costs are.