I suppose it is about the same. I think anyone working on a problem while not knowing if it has been solved, partly solved or not solved at all in the literature is very blameworthy.
I was previously aware that Newcomb’s problem was somewhere between partly solved and not solved at all, which is at least something. With the critique brought to my attention, I attempted cheap ways of figuring it out, first asking you and then reading the SEP article on your recommendation.
Right, I don’t know the field nearly well enough to answer this question. I would be surprised if nothing in the literature was a generalizable concern that TDT/UDT should deal with.
That is a point.
I also didn’t say what I think I really wanted to say, which is this: if I read someone advocating a non-Bayesian epistemology, I react: “This is gibberish. Come back to me once you’ve understood Bayesian epistemology and either adopted it or come up with a good counterargument.” The same is true of the is-ought distinction: an insight which is obviously fundamental to further analysis in its field.
Reflective consistency, the question of why you build an agent with a Could-Should Architecture, Updateless decision theory—these seem like those kinds of insights in decision theory. Nothing on the SEP page (most of which I’d seen before, in the TDT paper or wikipedia or whatever), seemed like that. I presume that if philosophers had insights like that, they would put them on the page.
I conclude (with two pretty big ifs) that while philosophers have insights, they don’t have very good insights.
I don’t really understand the resistance to reading the literature. Why would you think insight in this subject area would be restricted to a cloistered little internet community (wonderful though we are)?
I freely admit to some motivated cognition here. Reading papers is not fun, or, at least, less fun than thinking about problems, while believing that insight is restricted to a cloistered community is fun.
You make claim X; I see possible counterargument Y; responding argumentatively with Y is a good way to see whether you have any data on Y that sheds light on the specifics of X.
Knowing what I know about academic philosophy and the minds behind Less Wrong’s take on decision theory, that strikes me as totally possible.
Reflective consistency, the question of why you build an agent with a Could-Should Architecture, Updateless decision theory—these seem like those kinds of insights in decision theory. Nothing on the SEP page (most of which I’d seen before, in the TDT paper or wikipedia or whatever), seemed like that. I presume that if philosophers had insights like that, they would put them on the page.
Well, presumably you find Nozick’s work formulating Newcomb’s and Solomon’s problems insightful. Less Wrong’s decision theory work isn’t sui generis. I suspect a number of things on that page are insightful solutions to problems you hadn’t considered. That some of them are made in the context of CDT might make them less useful to those seeking to break from CDT, but it doesn’t make them less insightful. Keep in mind: this is the SEP page on Causal Decision Theory, not Newcomb’s problem or any other decision theory problem. It’s going to be a lot of people defending two-boxing. And it’s an encyclopedia article, which means there isn’t a lot of room to motivate or explain the proposals in detail. To see Eliezer’s insights into decision theory it really helps to read his paper, not just his blog posts. Same goes for other philosophers. I just linked to the SEP because it was convenient and I was trying to show that yes, philosophers do have things to say about this. If you want more targeted material you’re gonna have to get access to an article database and do a few searches.
Also, keep in mind that if you don’t care about AI, decision theory is a pretty parochial concern. If Eliezer published his TDT paper, it wouldn’t make him famous or anything. Expecting all the insights on a subject to show up in an online encyclopedia article about an adjacent subject is unrealistic.
From what I see on the SEP page, ratification in particular seems insightful and capable of doing some of the same things TDT does. The Death in Damascus/decision instability problem is something for TDT/UDT to address.
In general, I’m not at all equipped to give you a guided tour of the philosophical literature. I know only the vaguest outline of the subfield. All I know is that if I were really interested in a problem and someone told me “Look, over here there’s a bunch of papers written by people ranging from the moderately intelligent to the genius on your subject and closely related subjects,” I’d be like “AWESOME! OUT OF MY WAY!” Even if you don’t find any solutions to your problems, the way other people formulate the problems is likely to provoke insights.
I conclude (with two pretty big ifs) that while philosophers have insights, they don’t have very good insights.
Concluding anything about philosophers’ insights when you haven’t read any papers and two days ago weren’t aware there were any papers is a bit absurd.
Knowing what I know about academic philosophy and the minds behind Less Wrong’s take on decision theory, that strikes me as totally possible.
As far as I can tell you don’t know much at all about academic philosophy. As for the minds behind the LW take on decision theory, I’m not sure what it is they’ve accomplished besides writing some insightful things about decision theory.
I’m not really interested in decision theory. It is one of several fun things I like to think about. To demonstrate an extreme version of this attitude, I am thinking about a math problem right now. I know that there is a solution in the literature—someone told me. I do not plan to find that solution in the literature.
Now, in decision theory I am more interested in getting the correct answer than in finding the answer myself. But the primary reason I think about decision theory is not that I want to know the answer. So if someone said, “here’s a paper that I think contains important insights on this problem,” I’d read it, but if they said, “here’s a bunch of papers written by a community whose biases you find personally annoying and don’t think are conducive to solving this particular problem, some of which probably contain some insights,” I’d be more wary.
It should be noted that I do agree with your point to some extent, which is why we are having this discussion.
Well, presumably you find Nozick’s work formulating Newcomb’s and Solomon’s problems insightful.
Indeed.
I suspect a number of things on that page are insightful solutions to problems you hadn’t considered.
That did not appear to be the case when I looked.
Keep in mind: this is the SEP page on Causal Decision Theory, not Newcomb’s problem or any other decision theory problem.
which you linked to because, AFAICT, it is one of only three SEP pages that mention Newcomb’s Problem, two of which I have read the relevant parts of and one of which I will read soon.
To see Eliezer’s insights into decision theory it really helps to read his paper, not just his blog posts.
To see that he has insights, you just need to read his blog posts, although to be fair many of the ideas get less than a lesswrong-length post of explanation.
Expecting all the insights on a subject to show up in an online encyclopedia article about an adjacent subject is unrealistic.
I’d expect the best ones to.
In general, I’m not at all equipped to give you a guided tour of the philosophical literature.
It seems like, once I exhaust your limited but easily-accessible knowledge, which seems like about now, I should look up philosophical decision theory papers at the same leisurely pace I think about decision theory. My university should have some sort of database.
From what I see on the SEP page, ratification in particular seems insightful and capable of doing some of the same things TDT does.
It seems like it does just the wrong thing to me. For example, it two-boxes on Newcomb’s problem.
However, the amount of sense it seems to make leads me to suspect that I don’t understand it. When I have time, I will read the appropriate paper(s?) until I’m certain I understand what he means.
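For context, the reason two-boxing looks like “just the wrong thing” is the expected-value gap under the standard setup. A toy calculation, assuming the usual $1M/$1k payoffs and an illustrative 99%-accurate predictor (numbers chosen for illustration only), treating accuracy as the probability that the opaque box matches your actual choice:

```python
def newcomb_ev(accuracy=0.99, big=1_000_000, small=1_000):
    """Expected payoff of each action, reading the predictor's
    accuracy as the probability the opaque box matches your choice
    (the one-boxer's evidential reading of the problem)."""
    ev_one_box = accuracy * big                # box is full iff one-boxing was predicted
    ev_two_box = (1 - accuracy) * big + small  # box is full only on a mispredict
    return ev_one_box, ev_two_box

one, two = newcomb_ev()
print(one, two)  # one-boxing wins by a wide margin
```

On this reading, one-boxing yields about $990,000 to two-boxing’s $11,000; a theory that nonetheless ratifies two-boxing has to argue that this comparison is computed the wrong way.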
The Death in Damascus/decision instability problem is something for TDT/UDT to address.
TDT and UDT as currently formulated would make the correct counterfactual prediction:
“If I go to Damascus, I’ll die; if I go to Aleppo, I’ll die; if I use a source of bits that Death doesn’t have access to, I’ll live with probability 1/2.”
This avoids decision instability; in general, these theories don’t let you condition your decision on that same decision.
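The random-bit claim can be checked with a toy simulation. The sketch below models only the coin-flip strategy (the deterministic cases are certain death by assumption, since Death predicts any deterministic policy perfectly); Death’s “prediction” of a bit he has no access to can be no better than a guess:

```python
import random

def simulate(trials=100_000):
    """Death in Damascus with a randomizing agent: the agent picks a
    city by flipping a fair coin Death cannot access, so Death's
    prediction of that city is itself an uninformed guess. The agent
    survives whenever the two picks differ."""
    survived = 0
    for _ in range(trials):
        agent = random.choice(["Damascus", "Aleppo"])  # coin Death can't see
        death = random.choice(["Damascus", "Aleppo"])  # Death's blind guess
        if agent != death:
            survived += 1
    return survived / trials

print(simulate())  # ≈ 0.5
```

This matches the counterfactual above: either fixed city means certain death, while the unpredictable bit yields survival with probability 1/2.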
Concluding anything about philosophers’ insights when you haven’t read any papers and two days ago weren’t aware there were any papers is a bit absurd.
I was aware of the existence of papers, and I knew some of the main ideas that were contained in them.
As far as I can tell you don’t know much at all about academic philosophy.
There is something about academic philosophy that is not conducive to coming to conclusions about problems and then moving on to other, harder problems, at nearly the rate many other academic disciplines do. Clearly some of this is because philosophy is hard, but some of it is also due to the collective irrationality of philosophers.
I don’t know as much as I should. I know some.
As for the minds behind the LW take on decision theory, I’m not sure what it is they’ve accomplished besides writing some insightful things about decision theory.
Writing up a large collection of true statements of philosophy that contains very few false statements of philosophy, while not much of an achievement, is an indicator of what I think is the right attitude, especially for problems like decision theory.
AI theory is also an enormous intuition pump for this type of problem.
I mean, Christ, consider the outside view!
Considering the outside view leads me to two conclusions:
1. You’re right.
2. The best way to make progress on DT is, if possible, to get our ideas published, thus allowing TDT and academic philosophy’s ideas to mingle and recombine into superior ideas in the minds of more than O(5) people. Alternatively, if TDT sucks, then attempting to do this will lead academic philosophers to produce strong arguments for why TDT sucks, which will also help figure out the problem.
I believe my current planned actions WRT reading philosophy papers are sufficient to cover the outside and inside evidence for 1, and I’m trying to figure out whether there are better strategies than Eliezer’s current one for 2, and what the costs are.