What does the length of the answer have to do with how hard a problem is? The answer to P=NP can fit in 1 bit, but that’s still a hard problem, I assume you agree?
Perhaps by “answer” you also mean to include all the justifications necessary to show that the answer is correct. If so, I don’t think we can fit the justification for an actual answer to a hard philosophical problem on one page or less. Actually, I don’t think we know how to justify a philosophical answer (in the way that we might justify P!=NP by giving a mathematical proof), so the best we can do is very slowly gain confidence in an idea, by continuously trying (and failing) to poke holes in it or trying (and failing) to find better solutions.
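(A minimal sketch of what “very slowly gain confidence” could look like in Bayesian terms, assuming, purely for illustration, a prior of 0.2 that the idea is correct and a 50% chance that any single serious critique would expose a flaw if there were one; none of these numbers come from the discussion itself.)

```python
# Illustrative toy model only: confidence gained by repeatedly failing to poke holes in an idea.
# The prior (0.2) and the chance a flawed idea survives one critique (0.5) are assumptions
# made for this sketch, not figures taken from the discussion.

prior_correct = 0.2        # initial credence that the idea is correct
p_survive_if_wrong = 0.5   # chance a wrong idea survives one serious attempt to refute it
p_survive_if_right = 1.0   # simplification: a correct idea always survives

credence = prior_correct
for n in range(1, 11):     # ten successive failed attempts to find a flaw
    # Bayes' rule, updating on the observation "this critique failed to find a hole"
    numerator = p_survive_if_right * credence
    credence = numerator / (numerator + p_survive_if_wrong * (1 - credence))
    print(f"after {n:2d} failed critiques: credence = {credence:.3f}")
```

Even with every critique failing, credence climbs only gradually under these assumptions, which is the sense in which confidence is gained slowly rather than established outright, as a proof would establish it.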
In a PM you imply that you’ve found the true answers to ‘free will’, ‘does a tree fall in the forest’, ‘the nature of truth’. I’ll grant you ‘does a tree fall in the forest’ (since your solution appears to be the standard answer in philosophy, although note how it says the problem is “untypically simple”). However, I have strong reservations about ‘free will’ and ‘the nature of truth’ from both the inside-view perspective and (more relevant to the current post) the outside-view perspective. Given the history of philosophy and the outside view, I don’t see how you can be as confident about your ideas as you appear to be. Do you think the outside view is inapplicable here, or that I’m using it wrong?
Well, given what you seem to believe, you must either be more impressed with the alleged unsolvability of the problems than I am (implying that you think I would need more of a hero license than I think I would need to possess), or we agree about the problems being ultimately simple but you think it’s unreasonable to try to solve some ultimately simple problems with the fate of the world at stake. So it sounds like it’s mostly the former fork; but possibly with a side order of you thinking that it’s invalid for me to shrug and go ‘Meh’ at the fact that some other people taking completely different approaches failed to solve some ultimately simple problems, because the fact that they’re all arguing with each other means I can’t get into an epistemic state where I know I’m right, or something like that, whereas I don’t particularly see them as being in my reference class one way or another—their ways of thinking, the way they talk, the way they approach the problem, etc., all seem completely unlike anything I do or would ever consider trying.
Let’s say when I’d discovered Gary Drescher, he’d previously solved ‘free will’ the same way I had, but had spent decades using the same type of approaches I would intend to use on trying to produce a good nonperson predicate. Then although it would be only N=1, and I do kinda intend to surpass Drescher, I would still be nervous on account of this relevant evidence. The philosophers who can’t agree on free will seem like entirely different sorts of creatures to me.
but you think it’s unreasonable to try to solve some ultimately simple problems with the fate of the world at stake
To be clear, I’m not afraid that you’ll fail to solve one or more philosophical problems and waste your donors’ money. If that were the only worry, I’d certainly want you to try. (ETA: Well, aside from the problem of shortening AI timelines.) What I’m afraid of is that you’ll solve them incorrectly while thinking that you’ve solved them correctly.
I recall you used to often say that you’ve “solved metaethics”. But when I looked at your solution I was totally dissatisfied, and wrote several posts explaining why. I also thought you were overconfident about utilitarianism and personal identity, and wrote posts pointing out holes in your arguments about those. “Free will” and “nature of truth” happen to be topics that I’ve given less time to, but I could write down my inside view of why I’m not confident they are solved problems, if you think that would help with our larger disagreements.
The philosophers who can’t agree on free will seem like entirely different sorts of creatures to me.
Is there anyone besides Gary Drescher who you’d consider to be in your reference class? What about the people who came up with the same exact solution to “tree falls in forest” as you? (Did you follow the links I provided?) Or the people who originally came up with utilitarianism, Bayesian decision theory, and Solomonoff induction (all of whom failed to notice the problems later discovered in those ideas)? Do you consider me to be in your reference class, given that I independently came up with some of the same decision theory ideas as you?
Or if it’s just Drescher, would it change your mind on how confident you ought to be in your ideas if he were to express disagreement with several of them?
“Free will” and “nature of truth” happen to be topics that I’ve given less time to, but I could write down my inside view of why I’m not confident they are solved problems
I’d be really interested in reading that.
For “truth” see this comment. The problem with understanding “free will” is that it has a dependency on “nature of decisions” which I’m not entirely sure I understand. The TDT/UDT notion of “decisions as logical facts” seems to be a step in the right direction, but there are still unresolved paradoxes with that approach that make me wonder if there isn’t a fundamentally different approach that makes more sense. (Plus, Gary Drescher, when we last discussed this, wasn’t convinced to a high degree of confidence that “decisions as logical facts” is the right approach, and was still looking for alternatives, but I suppose that’s more of an outside-view reason for me to not be very confident.)
“Free will” and “nature of truth” happen to be topics that I’ve given less time to, but I could write down my inside view of why I’m not confident they are solved problems
This depends on the threshold of “solved”, which doesn’t seem particularly relevant to this conversation. What philosophy would consider “solved” is less of an issue than its propensity to miss/ignore available insight (as compared to, say, mathematics). “Free will” and “nature of truth”, for example, still have important outstanding confusions, but they also have major resolved issues, and those remaining confusions are subtle, hard to frame/notice when one is busy arguing on the other sides of the resolved issues.
As the ultimate question seems to be “Is this FAI design safe?”, I think “solved” should have a high bar.
Free will and the nature of truth are subjects I have devoted plenty of time to, and it seems to me that confusion and overconfidence abound on the Less Wrong side of the fence.
(Of algorithmic information theorists, I’m familiar with only Chaitin’s writings on the philosophy thereof; I think that though he wouldn’t have found some of the problems later found by others, he also wouldn’t have placed the confidence in it that would lead to premature AGI development failure modes. (I am, as usual, much too lazy to give references.))