but you think it’s unreasonable to try to solve some ultimately simple problems with the fate of the world at stake
To be clear, I’m not afraid that you’ll fail to solve one or more philosophical problems and waste your donors’ money. If that were the only worry, I’d certainly want you to try. (ETA: Well, aside from the problem of shortening AI timelines.) What I’m afraid of is that you’ll solve them incorrectly while thinking that you’ve solved them correctly.
I recall you used to often say that you’ve “solved metaethics”. But when I looked at your solution I was totally dissatisfied, and wrote several posts explaining why. I also thought you were overconfident about utilitarianism and personal identity, and wrote posts pointing out holes in your arguments about those. “Free will” and “nature of truth” happen to be topics that I’ve given less time to, but I could write down my inside view of why I’m not confident they are solved problems, if you think that would help with our larger disagreements.
The philosophers who can’t agree on free will seem like entirely different sorts of creatures to me.
Is there anyone besides Gary Drescher who you’d consider to be in your reference class? What about the people who came up with the same exact solution to “tree falls in forest” as you? (Did you follow the links I provided?) Or the people who originally came up with utilitarianism, Bayesian decision theory, and Solomonoff induction (all of whom failed to notice the problems later discovered in those ideas)? Do you consider me to be in your reference class, given that I independently came up with some of the same decision theory ideas as you?
Or if it’s just Drescher, would it change your mind about how confident you ought to be in your ideas if he were to express disagreement with several of them?
“Free will” and “nature of truth” happen to be topics that I’ve given less time to, but I could write down my inside view of why I’m not confident they are solved problems
I’d be really interested in reading that.

For “truth” see this comment. The problem with understanding “free will” is that it has a dependency on “nature of decisions” which I’m not entirely sure I understand. The TDT/UDT notion of “decisions as logical facts” seems to be a step in the right direction, but there are still unresolved paradoxes with that approach that make me wonder if there isn’t a fundamentally different approach that makes more sense. (Plus, Gary Drescher, when we last discussed this, wasn’t convinced to a high degree of confidence that “decisions as logical facts” is the right approach, and was still looking for alternatives, but I suppose that’s more of an outside-view reason for me to not be very confident.)
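To make “decisions as logical facts” slightly more concrete, here is a minimal toy sketch in Python, assuming a simplified Newcomb-style setup with a perfect predictor; the function names and payoffs are illustrative only, not TDT/UDT’s actual formalism, and the unresolved paradoxes mentioned above are precisely what a toy like this glosses over.

```python
# Toy illustration (not TDT/UDT itself): treat each possible output of the
# agent's algorithm as a logical fact, so that anything else computing the
# same algorithm (here, a perfect predictor) is fixed by that same fact.

def agent_algorithm(policy):
    # The agent's decision is just the output of its (here trivial) algorithm.
    return policy

def newcomb_payoff(action, predicted_action):
    # The predictor fills the opaque box only if it predicts one-boxing.
    big_box = 1_000_000 if predicted_action == "one-box" else 0
    small_box = 1_000
    return big_box + (small_box if action == "two-box" else 0)

def value_of_logical_fact(policy):
    # Condition on the logical fact "agent_algorithm outputs `policy`":
    # both the agent's action and the predictor's simulation are fixed by it.
    action = agent_algorithm(policy)
    prediction = agent_algorithm(policy)  # the predictor runs the same computation
    return newcomb_payoff(action, prediction)

best = max(["one-box", "two-box"], key=value_of_logical_fact)
print(best, value_of_logical_fact(best))  # -> one-box 1000000
```

The point of the sketch is only that conditioning on the logical fact about the algorithm’s output also fixes the predictor’s output, so the evaluation favors one-boxing; an evaluation that held the prediction fixed while varying the action would favor two-boxing instead.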
“Free will” and “nature of truth” happen to be topics that I’ve given less time to, but I could write down my inside view of why I’m not confident they are solved problems
This depends on the threshold of “solved”, which doesn’t seem particularly relevant to this conversation. What philosophy would consider “solved” is less of an issue than its propensity to miss/ignore available insight (as compared to, say, mathematics). “Free will” and “nature of truth”, for example, still have important outstanding confusions, but they also have major resolved issues, and those remaining confusions are subtle, hard to frame/notice when one is busy arguing on the other sides of the resolved issues.
Free will and the nature of truth are subjects I have devoted plenty of time to, and it seems to me that confusion and overconfidence abound on the Less Wrong side of the fence.

As the ultimate question seems to be “Is this FAI design safe?”, I think “solved” should have a high bar.
(Of algorithmic information theorists, I’m familiar with only Chaitin’s writings on the philosophy thereof; I think that though he wouldn’t have found some of the problems later found by others, he also wouldn’t have placed the confidence in it that would lead to premature AGI development failure modes. (I am, as usual, much too lazy to give references.))