What do you think are the most interesting philosophical problems within our grasp to be solved?
I’m not sure there are any. A big part of it is that metaphilosophy is essentially a complete blank, so we have no way of saying what counts as a correct solution to a philosophical problem, and hence no way of achieving high confidence that any particular philosophical problem has been solved, except maybe simple (and hence not very interesting) problems, where the solution is just intuitively obvious to everyone or nearly everyone. It’s also been my experience that any time we seem to make real progress on some interesting philosophical problem, additional complications are revealed that we didn’t foresee, which makes the problem seem even harder to solve than it did before the progress was made. I think we have to expect this trend to continue for a while yet.
If you instead ask which interesting philosophical problems we can expect visible progress on in the near future, I’d cite decision theory and logical uncertainty, based both on how much new effort people are putting into them and on results from the recent past.
Do you think that solving normative ethics won’t happen until a FAI? If so, why?
No, I don’t think that’s necessarily true. It’s possible that normative ethics, metaethics, and metaphilosophy are all solved before someone builds an FAI, especially if we can get significant intelligence enhancement to happen first. (Again, I think we need to solve metaethics and metaphilosophy first, otherwise how do we know that any proposed solution to normative ethics is actually correct?)
You argued previously that metaphilosophy and singularity strategies are fields with low hanging fruit. Do you have any examples of progress in metaphilosophy?
Unfortunately, not yet. BTW I’m not saying these are fields that definitely have low hanging fruit. I’m saying these are fields that could have low hanging fruit, based on how few people have worked in them.
Do you have any role models?
I do have some early role models. I recall wanting to be a real-life version of the fictional “Sandor Arbitration Intelligence at the Zoo” (from Vernor Vinge’s novel A Fire Upon the Deep) who in the story is known for consistently writing the clearest and most insightful posts on the Net. And then there was Hal Finney who probably came closest to an actual real-life version of Sandor at the Zoo, and Tim May who besides inspiring me with his vision of cryptoanarchy was also a role model for doing early retirement from the tech industry and working on his own interests/causes.
I recall wanting to be a real-life version of the fictional “Sandor Arbitration Intelligence at the Zoo” (from Vernor Vinge’s novel A Fire Upon the Deep) who in the story is known for consistently writing the clearest and most insightful posts on the Net.
FWIW, I have always been impressed by the consistent clarity and conciseness of your LW posts. Your ratio of insights imparted to words used is very high. So, congratulations! And as an LW reader, thanks for your contributions! :)
Thanks. I have some followup questions :)

What projects are you currently working on?/What confusing questions are you attempting to answer?
Do you think that most people should be very uncertain about their values, e.g. altruism?
Do you think that your views about the path to FAI are contrarian (amongst people working on FAI/AGI, e.g. you believing most of the problems are philosophical in nature)? If so, why?
Where do you hang out online these days? Anywhere other than LW?
Please correct me if I’ve misrepresented your views.
What projects are you currently working on?/What confusing questions are you attempting to answer?
If you go through my posts on LW, you can read most of the questions that I’ve been thinking about in the last few years. I don’t think any of the problems that I raised have been solved, so I’m still attempting to answer them. To give a general idea, these include questions in philosophy of mind, philosophy of math, decision theory, normative ethics, metaethics, and metaphilosophy. And to give a specific example I’ve just been thinking about again recently: What is pain exactly (e.g., in a mathematical or algorithmic sense) and why is it bad? For example can certain simple decision algorithms be said to have pain? Is pain intrinsically bad, or just because people prefer not to be in pain?
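To make the question a bit more concrete, here is a minimal toy sketch (purely an illustration, not a claim about what pain actually is): a tiny reinforcement-style decision algorithm that receives a scalar penalty and learns to avoid the action that produces it. The open question is whether anything in a loop this simple could count as pain, and if so, whether that would make running it bad.

```python
import random

def run_agent(steps: int = 100) -> dict[str, float]:
    # One action reliably yields a penalty (the "pain-like" signal), the other a reward.
    payoffs = {"touch_stove": -1.0, "eat_snack": +1.0}
    value = {action: 0.0 for action in payoffs}  # learned action values
    lr, eps = 0.1, 0.1                           # learning rate, exploration rate
    for _ in range(steps):
        if random.random() < eps:
            action = random.choice(list(payoffs))   # occasionally explore
        else:
            action = max(value, key=value.get)      # otherwise pick the best-looking action
        signal = payoffs[action]                    # negative values play the role of "pain"
        value[action] += lr * (signal - value[action])  # simple exponential-average update
    return value

if __name__ == "__main__":
    print(run_agent())  # "touch_stove" ends up with a strongly negative value, so it is avoided
```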
As a side note, I don’t know if it’s good from a productivity perspective to jump around amongst so many different questions. It might be better to focus on just a few, with the others in the back of one’s mind. But now that I have so many unanswered questions, all of which I’m very interested in, it’s hard to stay on any one of them for very long. So reader beware. :)
Do you think that most people should be very uncertain about their values, e.g. altruism?
Yes, but I tend not to advertise too much that people should be less certain about their altruism, since it’s hard to see how that could be good for me regardless of what my values are or ought to be. I make an exception for people who might be in a position to build an FAI, since if they’re too confident about altruism then they’re likely to be too confident about many other philosophical problems, but even then I don’t stress it too much.
Do you think that your views about the path to FAI are contrarian (amongst people working on FAI/AGI, e.g. you believing most of the problems are philosophical in nature)? If so, why?
I guess there is a spectrum of concern over the philosophical problems involved in building an FAI/AGI, and I’m on the far end of that spectrum. I think most people building AGI mainly want short-term benefits like profits or academic fame, and do not care as much about the far reaches of time and space, in which case they’d naturally focus more on the immediate engineering issues.
Among people working on FAI, I guess they either have not thought as much about philosophical problems as I have, and therefore don’t have a strong sense of how difficult those problems are, or are just overconfident about their solutions. For example, when I started thinking in 1997 about certain seemingly minor problems of how minds that can be copied should handle probabilities (within a seemingly well-founded Bayesian philosophy), I certainly didn’t foresee how difficult those problems would turn out to be. This and other similar experiences made me update my estimates of how difficult solving philosophical problems is in general.
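To illustrate the general flavor of such puzzles (a standard toy example of copying and probability, not necessarily the specific problem referred to above): suppose a fair coin leaves an agent as a single copy on heads but copied into two on tails, and each copy then asks what probability to assign to tails. Counting worlds suggests 1/2, counting copies suggests 2/3, and both look like reasonable Bayesian answers. A short simulation showing the two counts:

```python
# Toy illustration: a fair coin leaves one copy of an agent on heads and
# two copies on tails.  Each copy asks "what probability should I give to
# tails?"  Counting worlds gives ~1/2; counting copies (observer-moments)
# gives ~2/3 -- two seemingly reasonable answers that disagree.
import random

def simulate(trials: int = 100_000) -> tuple[float, float]:
    tails_worlds = 0      # worlds in which the coin came up tails
    copies_total = 0      # copies that exist, summed over all worlds
    copies_in_tails = 0   # copies that exist in tails-worlds
    for _ in range(trials):
        tails = random.random() < 0.5
        n_copies = 2 if tails else 1
        copies_total += n_copies
        if tails:
            tails_worlds += 1
            copies_in_tails += n_copies
    per_world = tails_worlds / trials          # fraction of worlds that are tails: ~0.5
    per_copy = copies_in_tails / copies_total  # fraction of copies in tails-worlds: ~0.667
    return per_world, per_copy

if __name__ == "__main__":
    print(simulate())
```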
BTW I would not describe myself as “working on FAI” since that seems to imply that I endorse the building of an FAI. I like to use “working on philosophical problems possibly relevant to FAI”.
Where do you hang out online these days? Anywhere other than LW?
Pretty much just here. I do read a bunch of other blogs, but tend not to comment much elsewhere since I like having an archive of my writings for future reference, and it’s too much trouble to do that if I distribute them over many different places. If I change my main online hangout in the future, I’ll note that on my home page.
What is pain exactly (e.g., in a mathematical or algorithmic sense) and why is it bad? For example can certain simple decision algorithms be said to have pain? Is pain intrinsically bad, or just because people prefer not to be in pain?
Pain isn’t reliably bad, or at least some people (possibly a fairly large proportion) seek it out in some contexts. I’m including very spicy food, BDSM, deliberately reading things that make one sad and/or angry without it leading to any useful action, horror fiction, pushing one’s limits for its own sake, and staying attached to losing sports teams.
I think this leads to the question of what people are trying to maximize.
Yes, but I tend not to advertise too much that people should be less certain about their altruism, since it’s hard to see how that could be good for me regardless of what my values are or ought to be.
One issue is that an altruist has a harder time noticing if he’s doing something wrong. An altruist with false beliefs is much more dangerous than an egoist with false beliefs.
And then there was Hal Finney who probably came closest to an actual real-life version of Sandor at the Zoo, and Tim May who besides inspiring me with his vision of cryptoanarchy was also a role model for doing early retirement from the tech industry and working on his own interests/causes.
What is he doing, by the way? Wikipedia says he’s still alive but he looks to be either retired or in deep cover...