So let’s say that you go around saying that philosophy has suddenly been struck by a SERIOUS problem, as in lives are at stake, and philosophers don’t seem to pay any attention. Not to the problem itself, at any rate, though some of them may seem annoyed at outsiders infringing on their territory, and nonplussed at the thought of their field trying to arrive at answers to questions where the proper procedure is to go on coming up with new arguments and respectfully disputing them with other people who think differently, thus ensuring a steady flow of papers for all.
Let us say that this is what happens: which of your current beliefs, the ones that lead you to expect something else to happen, would you update?
No, that is exactly what I expect to happen with more than 99% of all philosophers. But we already have David Chalmers arguing it may be a serious problem. We have Nick Bostrom and the people at Oxford’s Future of Humanity Institute. We probably can expect some work on SIAI’s core concerns from philosophy grad students we haven’t yet heard from because they haven’t published much, for example Nick Beckstead, whose interests are formal epistemology and the normative ethics of global catastrophic risks.
As you’ve said before, any philosophy that would be useful to you and SIAI is hard to find. But it’s out there, in tiny piles, and more of it is coming.
The problems appear to be urgent, and in need of actual solutions, not simply further debate, but it’s not at all clear to me that people who currently identify as philosophers are, as a group, those most suited to work on them.
I’m not saying they are ‘most suited to work on them’, either. But I think they can contribute. Do you think that Chalmers and Bostrom have not already contributed, in small ways?
Bostrom, yes; Chalmers, I have to admit that I haven’t followed his work closely enough to offer an opinion.
At the risk of repeating myself, or worse, sounding like an organizational skills guru rambling on about win-win opportunities, might it not be possible to change the environment so that philosophers can do both—publish a steady flow of papers containing respectful disputation AND work on a serious problem?
I might be wrong here, but I wonder if at least some philosophers have a niggling little worry that they are wasting their considerable intellectual gifts (no, I don’t think that all philosophers are stupid) on something useless. If these people exist, they might be pleased rather than annoyed to hear that the problems they are thinking about are actually important, and this might spur them to rise to the challenge.
This all sounds hideously optimistic of course, but it suggests a line of attack if we really do want their help.
I don’t remember the specifics, and so don’t have the terms to do a proper search, but I think I recall being taught in one course about a philosopher who, based on the culmination of all his own arguments on ethics, came to the conclusion that being a philosopher was useless, and thus changed careers.
I know of a philosopher who claimed to have finished a grand theory he was working on, concluded that all life was meaningless, and thus withdrew from society and lived on a boat for many years fishing to live and practicing lucid dreaming. His doctrine was that we can’t control reality, so we might as well withdraw to dreams, where complete control can be exercised by the trained.
I also remember reading about a philosopher who finished some sort of ultra-nihilist theory, concluded that life was indeed completely meaningless, and committed suicide—getting wound up too tightly in a theory can be hazardous to your physical as well as epistemic health!
This doesn’t automatically follow unless you first prove he was wrong =P
As a layman I’m still puzzled as to how the LW sequences do not fall into the category of philosophy. Bashing philosophy seems over the top; there is probably just as much “useless” mathematics.
I think the problem is that philosophy has, as a field, done a shockingly bad job of evicting obsolete and incorrect ideas (not just useless ones). Someone who seeks a philosophy degree can expect to waste most of their time and potential on garbage. To use a mathematics analogy, it’s as if mathematicians were still holding debates between binaryists, decimists, tallyists and nominalists.
Most of what’s written on Less Wrong is philosophy; there’s just so much garbage under philosophy’s name that it made sense to invent a new name (“rationalism”), pretend it’s unrelated, and guard that name so that people can use it as a way to find good philosophy without wading through the bad. It’s the only reference class I know of for philosophy writings that’s (a) larger than one author, (b) mostly sane, and (c) enumerable by someone who isn’t an expert.
I think the problem is that philosophy has, as a field, done a shockingly bad job of evicting obsolete and incorrect ideas (not just useless ones).
Totally agree.
Someone who seeks a philosophy degree can expect to waste most of their time and potential on garbage.
Not exactly. The subfields are more than specialized enough to make it pretty easy to avoid garbage. Once you’re in the field it isn’t hard to locate the good stuff. For institutional and political reasons the sane philosophers tend to ignore the insane philosophers and vice versa, with just the occasional flare-up. It is a problem.
It’s the only reference class I know of for philosophy writings that’s (a) larger than one author, (b) mostly sane, and (c) enumerable by someone who isn’t an expert.
Er, I suspect the majority of “naturalistic philosophy in the analytic tradition” would meet the sanity waterline of Less Wrong, particularly the sub-fields of epistemology and philosophy of science.
The sequences do fall into the category of philosophy. (Many of EY’s own posts are tagged “philosophy”.) Indeed, FAI will require robust solutions to several standard big philosophical problems, not just metaethics; e.g. subjective experience (to make sure that CEV doesn’t create any conscious persons while extrapolating, etc.), the ultimate nature of existence (to sort out some of the anthropic problems in decision theory), and so on. The difference isn’t (just) in what questions are being asked, but in how we go about answering them. In traditional philosophy, you’re usually working on problems you personally find interesting, and if you can convince a lot of other philosophers that you’re right, write some books, and give a lot of lectures, then that counts as a successful career. LW-style philosophy (as in the “Reductionism” and “Mysterious Answers” sequences) is distinguished in that there is a deep need for precise right answers, with more important criteria for success than what anyone’s academic peers think.
Basically, it’s a computer science approach to philosophy: any progress on understanding a phenomenon is measured by how much closer it gets you to an algorithmic description of it. Academic philosophy occasionally generates insights on that level, but overall it doesn’t operate with that ethic, and it’s not set up to reward that kind of progress specifically; too much of it is about rhetoric, formality as an imitation of precision, and apparent impressiveness instead of usefulness.
e.g. subjective experience (to make sure that CEV doesn’t create any conscious persons while extrapolating, etc.),
Also, to figure out whether particular uploads have qualia, and whether those qualia resemble pre-upload qualia, if that’s wanted.
I should just point out that these two goals (researching uploads, and not creating conscious persons) are starkly antagonistic.
Not in the slightest. First, uploads are continuing conscious persons. Second, creating conscious persons is a problem if they might be created in uncomfortable or possibly hellish conditions—if, say, the AI were brute-forcing every decision, it would simulate countless humans in pain before it found the least painful world. I do not think we would have a problem with the AI creating conscious persons in a good environment. I mean, we don’t have that problem with parenthood.
What if it’s researching pain qualia at ordinary levels because it wants to understand the default human experience?
I don’t know if we’re getting into eye-speck territory, but what are the ethics of simulating an adult human who’s just stubbed their toe, and then ending the simulation?
I feel like the consequences are net positive, but I don’t trust my human brain to answer this question correctly. I would feel uncomfortable with an FAI deciding it, but I would also feel uncomfortable with a person deciding it. It’s just a hard question.
What if they were created in a good environment and then abruptly destroyed because the AI only needed to simulate them for a few moments to get whatever information it needed?
What if they were created in a good environment, (20) stopped, and then restarted (goto 20)?
Is that one happy immortal life or an infinite series of murders?
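To make the loop concrete, here is a minimal sketch (Python, purely illustrative; the Person class, run_from, and the “memories” list are hypothetical stand-ins, not any real simulation API): the same checkpoint is restored, run for a while, and then discarded, over and over.

```python
import copy

# Purely illustrative sketch of the "(20) stopped, restarted (goto 20)" scenario.
# Person, run_from, and "memories" are hypothetical stand-ins, not a real simulation API.

class Person:
    """Toy stand-in for a simulated person's state."""
    def __init__(self):
        self.memories = []

    def step(self):
        # One tick of pleasant experience in the good environment.
        self.memories.append("a pleasant moment")

def run_from(checkpoint_state, ticks):
    """Restore a person from the checkpoint, run them for a while, and return the result."""
    person = copy.deepcopy(checkpoint_state)   # restart from "line 20"
    for _ in range(ticks):
        person.step()
    return person

checkpoint = Person()                  # the state saved at "line 20"
for _ in range(3):                     # each pass of the loop is one "goto 20"
    lived = run_from(checkpoint, ticks=100)
    del lived                          # stopped: that run's experience is discarded
```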
I think closer to the latter. Starting a simulated person, running them for a while, and then ending and discarding the resulting state effectively murders the person. If you then start another copy of that person, it goes one of two ways, depending on how you think about identity:
Option A: The new person, being a separate running copy, is unrelated to the first person identity-wise, and therefore the act of starting the second person does not change the moral status of ending the first. Result: Infinite series of murders.
Option B: The new person, since they are running identically to the old person, actually is the same person identity-wise. Thus, you could in a sense un-murder them by letting the simulation continue to run after the reset point. If you do the reset again, however, you’re just recreating the original murder as it was. Result: Single murder.
Neither way is a desirable immortal life, and “desirable” is, I think, a more useful way to look at it than “happy”.
Well—what if a real person went through the same thing? What does your moral intuition say?
That it would be wrong. If I had the ability to spontaneously create fully-formed adult people, it would be wrong to subsequently kill them, even if I did so painlessly and in an instant. Whether a person lives or dies should be under the control of that person, and exceptions to this rule should lean towards preventing death, not encouraging it.
The sequences are definitely philosophy, but written (mostly) without referencing the philosophers who have given (roughly) the same arguments or defended (roughly) the same positions.
In many cases I really like Eliezer’s way of covering these classic debates in philosophy. In other cases, for example the meta-ethics sequence, I found EY’s presentation unnecessarily difficult.
I’d appreciate an annotation to EY’s writings that includes such references, as I’m not aware of philosophers who have given similar arguments (except Dennett and Drescher).
That would make for a very interesting project! If I find the time, maybe I’ll do this for a post here or there. It would integrate Less Wrong into the broader philosophical discussion, in a way.
I have mixed feelings about that. One big difference in style between the sciences and the humanities lies in the complete lack of respect for tradition in the sciences. The humanities deal in annotations and critical comparisons of received texts. The sciences deal with efficient pedagogy.
I think that the sequences are good in that they try to cover this philosophical material in the great-idea oriented style of the sciences rather than the great-thinker oriented style of the humanities. My only complaint about the sequences is that in some places the pedagogy is not really great—some technical ideas are not explained as clearly as they might be, some of the straw men are a little too easy to knock down, and in a few places Eliezer may have even reached the wrong conclusions.
So, rather than annotating The Sequences (in the tradition of the humanities), it might be better to re-present the material covered by the sequences (in the tradition of the sciences). Or, produce a mixed-mode presentation which (like Eliezer’s) focuses on getting the ideas across, but adds some scholarship (unlike Eliezer) in that it provides the standard Googleable names to the ideas discussed—both the good ideas and the bad ones.
I like this idea.
You and EY might find it particularly useful to provide such an annotation as an appendix for the material that he’s assembling into his book.
Or not.
I certainly think that positioning the philosophical foundations assumed by the quest for Friendly AI within the broader literature would give SIAI more credibility in academic circles. But right now SIAI seems to be very anti-academia in some ways, which I think is unfortunate.
I really don’t think it is, as a whole. Vassar and Yudkowsky are somewhat, but there are other people within and closely associated with the organization who are actively trying to get papers published, etc. And EY himself just gave a couple of talks at Oxford, so I understand.
(In fact it would probably be more accurate to say that academia is somewhat more anti-SIAI than the other way around, at the moment.)
As for EY’s book, my understanding is that it is targeted at popular rather than academic audiences, so it presumably won’t be appropriate for it to trace the philosophical history of all the ideas contained therein, at least not in detail. But there’s no reason it can’t be done elsewhere.
I’m thinking of what Dennett did in Consciousness Explained, where he put all the academic-philosophy stuff in an appendix so that people interested in how his stuff relates to the broader philosophical discourse can follow that, and people not interested in it can ignore it.
Near the end of the meta-ethics sequence, Eliezer wrote that he chose to postpone reading Good and Real until he finished writing about meta-ethics because otherwise he might not finish it. For most of his life, writing for public consumption was slow and tedious, and he often got stuck. That seemed to change after he started blogging daily on Overcoming Bias, but the change was recent enough that he probably questioned its permanence.
Why, the sequences do fall into the category of philosophy (for the most part). And you can imagine that bashing bad math would be just as rewarding as bashing bad philosophy.
That niggling worry definitely exists, and I suspect many pure mathematicians have it too (in fact I don’t need to suspect it; sources like A Mathematician’s Apology provide clear evidence of this). These people might be another good source of thinkers for a different side of the problem, although I do wonder whether anything they could do to help couldn’t be done better by an above-average computer programmer.
I would say the difference between the sequences and most philosophy is one of approach rather than content.
“Most philosophers” is not necessarily the target audience of such arguments.