They do. (Many of EY’s own posts are tagged “philosophy”.) Indeed, FAI will require robust solutions to several standard big philosophical problems, not just metaethics; e.g. subjective experience (to make sure that CEV doesn’t create any conscious persons while extrapolating, etc.), the ultimate nature of existence (to sort out some of the anthropic problems in decision theory), and so on. The difference isn’t (just) in what questions are being asked, but in how we go about answering them. In traditional philosophy, you’re usually working on problems you personally find interesting, and if you can convince a lot of other philosophers that you’re right, write some books, and give a lot of lectures, then that counts as a successful career. LW-style philosophy (as in the “Reductionism” and “Mysterious Answers” sequences) is distinguished in that there is a deep need for precise right answers, with more important criteria for success than what anyone’s academic peers think.
Basically, it’s a computer science approach to philosophy: any progress on understanding a phenomenon is measured by how much closer it gets you to an algorithmic description of it. Academic philosophy occasionally generates insights on that level, but overall it doesn’t operate with that ethic, and it’s not set up to reward that kind of progress specifically; too much of it is about rhetoric, formality as an imitation of precision, and apparent impressiveness instead of usefulness.
e.g. subjective experience (to make sure that CEV doesn’t create any conscious persons while extrapolating, etc.),
Also, to figure out whether particular uploads have qualia, and whether those qualia resemble pre-upload qualia, if that’s wanted.
I should just point out that these two goals (researching uploads, and not creating conscious persons) are starkly antagonistic.
Not in the slightest. First, uploads are continuing conscious persons. Second, creating conscious persons is a problem if they might be created in uncomfortable or possibly hellish conditions: if, say, the AI were brute-forcing every decision, it would simulate countless humans in pain before it found the least painful world. I do not think we would have a problem with the AI creating conscious persons in a good environment. I mean, we don’t have that problem with parenthood.
What if it’s researching pain qualia at ordinary levels because it wants to understand the default human experience?
I don’t know if we’re getting into eye-speck territory, but what are the ethics of simulating an adult human who’s just stubbed their toe, and then ending the simulation?
I feel like the consequences are net positive, but I don’t trust my human brain to correctly determine this question. I would feel uncomfortable with an FAI deciding it, but I would also feel uncomfortable with a person deciding it. It’s just a hard question.
What if they were created in a good environment and then abruptly destroyed because the AI only needed to simulate them for a few moments to get whatever information it needed?
What if they were created in a good environment, (20) stopped, and then restarted (goto 20)?
Is that one happy immortal life or an infinite series of murders?
I think closer to the latter. Starting a simulated person, running them for a while, and then ending and discarding the resulting state effectively murders the person. If you then start another copy of that person, then depending on how you think about identity, that goes two ways:
Option A: The new person, being a separate running copy, is unrelated to the first person identity-wise, and therefore the act of starting the second person does not change the moral status of ending the first. Result: Infinite series of murders.
Option B: The new person, since they are running identically to the old person, is therefore actually the same person identity-wise. Thus, you could in a sense un-murder them by letting the simulation continue to run after the reset point. If you do the reset again, however, you’re just recreating the original murder as it was. Result: Single murder.
Neither way is a desirable immortal life, and I think “desirable” is a more useful way to look at it than “happy”.
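For concreteness, here is a minimal Python sketch of the stop/restart loop being argued over. It is purely my own illustration, not anything from the thread or anything an actual AI would run; PersonSim, step, and run_reset_loop are hypothetical names, and the “state” is a stand-in for whatever a real simulation would carry.

    # Toy sketch of the "(20) stopped ... goto 20" loop under discussion.
    # PersonSim and its fields are hypothetical stand-ins, not a real design.

    class PersonSim:
        def __init__(self, initial_state):
            # Copy so every run begins from exactly the same mental state.
            self.state = dict(initial_state)

        def step(self):
            # Stand-in for "living for a while": the state accumulates experience.
            self.state["experience"] = self.state.get("experience", 0) + 1


    def run_reset_loop(initial_state, steps, iterations):
        for _ in range(iterations):            # the "(20) ... goto 20" loop
            person = PersonSim(initial_state)  # created in a good environment
            for _ in range(steps):
                person.step()                  # run them for a while
            del person                         # stopped: accumulated state discarded
            # Restarting from initial_state is where the two readings split:
            # Option A: each iteration is a distinct person, so each discard
            #           is a separate death.
            # Option B: identical runs are the same person, so re-running just
            #           re-enacts one and the same death.


    run_reset_loop({"experience": 0}, steps=10, iterations=3)

The only point of the sketch is that the program text is identical under both readings; the disagreement is entirely about what the discard-and-restart step means for personal identity.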
Well—what if a real person went through the same thing? What does your moral intuition say?
That it would be wrong. If I had the ability to spontaneously create fully-formed adult people, it would be wrong to subsequently kill them, even if I did so painlessly and in an instant. Whether a person lives or dies should be under the control of that person, and exceptions to this rule should lean towards preventing death, not encouraging it.