Hi, I’ve been lurking on Less Wrong for a few months now, making a few comments here and there, but never got around to introducing myself. Since I’m planning out an actual post at the moment, I figured I should tell people where I’m coming from.
I’m a male 30-year-old optical engineer in Sydney, Australia. I grew up in a very scientific family and have pretty much always assumed I had a scientific career ahead of me, and after a couple of false starts, it’s happened and I couldn’t ask for a better job.
Like many people, I came to Less Wrong from TVTropes via Methods of Rationality. Since I started reading, I’ve found that it’s been quite helpful in organising my own thoughts, discarding unhelpful arguments, and examining aspects of my life and beliefs that don’t stand up under scrutiny.
In particular, I’ve found that reading Less Wrong has allowed, nay forced, me to examine the logical consistency of everything I say, write, hear and read, which allows me to be a lot more efficient in discussions, both by policing my own speech and by being more usefully critical of others’ points (rather than making arguments that don’t go anywhere).
While I was raised in a substantively atheist household, my current beliefs are theist. The precise nature of these beliefs has shifted somewhat since I started reading Less Wrong, as I’ve discarded the parts that are inconsistent or even less likely than the others. There are still difficulties with my current model, but they’re smaller than the issues I have with my best atheist theory.
I’ve also had a surprising amount of success in introducing the logical and rationalist concepts from Less Wrong to one of my girlfriends, which is all the more impressive considering her dyscalculia. I’m really pleased that this site has given me the tools to do that. It’s really easy now to short-circuit what might otherwise become an argument by showing that it’s merely a dispute about definitions. It’s this sort of success that has kept me reading the site these past months, and I hope I can contribute to that success for other people.
Welcome!
What issues does your best atheist theory have?
My biggest problem right now is all the stuff about zombies, and how that implies that, in the absence of some kind of soul, a computer program or other entity capable of the same reasoning processes as a person is morally equivalent to a person. I agree with every step of the logic (I think; it’s been a while since I last read the sequence), but I end up applying it in the other direction. I don’t think a computer program can have any moral value, therefore, without the presence of a soul, people also have no moral value. Therefore I either accept a lack of moral value to humanity (both distasteful and unlikely), or accept the presence of something, let’s call it a soul, that makes people worthwhile (also unlikely). I’m leaning towards the latter, both as the less unlikely option and as the one that produces the most harmonious behaviour from me.
It’s a work in progress. I’ve been considering the possibility that there is exactly one soul in the universe (since there’s no reason to consider souls to propagate along the time axis of spacetime in any classical sense), but that’s a low-probability hypothesis for now.
In the spirit of your (excellent) new post, I’ll attack all the weak points of your argument at once:
You define “soul” as:
the presence of something, let’s call it a soul, that makes people worthwhile
This definition doesn’t give souls any of their normal properties, like being the seat of subjective experience, or allowing free will, or surviving bodily death. That’s fine, but we need to be on the lookout in case these meanings sneak in as connotations later on. (In particular, the “Zombies” sequence doesn’t talk about moral worth, but does talk about subjective experience, so its application here isn’t straightforward. Do you believe that a simulation of a human would have subjective experience?)
“Souls” don’t provide any change in anticipation. You haven’t provided any mechanism by which other people having souls causes me to think that those other people have moral worth. Furthermore it seems that my belief that others have moral worth can be fully explained by my genes and my upbringing.
You haven’t stated any evidence for the claim that computer programs can’t have moral value, and this isn’t intuitively obvious to me.
You’ve produced a dichotomy between two very unlikely hypotheses. I think the correct answer in this case isn’t to believe the least unlikely hypothesis, but is instead to assume that the answer is some third option you haven’t thought of yet. For instance you could say “I withhold judgement on the existence of souls and the nature of moral worth until I understand the nature of subjective experience”.
The existence of souls as you’ve defined them doesn’t imply theism. Not even slightly. (EDIT: Your argument goes: ‘By the “Zombies” sequence, simulations are conscious. By assumption, simulations have no moral worth. Therefore conscious does not imply moral worth. Call whatever does imply moral worth a soul. Souls exist, therefore theism.’ The jump between the penultimate and the ultimate step is entirely powered by connotations of the word “soul”, and is therefore invalid.)
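For concreteness, here is how I would lay that argument out as explicit steps (the predicate names Sim, Conscious, Worth and Soul are my own labels, not anything you wrote):

\[
\begin{aligned}
&(1)\quad \forall x.\ \mathrm{Sim}(x) \rightarrow \mathrm{Conscious}(x) &&\text{(“Zombies” sequence)}\\
&(2)\quad \forall x.\ \mathrm{Sim}(x) \rightarrow \neg\mathrm{Worth}(x) &&\text{(your assumption)}\\
&(3)\quad \neg\bigl(\forall x.\ \mathrm{Conscious}(x) \rightarrow \mathrm{Worth}(x)\bigr) &&\text{(from 1 and 2, given any possible simulation)}\\
&(4)\quad \mathrm{Soul}(x) \;:\Leftrightarrow\; \text{whatever extra property confers }\mathrm{Worth}(x) &&\text{(your definition)}\\
&(5)\quad \text{Theism} &&\text{(no earlier step licenses this)}
\end{aligned}
\]

Steps (1)–(4) are all compatible with a purely naturalistic world; the move from (4) to (5) is carried entirely by the everyday connotations of the word “soul”.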
Also you say this:
I’ve been considering the possibility that there is exactly one soul in the universe (since there’s no reason to consider souls to propagate along the time axis of spacetime in any classical sense), but that’s a low-probability hypothesis for now.
(I’m sorry if what I say next offends you.) This sounds like one of those arguments clever people come up with to justify some previously decided conclusion. It looks like you’ve just picked a nice sounding theory out of hypothesis space without nearly enough evidence to support it. It would be a real shame if your mind became tangled up like an Escher painting because you were too good at thinking up clever arguments.
You don’t need an additional ontological entity to reflect a judgment (and judgments can differ between different people or agents). You don’t need special angry atoms to form an angry person, that property can be either in the pattern of how the atoms are arranged, or in the way you perceive their arrangement. See these posts:
http://lesswrong.com/lw/oi/mind_projection_fallacy/
http://lesswrong.com/lw/oj/probability_is_in_the_mind/
http://lesswrong.com/lw/ro/2place_and_1place_words/
http://lesswrong.com/lw/oo/explaining_vs_explaining_away/
http://lesswrong.com/lw/p3/angry_atoms/
I don’t think a computer program can have any moral value, therefore, without the presence of a soul, people also have no moral value.
It’s hard to build intuitions about the moral value of intelligent programs right now, because there aren’t any around to talk to. But consider a hypothetical that’s as close to human as possible: uploads. Suppose someone you knew decided to undergo a procedure where his brain would be scanned and destroyed, and then a program based on that scan was installed on a humanoid robot body, so that it would act and think like he did; and when you talked to the robot, he told you that he still felt like the same person. Would that robot and the software on it have moral value?
I would have suggested pets. Or the software objects of Chiang’s story.
It is interesting that HopeFox’s intuitions rebel at assigning moral worth to something that is easily copied. I think she is on to something. The pets and Chiang’s software objects which acquire moral worth do so by long acquaintance with the bestower of worth. In fact, my intuitions do the same with the humans whom I value.
I agree that HopeFox is onto something there: most people think great works of art, or unique features of the natural world, have value, but that has nothing to do with having a soul... it has to do with irreducibility. An atom-by-atom duplicate of the Mona Lisa would not be the Mona Lisa; it would be a great work of science...
Well, it has nothing to do with what you think of as a ‘soul’.
Personally, I’m not that taken with the local tendency to demand that any problematic word be tabooed. But I think that it might have been worthwhile to make that demand of HopeFox when she first used the word ‘soul’.
Given my own background, I immediately attached a connotation of immortality upon seeing the word. And for that reason, I was puzzled at the conflation of moral worth with possession of a soul. Because my intuition tells me I should be more respectful of something that I might seriously damage than of someone that can survive anything I might do to it.
I agree, intuition is very difficult here. In this specific scenario, I’d lean towards saying yes—it’s the same person with a physically different body and brain, so I’d like to think that there is some continuity of the “person” in that situation. My brain isn’t made of the “same atoms” it was when I was born, after all. So I’d say yes. In fact, in practice, I would definitely assume said robot and software to have moral value, even if I wasn’t 100% sure.
However, if the original brain and body weren’t destroyed, and we now had two apparently identical individuals claiming to be people worthy of moral respect, then I’d be more dubious. I’d be extremely dubious of creating twenty robots running identical software (which seems entirely possible with the technology we’re supposing) and assigning them the moral status of twenty people. “People”, of the sort deserving of rights and dignity and so forth, shouldn’t be the sort of thing that can be arbitrarily created through a mechanical process. (And yes, human reproduction and growth is a mechanical process, so there’s a problem there too.)
Actually, come to think of it… if you have two copies of software (either electronic or neuron-based) running on two separate machines, but it’s the same software, could they be considered the same person? After all, they’ll make all the same decisions given similar stimuli, and thus are using the same decision process.
Yes, the consensus seems to be that running two copies of yourself in parallel doesn’t give you more measure or moral weight. But if the copies receive different inputs, they’ll eventually (frantic handwaving) diverge into two different people who both matter. (Maybe when we can’t retrieve Copy-A’s current state from Copy-B’s current state and the respective inputs, because information about the initial state has been destroyed?)
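To make the divergence picture concrete, here’s a toy sketch in Python (the string states, the observations and the lossy hash update are purely illustrative assumptions, not a claim about how an upload would really work): two copies of the same deterministic decision process stay in lockstep while their inputs are identical, diverge as soon as the inputs differ, and, because the update throws information away, earlier states can’t be read back out of later ones.

from hashlib import sha256

def step(state: str, observation: str) -> str:
    # One deterministic update of a toy "decision process".
    # Truncating the hash makes the update many-to-one (lossy), so earlier
    # states genuinely cannot be reconstructed from the current one.
    return sha256((state + "|" + observation).encode()).hexdigest()[:16]

copy_a = copy_b = "initial-upload-state"   # two copies of the same software

# Identical inputs: the copies stay bit-for-bit identical ("one person"?).
for obs in ["wake up", "see a red wall"]:
    copy_a = step(copy_a, obs)
    copy_b = step(copy_b, obs)
assert copy_a == copy_b

# Different inputs: the states diverge and (barring a hash collision)
# never re-converge -- the point at which they start to look like two people.
copy_a = step(copy_a, "talks to Alice")
copy_b = step(copy_b, "talks to Bob")
assert copy_a != copy_b

Exactly where along that divergence the moral accounting should flip from one person to two is, of course, the part the handwaving is covering for.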
Have you read the quantum physics sequence? Would you agree with me that nothing you learn about seemingly unrelated topics like QM should have the power to destroy the whole basis of your morality?
Thanks.
Can you be more specific about what you mean by a soul? To me, it sounds like you’re just using it as a designation of something that has moral value to you. But that doesn’t need to imply anything supernatural; it’s just an axiom in your moral system.
I’d love to know why moral value ⇒ presence of a soul. Also, “theist” is a very vague term that, taken by itself, could mean anything. Care to enlighten us?
Welcome!
Exciting! What’s it about?
It’s about how, if you’re attacking somebody’s argument, you should attack all of the bad points of it simultaneously, so that it doesn’t look like you’re attacking one and implicitly accepting the others. With any luck, it’ll be up tonight.