Questions of priority (and of the relative intensity of suffering between members of different species) need to be distinguished from the question of whether other sentient beings have moral status at all. I guess that was what shocked me about Eliezer’s bald assertion that frogs have no moral status. After all, compared to our posthuman successors, humans may be less sentient than frogs are compared to us. So it’s unsettling to think that posthumans might give simple-minded humans the same level of moral consideration that Eliezer accords frogs.
I don’t consider frogs to be objects of moral worth.
-- Eliezer Yudkowsky via Facebook
Are there any possible facts that would make you consider frogs objects of moral worth if you found out they were true?
(Edited for clarity.)
“Frogs have subjective experience” is the biggy; there are a number of other things I already know myself to be confused about which bear on that, and so I don’t know exactly what I should be looking for in the frog that would make me think it had a sense of its own existence. Certainly there are any number of news items I could receive about the frog’s mental abilities, brain complexity, type of algorithmic processing, ability to reflect on its own thought processes, etcetera, which would make me think it more likely that the frog was what a non-confused version of myself would regard as fulfilling the predicate I currently call “capable of experiencing pain”, as opposed to being a more complicated version of the neural network reinforcement-learning algorithms that I have no qualms about running on a computer.
A simple example would be if frogs could recognize dots painted on them when seeing themselves in mirrors, or if frogs showed signs of being able to learn very simple grammar like “jump blue box”. (If all human beings were being cryonically suspended I would start agitating for the chimpanzees.)
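For contrast, here is a minimal sketch (in Python, with entirely made-up names and numbers, and a simple tabular update standing in for the neural network case mentioned above) of the sort of reward-driven learning loop one might run on a computer without qualms: a damage signal shifts action preferences, but nothing in the system models or reflects on itself.

```python
# A hypothetical, minimal reward-driven learner: a "damage" signal adjusts
# action preferences, with no self-model or reflection anywhere in the system.
import random

actions = ["stay", "hop_away"]
value = {a: 0.0 for a in actions}    # learned preference for each action
alpha = 0.1                          # learning rate

def noxious_stimulus(action):
    """Hypothetical environment: staying near the stimulus is 'punished'."""
    return -1.0 if action == "stay" else 0.0

for _ in range(1000):
    # epsilon-greedy choice between the two actions
    a = random.choice(actions) if random.random() < 0.1 else max(actions, key=value.get)
    r = noxious_stimulus(a)
    value[a] += alpha * (r - value[a])   # move the estimate toward the observed reward

print(value)  # the learner comes to avoid the stimulus, yet it is hard to say
              # that anything here "experiences" pain
```

The question in the comment is what further fact about the frog would distinguish it from a (more complicated) mechanism of this kind.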
I am very surprised that you suggest that “having subjective experience” is a yes/no thing. I thought it was the consensus opinion here that it is not. I am not sure about others on LW, but I would even go three steps further: it is not even a strict ordering of things. It is not even a partial ordering of things. I believe it can only be defined in the context of an Observer and an Object, where the Observer gives some amount of weight to the theory that the Object’s subjective experience is similar to the Observer’s own.
Links? I’d be interested in seeing what people on LW thought about this, if it’s been discussed before. I can understand the yes/no position, or the idea that there’s a blurry line somewhere between thermostats and humans, but I don’t understand what you mean about the Observer and Object. The Observer in your example has subjective experience?
I like the way you phrased your concern for “subjective experience”—those are the types of characteristics I care about as well.
But I’m curious: what does the ability to learn simple grammar have to do with subjective experience?
We’re not looking for objective experience, so we’re simply looking for experience. If we now define ‘a sense of one’s own existence’ as the experience of self-awareness, i.e. consciousness, and if we also regard unconscious experience as not worthy of moral consideration, we’re left with consciousness.
Now, since we cannot define consciousness, we need a working definition. What are some important aspects of consciousness? ‘Thinking’, which requires ‘knowledge’ (data), is not what separates an iPhone from a human; it’s information processing, after all. So what do we mean by unconscious, as opposed to conscious, decision making? It’s about deliberate, purposeful (goal-oriented) adaptation. Thus to be conscious is to be able to shape your environment in a way that suits your volition.
The ability to define a system, within the environment in which it is embedded, as yourself.
To be goal-oriented
The specific effectiveness and order of the transformations by which the self-defined system (you) shapes the outside environment in which it is embedded outweighs the environmental influence on the defined system. (more)
How could this help with the frog dilemma? Are frogs conscious?
Are there signs of active environmental adaptation by frog society, as indicated by behavioral variability?
To what extent is frog behavior predictable?
That is, we have to gauge the extent of active adaptation of the environment by frogs, as opposed to passive adaptation of frogs by the environment. Further, one needs to test the frog’s capacity for deliberate, spontaneous behavior given environmental (experimental) stimuli and see whether frogs can evade them, i.e. action vs. reaction. (A rough sketch of one way the predictability question could be quantified follows below.)
P.S. No attempt at a solution, just some quick thoughts I wanted to write down for clarity and possible feedback.
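As referenced above, here is one rough way the “to what extent is frog behavior predictable?” question could be operationalized: as the entropy of the response distribution to a repeated stimulus, with low entropy suggesting reflexive, highly predictable behavior. This is only a sketch of that idea; the stimulus, the response labels, and the counts below are hypothetical.

```python
# Toy illustration: entropy of observed responses to a repeated stimulus.
# All data below are invented for the example.
from collections import Counter
from math import log2

def response_entropy(responses):
    """Shannon entropy (in bits) of an observed response distribution."""
    counts = Counter(responses)
    total = len(responses)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical trials with the same looming-shadow stimulus:
reflexive = ["jump"] * 48 + ["freeze"] * 2                        # near-deterministic
flexible  = ["jump"] * 20 + ["freeze"] * 15 + ["turn"] * 10 + ["call"] * 5

print(response_entropy(reflexive))  # low entropy: behavior highly predictable
print(response_entropy(flexible))   # higher entropy: more behavioral variability
```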
I’m surprised. Do you mean you wouldn’t trade off a dust speck in your eye (in some post-singularity future where x-risk is settled one way or another) to avert the torture of a billion frogs, or of some noticeable portion of all frogs? If we plotted your attitudes to progressively more intelligent entities, where’s the discontinuity or discontinuities?
You’d need to change that to 10^6 specks and 10^15 frogs or something, because the emotional reaction to choosing to kill the frogs is also part of the consequences of the decision, and this particular consequence might have moral value that outweighs one speck.
Your emotional reaction to a decision about human lives is irrelevant; the lives in question hold most of the moral worth. With a decision to kill billions of cockroaches (to be safe from the question of the moral worth of frogs), the lives of the cockroaches are irrelevant, and your emotional reaction holds most of the moral worth.
I’m not so sure. I’m no expert on the subject, but I suspect cockroaches may have moderately rich emotional lives.
Hopefully he still thinks there’s a small probability of frogs being able to experience pain, so that the expected suffering of frog torture would be hugely greater than a dust speck.
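As a back-of-the-envelope rendering of that expected-value point: even a small probability that frogs can suffer makes the expected disutility of torturing a billion frogs dwarf one dust speck. Every magnitude below (the probability, the per-frog harm, the speck’s disutility) is a made-up placeholder, not a claim about the true values.

```python
# Hypothetical expected-value comparison; all numbers are placeholders.
p_frog_sentient  = 0.01   # assumed probability that frogs can experience pain
harm_per_frog    = 1e3    # assumed disutility of torture per sentient frog, in speck units
num_frogs        = 1e9
speck_disutility = 1.0    # one dust speck as the unit of disutility

expected_frog_harm = p_frog_sentient * harm_per_frog * num_frogs
print(expected_frog_harm > speck_disutility)   # True by an enormous margin
print(expected_frog_harm / speck_disutility)   # ~1e10 with these placeholders
```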
Do you mean you wouldn’t trade off a dust speck in your eye (in some post-singularity future where x-risk is settled one way or another) to avert the torture of a billion frogs, or of some noticeable portion of all frogs?
Depends. Would that make it harder to get frog legs?
Same questions to you, but with “rocks” for “frogs”.
Eliezer didn’t say he was 100% sure frogs weren’t objects of moral worth, nor is it a priori unreasonable to believe there exists a sharp cutoff without knowing where it is.
Why not?
Seconded, and how do you (Eliezer) rate other creatures on the Great Chain of Being?
Would you save a stranded frog, though?
What about dogs?
Yeah, trying to save the world does that to you.
ETA (May 2012): wow, I can’t understand what prompted me to write a comment like this. Sorry.
Axiom: The world is worth saving.
Fact: Frogs are part of the world.
Inference: Frogs are worth saving in proportion to their measure and effect on the world.
Query: Is life worth living if all you do is save more of it?
I don’t know. I’m not Eliezer. I’d save the frogs because it’s fun, not because of some theory.
As a matter of practical human psychology, no. People cannot just give and give and get nothing back from it but self-generated warm fuzzies, a score kept in your head by rules of your own that no-one else knows or cares about. You can do some of that, but if that’s all you do, you just get drained and burned out.
The inference does not follow from the axiom. It doesn’t follow that the world is more likely to be saved if I save frogs. It also doesn’t follow that saving frogs is the most efficient use of my time if I’m going to spend time saving the world. I could, for example, use that time to help reduce existential risk factors for everyone, which would incidentally reduce the risk to frogs.
I find it difficult to explain, but I know that I disagree with you. The world is worth saving precisely because of the components that make it up, including frogs. The inference does follow from the axiom, unless you have a (fairly large) list of properties or objects in the world that you’ve deemed out of scope (not worth saving independently of the entire world). Do you have such a list, even implicitly? I might agree that frogs are out of scope, as that was one component of my motivation for posting this thread.
And stating that there are “more efficient” ways of saving frogs than directly saving frogs does not refute the initial inference that frogs are worth saving in proportion to their measure and effect on the world. Perhaps you are really saying “their proportion and measure is low enough that it’s not worth the time to stoop and pick them up”? Which I might also agree with.
But in my latest query, I was trying to point out that “a safe Singularity is a more efficient means of achieving goal X” or “a well thought out existential risk reduction project is a more efficient means of saving Y” can be used as a fully general counterargument, and I was wondering whether people really believe such arguments trump all other actions one might take.
I’m surprised by Eliezer’s stance. At the very least, it seems the pain endured by the frogs is terrible, no? For just one reference on the subject, see, e.g., KL Machin, “Amphibian pain and analgesia,” Journal of Zoo and Wildlife Medicine, 1999.
Rain, your dilemma reminds me of my own struggles regarding saving worms in the rain. While stepping on individual worms to put them out of their misery is arguably not the most efficient means to prevent worm suffering, as a practical matter, I think it’s probably an activity worth doing, because it builds the psychological habit of exerting effort to break from one’s routine of personal comfort and self-maintenance in order to reduce the pain of other creatures. It’s easy to say, “Oh, that’s not the most cost-effective use of my time,” but it can become too easy to say that all the time to the extent that one never ends up doing anything. Once you start doing something to help, and get in the habit of expending some effort to reduce suffering, it may actually be easier psychologically to take the efficiency of your work to the next level. (“If saving worms is good, then working toward technology to help all kinds of suffering wild animals is even better. So let me do that instead.”)
The above point applies primarily to those who find themselves devoting less effort to charitable projects than they could. For people who already come close to burning themselves out by their dedication to efficient causes, taking on additional burdens to reduce just a bit more suffering is probably not a good idea.
At the very least, it seems the pain endured by the frogs is terrible, no?
Maybe so, but the question is why we should care.
If only for the cheap signaling value.
My point was that the action may have psychological value for oneself, as a way of getting in the habit of taking concrete steps to reduce suffering—habits that can grow into more efficient strategies later on. One could call this “signaling to oneself,” I suppose, but my point was that it might have value in the absence of being seen by others. (This is over and above the value to the worm itself, which is surely not unimportant.)