I like the way you phrased your concern for “subjective experience”—those are the types of characteristics I care about as well.
But I’m curious: What does the ability to learn simple grammar have to do with subjective experience?
the lives of the cockroaches are irrelevant
I’m not so sure. I’m no expert on the subject, but I suspect cockroaches may have moderately rich emotional lives.
If only for the cheap signaling value.
My point was that the action may have psychological value for oneself, as a way of getting in the habit of taking concrete steps to reduce suffering—habits that can grow into more efficient strategies later on. One could call this “signaling to oneself,” I suppose, but my point was that it might have value in the absence of being seen by others. (This is over and above the value to the worm itself, which is surely not unimportant.)
I’m surprised by Eliezer’s stance. At the very least, it seems the pain endured by the frogs is terrible, no? For just one reference on the subject, see, e.g., K. L. Machin, “Amphibian pain and analgesia,” Journal of Zoo and Wildlife Medicine, 1999.
Rain, your dilemma reminds me of my own struggles regarding saving worms in the rain. While stepping on individual worms to put them out of their misery is arguably not the most efficient means to prevent worm suffering, as a practical matter, I think it’s probably an activity worth doing, because it builds the psychological habit of exerting effort to break from one’s routine of personal comfort and self-maintenance in order to reduce the pain of other creatures. It’s easy to say, “Oh, that’s not the most cost-effective use of my time,” but it can become too easy to say that all the time to the extent that one never ends up doing anything. Once you start doing something to help, and get in the habit of expending some effort to reduce suffering, it may actually be easier psychologically to take the efficiency of your work to the next level. (“If saving worms is good, then working toward technology to help all kinds of suffering wild animals is even better. So let me do that instead.”)
The above point applies primarily to those who find themselves devoting less effort to charitable projects than they could. For people who already come close to burning themselves out by their dedication to efficient causes, taking on additional burdens to reduce just a bit more suffering is probably not a good idea.
Sure. Then what I meant was that I’m an emotivist with a strong desire to see suffering reduced and pleasure increased in the manner that a utilitarian would advocate, and I feel a deep impulse to do what I can to help make that happen. I don’t think utilitarianism is “true” (I don’t know what that could possibly mean), but I want to see it carried out.
Indeed. While still a bit muddled on the matter, I lean toward hedonistic utilitarianism, at least in the sense that the only preferences I care about are preferences regarding one’s own emotions, rather than arbitrary external events.
Environmental preservationists… er, no, I won’t try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!
Indeed. It may be rare among the LW community, but a number of people actually have a strong intuition that humans ought to preserve nature as it is, without interference, even if that means preserving suffering. As one example, Ned Hettinger wrote the following in his 1994 article, “Bambi Lovers versus Tree Huggers: A Critique of Rolston’s Environmental Ethics”: “Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support.”
Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe.
Indeed. Like many others here, I subscribe to emotivism as well as utilitarianism.
Anyway, CEV is supposed to somehow take all of these details into account, and somehow generate an outcome that everyone will be satisfied with.
Yes, that’s the ideal. But the planning fallacy tells us how much harder it is to make things work in practice than to imagine how they should work. Actually implementing CEV requires work, not magic, and that’s precisely why we’re having this conversation, as well as why SIAI’s research is so important. :)
but I still suspect that if it really is such a good idea, then it should somehow be a part of the CEV extrapolation.
I hope so. Of course, it’s not as though the only two possibilities are “CEV” or “extinction.” There are lots of third possibilities for how the power politics of the future will play out (indeed, CEV seems exceedingly quixotic by comparison with many other political “realist” scenarios I can imagine), and having a broader base of memetic support is an important component of succeeding in those political battles. More wild-animal supporters also means more people with economic and intellectual clout.
I would hope that anyone who disagrees with utilitarianism, only disagrees because of an inconsistency in their value system, and that resolving this inconsistency would leave them with utilitarianism as their value system. But I’m estimating the probability that this is the case at… significantly less than 50%.
If you include paperclippers or suffering-maximizers in your definition of “anyone,” then I’d put the probability close to 0%. If “anyone” just includes humans, I’d still put it less than, say, 10^-3.
Just so long as they don’t force any other minds to experience pain.
Yeah, although if we take the perspective that individuals are different people over time (a “person” is just an observer-moment, not the entire set of observer-moments of an organism), then any choice at one instant for pain in another instant amounts to “forcing someone” to feel pain....
Bostrom’s estimate in “Astronomical Waste” is “10^38 human lives [...] lost every century that colonization of our local supercluster is delayed,” given various assumptions. Of course, there’s reason to be skeptical of such numbers at face value, in view of anthropic considerations, simulation-argument scenarios, etc., but I agree that this consideration probably still matters a lot in the final calculation.
Still, I’m concerned not just with wild-animal suffering on earth but throughout the cosmos. In particular, I fear that post-humans might actually increase the spread of wild-animal suffering through directed panspermia or lab-universe creation or various other means. The point of spreading the meme that wild-animal suffering matters and that “pristine wilderness” is not sacred would largely be to ensure that our post-human descendants place high ethical weight on the suffering that they might create by doing such things. (By comparison, environmental preservationists and physicists today never give a second thought to how many painful experiences are or would be caused by their actions.)
As far as CEV, the set of minds whose volitions are extrapolated clearly does make a difference. The space of ethical positions includes those who care deeply about sorting pebbles into correct heaps, as well as minds whose overriding ethical goal is to create as much suffering as possible. It’s not enough to “be smarter” and “more the people we wished we were”; the fundamental beliefs that you start with also matter. Some claim that all human volitions will converge (unlike, say, the volitions of humans and the volitions of suffering-maximizers); I’m curious to see an argument for this.
PeerInfinity, I’m rather struck by a number of similarities between us:
I, too, am a programmer making money and trying to live frugally in order to donate to high-expected-value projects, currently SIAI.
I share your skepticism about the cause and am not uncomfortable with your 1% probability of positive Singularity. I agree SIAI is a good option from an expected-value perspective even if the mainline-probability scenario is that these concerns won’t materialize.
As you might guess from my user name, I’m also a Utilitronium-supporting hedonistic utilitarian who is somewhat alarmed by Eliezer’s change of values but who feels that SIAI’s values are sufficiently similar to mine that it would be unwise to attempt an alternative friendly-AI organization.
I share the seriousness with which you regard Pascal’s wager, although in my case, I was pushed toward religion from atheism rather than the other way around, and I resisted Christian thinking the whole time I tried to subscribe to it. I think we largely agree in our current opinions on the subject. I do sometimes have dreams about going to the Christian hell, though.
I’m not sure if you share my focus on animal suffering (since animals outnumber current humans by orders of magnitude) or my concerns about the implications of CEV for wild-animal suffering. Because of these concerns, I think a serious alternative to SIAI in cost-effectiveness is to donate toward promoting good memes like concern about wild animals (possibly including insects) so that, should positive Singularity occur, our descendants will do the right sorts of things according to our values.
the largest impact you can make would be to simply become a vegetarian yourself.
You can also make a big impact by donating to animal-welfare causes like Vegan Outreach. In fact, if you think the numbers in this piece are within an order of magnitude of correct, then you could prevent the 3 or 4 life-years of animal suffering that your meat-eating would cause this year by donating at most $15 to Vegan Outreach. For many people, it’s probably a lot easier to offset their personal contribution to animal suffering by donating than by going vegetarian.
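To make the arithmetic behind that claim explicit, here is a minimal back-of-the-envelope sketch. The per-dollar effectiveness figure is an assumption chosen purely for illustration (it is not taken from the linked piece); plug in whatever numbers you find credible:

```python
# Rough offset arithmetic. Both figures below are illustrative assumptions.
suffering_caused_per_year = 3.5       # life-years of animal suffering from one person's meat-eating (~3-4)
life_years_averted_per_dollar = 0.25  # assumed effectiveness of a donation to an outreach charity

offset_cost = suffering_caused_per_year / life_years_averted_per_dollar
print(f"Approximate annual offset cost: ${offset_cost:.2f}")
# With these assumptions, about $14/year, i.e., in the ballpark of the $15 figure above.
```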
Of course, the idea of “offsetting your personal contribution” is a very non-utilitarian one, because if it’s good to donate at all, then you should have been doing that already and should almost certainly do so at an amount higher than $15. But from the perspective of behavior hacks that motivate people in the real world, this may not be a bad strategy.
By the way, Vegan Outreach—despite the organization’s name—is a big advocate of the “flexitarian” approach. One of their booklets is called, “Even if You Like Meat.”
Actually, you’re right—thanks for the correction! Indeed, in general, I want altruistic equal consideration of the pleasure and pain of all sentient organisms, but this need have little connection with what I like.
As it so happens, I do often feel pleasure in taking utilitarian actions, but from a utilitarian perspective, whether that’s the case is basically beside the point. A miserable hard-core utilitarian would be much better for the suffering masses than a happier only-sometimes-utilitarian (like myself).
I am the kind of donor who is much more motivated to give by seeing what specific projects are on offer. The reason boils down to the fact that my values (namely, hedonistic utilitarianism focused on suffering) differ slightly from those of the average SIAI decision-maker, and so I want to impose those values as much as I can.
Great post! I completely agree with the criticism of revealed preferences in economics.
As a hedonistic utilitarian, I can’t quite understand why we would favor anything other than the “liking” response. Converting the universe to utilitronium producing real pleasure is my preferred outcome. (And fortunately, there’s enough of a connection between my “wanting” and “liking” systems that I want this to happen!)
Agreed. And I think it’s important to consider just how small 1% really is. I doubt the fuzzies associated with using the credit card would actually be as small as 1% of the fuzzies associated with a 100% donation—fuzzies just don’t have high enough resolution. So I would fear, a la scope insensitivity, people getting more fuzzies from the credit card than are actually deserved from the donation. If that’s necessary in order for the fuzzies to exceed a threshold for carrying out the donation, so be it; but usually the problem is in the other direction: People get too many fuzzies from doing too little and so end up not doing enough.
What criteria do you use to decide upon the class of algorithms / computations / chemicals / physical operations that you consider “conscious” in the sense of “having experiences” that matter morally? I assume it includes many non-human animals (including wild animals)? Might it include insects? Is it weighted by some correlate of brain / hardware size? Might it include digital computers? Lego Turing machines? China brains? Reinforcement-learning algorithms? Simple Python scripts that I could run on my desktop? Molecule movements in the wall behind John Searle’s back that can be interpreted as running computations corresponding to conscious suffering? Rocks? How does it distinguish interpretations of numbers as signed vs. unsigned, or one’s complement vs. two’s complement? What physical details of the computations matter? Does it regard carbon differently from silicon?
I like all of the responses to the value-of-nature arguments you give in your second paragraph. However, as a hedonistic utilitarian, I would disagree with your claim that nature has value apart from its value to organisms with experiences. And I think we have an obligation to change nature in order to avert the massive amounts of wild-animal suffering that it contains, even if doing so would render it “unnatural” in some ways.
The 12-billion-utils example is similar to one I mention on this page under “What about Isolated Actions?” I agree that our decision here is ultimately arbitrary and up to us. But I also agree with the comments by others that this choice can be built into the standard expected-utility framework by changing the utilities. That is, unless your complaint is, as Nick suggests, with the independence axiom’s constraint on rational preference orderings in and of itself (for instance, if you agreed—as I don’t—that the popular choices in the Allais paradox should count as “rational”).
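To make the “changing the utilities” point concrete, here is a minimal sketch; the payoff and probability figures are invented for illustration and are not the numbers from the original 12-billion-utils example:

```python
import math

p_win = 0.9
sure_payoff = 10e9                    # a guaranteed outcome on some naive "util" scale (illustrative)
gamble_win, gamble_lose = 12e9, 0.0   # the risky option (illustrative)

def expected_utility(u):
    """Expected utility of the sure option and of the gamble under utility function u."""
    sure = u(sure_payoff)
    gamble = p_win * u(gamble_win) + (1 - p_win) * u(gamble_lose)
    return sure, gamble

# On the naive linear scale, the gamble has the higher expectation...
print(expected_utility(lambda x: x))             # (1.0e10, 1.08e10) -> gamble "wins"

# ...yet an agent who takes the sure thing need not be violating expected-utility theory:
# a concave utility function over the same outcomes reproduces that preference.
print(expected_utility(lambda x: math.log1p(x))) # sure option now has the higher expected utility
```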
Indeed. Gaverick Matheny and Kai M. A. Chan have formalized that point in an excellent paper, “The Illogic of the Larder.”
For example if you claim to prefer non-existence of animals to them being used as food, then you clearly must support destruction of all nature reserves, as that’s exactly the same choice. And if you’re against animal suffering, you’d be totally happy to eat cows genetically modified not to have pain receptors. And so on. All positions never taken by any vegetarians.
I think most animal-welfare researchers would agree that animals on the nature reserve suffer less than those in factory farms, where conditions run contrary to the animals’ evolved instincts. As for consistent vegetarians, I know at least 5-10 people (including myself) who are very concerned about the suffering of animals in the wild and who would strongly support genetically modified cows without pain receptors. (Indeed, one of my acquaintances has actually toyed with the idea of promoting the use of anencephalic farm animals.) Still, I sympathize with your frustration about the dearth of consequentialist thinking among animal advocates.
Agreed. I’m often somewhat embarrassed to mention SIAI’s full name, or the Singularity Summit, because of the term “singularity” which, in many people’s minds—to some extent including my own—is a red flag for “crazy”.
Honestly, even the “Artificial Intelligence” part of the name can misrepresent what SIAI is about. I would describe the organization as just “a philosophy institute researching hugely important fundamental questions.”