It’s pretty clear that individual murder doesn’t matter to me.
I mean, someone was murdered just now, as I write this sentence, and I care about that significantly less than I care about the quality of my coffee. I mean, I just spent five seconds adjusting the quality of my coffee, which is at least a noticeable quantity of effort if not a significant one. I can’t say the same about that anonymous murder.
Oh look, there goes another one. (Yawn.)
The metric I was using was not “caring whether someone is murdered”, which it’s clear I really don’t, but rather “being willing to murder someone,” which it’s relatively clear that I do, but not nearly as much as I could. (Insert typical spiel here about near/far mode, etc.)
I think the resolution to that is that you don’t have to have an immediate emotional reaction to care about it. There are lots of good and bad things happening in the world right now, but trying to feel all of them would be pointless, and a bad fit for our mental architecture. But we can still care, I think.
Well, I certainly agree that I don’t have to have an emotional reaction to each event, or indeed a reaction to the event at all, in order to be motivated to build systems that handle events in that class in different ways. I’m content to use the word “care” to refer to such motivation, either as well as or instead of referring to such emotional reactions. Ditto for “matters” in questions like “does murder matter”, in which case my answer to the above would change, but that certainly isn’t how I understood MugaSofer’s question.
So the question now is: if you could prevent someone you would most likely never otherwise interact with from being murdered, but that would make your coffee taste worse, what would you do?
Bah! Listen, Eliezer, I’m tired of all your meta-hipsterism!
“Hey, let’s get some ethics at Starbucks” “Nah, it’s low-quality; I only buy a really obscure brand of ethics you’ve probably never heard of called MIRI”. “Hey man, you don’t look in good health, maybe you should see a doctor” “Nah, I like a really obscure form of healthcare, I bet you’re not signed up for it, it’s called ‘cryonics’; it’s the cool thing to do”. “I think I like you, let’s date” “Oh, I’m afraid I only date polyamorists; you’re just too square”. “Oh man, I just realized I committed hindsight bias the other day!” “I disagree, it’s really the more obscure backfire effect which just got published a year or two ago.” “Yo, check out this thing I did with statistics” “That’s cool. Did you use Bayesian techniques?”
Man, forget you!
/angrily sips his obscure mail-order loose tea, a kind of oolong you’ve never heard of (Formosa vintage tie-guan-yin)
This comment has been brought to you by me switching from Dvorak to Colemak.
I’m always amazed that people advocate Dvorak. If you are going to diverge from the herd and be a munchkin why do a half-assed job of it? Sure, if you already know Dvorak it isn’t worth switching but if you are switching from Qwerty anyway then Colemak (or at least Capewell) is better than Dvorak in all the ways that Dvorak is better than Qwerty.
If you can’t pick something non-average to meet your optimization criteria, you can’t optimize above the average.
But at the same time, there are only so many possible low-hanging fruits, etc., and at some point, finding ever more fruits indicates you aren’t optimizing at all...
(Had to google “backfire effect” to find out whether you had made it up on the spot.)
EDIT: Looks like I had already heard of that effect, and I even seem to recall E.T. Jaynes giving a theoretical explanation of it, but I didn’t remember whether it had a name.
Had to google “backfire effect” to find out whether you had made it up on the spot.
“Like I said, it’s a really obscure bias, you’ve probably never heard of it.”
I even seem to recall E.T. Jaynes giving a theoretical explanation of it
Really? I don’t remember ever seeing anything like that (although I haven’t read all of PT:TLoS yet). Maybe you’re conflating it with the thesis using Bayesian methods I link in http://www.gwern.net/backfire-effect ?
BTW, for some reason, certain “fair trade” products at my supermarket are astoundingly cheap (as in, I’ve bought very similar but non-“fair trade” stuff for more); I notice that I’m confused.
This comment was written under the misapprehension that Dave was speaking normatively.
It’s pretty clear that individual murder doesn’t matter to me.
I mean, someone was murdered just now, as I write this sentence, and I care about that significantly less than I care about the quality of my coffee. I mean, I just spent five seconds adjusting the quality of my coffee, which is at least a noticeable quantity of effort if not a significant one. I can’t say the same about that anonymous murder.
Oh look, there goes another one. (Yawn.)
I always attributed that to the abstract nature of the knowledge. I mean, if you knew anything about the person, you’d care a lot more, which suggests the relevant factor is ignorance, and that’s a property of the map, not the territory.
The metric I was using was not “caring whether someone is murdered”, which it’s clear I really don’t, but rather “being willing to murder someone,” which it’s relatively clear that I do, but not nearly as much as I could. (Insert typical spiel here about near/far mode, etc.)
So you’re saying your preferences on this matter are inconsistent?
Yes, I agree completely that what I’m talking about is an attribute of “the map.” (I could challenge whether it’s ignorance or something else, but the key point here is that I’m discussing motivational psychology, and I agree.)
So you’re saying your preferences on this matter are inconsistent?
Well, that wasn’t my point, and I’m not quite sure how it follows from what I said, but I would certainly agree that my revealed preferences are both inconsistent with each other and inconsistent with my stated preferences (which are themselves inconsistent with each other).
I would certainly agree that my revealed preferences are both inconsistent with each other and inconsistent with my stated preferences (which are themselves inconsistent with each other).
Right. This is why I don’t use “revealed preferences” to derive ethics, personally.
And neither do you, I’m such an idiot.
That said.
Here’s a scenario:
Humanity has spread throughout the stars and come into its manifest destiny, yada yada. There are really ridiculous amounts of people. Trillions in every star system, and there are a lot of star systems. We all know this future.
Alas! Some aliens dislike this! They plan to follow you to a newly-settled planet—around a billion colonists. Then they will colonize the planet themselves, and live peacefully building stacks of pebbles or whatever valueless thing aliens do. These aliens are a hive mind, so they don’t count as people.
However! You could use your tracking beacon—of some sentimental value to you, it was a present from your dear old grandmother or something—to trick the aliens into attacking and settling on an automated mining world, without killing a single human.
I assume you would be willing to do it to save, say, a small country on modern-day Earth, although maybe I’m projecting here? Everything is certain, because revealed preferences suck at probability math.
Reorienting my understanding of this discussion to be, as you say, normative: yes, when offered a choice between destroying a sentimental but not otherwise valuable item and killing a billion humans, I endorse destroying the item, no matter how many other humans there are in the world.
I even endorse it if everything is uncertain, with the usual expected-value calculation.
That said, as is often true of hypothetical questions, I don’t quite agree that the example you describe maps to that choice, though I think it was meant to. If I really think about the example, it’s more complicated than that. If I missed the intended point of the example, let me know and I’ll try again.
Reorienting my understanding of this discussion to be, as you say, normative: yes, when offered a choice between destroying a sentimental but not otherwise valuable item and killing a billion humans, I endorse destroying the item, no matter how many other humans there are in the world.
I even endorse it if everything is uncertain, with the usual expected-value calculation.
Glad to hear it. Sorry about that misunderstanding.
That said, as is often true of hypothetical questions, I don’t quite agree that the example you describe maps to that choice, though I think it was meant to. If I really think about the example, it’s more complicated than that.
Curses. I knew I should have gone with the rogue nanotech.
If I missed the intended point of the example, let me know and I’ll try again.
It’s pretty clear that individual murder doesn’t matter to me.
I mean, someone was murdered just now, as I write this sentence, and I care about that significantly less than I care about the quality of my coffee. I mean, I just spent five seconds adjusting the quality of my coffee, which is at least a noticeable quantity of effort if not a significant one. I can’t say the same about that anonymous murder.
Oh look, there goes another one. (Yawn.)
The metric I was using was not “caring whether someone is murdered”, which it’s clear I really don’t, but rather “being willing to murder someone,” which it’s relatively clear that I do, but not nearly as much as I could. (Insert typical spiel here about near/far mode, etc.)
I think the resolution to that is that you don’t have to have an immediate emotional reaction to care about it. There are lots of good and bad things happening in the world right now, but trying to feel all of them would be pointless, and a bad fit for our mental architecture. But we can still care, I think.
Well, I certainly agree that I don’t have to have an emotional reaction to each event, or indeed a reaction to the event at all, in order to be motivated to build systems that handle events in that class in different ways. I’m content to use the word “care” to refer to such motivation, either as well as or instead of referring to such emotional reactions. Ditto for “matters” in questions like “does murder matter”, in which case my answer to the above would change, but that certainly isn’t how I understood MugaSofer’s question.
So the question now is: if you could prevent someone you would most likely never otherwise interact with from being murdered, but that would make your coffee taste worse, what would you do?
Don’t we make this choice daily by choosing our preferred brand over Ethical Bean at Starbucks?
I hear the ethics at Starbucks are rather low-quality and in any case, surely Starbucks isn’t the cheapest place to purchase ethics.
Bah! Listen, Eliezer, I’m tired of all your meta-hipsterism!
“Hey, let’s get some ethics at Starbucks” “Nah, it’s low-quality; I only buy a really obscure brand of ethics you’ve probably never heard of called MIRI”. “Hey man, you don’t look in good health, maybe you should see a doctor” “Nah, I like a really obscure form of healthcare, I bet you’re not signed up for it, it’s called ‘cryonics’; it’s the cool thing to do”. “I think I like you, let’s date” “Oh, I’m afraid I only date polyamorists; you’re just too square”. “Oh man, I just realized I committed hindsight bias the other day!” “I disagree, it’s really the more obscure backfire effect which just got published a year or two ago.” “Yo, check out this thing I did with statistics” “That’s cool. Did you use Bayesian techniques?”
Man, forget you!
/angrily sips his obscure mail-order loose tea, a kind of oolong you’ve never heard of (Formosa vintage tie-guan-yin)
If you can’t pick something non-average to meet your optimization criteria, you can’t optimize above the average.
This comment has been brought to you by my Dvorak keyboard layout.
If you keep looking down the utility gradient, it’s harder to escape local maxima because you’re facing backwards.
This comment has been brought to you by me switching from Dvorak to Colemak.
I’m always amazed that people advocate Dvorak. If you are going to diverge from the herd and be a munchkin why do a half-assed job of it? Sure, if you already know Dvorak it isn’t worth switching but if you are switching from Qwerty anyway then Colemak (or at least Capewell) is better than Dvorak in all the ways that Dvorak is better than Qwerty.
Dvorak is for hipsters, not optimisers.
Tim Tyler is the actual optimizer here.
But at the same time, there are only so many possible low-hanging fruits, etc., and at some point, finding ever more fruits indicates you aren’t optimizing at all...
Ouch, that cuts a bit close to home...
(Had to google “backfire effect” to find out whether you had made it up on the spot.)
EDIT: Looks like I had already heard of that effect, and I even seem to recall E.T. Jaynes giving a theoretical explanation of it, but I didn’t remember whether it had a name.
“Like I said, it’s a really obscure bias, you’ve probably never heard of it.”
Really? I don’t remember ever seeing anything like that (although I haven’t read all of PT:TLoS yet). Maybe you’re conflating it with the thesis using Bayesian methods I link in http://www.gwern.net/backfire-effect ?
I can’t tell if I should feel good or bad that this was the only one where I said “well, actually...”
BTW, for some reason, certain “fair trade” products at my supermarket are astoundingly cheap (as in, I’ve bought very similar but non-“fair trade” stuff for more); I notice that I’m confused.
… we do? Nobody told me! I’ll start tomorrow.
Judging from experience, the answer is that it depends on how the choice is framed.
That said, I’d feel worse afterwards about choosing the tastier coffee.
I was, indeed, using “matters” normatively in that comment. Sorry for any confusion.
… I, like an idiot, assumed you were too; better go edit my replies.
I always attributed that to the abstract nature of the knowledge. I mean, if you knew anything about the person, you’d care a lot more, which suggests the relevant factor is ignorance, and that’s a property of the map, not the territory.
So you’re saying your preferences on this matter are inconsistent?
Yes, I agree completely that what I’m talking about is an attribute of “the map.” (I could challenge whether it’s ignorance or something else, but the key point here is that I’m discussing motivational psychology, and I agree.)
Well, that wasn’t my point, and I’m not quite sure how it follows from what I said, but I would certainly agree that my revealed preferences are both inconsistent with each other and inconsistent with my stated preferences (which are themselves inconsistent with each other).
Right. This is why I don’t use “revealed preferences” to derive ethics, personally.
And neither do you, I’m such an idiot. That said.
Here’s a scenario:
I assume you would be willing to do it to save, say, a small country on modern-day Earth, although maybe I’m projecting here? Everything is certain, because revealed preferences suck at probability math.
Is it worth it?
Reorienting my understanding of this discussion to be, as you say, normative: yes, when offered a choice between destroying a sentimental but not otherwise valuable item and killing a billion humans, I endorse destroying the item, no matter how many other humans there are in the world.
I even endorse it if everything is uncertain, with the usual expected-value calculation.
That said, as is often true of hypothetical questions, I don’t quite agree that the example you describe maps to that choice, though I think it was meant to. If I really think about the example, it’s more complicated than that. If I missed the intended point of the example, let me know and I’ll try again.
Glad to hear it. Sorry about that misunderstanding.
Curses. I knew I should have gone with the rogue nanotech.
Nope, spot-on :)