I think the resolution to that is that you don’t have to have an immediate emotional reaction to care about it. There are lots of good and bad things happening in the world right now, but trying to feel all of them would be pointless, and a bad fit for our mental architecture. But we can still care, I think.
Well, I certainly agree that I don’t have to have an emotional reaction to each event, or indeed any reaction to the event at all, in order to be motivated to build systems that handle events in that class in different ways. I’m content to use the word “care” to refer to such motivation, either as well as or instead of referring to such emotional reactions. Ditto for “matters” in questions like “does murder matter”, in which case my answer to the above would change, but that certainly isn’t how I understood MugaSofer’s question.
So the question now is: if you could prevent someone you would most likely never otherwise interact with from being murdered, but that would make your coffee taste worse, what would you do?
Don’t we make this choice daily by choosing our preferred brand over Ethical Bean at Starbucks?
I hear the ethics at Starbucks are rather low-quality and in any case, surely Starbucks isn’t the cheapest place to purchase ethics.
Bah! Listen, Eliezer, I’m tired of all your meta-hipsterism!
“Hey, let’s get some ethics at Starbucks.” “Nah, it’s low-quality; I only buy a really obscure brand of ethics you’ve probably never heard of called MIRI.” “Hey man, you don’t look in good health, maybe you should see a doctor.” “Nah, I like a really obscure form of healthcare, I bet you’re not signed up for it, it’s called ‘cryonics’; it’s the cool thing to do.” “I think I like you, let’s date.” “Oh, I’m afraid I only date polyamorists; you’re just too square.” “Oh man, I just realized I committed hindsight bias the other day!” “I disagree, it’s really the more obscure backfire effect which just got published a year or two ago.” “Yo, check out this thing I did with statistics.” “That’s cool. Did you use Bayesian techniques?”
Man, forget you!
/angrily sips his obscure mail-order loose tea, a kind of oolong you’ve never heard of (Formosa vintage tie-guan-yin)
If you can’t pick something non-average to meet your optimization criteria, you can’t optimize above the average.
This comment has been brought to you by my Dvorak keyboard layout.
If you keep looking down the utility gradient, it’s harder to escape local maxima because you’re facing backwards.
This comment has been brought to you by me switching from Dvorak to Colemak.
I’m always amazed that people advocate Dvorak. If you are going to diverge from the herd and be a munchkin, why do a half-assed job of it? Sure, if you already know Dvorak it isn’t worth switching, but if you are switching from Qwerty anyway, then Colemak (or at least Capewell) is better than Dvorak in all the ways that Dvorak is better than Qwerty.
Dvorak is for hipsters, not optimisers.
Tim Tyler is the actual optimizer here.
But at the same time, there are only so many possible low-hanging fruits; past some point, finding ever more of them indicates you aren’t optimizing at all...
Ouch, that cuts a bit close to home...
(Had to google “backfire effect” to find out whether you had made it up on the spot.)
EDIT: Looks like I had already heard of that effect, and I even seem to recall E.T. Jaynes giving a theoretical explanation of it, but I didn’t remember whether it had a name.
“Like I said, it’s a really obscure bias, you’ve probably never heard of it.”
Really? I don’t remember ever seeing anything like that (although I haven’t read all of PT:TLoS yet). Maybe you’re conflating it with the thesis using Bayesian methods I link in http://www.gwern.net/backfire-effect ?
I can’t tell if I should feel good or bad that this was the only one where I said “well, actually...”
BTW, for some reason, certain “fair trade” products at my supermarket are astoundingly cheap (as in, I’ve bought very similar but non-“fair trade” stuff for more); I notice that I’m confused.
… we do? Nobody told me! I’ll start tomorrow.
Judging from experience, the answer is that it depends on how the choice is framed.
That said, I’d feel worse afterwards about choosing the tastier coffee.
I was, indeed, using “matters” normatively in that comment. Sorry for any confusion.
… I, like an idiot, assumed you were too; better go edit my replies.