This comment was written under the misapprehension that Dave was speaking normatively.
It’s pretty clear that individual murder doesn’t matter to me.
I mean, someone was murdered just now, as I write this sentence, and I care about that significantly less than I care about the quality of my coffee. I just spent five seconds adjusting the quality of my coffee, which is at least a noticeable quantity of effort, if not a significant one. I can’t say the same about that anonymous murder.
Oh look, there goes another one. (Yawn.)
I always attributed that to the abstract nature of the knowledge. I mean, if you knew anything about the person, you’d care a lot more, which suggests the relevant factor is ignorance, and that’s a property of the map, not the territory.
The metric I was using was not “caring whether someone is murdered”, which it’s clear I really don’t, but rather “being willing to murder someone,” which it’s relatively clear that I do, but not nearly as much as I could. (Insert typical spiel here about near/far mode, etc.)
So you’re saying your preferences on this matter are inconsistent?
Yes, I agree completely that what I’m talking about is an attribute of “the map.” (I could challenge whether it’s ignorance or something else, but the key point here is that I’m discussing motivational psychology, and I agree.)
Well, that wasn’t my point, and I’m not quite sure how it follows from what I said, but I would certainly agree that my revealed preferences are both inconsistent with each other and inconsistent with my stated preferences (which are themselves inconsistent with each other).
Right. This is why I don’t use “revealed preferences” to derive ethics, personally.
And neither do you; I’m such an idiot.
That said.
Here’s a scenario:
Humanity has spread throughout the stars and come into its manifest destiny, yada yada. There are really ridiculous numbers of people: trillions in every star system, and there are a lot of star systems. We all know this future.
Alas! Some aliens dislike this! They plan to follow you to a newly settled planet of around a billion colonists, wipe out the settlers, and colonize the planet themselves, where they will live peacefully building stacks of pebbles or whatever valueless thing aliens do. These aliens are a hive mind, so they don’t count as people.
However! You could use your tracking beacon—of some sentimental value to you, it was a present from your dear old grandmother or something—to trick the aliens into attacking and settling on an automated mining world, without killing a single human.
I assume you would be willing to do it to save, say, a small country on modern-day Earth, although maybe I’m projecting here? Everything is certain, because revealed preferences suck at probability math.
Is it worth it?
Reorienting my understanding of this discussion to be, as you say, normative: yes, when offered a choice between destroying a sentimental but not otherwise valuable item and killing a billion humans, I endorse destroying the item, no matter how many other humans there are in the world.
I even endorse it if everything is uncertain, with the usual expected-value calculation.
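(A minimal sketch of that sort of expected-value comparison, with entirely made-up numbers; the probabilities and utility weights below are illustrative assumptions, not anything taken from the scenario itself.)

    # Toy expected-value comparison for the beacon scenario.
    # All numbers are illustrative placeholders, not claims about the actual case.
    p_trick_works = 0.9                # assumed chance the beacon trick diverts the aliens
    lives_at_stake = 1_000_000_000     # colonists on the newly settled planet
    value_per_life = 1.0               # utility per life saved (arbitrary units)
    sentimental_cost = 10.0            # utility lost with grandmother's beacon (arbitrary units)

    ev_use_beacon = p_trick_works * lives_at_stake * value_per_life - sentimental_cost
    ev_do_nothing = 0.0

    # Even with generous uncertainty, the item's loss is negligible next to the lives saved.
    print(ev_use_beacon > ev_do_nothing)   # True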
That said, as is often true of hypothetical questions, I don’t quite agree that the example you describe maps to that choice, though I think it was meant to. If I really think about the example, it’s more complicated than that. If I missed the intended point of the example, let me know and I’ll try again.
Glad to hear it. Sorry about that misunderstanding.
Curses. I knew I should have gone with the rogue nanotech.
Nope, spot-on :)