I would certainly agree that my revealed preferences are both inconsistent with each other and inconsistent with my stated preferences (which are themselves inconsistent with each other).
Right. This is why I don’t use “revealed preferences” to derive ethics, personally.
And neither do you; I’m such an idiot.
That said.
Here’s a scenario:
Humanity has spread throughout the stars and come into its manifest destiny, yada yada. There are really ridiculous numbers of people. Trillions in every star system, and there are a lot of star systems. We all know this future.
Alas! Some aliens dislike this! They plan to follow you to a newly settled planet, kill its billion or so colonists, and settle the planet themselves, living peacefully and building stacks of pebbles or whatever valueless thing aliens do. These aliens are a hive mind, so they don’t count as people.
However! You could use your tracking beacon—of some sentimental value to you, it was a present from your dear old grandmother or something—to trick the aliens into attacking and settling on an automated mining world instead, destroying the beacon in the process but without killing a single human.
I assume you would be willing to do it to save, say, a small country on modern-day Earth, although maybe I’m projecting here? Everything is certain, because revealed preferences suck at probability math. Is it worth it?
Reorienting my understanding of this discussion to be, as you say, normative: yes, when offered a choice between destroying a sentimental but not otherwise valuable item and killing a billion humans, I endorse destroying the item, no matter how many other humans there are in the world.
I even endorse it if everything is uncertain, with the usual expected-value calculation.
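(To spell out what I mean by the usual calculation, with symbols that are purely illustrative and not anything established above: suppose the beacon trick saves the colonists with probability $p$, the beacon carries sentimental value $s$, and each of the roughly $10^9$ lives at stake is worth $v$. Then giving up the beacon comes out ahead whenever
\[
p \cdot 10^{9} \cdot v > s,
\]
which holds for any remotely sane $s$ and $v$ unless $p$ is vanishingly small.)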
That said, as is often true of hypothetical questions, I don’t quite agree that the example you describe maps onto that choice, though I think it was meant to. If I really think about the example, it’s more complicated than that. If I missed the intended point of the example, let me know and I’ll try again.
Glad to hear it. Sorry about that misunderstanding.
Curses. I knew I should have gone with the rogue nanotech.
Nope, spot-on :)