Heh, relativistic effects on morality.

To elaborate: Newtonian physics works within our “default” range of experience. If you’re moving at 99.99% of c, or dealing with electrons, or with a Dyson Sphere, then you’ll need new models. For the most part, our models of reality have certain “thresholds”, and you have to use a different model on each side of a threshold.

You see this in simple transitions like liquid <-> solid, and you see it pretty much any time you feed in incredibly small or large numbers. XKCD captures this nicely :)

So… the point? We shouldn’t expect our morality to scale past a certain point, and in fact it is completely reasonable to assume that there is NO model that covers both normal human utilities AND utility monsters.
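To make that concrete, here is a minimal Python sketch, assuming a simple additive (total-utilitarian) model of utility; the million-person population and the billion-util monster are made-up numbers purely for illustration:

```python
# Minimal sketch, made-up numbers: a simple additive model of utility,
# where the "right" action is whatever maximizes the sum over agents.
population = [1.0] * 1_000_000   # a million ordinary people, 1 util each
monster_gain = 1e9               # a utility monster converts the same
                                 # resources into a billion utils

# Option A: give the resources to the monster.
total_a = sum(population) + monster_gain

# Option B: spread the resources, doubling everyone else's welfare.
total_b = 2.0 * len(population)

print(total_a, total_b)    # ~1.001e9 vs 2e6
print(total_a > total_b)   # True: the model says feed the monster
```

At human-scale numbers the same additive rule matches our intuitions fine; feed in one huge number and it cheerfully tells us to give everything to the monster. That is the threshold being crossed.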
That’s a really great point. Do you think that attempts to create some sort of pluralistic consequentialism that tries to cover these extreme situations more effectively, like I am doing, are a worthwhile effort, or do you think the odds of there being no such model are high enough that the effort is probably wasted?
It’s worth pointing out that relativity gives the right answers at 0.01% of light speed too; it just takes more computation to get them. A more complex model of morality that gives the same answers to our simple questions as our currently held system of morals seems quite desirable.
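To put numbers on that, here is a small Python sketch (the 1 kg mass is an arbitrary choice) comparing Newtonian and relativistic kinetic energy at the two speeds mentioned above:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def newtonian_ke(m, v):
    """Classical kinetic energy: (1/2) * m * v**2."""
    return 0.5 * m * v**2

def relativistic_ke(m, v):
    """Relativistic kinetic energy: (gamma - 1) * m * c**2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2

m = 1.0  # 1 kg test mass
for frac in (0.0001, 0.9999):  # 0.01% of c vs 99.99% of c
    v = frac * C
    ratio = newtonian_ke(m, v) / relativistic_ke(m, v)
    print(f"v = {frac * 100:.2f}% of c: Newtonian/relativistic = {ratio:.6f}")
# At 0.01% of c the two answers agree to ~8 decimal places; at 99.99% of c
# the Newtonian answer is too small by a factor of roughly 140.
```

The more complex model strictly contains the simpler one within its domain of validity; it just costs an extra square root.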
We shouldn’t expect our morality to scale past a certain point
Indeed, it would be a little weird if it did, though I suppose that depends on what specific set of behaviors and values one chooses to draw the morality box around, too. I’m kind of wondering if “morality” is a red herring here, although it’s hard to find the words. In local lingo, I’m sort of thinking that “pebblesorters”, as contrasted with moral agents, might be about as misleading as “p-zombies vs conscious humans.”