I don’t have anything against ftl in sf, either, but that seemed like an astonishingly bad argument for making it plausible.
Now that I think about it, the book may be of interest to LWers because it’s about telepathy making utilitarianism easier. And it’s a reasonably good sf novel.
Is there any decent moral theory that wouldn’t be easier to implement with reliable telepathy?
It’s hard to say without knowing which moral theories you have in mind, and what you mean by “decent”. For example, a strictly rule-based deontological system, such as the one outlined in certain holy books, may not benefit from telepathy, since its rules focus solely on prescribing certain specific actions.
Since this is LessWrong and there’s a strong leaning towards a certain view of normative ethics, I had better ask this before I go any further: would you consider any form of deontology or virtue ethics to be a “decent moral theory”? I know, for example, that at least one person here (not naming names) has openly said that all non-consequentialist approaches to ethics are “insane”.
I am not one of those who think non-consequentialist ethics are inherently nonsense. Reflecting a bit on my position, I was saying:
1) A “decent” moral system will very likely have the property that misleading others about one’s preferences will be advantageous to the individual, but bad for the group.
2) Telepathy makes misleading others about one’s preferences more difficult. That assumes telepathy is essentially involuntary mind-reading. If it is more like reliable cell phone service, then I’m not sure telepathy would make any moral system easier to implement.
Telepathy that’s more like reliable cellphone service would make many societal functions, including any widely-agreed-upon moral system, easier to implement, because reductions in transaction costs benefit everyone involved.
I expect that if telepathy of this sort were common, self-deception would be even more common than it already is.
Tentative: telepathy would be useful for consequentialism, but it would take more time and thought to gain the advantages from telepathy than it would for (preference?) utilitarianism.