Even if it were certain that it's far, far away (and it isn't certain at all), it would still be a very important topic.
I am aware of that line of reasoning and reject it. Each person has about a 1 in 12000 chance of having an unruptured aneurysm in the brain that could be detected and then treated after having a virtually risk free magnetic resonance angiography. Given the utility you likely assign to your own life it would be rational to undergo such a screening. At least it would make much more sense than signing up for cryonics. Yet you don’t do it, do you?
There are literally thousands of activities that are rational given their associated utilities. But that line of reasoning, although technically correct, is completely useless, because 1) you can't really calculate shit, 2) it's impossible to do for any agent that isn't computationally unbounded, and 3) you'll just end up sprinkling enough mathematics and logic over your fantasies to give them a veneer of respectability.
Expected utility maximization in combination with consequentialism is the ultimate recipe for extreme and absurd decisions and actions. People on LessWrong are fooling themselves by using formalized methods to evaluate informal evidence and pushing the use of intuition down to a lower level.
The right thing to do is to use the absurdity heuristic and discount crazy ideas that are merely possible but can’t be evaluated due to a lack of data.
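For concreteness, here is a minimal sketch of the expected-utility calculation the screening argument appeals to, and of why point 1) above bites: apart from the 1 in 12000 figure quoted above, every input is an assumption invented for illustration.

```python
# Back-of-envelope only: apart from the quoted 1 in 12000 prevalence, every number
# below is an assumption made up for illustration, not a sourced fact.

p_aneurysm          = 1 / 12000   # quoted chance of a detectable, treatable aneurysm
p_rupture_if_missed = 0.5         # ASSUMED lifetime chance an undetected aneurysm ruptures
p_death_if_rupture  = 0.4         # ASSUMED chance a rupture is fatal
value_of_life       = 7_000_000   # ASSUMED dollar value placed on one's own life
scan_cost           = 1_000       # ASSUMED out-of-pocket cost of an elective MRA

# Expected benefit of screening: chance the scan ends up saving your life,
# times the value you place on it.
expected_benefit = p_aneurysm * p_rupture_if_missed * p_death_if_rupture * value_of_life

print(f"expected benefit: ${expected_benefit:,.0f}")  # about $117 with these inputs
print(f"scan cost:        ${scan_cost:,.0f}")         # $1,000 with these inputs
```

With these particular made-up inputs the scan does not pay for itself; assume a higher rupture risk or a cheaper scan and it does. The output is dominated by inputs nobody actually knows, which is the preceding comment's complaint in miniature.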
Each person has about a 1 in 12000 chance of having an unruptured aneurysm in the brain that could be detected and then treated after having a virtually risk free magnetic resonance angiography. Given the utility you likely assign to your own life it would be rational to undergo such a screening. At least it would make much more sense than signing up for cryonics. Yet you don’t do it, do you?
Does this make sense? How much does the scan cost? How long does it take? What are the costs and risks of the treatment? Essentially, are the facts as you state them?
Expected utility maximization in combination with consequentialism is the ultimate recipe for extreme and absurd decisions and actions.
I don’t think so. Are you thinking of utilitarianism? If so, expected utility maximization != utilitarianism.
OK, what's the difference here? By “utilitarianism” do you mean the old straw-man version of utilitarianism, with a bad utility function and no ethical injunctions?
I usually take utilitarianism to be consequentialism + max(E(U)) + sane human-value metaethics. Am I confused?
The term “utilitarianism” refers to maximising the combined happiness of all people. The Wikipedia page on utilitarianism says:
Utilitarianism is an ethical theory holding that the proper course of action is the one that maximizes the overall “happiness”.
So: that’s a particular class of utility functions.
“Expected utility maximization” is a more general framework from decision theory. You can use any utility function with it—and you can use it to model practically any agent.
Utilitarianism is a pretty nutty personal moral philosophy, IMO. It is certainly very unnatural—due partly to its selflessness and lack of nepotism. It may have some merits as a political philosophy (but even then...).
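To make the distinction in the comment above concrete, here is a minimal toy sketch; the toy world, the action names and the happiness numbers are all invented for illustration. The decision rule (maximise expected utility) is the same in both cases, and "utilitarianism" only enters through the choice of utility function.

```python
# Toy sketch: expected utility maximization is the general decision rule;
# utilitarianism is one particular choice of utility function plugged into it.
# The world, actions and happiness numbers below are invented for illustration.

# How happy each person ends up in each outcome.
HAPPINESS = {
    "donated": {"me": 1, "others": 9},
    "kept":    {"me": 5, "others": 0},
}

# Each action yields a probability distribution over outcomes (deterministic here).
def outcome_distribution(action):
    return {"donate": {"donated": 1.0}, "keep": {"kept": 1.0}}[action]

def best_action(actions, utility):
    """Generic expected-utility maximizer: works with *any* utility function."""
    def expected_utility(action):
        return sum(p * utility(outcome)
                   for outcome, p in outcome_distribution(action).items())
    return max(actions, key=expected_utility)

def utilitarian_utility(outcome):
    # Utilitarianism: utility = combined happiness of all people.
    return sum(HAPPINESS[outcome].values())

def selfish_utility(outcome):
    # A purely selfish agent fits the very same framework.
    return HAPPINESS[outcome]["me"]

print(best_action(["donate", "keep"], utilitarian_utility))  # -> donate
print(best_action(["donate", "keep"], selfish_utility))      # -> keep
```

Same maximizer, different utility function, opposite choice; that is the sense in which expected utility maximization != utilitarianism.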
Each person has about a 1 in 12000 chance of having an unruptured aneurysm in the brain that could be detected and then treated after having a virtually risk free magnetic resonance angiography. Given the utility you likely assign to your own life it would be rational to undergo such a screening. At least it would make much more sense than signing up for cryonics. Yet you don’t do it, do you?
My understanding from long-past reading of elective whole-body MRIs was that they were basically the perfect example of iatrogenics & how knowing about something can harm you / the danger of testing. What makes your example different?
(Note there is no such possible danger from cryonics: you’re already ‘dead’.)
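A minimal sketch of that iatrogenics worry, with every number other than the quoted 1 in 12000 prevalence invented for illustration: once a scan finds something, treatment tends to follow, and the treatment's own risk competes with the risk it removes.

```python
# Illustrative only: apart from the quoted 1 in 12000 prevalence, every number
# below is an assumption made up for the sketch, not a medical claim.

p_has_aneurysm        = 1 / 12000  # quoted chance of having a detectable aneurysm
p_rupture_untreated   = 0.05       # ASSUMED lifetime rupture probability if left alone
p_harm_from_treatment = 0.06       # ASSUMED chance preventive treatment itself causes
                                   # comparable harm (stroke, surgical complication, ...)

# Never scan: the only expected harm comes from an unnoticed rupture.
harm_if_not_scanned = p_has_aneurysm * p_rupture_untreated

# Scan and treat whatever is found: the treatment risk replaces the rupture risk.
harm_if_scanned_and_treated = p_has_aneurysm * p_harm_from_treatment

print(f"expected harm, no scan:      {harm_if_not_scanned:.2e}")          # 4.17e-06
print(f"expected harm, scan + treat: {harm_if_scanned_and_treated:.2e}")  # 5.00e-06
```

With these made-up numbers, scanning comes out slightly worse than not knowing; nudge either assumption and it flips, which is presumably why the question "are the facts as you state them?" matters so much here.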
Thanks.
Is there a name for expected utility maximisation over a consequentialist utility function built from human value? Does “consequentialism” usually imply normal human value, or is it usually a general term?
See http://en.wikipedia.org/wiki/Consequentialism for your last question (it’s a general term).
The answer to your “Is there a name...” question is “no”—AFAIK.
I get the impression that most people around here approach morality from that perspective; it seems like something that ought to have a name.