Taking a cue from some earlier writing by Eli, I suppose one way to give ethical systems a functional test is to imagine having access to a genie. An altruist might ask the genie to maximize the amount of happiness in the universe or something like that, in which case the genie might create a huge number of wireheads. This seems to me like a bad outcome, and would likely be seen as a bad outcome by the altruist who made the request of the genie. A selfish person might say to the genie, “Create the scenario I most want/approve of.” Then it would be impossible for the genie to carry out some horrible scenario the selfish person doesn’t want. For this reason selfishness wins some points in my book. If the selfish person wants the desires of others to be met (as many people do), I, as an innocent bystander, might end up with a scenario that I approve of too. (I think the only way to improve upon this is if the person addressing the genie wants to want the things they would want if they had unlimited time and intelligence to think about it. I believe Eli calls this “external reference semantics.”)
It seems like this depends more on the person’s ability to optimize. An altruist who recognized this flaw would then be able (assuming s/he had the intelligence and rationality to do so) to calculate the best possible wish, the one that benefits the greatest number of people.
Notice how you had to assume the altruist has an extraordinary degree of intelligence and rationality in order to calculate the best possible wish, while Stephen merely had to assume that the selfishness was of the goodwill-toward-men-if-it-doesn’t-cost-me-anything sort? The fewer implausible assumptions it takes to render a given ethical philosophy genie-resilient, the more genie-resilient that philosophy is.