Very good post! I feel like I understand the concept of slippery slopes much better now. The breakdown into different categories of “slippery slope argument”, and cases where they would be valid, is one I find especially useful.
It does make me wonder whether the existence of “slippery slopes” stems from the flawed design of human brains, e.g. hyperbolic discounting and the general tendency to alter beliefs for self-consistency or because they sound/feel better. (An example would be a real-life version of Gandhi’s pill: starting with a small, not-very-bad act that someone can justify within their moral system, like stealing candy, could make them more comfortable with stealing in general and lead to stealing much bigger things.) Would a non-human ‘perfectly rational’ alien, one that didn’t hyperbolically discount and had no tendency to automatically update their beliefs/moral system so that their past actions weren’t evil, still need to worry about slippery slope arguments?
Would a non-human ‘perfectly rational’ alien, one that didn’t hyperbolically discount and had no tendency to automatically update their beliefs/moral system so that their past actions weren’t evil, still need to worry about slippery slope arguments?
Not from hyperbolic discounting or value drift. Maybe from other sources, like the coalition argument presented by Yvain.
You could still hit them with large rewards for making themselves less rational, and thus recreate the slippery slope argument along that axis.
Or large rewards for changing their utility function.