A sophisticated reader presented with a slippery slope argument like that one checks two things: first, whether there really is a force driving us in a particular direction, one that makes the metaphorical terrain a slippery slope rather than merely a slippery field; and second, whether there are any defensible points of cleavage in that terrain where a fence could be built to stop the slide.
The slippery slope argument you are quoting, when uprooted and placed in this context, seems to me to fail both tests. There is no reason at all to descend progressively into the problems described, and even if there were, you could draw a line and say “we’re just going to inform our mental model of any relevant facts we know that it doesn’t, and fix any mental processes our construct has that are clearly highly irrational”.
You haven’t given us a link, but going by the principle of charity I imagine that what you’ve done here is take a genuine problem with building a weakly God-like friendly AI and transplant the argument into the context of intervening in a suicide attempt, where it doesn’t belong.