And if not, if you want to argue against my claims in some other way, please do so without identifying them with a more specific storyline.
That’s one of the main problems I have with the whole existential risks prediction business. There is a specific storyline; it is merely concealed by the vagueness of your claims. If you tried to pin down a concept like ‘recursive self-improvement’ precisely enough to support the notion of an existential risk, you would end up with an argument that is strongly conjunctive. Most of the arguments in favor of risks from AI derive their appeal from vagueness; that doesn’t mean they are disjunctive.