(1) is clearly nonsense.
(2) is plausible-ish. I can certainly envisage decision theories in which cloning oneself is bad.
Suppose your decision theory is “I want to maximise the amount of good I cause” and your causal model is such that the actions of your clone do not count as caused by you (because the agency of the clone “cuts off” causation flowing backwards, like a valve). Then you won’t want to clone yourself. Does this decision theory emerge from SGD? Idk, but it seems roughly as SGD-simple as other decision theories.
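Here’s a toy sketch of what that “valve” model could look like. The graph, node names, and payoff numbers are all made up for illustration; the point is just that causal credit stops flowing once it hits another agent’s decision node.

```python
# Toy model: causal credit flows downstream from my action until it hits
# another agent's decision node (the "valve"), so good produced via the
# clone's own choices doesn't count as good caused by me.

# Hypothetical causal graph: cause -> list of effects.
edges = {
    "my_action":      ["direct_good", "clone_created"],
    "clone_created":  ["clone_decision"],   # the clone's agency: a valve node
    "clone_decision": ["clone_good"],
    "direct_good":    [],
    "clone_good":     [],
}
valves = {"clone_decision"}                      # credit stops at these nodes
payoffs = {"direct_good": 10, "clone_good": 10}  # made-up utilities

def good_caused_by(start, graph, valves, payoffs):
    """Sum payoffs reachable from `start` without passing through a valve."""
    total, stack, seen = 0, [start], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        total += payoffs.get(node, 0)
        for child in graph[node]:
            if child not in valves:   # don't take credit past the clone's agency
                stack.append(child)
    return total

print(good_caused_by("my_action", edges, valves, payoffs))  # 10, not 20
```

Under this accounting, creating the clone adds nothing to “good I caused”, so an agent maximising that quantity has no reason to pay any cost to clone itself.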
Or, suppose you’re worried that your clone will have different values than you. Maybe you think their values will drift. Or maybe you think your values will drift and you have a decision theory which tracks your future values.
(3): is this nonsense? Maybe. I think that something like “universal intelligence” might apply to collective humanity (~1.5% likelihood) in a way that makes speed and memory not that relevant.
More plausibly, it might be that humans are universally agentic, such that:
(a) There exists some tool AI such that for all AGI, Human + Tool is at least as agentic as the AGI.
(b) For all AGI, there exists some tool AI such that Human + Tool is at least as smart as the AGI.
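Spelled out in rough first-order notation (writing H+T for Human + Tool, A ranging over AGIs, and agency(·)/smart(·) as loose shorthand for the relevant comparisons), the difference between (a) and (b) is just the quantifier order: (a) says a single tool suffices against every AGI, while (b) lets the tool vary with the AGI.

```latex
% (a): one tool AI works against every AGI
\exists T\, \forall A:\ \mathrm{agency}(H + T) \ge \mathrm{agency}(A)

% (b): every AGI is matched by some (possibly different) tool AI
\forall A\, \exists T:\ \mathrm{smart}(H + T) \ge \mathrm{smart}(A)
```

If “agentic” and “smart” are read as the same comparison, then (a) implies (b), so (b) is the weaker and presumably more defensible claim.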
Overall, none of these arguments gets p(Doom)<0.01, but I think they do get p(Doom)<0.99.
(P.S. I admire David Deutsch, but his idiosyncratic ideology clouds his judgement. He’s very pro-tech and pro-progress, and he has this Popperian mindset where the best way for humans to learn is trial and error (which is obviously blind to existential risk).)