I think your two assumptions lead to “the exponential increase in capabilities would likely break down at some point”. Whereas you say “the exponential increase in capabilities would likely break down before a singularity is reached”. Why? Hmm, are you thinking that “singularity” = “literally diverging to infinity”, or something? In that case there’s a much simpler argument: we live in a finite universe, therefore nothing will diverge to literal infinity. But I don’t think that’s the right definition of “singularity” anyway. Like, the Wikipedia definition doesn’t say “literally infinity”. So what do you mean? Where does the “likely before a singularity” come from?
For my part, if there’s a recursive-self-improvement thing over the course of 1 week that leaves human intelligence in the dust, and results in AI for $1/hour that can trounce humans in every cognitive domain as soundly as AlphaZero can trounce us at chess, and it’s installed itself onto every hackable computer on Earth … well I’m gonna call that “definitely the singularity”, even if the recursive-self-improvement cycle “only” persisted for 10 doublings beyond human intelligence, or whatever, before petering out.
Incidentally, note that a human-brain-level computer can be ~10,000× less energy-efficient than the human brain itself, and its hourly electricity bill would still be below the human minimum wage in many countries.
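A rough back-of-envelope sketch of that claim (the ~20 W brain power figure and the ~$0.05/kWh electricity price are my own illustrative assumptions, not numbers from the comment above):

```python
# Back-of-envelope: hourly electricity cost of a computer that is
# ~10,000x less energy-efficient than the human brain.
# All inputs are rough assumptions for illustration only.

BRAIN_POWER_W = 20               # human brain runs on roughly 20 watts
INEFFICIENCY_FACTOR = 10_000     # assumed energy-efficiency penalty
ELECTRICITY_USD_PER_KWH = 0.05   # assumed cheap industrial/datacenter rate

computer_power_kw = BRAIN_POWER_W * INEFFICIENCY_FACTOR / 1000   # 200 kW
hourly_cost_usd = computer_power_kw * ELECTRICITY_USD_PER_KWH    # ~$10/hour

print(f"Power draw: {computer_power_kw:.0f} kW")
print(f"Hourly electricity cost: ${hourly_cost_usd:.2f}")
# ~$10/hour, which is below the statutory minimum wage in, e.g., Australia
# and Luxembourg, and roughly comparable to it in much of Western Europe.
```

Whether this lands just below or just above a particular country’s minimum wage is obviously sensitive to the assumed electricity price; the point is only that, even with a 10,000× efficiency penalty, the running cost is in the ballpark of cheap human labor.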
Also, caricaturing slightly, but this comment section has some arguments of the form:
A: “The probability of Singularity is <100%!”
B: “No, the probability of Singularity is >0%!”
A: “No, it’s <100%!!” …
So I would encourage everyone here to agree that the probability is both >0% and <100%, which I am confident is not remotely controversial for anyone here. And then we can be more specific about what the disagreement is. :)