To generate a plausible crux: one area I suspect is driving differences over the scenario's plausibility is the question "how exploitable are humans and reality by intelligence?", along with the related question "how much effort does it take to generate improvements?"
To answer "how much effort does it take to generate improvements?", we'd like to know how much progress is typically made when we know little about the problem, as with, say, social engineering or hacking. In that case, logarithmic returns are usually the best guide to progress, because problems tend to have a structure where each similar increment of resources solves a comparable fraction of the remaining problem. Some more justification can be found here:
https://www.fhi.ox.ac.uk/law-of-logarithmic-returns/
https://www.fhi.ox.ac.uk/theory-of-log-returns/
https://forum.effectivealtruism.org/posts/4rGpNNoHxxNyEHde3/most-problems-don-t-differ-dramatically-in-tractability
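As a toy illustration (my own sketch, not taken from the linked posts; the scaling constant `k` is arbitrary), logarithmic returns mean that each doubling of invested resources buys roughly the same increment of progress:

```python
import math

def progress(resources, k=1.0):
    # Toy model of logarithmic returns: progress = k * log2(resources),
    # with k an arbitrary scaling constant chosen for illustration.
    return k * math.log2(resources)

# Each doubling of resources yields the same fixed increment of progress:
# going from 1 -> 2 units buys as much progress as going from 16 -> 32.
for r in [1, 2, 4, 8, 16, 32]:
    print(f"resources={r:>2}  progress={progress(r):.2f}")
```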
As for the intelligence question, I tend to think reality is probably exploitable enough that an intelligence at most one order of magnitude more intelligent than us, with radically nonhuman motivations, could plausibly collapse civilization. The reason this hasn't already happened is that almost no one's motivation is to collapse global civilization, and humans are very bounded in both motivations and intelligence.
Human intelligence is one of the few real-world quantities that follows a roughly normal distribution, and we have some reason to think this isn't the normal state of affairs: it arises only because of constraints on energy and size, and because the underlying contributions combine additively, which is unusual for natural processes (most compound multiplicatively). AI intelligence will more likely look like a power law or a log-normal, and this implies more extreme conclusions about intelligence.
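A small simulation sketch (my own, under the standard assumption that additive processes yield roughly normal distributions while multiplicative ones yield roughly log-normal distributions) illustrates why the distribution's shape drives more extreme conclusions:

```python
import math
import random
import statistics

random.seed(0)
N, STEPS = 100_000, 30

# Additive process: many small independent contributions sum. By the
# central limit theorem the result is approximately normal -- the regime
# human intelligence is claimed to sit in.
additive = [sum(random.uniform(0, 1) for _ in range(STEPS)) for _ in range(N)]

# Multiplicative process: the same kind of contributions compound instead
# of adding, giving an approximately log-normal result with a much heavier
# right tail.
multiplicative = [math.prod(1 + random.uniform(0.0, 0.2) for _ in range(STEPS))
                  for _ in range(N)]

for name, xs in [("additive (~normal)", additive),
                 ("multiplicative (~log-normal)", multiplicative)]:
    print(f"{name}: max/mean = {max(xs) / statistics.mean(xs):.2f}")
# The multiplicative process produces far larger outliers relative to its
# mean, which is the intuition behind "more extreme conclusions".
```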