The pressure to replace humans with AIs can be framed as a general trend from evolutionary dynamics. Selection pressures incentivize AIs to act selfishly and evade safety measures.
Seems like the wrong frame? Evolution is based on mutation, which AIs won't have. However, the human world already has a similar and much faster dynamic driven by the large natural variability between human agents (due to both genetic and environmental factors), which tends to push people with certain characteristics to the top (e.g. high intelligence, grit, power-seeking tendencies, narcissism, sociopathy). AIs and AGIs will add rapid training and easy copying on top of this, though I expect they'll have less variety than humans.
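To make this concrete, here's a toy simulation (entirely my own illustration, not from the article): pure selection on initial variability, with copying instead of mutation, still drives a population toward its most extreme members. The "trait" here is a hypothetical stand-in for something like power-seeking tendency.

```python
import random

# Toy sketch: selection acting on standing variation alone -- no mutation --
# still concentrates extreme traits, especially when winners can be copied
# cheaply, as AIs can.

random.seed(0)

POP = 1000          # number of agents
GENERATIONS = 10    # selection rounds
TOP_FRACTION = 0.1  # fraction that "rises to the top" and gets copied

# Each agent is just a scalar trait, drawn once from the initial
# variability and never mutated afterwards.
population = [random.gauss(0.0, 1.0) for _ in range(POP)]

for gen in range(GENERATIONS):
    population.sort(reverse=True)
    winners = population[: int(POP * TOP_FRACTION)]
    # Copying, not mutating: the next generation consists entirely of
    # verbatim copies of the winners.
    population = [random.choice(winners) for _ in range(POP)]
    mean = sum(population) / POP
    print(f"gen {gen + 1}: mean trait = {mean:.2f}")
```

The mean trait climbs within a few rounds to near the maximum of the initial sample: no mutation required, just variability plus differential copying.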
Given the exponential increase in microprocessor speeds, AIs could process information at a pace that far exceeds human neurons. Due to the scalability of computational resources, AI could collaborate with an unlimited number of other AIs and form an unprecedented collective intelligence.
Worth noting that this has been the case for a long time: old AIs haven't so much been slow as undersized.
Not sure what I would add to “Suggestions” other than “don’t build AGI” :D
Speaking of AGI, I'm often puzzled by the extent to which authors here say "AI" when they are talking about AGI. It seems to me that some observers think we're crazy for worrying that GPT-5 is going to kill all humans, when in fact our main concern is not AI but AGI.