A lot of predictions about AI psychology are premised on the AI being some form of deep learning algorithm. From what I can see, deep learning requires geometrically increasing computing power for linear gains in intelligence, and thus (practically speaking) cannot scale to sentience.
For a more expert/in-depth take, see: https://arxiv.org/pdf/2007.05558.pdf
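To make "geometric compute for linear gains" concrete, here is the rough arithmetic as a sketch. Suppose test error falls as a power law in training compute, error ∝ compute^(-alpha); then every constant-factor cut in error costs a multiplicative jump in compute. The exponent below is an illustrative placeholder I picked, not a number taken from the paper.

```python
# Illustrative arithmetic only: assume test error falls as a power law in
# training compute, error ~ compute^(-alpha).
# The exponent 0.05 is a made-up placeholder, not a figure from the linked paper.
alpha = 0.05

def compute_multiplier(error_cut, alpha=alpha):
    """How many times more compute is needed to cut error by the given factor."""
    return error_cut ** (1.0 / alpha)

for factor in (2, 4, 8):
    print(f"cutting error {factor}x needs ~{compute_multiplier(factor):.1e}x more compute")
```

Under that (assumed) exponent, each halving of error costs roughly a million times more compute, which is the kind of scaling I mean by "cannot practically scale."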
Why do people think deep learning algorithms can scale to sentience without unreasonable amounts of computational power?
1: This doesn’t sound like what I’m hearing people say? Using the word sentience might have been a mistake. Is it reasonable to expect that the first AI to foom will be no more intelligent than, say, a squirrel?
2a: Should we be convinced that neurons are basically doing deep learning? I didn’t think we understood neurons to that degree?
2b: What is meant by [most things a human can do]? This sounds to me like an empty statement. Most things a human can do are completely pointless flailing actions. Do we mean most jobs in modern America? Do we expect Roombas to foom? Self-driving cars? And even "most jobs in modern America" still sounds like a really low standard, one requiring very little intelligence.
The answer I expected was something along the lines of "We can achieve better results than that because of something something," or "We can provide much better computers in the near future, so this doesn’t matter."
What I’m hearing here is "Intelligence is unnecessary for AI to be (existentially) dangerous." This is surprising and, I expect, wrong, in the sense of not being what’s actually being said / what the other side believes (though also in the sense of not being true, but that’s neither here nor there).