Since you bring up selection bias, Grinblatt et al. 2012 studies the entire Finnish population with a population-registry approach and finds that…
Thanks for the citation. That is the kind of information I was hoping for. Do you think that slightly-better-than-human intelligence is sufficient to present an x-risk, or do you think it needs some sort of takeoff or acceleration?
Do you think that slightly-better-than-human intelligence is sufficient to present an x-risk, or do you think it needs some sort of takeoff or acceleration?
I think I can probably explain the “so” in my response to Donald below. I think less than human intelligence is sufficient for an x-risk because that is probably what is sufficient for a takeoff.
(GPT-3 needed like 1k discrete GPUs to train. Nvidia alone ships something on the order of >73,000k discrete GPUs… per year. How fast exactly do you think returns diminish and how confident are you that there are precisely zero capability spikes anywhere in the human and superhuman regimes? How intelligent does an agent need to be to send an HTTP request to the URL /ldap://myfirstrootkit.com on a few million domains?)
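(For concreteness, here is the back-of-the-envelope arithmetic implied by those two figures. This is only an illustrative sketch that takes the “like 1k GPUs” and “>73,000k per year” numbers above at face value; neither is independently verified here.)

```python
# Back-of-the-envelope check on the figures quoted above.
# Both numbers are taken at face value from the comment, not independently verified.
gpt3_training_gpus = 1_000           # "like 1k discrete GPUs to train"
annual_gpu_shipments = 73_000_000    # ">73,000k discrete GPUs ... per year"

fraction = gpt3_training_gpus / annual_gpu_shipments
print(f"A GPT-3-scale training run used roughly {fraction:.4%} of one year's shipments")
# -> A GPT-3-scale training run used roughly 0.0014% of one year's shipments
```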
I think less than human intelligence is sufficient for an x-risk because that is probably what is sufficient for a takeoff.
If less than human intelligence is sufficient, wouldn’t humans have already done it? (or are you saying we’re doing it right now?)
How intelligent does an agent need to be to send an HTTP request to the URL /ldap://myfirstrootkit.com on a few million domains?)
A human could do this or write a bot to do this (and they’ve tried), but they’d also be detected, as would an AI. I don’t see this as an x-risk so much as a manageable problem.
(GPT-3 needed like 1k discrete GPUs to train. Nvidia alone ships something on the order of >73,000k discrete GPUs… per year. How fast exactly do you think returns diminish
I suspect they’ll diminish exponentially, because the threat requires solving problems of exponential hardness. To me, “1% of annual Nvidia GPUs” or “0.1% of annual GPU production” sounds like we’re at roughly N-3, where N is the problem size we could solve by using 100% of annual GPU production.
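(Spelling out the arithmetic behind that “N-3”: a sketch assuming, since the exact scaling is not specified above, that required compute grows by roughly a factor of 10 per unit of problem size, with N the size solvable using 100% of annual GPU production and f the fraction of that production actually available.)

```latex
\[
C(n) = C_0 \cdot 10^{n}, \qquad
f \cdot C(N) = C(n) \;\Longrightarrow\; n = N + \log_{10} f,
\qquad f = 10^{-3} \;\Longrightarrow\; n = N - 3.
\]
```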
how confident are you that there are precisely zero capability spikes anywhere in the human and superhuman regimes?
I’m not confident in that.
If less than human intelligence is sufficient, wouldn’t humans have already done it?
No. Humans are inherently incapable of countless things that software is capable of. To give an earlier example, humans can do things that evolution never could. And just as evolution can only accomplish things like ‘going to the moon’ by making agents that operate on the next level of capabilities, humans cannot do things like copy themselves billions of times, directly fuse their minds, be immortal, or wave a hand to increase their brain size 100x. All of these are impossible. Not hard, not incredibly difficult: impossible. There is no human who has done, or ever will be able to do, any of those things. The only way to remove these extremely narrow, rigid, binding constraints is by making tools that remove the restrictions, i.e. software. Once the restrictions are removed and the impossible becomes possible, even a stupid agent with no ceiling will eventually beat a smart agent rigidly bound by immutable biological limitations.
(Incidentally, did you know that AlphaGo was on track to lose before the Lee Sedol tournament? But Google threw a whole bunch of early TPUs at the project about a month beforehand to try to rescue it, and AlphaGo had no ceiling, while Lee Sedol was human, all too human; so he got crushed.)