Not equivocating, but if intelligence is hard to scale and slightly better than human is not a threat, then there is no reason to be concerned about AI risk. (Maybe the 1% x-risk suggested by the OP is in fact a 1e-9 x-risk.)
there are considerable individual differences in weather forecasting performances (it’s one of the more common topics to study in the forecasting literature),
I’d be interested in seeing any papers on individual differences in weather forecasting performance (even if IQ is not mentioned). My understanding was that it has been all NWP for the last half-century or so.
IQ shows up all the time in other forecasting topics as a major predictor
I’d be curious to see this too. My understanding was that, for example, stock prediction was not only uncorrelated with IQ, but that above-average performance was primarily selection bias (i.e., above-average forecasters for a given time period tend to regress toward the mean over subsequent time periods).
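To illustrate the selection-bias point, here is a minimal toy simulation (my own sketch, not taken from any forecasting paper): every forecaster has exactly the same underlying skill, yet whoever you crown after period 1 looks well above average and then regresses to the mean in period 2.

```python
import random

random.seed(0)

N_FORECASTERS = 10_000
N_FORECASTS = 100        # forecasts scored per period
TRUE_HIT_RATE = 0.5      # every forecaster has identical skill

def period_score(hit_rate: float, n: int) -> float:
    """Fraction of correct forecasts in one period (pure luck here)."""
    return sum(random.random() < hit_rate for _ in range(n)) / n

period1 = [period_score(TRUE_HIT_RATE, N_FORECASTS) for _ in range(N_FORECASTERS)]
period2 = [period_score(TRUE_HIT_RATE, N_FORECASTS) for _ in range(N_FORECASTERS)]

# "Above-average forecasters": the top decile by period-1 score.
cutoff = sorted(period1, reverse=True)[N_FORECASTERS // 10]
top = [i for i, s in enumerate(period1) if s >= cutoff]

print(f"Top decile, period 1 mean score: {sum(period1[i] for i in top) / len(top):.3f}")
print(f"Same people, period 2 mean score: {sum(period2[i] for i in top) / len(top):.3f}")
# Period 1 looks noticeably above 0.5; period 2 is back near 0.5,
# even though nobody's underlying skill ever differed.
```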
Not equivocating, but if intelligence is hard to scale and slightly better than human is not a threat, then there is no reason to be concerned about AI risk.

So?

My understanding was that it has been all NWP for the last half-century or so.
There’s still a lot of human input. That was one of the criticisms of, IIRC, DeepMind’s recent foray into DL weather modeling—“oh, but the humans still do better in the complex cells where the storms and other rare events happen, and those are what matter in practice”.
My understanding was that, for example, stock prediction was not only uncorrelated with IQ
Where did you get that, Taleb? Untrue. Investing performance is correlated with IQ: better market timing, less inefficient trading, less behavioral bias, longer in the market, more accurate forecasts about inflation and returns, etc. Since you bring up selection bias, Grinblatt et al 2012 studies the entire Finnish population with a population registry approach and finds exactly that.
I spent some time reading the Grinblatt paper. Thanks again for the link. I stand corrected on IQ being uncorrelated with stock prediction. One part did catch my eye.
Our findings relate to three strands of the literature. First, the IQ and trading behavior analysis builds on mounting evidence that individual investors exhibit wealth-reducing behavioral biases. Research, exemplified by Barber and Odean (2000, 2001, 2002), Grinblatt and Keloharju (2001), Rashes (2001), Campbell (2006), and Calvet, Campbell, and Sodini (2007, 2009a, 2009b), shows that these investors grossly under-diversify, trade too much, enter wrong ticker symbols, are subject to the disposition effect, and buy index funds with exorbitant expense ratios. Behavioral biases like these may partly explain why so many individual investors lose when trading in the stock market (as suggested in Odean (1999), Barber, Lee, Liu, and Odean (2009); and, for Finland, Grinblatt and Keloharju (2000)). IQ is a fundamental attribute that seems likely to correlate with wealth-inhibiting behaviors.
I went to some of the references; this one seemed a particularly cogent summary:

https://faculty.haas.berkeley.edu/odean/papers%20current%20versions/behavior%20of%20individual%20investors.pdf
The take-home seems to be that high-IQ investors exceed the performance of low-IQ investors, but institutional investors exceed the performance of individual investors. Maybe it is just institutions selecting the smartest, but another coherent view is that the joint intelligence of the group (the “institution”) exceeds the intelligence of high-IQ individuals. We might need more data to figure it out.
Since you bring up selection bias, Grinblatt et al 2012 studies the entire Finnish population with a population registry approach and finds exactly that.
Thanks for the citation. That is the kind of information I was hoping for. Do you think that slightly better than human intelligence is sufficient to present an x-risk, or do you think it needs some sort of takeoff or acceleration to present an x-risk?

I think I can probably explain the “so” in my response to Donald below.
Do you think that slightly better than human intelligence is sufficient to present an x-risk, or do you think it needs some sort of takeoff or acceleration to present an x-risk?
I think less than human intelligence is sufficient for an x-risk because that is probably what is sufficient for a takeoff.
(GPT-3 needed like 1k discrete GPUs to train. Nvidia alone ships something on the order of >73,000k discrete GPUs… per year. How fast exactly do you think returns diminish, and how confident are you that there are precisely zero capability spikes anywhere in the human and superhuman regimes? How intelligent does an agent need to be to send an HTTP request to the URL /ldap://myfirstrootkit.com on a few million domains?)
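Just to spell the arithmetic out (a toy calculation using only the two rough figures above, nothing else):

```python
gpt3_training_gpus = 1_000        # "like 1k discrete GPUs to train"
nvidia_annual_gpus = 73_000_000   # ">73,000k discrete GPUs... per year"

share = gpt3_training_gpus / nvidia_annual_gpus
print(f"GPT-3's training cluster ~= {share:.4%} of one year of Nvidia shipments")
# ~= 0.0014%, i.e. four to five orders of magnitude of hardware headroom
# before anyone runs into global GPU production as a limit.
```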
I think less than human intelligence is sufficient for an x-risk because that is probably what is sufficient for a takeoff.
If less than human intelligence is sufficient, wouldn’t humans have already done it? (or are you saying we’re doing it right now?)
How intelligent does an agent need to be to send an HTTP request to the URL /ldap://myfirstrootkit.com on a few million domains?)
A human could do this, or write a bot to do this (and they’ve tried). But they’d also be detected, as would an AI. I don’t see this as an x-risk so much as a manageable problem.
(GPT-3 needed like 1k discrete GPUs to train. Nvidia alone ships something on the order of >73,000k discrete GPUs… per year. How fast exactly do you think returns diminish
I suspect they’ll diminish exponentially, because being a threat requires solving problems of exponential hardness. To me, “1% of annual Nvidia GPUs” or “0.1% of annual GPU production” sounds like we’re at roughly N-3, where N is the problem size we could solve by using 100% of annual GPU production.
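A toy version of that scaling claim (the factor-of-10 cost per unit of problem size is purely an illustrative assumption, not a measured number): if each extra unit of problem size multiplies the required compute, then a 1000x compute gap only translates into a few units of problem size.

```python
import math

cost_multiplier_per_unit = 10   # assumed: each +1 of problem size costs 10x compute
compute_gap = 1_000             # 0.1% of annual GPU production vs. 100%

units_short = math.log(compute_gap, cost_multiplier_per_unit)
print(f"With exponential hardness, 0.1% of production solves roughly N - {units_short:.0f}")
# => N - 3: a 1000x shortfall in compute only costs ~3 units of problem size
# under this assumption, which is where the "roughly N-3" figure comes from.
```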
how confident are you that there are precisely zero capability spikes anywhere in the human and superhuman regimes?

I’m not confident in that.
If less than human intelligence is sufficient, wouldn’t humans have already done it?
No. Humans are inherently incapable of countless things that software is capable of. To give an earlier example, humans can do things that evolution never could. And just as evolution can only accomplish things like ‘going to the moon’ by making agents that operate on the next level of capabilities, humans cannot do things like copy themselves billions of times or directly fuse their minds or be immortal or wave a hand to increase their brain size 100x. All of these are impossible. Not hard, not incredibly difficult—impossible. There is no human who is, or ever will be, able to do those. The only way to remove these extremely narrow, rigid, binding constraints is by making tools that remove the restrictions, i.e. software. Once the restrictions are removed so that the impossible becomes possible, even a stupid agent with no ceiling will eventually beat a smart agent rigidly bound by immutable biological limitations.
(Incidentally, did you know that AlphaGo was on track to lose before the Lee Sedol tournament? But Google threw a whole bunch of early TPUs at the project about a month beforehand to try to rescue it, and AlphaGo had no ceiling, while Lee Sedol was human, all too human, so he got crushed.)