I agree with some points here, Bogdan, but not all of them.
I do think that current models are civilization-scale-catastrophe-risky (but importantly not x-risky!) from a misuse perspective, but not yet from a self-directed perspective. Which means neither Alignment failure nor loss of Control is currently civilization-scale-catastrophe-risky, much less x-risky.
I also agree that pausing now would be counter-productive. My reasoning for this is that I agree with Samo Burja about some key points which are relevant here (while disagreeing with his conclusions due to other points).
To quote myself:
I agree with [Samo’s] premise that AGI will require fundamental scientific advances beyond currently deployed tech like transformer LLMs.
I agree that scientific progress is hard, usually slow and erratic, fundamentally different from engineering or bringing a product to market.
I agree with [Samo’s] estimate that the current hype around chat LLMs, and focus on bringing better versions to market, is slowing fundamental scientific progress by distracting top AI scientists from pursuit of theoretical advances.
Think about how you’d expect these factors to change if large AI training runs were paused. I think you might agree that this would likely result in a temporary shift of much of the top AI scientist talent to making theoretical progress. They’d want to be ready to come in strong after the pause ended, with lots of new advances tested at small scale. I think this would actually result in more high-quality scientific thought directed at the heart of the problem of AGI, and thus make AGI very likely to be achieved sooner after the pause ends than it otherwise would have been.
I would go even further, and claim that AGI could arise during a pause on large training runs. I think the human brain is not a supercomputer; my upper estimate for ‘human brain inference’ is roughly the level of a single 8x A100 server, less than an 8x H100 server. I also have evidence from analysis of the long-range human connectome (long-range axons are called tracts, so perhaps I should call this a ‘tractome’). [Hah, I just googled this term I came up with just now, and found it’s already in use, and that it brings up some very interesting neuroscience papers. Cool.] Anyway, as I was saying: this evidence shows that the bandwidth (data throughput in bits per second) between two cortical regions in the human brain is typically around 5 Mb/s, and maxes out at about 50 Mb/s. In other words, well within range for distributed federated training runs to work over long-distance internet connections. So unless you are willing to monitor the entire internet so robustly that nobody can scrape together the equivalent compute of an 8x A100 server, you can’t fully block AGI.
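To make the bandwidth point concrete, here’s a back-of-the-envelope sketch in Python. The tract figures are my estimates from above; the residential-fiber rate is an illustrative assumption, not a measurement:

```python
# Back-of-the-envelope: do ordinary internet links have enough bandwidth
# to play the role of long-range cortical tracts in a distributed run?
# Tract figures are the estimates quoted above; the fiber rate is an
# assumed, illustrative number for a decent residential connection.

TRACT_TYPICAL_BPS = 5e6   # ~5 Mb/s typical inter-region bandwidth (my estimate above)
TRACT_MAX_BPS     = 50e6  # ~50 Mb/s upper end (my estimate above)
HOME_FIBER_BPS    = 1e9   # ~1 Gb/s residential fiber (assumed for illustration)

print(f"fiber / typical tract: {HOME_FIBER_BPS / TRACT_TYPICAL_BPS:.0f}x headroom")
print(f"fiber / max tract:     {HOME_FIBER_BPS / TRACT_MAX_BPS:.0f}x headroom")
```

Even the busiest tract fits into a single consumer link with ~20x headroom, which is the sense in which federated training over the open internet seems bandwidth-feasible.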
Of course, if you wanted to train the AGI in a reasonable amount of time, you’d want to run many parallel instances, far more compute than a single inference instance. So yeah, an international government monitoring all datacenters would definitely make this inconvenient… but far from impossible.
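As a toy illustration of that tradeoff (every number here is a hypothetical placeholder chosen only to show the shape of the curve, not an estimate I’m defending):

```python
# Toy model: data-parallel replicas trade hardware footprint for
# wall-clock training time. All numbers are hypothetical placeholders.

SERVERS_PER_INSTANCE = 1     # one 8x A100 server per inference-level instance (upper estimate above)
SERIAL_YEARS         = 20.0  # hypothetical wall-clock time with a single instance

for replicas in (1, 100, 10_000):
    years = SERIAL_YEARS / replicas  # idealized linear speedup
    servers = replicas * SERVERS_PER_INSTANCE
    print(f"{replicas:>6} replicas: ~{years:g} years, {servers} servers")
```

The point is just that enforcement scales with how visible the hardware footprint is: a single server’s worth of compute is effectively invisible, while a fast run needs enough replicas to show up on a datacenter monitor.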
For the same reason, I don’t think a call to ‘Stop AI development permanently’ works without the hypothetical enforcement agency literally going around the world confiscating all personal computers and shutting down the internet. That’s not gonna happen, so why even advocate for such a thing? It makes me think that Eliezer is advocating for this in order to have some effect on the world other than the stated one.