I mean, I agree that there are psycho-sociological similarities between religions and the AI risk movement (and indeed, I sometimes pejoratively refer to the latter as a “robot cult”), but analyzing the properties of the social group that believes that AI is an extinction risk is a separate question from whether AI in fact poses an extinction risk, which one could call Armageddon. (You could spend vast amounts of money trying to persuade people of true things, or false things; the money doesn’t care either way.)
Obviously, there’s not going to be a “proof” of things that haven’t happened yet, but there’s lots of informed speculation. Have you read, say, “The Alignment Problem from a Deep Learning Perspective”? (That may not be the best introduction for you, depending on the reasons for your skepticism, but it’s the one that came to mind, and it’s more grounded in real AI research than earlier informed speculation, which had less empirical data to work from.)
I looked at the paper you recommended, Zack. The specific section on “how” AGI is developed (section 1.2) skirts around the problem.
“We assume that AGI is developed by pretraining a single large foundation model using self-supervised learning on (possibly multi-modal) data [Bommasani et al., 2021], and then fine-tuning it using model-free reinforcement learning (RL) with a reward function learned from human feedback [Christiano et al., 2017] on a wide range of computer-based tasks. This setup combines elements of the techniques used to train cutting-edge systems such as GPT-4 [OpenAI, 2023a], Sparrow [Glaese et al., 2022], and ACT-1 [Adept, 2022]; we assume, however, that the resulting policy goes far beyond their current capabilities, due to improvements in architectures, scale, and training tasks. We expect a similar analysis to apply if AGI training involves related techniques such as model-based RL and planning [Sutton and Barto, 2018] (with learned reward functions), goal-conditioned sequence modeling [Chen et al., 2021, Li et al., 2022, Schmidhuber, 2020], or RL on rewards learned via inverse RL [Ng and Russell, 2000]—however, these are beyond our current scope.”
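To make the assumed setup concrete, here’s a minimal toy sketch of the recipe that passage describes: self-supervised next-token pretraining, a reward model fit to pairwise human preferences, and then model-free RL against that learned reward. Everything below (the tiny GRU models, the random stand-in data, the plain REINFORCE update) is an illustrative placeholder of mine, not code from the paper and not how GPT-4, Sparrow, or ACT-1 were actually trained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, SEQ_LEN = 32, 64, 16


class TinyLM(nn.Module):
    """Toy autoregressive policy: embed -> GRU -> next-token logits."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):               # (B, T) -> (B, T, VOCAB)
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)


class RewardModel(nn.Module):
    """Toy learned reward: scores a whole sequence with one scalar."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.score = nn.Linear(DIM, 1)

    def forward(self, tokens):               # (B, T) -> (B,)
        _, final = self.rnn(self.embed(tokens))
        return self.score(final[-1]).squeeze(-1)


policy, reward_model = TinyLM(), RewardModel()

# 1. Self-supervised pretraining: predict the next token of raw sequences.
corpus = torch.randint(0, VOCAB, (256, SEQ_LEN + 1))   # stand-in for a corpus
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(50):
    logits = policy(corpus[:, :-1])
    loss = F.cross_entropy(logits.reshape(-1, VOCAB), corpus[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# 2. Reward learning from pairwise comparisons (Bradley-Terry loss),
#    standing in for "a reward function learned from human feedback".
preferred = torch.randint(0, VOCAB, (128, SEQ_LEN))    # stand-in preference data
rejected = torch.randint(0, VOCAB, (128, SEQ_LEN))
opt_r = torch.optim.Adam(reward_model.parameters(), lr=1e-3)
for _ in range(50):
    margin = reward_model(preferred) - reward_model(rejected)
    loss = -F.logsigmoid(margin).mean()
    opt_r.zero_grad()
    loss.backward()
    opt_r.step()

# 3. Model-free RL fine-tuning of the policy against the learned reward
#    (plain REINFORCE here; real systems typically use PPO-style methods).
opt_rl = torch.optim.Adam(policy.parameters(), lr=1e-4)
for _ in range(50):
    tokens = torch.zeros(64, 1, dtype=torch.long)      # start-of-sequence token
    log_probs = []
    for _ in range(SEQ_LEN):
        dist = torch.distributions.Categorical(logits=policy(tokens)[:, -1])
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        tokens = torch.cat([tokens, action.unsqueeze(1)], dim=1)
    reward = reward_model(tokens[:, 1:]).detach()       # score the generations
    # Push up the log-probability of sequences the reward model likes.
    loss = -(reward * torch.stack(log_probs, dim=1).sum(dim=1)).mean()
    opt_rl.zero_grad()
    loss.backward()
    opt_rl.step()
```

That is, the paper takes roughly today’s recipe and then stipulates that the resulting policy goes far beyond current capabilities; the “how” is exactly what gets assumed away.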
Altman recently said in a speech that continuing to do what got them to GPT-4 is probably not going to get to AGI: “Let’s use the word superintelligence now. If superintelligence can’t discover novel physics, I don’t think it’s a superintelligence. Training on the data of what you know, teaching to clone the behavior of humans and human text, I don’t think that’s going to get there. So there’s this question that has been debated in the field for a long time: what do we have to do in addition to a language model to make a system that can go discover new physics?”
https://the-decoder.com/sam-altman-on-agi-scaling-large-language-models-is-not-enough/
I think it’s pretty clear that no one has a clear path to AGI, nor do we know what a superintelligence will do, yet the Longtermist ecosystem is thriving. I find that curious, to say the least.
Thank you for the link to that paper, Zack. That’s not one that I’ve read yet.
And you’re correct that I raised two separate issues. I’m interested in hearing any responses that members of this community would like to give to either issue.