Anyways, do you have any good policy ideas besides centralizing all the latest AI silicon into places where it can be inspected?
The little problem with pauses in this scenario: say you can get AI models from thousands of years in the future today. What else can you get your hands on...
Once there are AGIs that have had some time to figure things out, or once there are ASIs (which don’t necessarily need to figure things out at length before they can start making relatively perfect moves), it becomes possible to reach into the bag of their technological wisdom and pull out scary stuff that won’t be contained on their original hardware, even if the AIs themselves remain contained. So for a pause to be effective, it needs to prevent the existence of such AIs; containing them is ineffective if they are sufficiently free to go and figure out the scary things.
Without a pause, the process of pulling things out of the bag needs to be extremely disciplined, focusing on pivotal processes that would prevent you or others from accidentally or intentionally pulling out an apocalypse six months later. And you have to hope that nothing going on outside your extremely disciplined process releases the scary things before that process can end the acute risk period, because hope is the only thing you have going for you without the decades of research you don’t have because there was no pause.
Dangerous AGI means all strategically relevant human capabilities, which means robotic tool use. It may not be physically possible on an RTX 2070 due to latency.
There are many meanings of “AGI”; the meaning I intend in this context is about cognitive competence. The choice to focus on this meaning rather than some other follows from what I expect to pose an existential threat. In this sense, “(existentially) dangerous AGI” means that the consequences of its cognitive activities might disempower or kill everyone. The activities don’t need to involve personally controlling terminators; as a silly example, setting up a company that designs terminators would have similar effects without requiring good latency.
as a silly example, setting up a company that designs terminators would have similar effects without requiring good latency.
Just a note here: good robotics experts today are “hands on” with the hardware. It gives humans a more grounded understanding of current designs and lets them iterate onto the next one. Good design doesn’t come from just thinking about it, and this would apply to designing humanoid combat robots.
This is also why more grounded current-generation experts will strongly disagree with the very idea of pulling any technology from thousands of years in the future. Current-generation experts will say, and do say in AI debates, that you need to build a prototype, test it in the real world, build another based on the information gained, and so on.
This is factually how all current technology was developed. No humans have, to my knowledge, ever done what you described: skipped a technology generation without a prototype, or without many at-scale physical (or software) creations being used by end users. (Historically it has required more and more scale at later stages.)
If this limitation still applies to superintelligence (there are reasons to think it might, but I would request a dialogue rather than a comment deep in a thread few will read), then the concerns you have expressed regarding future superintelligence are not legitimate worries.
If ASIs are in fact limited the same way, the way the world would be different is that each generation of technology developed by an ASI would get built and deployed at scale. The human host country would use and test the new technology for a period of time. The information gained from actual use would be sold back to the ASI’s owners, who would then develop the next generation.
This whole iterative process is faster than human tech generation, but it still happens in discrete steps, at a large and visible scale, and on human-perceptible timescales. Probably weeks per cycle, rather than the months to 1-2 years per cycle that humans are able to do.
There are reasons it would take weeks, and it’s not just human feedback: you need time to deploy a product, and in later development stages you are mining edge cases that occur 1 percent of the time or less.
Yes, physical prototypes being inessential is decidedly not business as usual. Without doing things in the physical world, there need to be models for simulations, things like AlphaFold 2 (which predicts far more than would be possible to observe experimentally in a direct way). You need enough data to pin down the rules of the physical world, and efficient simulations of what those rules imply for any project you consider. I expect automated theory at sufficiently large scale, or of superintelligent quality, to blur the line between (simulated) experiments and impossibly good one-shot engineering.
And the way it would fail is if the simulations have an unavoidable error factor, because real-world physics is incomputable or sits in a complexity class above NP (there are posts here showing it is). The other way it would fail is if the simulations are missing information; for example, in real battles between humanoid terminators the enemy might discover effective strategies the simulation didn’t model. So after deploying a lot of humanoid robots, rival terminators start winning the battles and it’s back to the design stage.
If you did it all in simulation, I predict the robot would immediately fail and be unable to move. Humanoid robots are interdependent systems.
I am sure you know that current protein folding algorithms are not accurate enough to actually use for protein engineering. You have to actually make the protein and test it along with the molecules you want it to bind, and you will need to adjust your design. If the above is still true for ASIs, they will be unable to do protein engineering without a wet lab to build and test the parts at every step, with hundreds of failures for every success.
If it does work that way, then ASI will be faster than humans, since it can analyze more information at once, learn faster, and conservatively try many routes in parallel, but not by the factors you imagine. Maybe 10 times faster, not millions of times faster. This is because of Amdahl’s law.
The ASIs are also cheaper. So it would be like a world where every technology in every field is being developed at the maximum speed humans could run at, times 10.
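To make the Amdahl’s law point concrete, here is a minimal sketch; the 90/10 split between cognition and physical work is a purely illustrative assumption, not a measured figure:

```python
def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
    """Overall speedup when only `accelerated_fraction` of the work
    is sped up by `factor` (Amdahl's law)."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

# Illustrative numbers: if 90% of each development cycle is cognition that an
# ASI accelerates a million-fold, but 10% is physical building, deployment,
# and testing that still runs at human speed, the loop only gets ~10x faster.
print(amdahl_speedup(0.90, 1_000_000))  # ~10.0
print(amdahl_speedup(0.99, 1_000_000))  # ~100.0, the cap is set by the physical share
```

Whatever the real split is, the overall speedup is capped at roughly one over the fraction of the loop that has to stay physical.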
(Theorizing about ASIs that have no access to physical reality feels noncentral in 2023, when GPT-4 has access to everything and everyone, and integration is only going to get stronger. But for the hypothetical ASI that grew out of an airgapped multimodal 1e29 LLM that has seen all of YouTube and read all the papers and books and the web, I think the ability to do good one-shot engineering holds.)
(Also, we were discussing an exfiltrated AGI that happens to lack good latency for controlling robots; why else would an RTX 2070 be relevant? Presumably it doesn’t have the godshatter of technical knowledge, or else it doesn’t really matter that it’s a research-capable AGI. But it now has access to the physical world and can build prototypes. It can build another superintelligence. And if it does have a bequest of ASI technical knowledge, it can just work to set up unsanctioned datacenters or a distributed network and run an OOMs-more-efficient-than-humanity’s-first-try superintelligence there.)
Predictability is vastly improved by developing the thing you need to predict yourself, especially when you intend to one-shot it. Humans don’t do this, because for humans it happens to be much faster and cheaper to build prototypes; we are too slow at thinking useful thoughts. We could plausibly succeed much more often than we observe in practice if each prototype were preceded by centuries of simulations and the prototypes were built with insane redundancies.
Simulations get better with data and with better learning algorithms. Looking at how a simulation works, it’s possible to spot issues and improve it, including for the purpose of simulating one particular thing. A serial speed advantage more directly gives theory and general software from the distant future (as opposed to engineering designs and experimental data). This includes theory and software for good learning algorithms, ones with much better sample efficiency that interrogate everything about the original 1e29 LLM to learn more of what its weights imply about the physical world. It’s a lot of data; who knows what details can be extracted from it from a position of theoretical and software-technological maturity.
None of this exists now, though. Speculating about the future when it depends on all these unknowns and never-before-seen capabilities is dangerous: you’re virtually certain to be wrong. The uncertainty comes from all the moving parts in your model. Like, you have:
Immense amounts of compute easily available
Accurate simulations of the world
Fully automated AGI: no humans helping at all, and the model never gets stuck or crashes from a bug in the underlying framework
ASI with capabilities enormously past human level, not just a modest amount past
The reason you are probably wrong is just probability: if each step has a 50 percent chance of being right, it’s 0.5^4. Don’t think of it as me saying you’re wrong.
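Spelling out that arithmetic, taking the four pieces as independent coin flips (an assumption disputed below):

$$0.5^4 = 0.0625 \approx 6\%$$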
And then, only with all these pieces in place, humans are maybe doomed and will soon cease to exist. Therefore we should stop everything today.
While if just one piece is wrong, then this is the wrong choice to make. Right?
You’re also up against a pro-technology prior. Meaning I think you would have to actually prove the above, demo it, to convince people this is the actual world we are in.
That’s because “future tech, instead of turning out to be overhyped, is going to be so amazing and perfect it can kill everyone quickly and easily” goes against all the priors from cases where tech turned out to be underwhelming and not that good. It’s like convincing someone the wolf is real when there have probably been a million false alarms.
I don’t know how to think about this correctly. Like, I feel I should be weighing the mountain of evidence I mentioned, but if I do that, then humans will always die to the ASI. Because there’s no warning. The whole threat model is that these are capabilities that are never seen prior to a certain point.
The whole threat model is that these are capabilities that are never seen prior to a certain point.
Yep, that’s how ChatGPT is a big deal for waking up policymakers, even as it’s not exactly relevant. I see two paths to a lasting pause. First, LLMs keep getting smarter, something object-level scary happens before there are autonomous open-weight AGIs, and policymakers shut down big models. Second, 1e29 FLOPs is insufficient with LLMs, or LLMs stop getting smarter earlier and 1e29 FLOPs models are never attempted, and models at whatever scale is reached by then don’t get much smarter. It’s still unlikely that people won’t quickly find a way of using RL to extract more and more useful work out of the kind of data LLMs are trained on, but it doesn’t seem impossible that it might take a relatively long time.
Immense amounts of compute easily available
The other side of the argument for AGI on an RTX 2070 is that the hardware that was sufficient to run humanity’s first attempt at AGI is sufficient to do much more than that when it’s employed efficiently.
Fully automated AGI: no humans helping at all, and the model never gets stuck or crashes from a bug in the underlying framework
This is the argument’s assumption: the first AGI should be sufficiently close to this that it can fix the remaining limitations and make full autonomy reliable, including at research. Possibly that requires another long training run, if cracking online learning directly would take longer than such a run.
ASI with capabilities enormously past human level, not just a modest amount past
I expect this, but it is not necessary for developing a deep technological culture using the serial speed advantage at a very-smart-human level.
Accurate simulations of the world
This is more an expectation based on the rest than an assumption.
The reason you are probably wrong is just probability: if each step has a 50 percent chance of being right, it’s 0.5^4.
These things are not independent.
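As a purely illustrative sketch of why this matters: the joint probability factors through conditionals, so if each later piece mostly follows once the earlier ones hold, the product lands far above 0.5^4.

$$P(A \wedge B \wedge C \wedge D) = P(A)\,P(B \mid A)\,P(C \mid A,B)\,P(D \mid A,B,C)$$

With illustrative conditionals of 0.9 once the earlier pieces hold, this is $0.5 \times 0.9^3 \approx 0.36$ rather than $0.5^4 \approx 0.06$.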
Speculating about the future when it depends on all these unknowns and never-before-seen capabilities is dangerous: you’re virtually certain to be wrong.
That’s an argument about calibration. If you are doing the speculation correctly, not attempting to speculate is certain to leave a less accurate picture than doing it.
If you feel there are further issues to discuss, PM me for a dialogue.