STEM-level AGI is AGI that has “the basic mental machinery required to do par-human reasoning about all the hard sciences”
This definition seems very ambiguous to me, and I’ve already seen it confuse some people. Since the concept of a “STEM-level AGI” is the central concept underpinning the entire argument, I think it makes sense to spend more time making this definition less ambiguous.
Some specific questions:
Does “par-human reasoning” mean at the level of an individual human or at the level of all of humanity combined?
If it’s the former, what human should we compare it against? 50th percentile? 99.999th percentile?
What is the “basic mental machinery” required to do par-human reasoning? What if a system has the basic mental machinery but not the more advanced mental machinery?
Do you want this to include the robotic capabilities to run experiments and use physical tools? If not, why not (that seems important to me, but maybe you disagree)?
Does a human count as a STEM-level NGI (natural general intelligence)? If so, doesn’t that imply that we should already be able to perform pivotal acts? You said: “If it makes sense to try to build STEM-level AGI at all in that situation, then the obvious thing to do with your STEM-level AGI is to try to leverage its capabilities to prevent other AGIs from destroying the world (a “pivotal act”).”
Does “par-human reasoning” mean at the level of an individual human or at the level of all of humanity combined?
If it’s the former, what human should we compare it against? 50th percentile? 99.999th percentile?
I partly answered that here, and I’ll edit some of this into the post:
By ‘matching smart human performance… across all the scientific work humans do in that field’ I don’t mean to require that there literally be nothing humans can do that the AI can’t match. I do expect this kind of AI to quickly (or immediately) blow humans out of the water, but the threshold I have in mind is more like:
STEM-level AGI is AI that’s at least as scientifically productive as a human scientist who makes a variety of novel, original contributions to a hard-science field that requires understanding the physical world well. E.g., it can go toe-to-toe with highly productive human scientists on applying its abstract theories to real-world phenomena, using scientific ideas to design new tech, designing physical experiments, operating equipment, and generating new ideas that turn out to be true and that importantly advance the frontiers of our knowledge.
The way I’m thinking about the threshold, AI doesn’t have to be Nobel-prize-level, but it has to be “fully doing science”. I’d also be happy with a definition like ‘AI that can reason about the physical world in general’, but I think that emphasizing hard-science tasks makes it clearer why I’m not thinking of GPT-4 as ‘reasoning about the physical world in general’ in the relevant sense.
I’m not sure what the right percentile to target here is. Maybe we should be looking at the top 5% of Americans with STEM PhDs, where Americans with STEM PhDs are maybe in the top 1% of Americans by STEM ability?
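As a rough reading aid (my own naive arithmetic, not something the post commits to): if you treat STEM-PhD holders as roughly coinciding with the top 1% of Americans by STEM ability, then “top 5% of Americans with STEM PhDs” works out to roughly the top 0.05% of Americans overall.

```python
# Naive, illustrative arithmetic only; both fractions are guesses quoted from the text.
phd_fraction_of_americans = 0.01  # "Americans with STEM PhDs maybe are at the top 1%"
top_slice_of_phds = 0.05          # "top 5% of Americans with STEM PhDs"

overall_fraction = phd_fraction_of_americans * top_slice_of_phds
print(f"Implied threshold: top {overall_fraction:.2%} of Americans"
      f" (~1 in {round(1 / overall_fraction):,})")
# Implied threshold: top 0.05% of Americans (~1 in 2,000)
```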
What is the “basic mental machinery” required to do par-human reasoning? What if a system has the basic mental machinery but not the more advanced mental machinery?
Do you want this to include the robotic capabilities to run experiments and use physical tools? If not, why not (that seems important to me, but maybe you disagree)?
I want it to include the ability to run experiments and use physical tools.
I don’t know what the “basic mental machinery” required is—I think GPT-4 is missing some of the basic cognitive machinery top human scientists use to advance the frontiers of knowledge (as opposed to GPT-4 doing all the same mental operations as a top scientist but slower, or something), but this is based on a gestalt impression from looking at how different their outputs are in many domains, not based on a detailed or precise model of how general intelligence works.
One way of thinking about the relevant threshold is: if you gave a million chimpanzees billions of years to try to build a superintelligence, I think they’d fail, unless maybe you let them reproduce and applied selection pressure to them to change their minds. (But the latter isn’t something the chimps themselves realize is a good idea.)
In contrast, top human scientists pass the threshold ‘give us enough time, and we’ll be able to build a superintelligence’.
If an AI system, given enough time and empirical data and infrastructure, would eventually build a superintelligence, then I’m mostly happy to treat that as “STEM-level AGI”. This isn’t a necessary condition, and it’s presumably not strictly sufficient (since in principle it should be possible to build a very narrow and dumb meta-learning system that also bootstraps in this way eventually), but it maybe does a better job of gesturing at where I’m drawing a line between “GPT-4” and “systems in a truly dangerous capability range”.
(Though my reason for thinking systems in that capability range are dangerous isn’t centered on “they can deliberately bootstrap to superintelligence eventually”. It’s far broader points like “if they can do that, they can probably do an enormous variety of other STEM tasks” and “falling exactly in the human capability range, and staying there, seems unlikely”.)
Does a human count as a STEM-level NGI (natural general intelligence)?
I tend to think of us that way, since top human scientists aren’t a separate species from average humans, so it would be hard for them to be born with complicated “basic mental machinery” that isn’t widespread among humans. (Though local mutations can subtract complex machinery from a subset of humans in one generation, even if they can’t add complex machinery to a subset of humans in one generation.)
Regardless, given how I defined the term, at least some humans are STEM-level.
If so, doesn’t that imply that we should already be able to perform pivotal acts? You said: “If it makes sense to try to build STEM-level AGI at all in that situation, then the obvious thing to do with your STEM-level AGI is to try to leverage its capabilities to prevent other AGIs from destroying the world (a “pivotal act”).”
The weakest STEM-level AGIs couldn’t do a pivotal act; the reason I think you can do a pivotal act within a few years of inventing STEM-level AGI is that I think you can quickly get to far more powerful systems than “the weakest possible STEM-level AGIs”.
The kinds of pivotal act I’m thinking about often involve Drexler-style feats, so one way of answering “why can’t humans already do pivotal acts?” might be to answer “why can’t humans just build nanotechnology without AGI?”. I’d say we can, and I think we should divert a lot of resources into trying to do so; but my guess is that we’ll destroy ourselves with misaligned AGI before we have time to reach nanotechnology “the hard way”, so I currently have at least somewhat more hope in leveraging powerful future AI to achieve nanotech.
(The OP doesn’t really talk about this, because the focus is ‘is p(doom) high?’ rather than ‘what are the most plausible paths to us saving ourselves?’.)
In an unpublished 2017 draft, a MIRI researcher and I put together some ass numbers regarding how hard (wet, par-biology) nanotech looked to us:
We believe that the bottlenecks on current progress toward par-biology nanotechnology are (a) figuring out how to put all of the puzzle pieces together correctly, (b) executing certain difficult computations required for determining how to build materials, and (c) engineering certain basic tools that will allow us to engineer better tools, where there are likely to be mutual dependencies between progress on these fronts. If the world’s top scientific and engineering talent were actively focusing on this application and were inspired to solve the key technical problems, we would expect it to be possible to push past these bottlenecks with no more than 10x the compute that Google spent on research projects in 2016.
Assuming no advances in AI algorithms over the state of the art in 2017, we would assign a 50% probability to fifty copies of John von Neumann, divided into five teams and supplied with a large number of lab technicians and other support staff, being able to achieve nanotechnology within 25 calendar years at a level that would be sufficient for a decisive advantage if the technology were available to a group in 2017.
(footnote: We stipulate “in 2017” because we would not necessarily expect par-biology nanotechnology to confer a decisive advantage in a world where nanotechnology had been gradually advanced to that level by human engineers over multiple decades; in that scenario, factors such as leaks, regulations, and competition from other developers would make it harder for one group to strongly pull ahead. We would expect it to be much easier for one group to strongly pull ahead if nanotechnology advances too quickly for leaks, regulations, and competition to be significant factors on the relevant timescale, as we believe is possible using AGI.)
Translating this into a more realistic scenario: we would assign a 40% probability to an organization with a $10 billion budget and the involvement of someone who can attract top researchers and leadership (e.g., Elon Musk) being able to reach this level of technological capability within 25 years, absent AI advances. Our probability would lower to 15% if there were only 10 calendar years available to the hypothetical Musk project instead of 25, and would rise to 85% if there were 50 calendar years and $20 billion available instead of 25 calendar years and $10 billion, holding these conditions stable and assuming no other large global disruptions.
As in §1.3, the predictions here are rough and intuitive, and were not generated by a formal model. It would be difficult for our probability to rise much higher than 85% given additional time or other resources. Our inside-view evaluation of the arguments assigns high probability to par-biology nanotechnology being achievable in fifty years under these idealized conditions, such that the remaining uncertainty in our informal aggregate models largely stems from model uncertainty and deference to experts who disagree with our view and consider par-biology nanotechnology much more difficult. We would be very surprised to learn that par-biology nanotechnology were much more difficult (say, requiring more than 500 VNG research years), and this would have a fairly large impact on our overall expectations about early AGI systems’ potential uses and impact.
(500 VNG research years = 500 von-Neumann-group research years, i.e., the amount of progress ten copies of John von Neumann would make if they worked together on the problem, hard, for 500 serial years; one VNG research year is that group working for one serial year.)
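Purely as a reading aid, here’s the unit and the draft’s headline guesses restated in code. The lossless-parallelism conversion (treating five von Neumann teams as five VNG-years of progress per calendar year) is my simplifying assumption, not something the draft claims:

```python
# Restating the quoted 2017 guesses; the numbers are copied from the excerpt above.

VON_NEUMANNS_PER_GROUP = 10  # 1 VNG research year = 10 von Neumanns working hard for 1 serial year

def vng_years(num_von_neumanns: int, serial_years: float) -> float:
    """Naive conversion that assumes perfectly parallel, lossless teamwork."""
    return (num_von_neumanns / VON_NEUMANNS_PER_GROUP) * serial_years

# The 50-von-Neumann scenario (five teams, 25 calendar years) is ~125 VNG research
# years under that assumption -- well under the ">500 VNG research years" level the
# draft says would be very surprising.
print(vng_years(50, 25))  # 125.0

# Probability guesses quoted from the draft, keyed by (project, calendar years):
nanotech_probability = {
    ("50 von Neumanns", 25): 0.50,
    ("$10B project with top leadership", 25): 0.40,
    ("$10B project with top leadership", 10): 0.15,
    ("$20B project with top leadership", 50): 0.85,
}
```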
This is also why I think humanity should probably put lots of resources into whole-brain emulation: I don’t think you need qualitatively superhuman cognition in order to get to nanotech; I think we’re just short on time, given how slowly whole-brain emulation has advanced thus far.
With STEM-level AGI I think we’ll have more than enough cognition to do basically whatever we can align; but given how tenuous humanity’s grasp on alignment is today, it would be prudent to at least take a stab at a “straight to whole-brain emulation” Manhattan Project. I don’t think humanity as it exists today has the tech capabilities to hit the pause button on ML progress indefinitely, but I think we could readily do that with “run a thousand copies of your top researchers at 1000x speed” tech.
(Note that having dramatically improved hardware to run a lot of ems very fast is crucial here. This is another reason the straight-to-WBE path doesn’t look hopeful at a glance, and seems more like a desperation move to me; but maybe there’s a way to do it.)