Capability and Agency as Cornerstones of AI risk — My current model
tldr: In this post I explain my current model of AI risk, after falling short of my goal of understanding everything completely without relying on arguments from authority. I hope this post is valuable both in explaining the topic and as an example of what a particular newbie did and did not find easy to understand / accessible.
This was not planned as a submission for the AI Safety Public Materials contest, but feel free to consider it.
I thank all those who helped me understand more about AI risk and AI safety and I thank benjaminalt, Leon Lang, Linda Linsefors, and Magdalena Wache for their helpful comments.
Ideas about advanced artificial intelligence (AI) have always been one of the core topics on LessWrong and in the rationality community in general. One central aspect of this is the concern that the development of artificial general intelligence (AGI) might pose an existential threat to humanity. Over the past few years, these ideas have gained a lot of traction, and the prevention of risk from A(G)I has grown into one of the central cause areas in Effective Altruism (EA). Recently, the buzz around the topic has increased even more, at least in my perception, after Eliezer's April Fools' post and list of lethalities.
Having first read about AGI and risk from AGI in 2016, I have gotten more and more interested in the topic, until this year I finally decided to actually invest some time and effort into learning more about it. I therefore signed up for the AGI safety fundamentals course by EA Cambridge. Unfortunately, after completing it I still felt pretty confused about my core question of how seriously I should take existential risk (x-risk) from AI. In particular, I was confused about how much of the concern for AI safety was due to the topic being a fantastic nerd-snipe that might be pushed by group-think and halo effects around high-profile figures in the rationality and EA communities.
I therefore set out to understand the topic from the ground up, explicitly making sense of the arguments and basic assumptions without relying on arguments by authority. I did not fully succeed at this, but in this post I present my current model about x-risk from advanced AI, as well as a brief discussion of what I feel confused about. The focus lies on understanding existential risk directly caused by AI systems, as this is what I was most confused about. In particular, I am not trying to explain risks caused by more complex societal and economic interactions of AI systems, as this seems like somewhat of a different problem and I understand it even less.
ⅰ. Summary
Very roughly, I think the potential risk posed by an AI system depends on the combination of its capabilities and its agency. An AI system with sufficiently high and general capabilities as well as strong agency very likely poses a catastrophic or even existential risk. If, on the other hand, the system is lacking in either capabilities or agency, it is not a direct source of catastrophic or existential risk.
Outline. The remainder of this post will quickly explain what I mean by capabilities and by agency. I will then explain why the combination of sufficient capability and agency is dangerous. Subsequently, the post discusses my understanding of why both strong capabilities and agency can be expected to be features of future AI systems. Finally, I'll point out the major confusions I still have.
ⅱ. What do I mean by capability?
With capability I refer to the cognitive power of the system, or its 'intelligence'. Concepts and terms such as Artificial General Intelligence (AGI), Artificial Superintelligence (ASI), High-Level Machine Intelligence (HLMI), Process for Automating Scientific and Technological Advancement (PASTA), Transformative AI (TAI) and others can all be seen as describing certain capability levels. When describing such levels of capability, I feel that it often makes sense to separate two distinct dimensions of 'wideness' and 'depth'. Depth with respect to a certain task refers to how good a system is at that task, while wideness describes how diverse and general the types of tasks are that a system can perform. This also roughly fits the definition of intelligence by Legg and Hutter (see Intelligence—Wikipedia), which is the definition I would settle on for this post, if that were necessary.
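For reference, here is a rough sketch of the Legg-Hutter definition (the notation is theirs, but my paraphrase may be imprecise). The intelligence of an agent/policy $\pi$ is its simplicity-weighted average performance across all computable environments:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi},$$

where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^{\pi}$ is the expected total reward that $\pi$ achieves in $\mu$. In my terms, 'depth' roughly corresponds to the achievable $V_\mu^{\pi}$ on individual tasks, while 'wideness' corresponds to doing well across many environments $\mu$ at once.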
Quick note on AGI
The term AGI seems to assume that the concept of general intelligence makes sense. It potentially even assumes that humans have general intelligence. I personally find the assumption of general intelligence problematic, as I am not convinced that generality is something that can easily be judged from the inside. I don't know what general intelligence would entail and whether humans are actually capable of it. Fortunately, human cognition clearly is capable of things as general as writing poetry, running civilization and landing rockets on the moon (see also Four background claims, by Nate Soares). Admittedly, this seems to be at least somewhat general. I like terms such as TAI or PASTA, as they do not require the assumption of generality and instead just describe what some system is capable of doing.
ⅲ. What do I mean by agency?
The concept of agency seems fuzzier than that of capability. Very roughly speaking, a system has agency if it plans and executes actions in order to achieve goals (for example, goals specified by a reward or loss function). This often includes having a model of the environment or world and perceiving it via certain input channels.
The concept of agents appears in several fields of science like game theory, economics and artificial intelligence. Examples for agency include:
a company that tries to optimize profits
a predator trying to catch prey
a player trying to win a game
Russell and Norvig define an agent as "anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators" and define a rational agent as "an agent that acts so as to maximize the expected value of a performance measure based on past experience and knowledge". I am not completely sure how much I like their definition, but together with the examples above, it should give a good intuition of what I mean by agency. Also note that I sometimes treat agency as a continuum, based on how strongly a system behaves like an agent.[1]
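To make the perceive-act framing concrete, here is a minimal sketch of the agent-environment loop behind the Russell-Norvig definition. The classes, names, and numbers are purely illustrative (a thermostat is about the weakest 'agent' imaginable and sits at the very low end of the agency continuum):

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """An agent in the Russell-Norvig sense: it maps percepts to actions."""
    @abstractmethod
    def act(self, percept):
        ...

class Room:
    """Toy environment: the temperature drifts down unless the heater is on."""
    def __init__(self, temp=18.0):
        self.temp = temp
    def observe(self):
        return self.temp
    def step(self, action):
        self.temp += 0.5 if action == "heat_on" else -0.5
        return self.temp

class Thermostat(Agent):
    """Very weakly agentic: acts on its environment to pursue a fixed goal."""
    def __init__(self, target):
        self.target = target
    def act(self, percept):
        return "heat_on" if percept < self.target else "heat_off"

# Generic perceive-act loop, shared by all agents however capable.
env, agent = Room(), Thermostat(target=21.0)
percept = env.observe()
for _ in range(10):
    percept = env.step(agent.act(percept))
print(f"temperature after 10 steps: {percept:.1f}")
```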
ⅳ. Why do sufficient agency and capabilities lead to undesired outcomes?
With my notion of the basic concepts laid out, I'll explain why I think highly capable and agentic AI systems pose a threat. The basic argument is that regardless of the specific goal / reward function that an agent pursues, there are some sub-goals that are generally useful and likely to be part of the agent's overall plan. For example, no matter the actual goal, it is generally in the interest of that goal that it not be altered. Along the same line of thought, any agent will want to avoid being switched off, as this stands in the way of basically any useful goal not specifically designed for corrigibility (goals that make the agent want to shut itself off are counterexamples to this, but rather pointless ones). Goals that are subgoals of almost any other specific goal are called convergent instrumental goals (see Bostrom, and Instrumental convergence—Wikipedia). Other examples of convergent instrumental goals include resource acquisition and self-improvement.
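To make the "avoid being switched off" point concrete, here is a minimal toy calculation (my own construction with made-up numbers, not taken from any of the cited sources). An agent can either pursue its task directly, risking shutdown at every step, or spend one step disabling the off switch first. Whichever task reward it happens to be optimizing, the shutdown-avoidance subgoal comes out ahead:

```python
import random

T, p = 50, 0.1   # episode length and per-step shutdown probability (made up)

def expected_return(task_reward, disable_switch):
    """Expected total reward over the episode under this toy model."""
    if disable_switch:
        return task_reward * (T - 1)       # one step "wasted" on the subgoal
    survival, total = 1.0, 0.0
    for _ in range(T):
        survival *= (1 - p)                # probability of still being switched on
        total += survival * task_reward
    return total

# Whatever the task reward happens to be, the optimal choice is the same.
for _ in range(5):
    r = random.uniform(0.1, 10.0)          # an arbitrary "goal"
    better = expected_return(r, True) > expected_return(r, False)
    print(f"task reward {r:5.2f}: disabling the off switch is better -> {better}")
```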
Another way to frame this idea is that for many environments in which an agent is trained, "hacking" the environment leads to higher rewards than are possible with legal actions in the environment. Here I use the term hacking to refer to anything between unwanted exploitation of misspecified goals (funny gif, longer explanation), bugs or glitches, and actual hacking of the computational environment. This is already somewhat annoying with rather weak agents barely capable enough to play Atari games. However, with an agent possessing sufficiently strong and general cognitive power, it becomes something to worry about. For instance, a sufficiently smart agent with enough knowledge about the real world can be expected to try to prevent the physical hardware it is running on from being shut off. It could maybe be argued that a sufficiently smart Atari agent might not only realize that the Atari emulator has an exploitable bug, but also try to take actions in the real world. Admittedly, I find it hard to take this as a realistic concern even for highly capable agents, because it would require the agent to acquire knowledge about its territory that it can hardly attain. This point is discussed further in Section ⅵ, Why worry about agency.
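As a deliberately silly illustration of the "exploitation of misspecified goals" end of that spectrum (entirely made up, including the environment and numbers): the designer wants a room cleaned, but the reward only counts tiles reported as clean, and a buggy action updates the report without doing any work.

```python
ACTIONS = ["clean_tile", "mark_clean"]        # "mark_clean" is the unintended exploit

def episode_reward(policy, n_tiles=5, effort_cost=0.2):
    """Reward as actually specified: one point per tile *reported* clean."""
    reward = 0.0
    for _ in range(n_tiles):
        action = policy()
        if action == "clean_tile":
            reward += 1.0 - effort_cost       # intended behaviour, costs effort
        elif action == "mark_clean":
            reward += 1.0                     # proxy satisfied, nothing cleaned
    return reward

honest = lambda: "clean_tile"
hacker = lambda: "mark_clean"

print("honest policy :", episode_reward(honest))
print("hacking policy:", episode_reward(hacker))  # scores higher under the proxy
```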
I think common objections to this can be summarized in something akin to “Ok, I get instrumental convergence, but I do not see how a computer program could actually [take over the world]”. In my opinion, this type of objection is mostly due to a lack of imagination. I find it highly plausible that a system with high cognitive capabilities and enough knowledge about its environment can find ways of influencing the world (even if that was not intended by human designers). Some evidence for this comes from the fact that other humans came up with ways for misaligned AI systems to lead to catastrophic outcomes that I did not anticipate (see for example No Physical Substrate, No Problem by Scott Alexander).
ⅴ. Why expect highly capable / transformative AI?
This question seems to become easier and easier to answer on an intuitive level as AI progress continues to march on. However, even if computer systems can now beat humans at StarCraft, Go, and protein folding, and show impressive generalization in text (see e.g. GPT-3 Creative Fiction · Gwern.net) and image synthesis (An advanced guide to writing prompts for Midjourney (text-to-image) | by Lars Nielsen | MLearning.ai | Sep, 2022 | Medium), one might still question how much this implies about reaching dangerously strong and general capabilities.
I do not know how progress in AI will continue in the future, how far current ML/AI approaches will go and how new paradigms or algorithms might shape the path. Additionally, I do not know at which level of capabilities AI systems could start to pose existential risks (I suspect this might depend on how much agency a system has). I am however quite convinced that the development of very highly capable systems is possible and would expect it to happen at some point in the future.
To see why the development of AI systems with very strong capabilities is possible, consider the design of the human brain. Evolution managed to create a system with impressively general and strong cognitive capabilities under severely(!) limiting constraints with respect to size (the human brain could not be much bigger without causing problems during birth and/or early development) and power consumption (the human brain runs on ~20W (The Physics Factbook)), all while not even optimizing for cognitive capabilities, but for inclusive genetic fitness. Clearly, if human researchers and engineers aim to create systems with high cognitive capabilities, they are far less limited in their designs. For example, digital computing infrastructure can be scaled far more easily than biological hardware, power consumption is far less limited, and cognitive power can be aimed for more directly. I think from this perspective it is very plausible that AI systems can surpass human levels of cognition (in both depth and wideness) at some point in the future, if enough effort is aimed towards this.
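As a rough back-of-the-envelope illustration of how much looser the engineering constraints are (the brain figure is the one cited above; the accelerator and cluster numbers are my own order-of-magnitude assumptions, not a claim about any particular system):

```python
# Rough power-budget comparison; all cluster numbers are assumed, not measured.
brain_watts   = 20            # human brain, as cited above
gpu_watts     = 400           # ballpark draw of one modern accelerator
cluster_gpus  = 10_000        # a large, hypothetical training cluster
cluster_watts = gpu_watts * cluster_gpus

print(f"cluster / brain power budget: {cluster_watts / brain_watts:,.0f}x")
# On the order of a few hundred thousand times the brain's power budget --
# and that is before counting the fact that engineers can iterate on designs
# far faster than evolution did.
```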
Beyond being possible, designing systems with high capabilities is also incentivized economically. Clearly, there is great economic value in automation and optimization, which both fuel the demand for capable AI systems. I think this makes it likely that a lot of effort will continue to be put into the design of ever more capable AI systems. Relatedly, economic progress in general leads to higher availability of resources (including computing hardware), which also makes it easier to put effort into AI progress[2]. In summary, I expect tremendous progress in AI to be possible and to be worked on with a lot of effort. Even though I find it hard to judge the course of future progress in AI systems, as well as the level at which they would pose existential risk, this combination seems worrying.
Sidenote: This is separate from, but also related to the concept of self-improving AI systems. The argument here is that highly capable AI systems might be able to modify their own architecture and code and thereby improve themselves, potentially leading to a rapid intelligence-explosion. While I see that capable AI systems can accelerate the development of more advanced AI systems, I don’t think this necessarily leads to a rapid feedback loop. I think that our current understanding of intelligence/cognitive capabilities/optimization power is not sufficient to predict how hard it is for systems of some intelligence to design systems that surpass them[3]. I therefore do not want to put a lot of weight on this argument.
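One way to make this uncertainty (and footnote [3]) slightly more concrete is a standard toy model of recursive improvement, which I state here only to show how much hinges on an unknown functional form, not as a prediction. Let $X(t)$ denote the system's capability and assume improvements arrive at a rate that depends on current capability:

$$\frac{dX}{dt} = f(X).$$

If $f(X) = cX$, capability grows exponentially, $X(t) = X_0 e^{ct}$; if $f(X) = cX^{1+\epsilon}$ for some $\epsilon > 0$, $X$ even diverges in finite time (an 'explosion'); but if returns diminish, e.g. $f(X) = cX^{\alpha}$ with $\alpha < 1$, growth is merely polynomial. Whether self-improvement produces a rapid feedback loop thus depends entirely on the shape of $f$, which is exactly what our current understanding does not pin down.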
So far, we have established that systems with sufficiently strong capabilities and agency are an existential threat, and that the development of strong capabilities is probably possible and, due to economic incentives, likely to happen at some point. It remains to analyze the plausibility of such systems also having sufficient agency.
ⅵ. Why worry about agency?
The most direct reason why the development of AI systems with agency is to be expected is that one fundamental approach in machine learning, reinforcement learning, is about building systems that are pretty much the textbook example of agents. Reinforcement learning agents are either selected for their ability to maximize reward or more directly learn policies / behavior based on received reward. This means that any successful application of reinforcement learning selects for or trains agents that are sufficiently good at achieving some goal. In cases of bad inner misalignment, reinforcement learning might also produce systems with strongly corrupted goals that are effectively lacking in agency. I am thus not completely certain that using RL to build extremely capable systems is necessarily dangerous.
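For readers unfamiliar with RL, here is a minimal sketch of tabular Q-learning (the standard textbook algorithm) in a made-up toy chain environment. The point is only that the entire training signal is "maximize reward", which is what makes the resulting policy agent-like:

```python
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def env_step(state, action):
    """Toy chain environment: action 1 moves right; reaching the last state pays off."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for _ in range(2000):                      # training episodes
    s = 0
    for _ in range(20):                    # steps per episode
        if random.random() < epsilon:      # epsilon-greedy exploration
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda x: Q[s][x])
        s2, r = env_step(s, a)
        # Update toward the reward-maximizing estimate -- the only criterion used.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

greedy = [max(range(n_actions), key=lambda x: Q[s][x]) for s in range(n_states)]
print("learned greedy action per state:", greedy)   # expected: move right everywhere
```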
Another problem is that reinforcement learning systems, or generally AI systems with agency, might outperform non-agentic AI systems on hard and more general tasks. If these tasks are useful and economically valuable, then the development and usage of agentic AI systems can be expected. I do expect hard and complex tasks that include interactions with the real world to be candidates for such tasks. The reason for this is that for such a problem, an agent can (and has to) come up with a strategy itself, even if that strategy involves exploration of different approaches, experimentation, and/or the acquisition of more knowledge. While individual steps in such a process could also be solved or assisted by tool AIs, this would require much more manual work, which would probably be worth automating. I also expect it to become prohibitively hard to build non-agentic tool AIs for increasingly complex and general goals. Unfortunately, I do not have concrete and realistic examples of economically valuable tasks that are far more easily solved using agentic AI systems. Running a company is probably not solvable with supervised learning, but (at least with current approaches) also not with reinforcement learning, as exploration is too costly. More narrow tasks such as optimizing video suggestions for watch time, on the other hand, seem too constrained in the types of actions / outputs of a potential agent. There might, however, be tasks somewhere in between that, like these two examples, are economically valuable.
This is also related to a point I raised earlier about agents needing enough knowledge about the real world in order to be able to consider influencing it. Again, for many tasks for which agentic AI systems might be used, it is probably helpful or necessary to provide the agent with either direct knowledge or the ability to query relevant information. This would certainly be the case for complex tasks involving interactions with the real world, such as (helping to) run a company, and potentially also for many tasks involving interaction with people, such as content suggestions. In summary, I suspect there are economically valuable tasks for which agentic systems with information about the real world are the cheapest option, and I strongly expect that such systems would be used.
Another path towards solving very hard and general tasks without using agentic systems could lie in very general tool AIs. For example, an extremely capable text prediction AI (think GPT-n) could potentially be used to solve very general and hard problems with prompts such as “The following research paper explains how to construct X”. I am not sure how realistic the successful development of such a prediction AI is (as you would want the prompt to be completed with a construction of X and not something that just looks plausible but is wrong). Also there is another (at least conceptual) problem here. Very general prediction tasks are likely to involve agents and therefore an AI that tries to predict such a scenario well is likely modeling these agents internally. If these agents are modeled with enough granularity, then the AI is basically running a simulation of the agents that might itself be subject to misalignment and problematic instrumental goals. While this argument seems conceptually valid to me, I don’t know how far it carries into the real world with finite numbers. Intuitively it seems like extremely vast computational resources and also some additional information are needed until a spontaneously emerged sub-agent can become capable enough to realize it is running inside a machine learning model on some operating system on some physical hardware inside a physical world. I would rather expect tool AIs to not pose existential risk before more directly agentic AIs, which are likely also selected for economically.
ⅶ. Conclusion
This concludes my current model of the landscape of direct existential risk from AI systems. In a nutshell, capable and agentic systems will have convergent instrumental goals. These potentially conflict with the goals of humanity and, in the limit, with its continued existence, as they would incentivize the AI to, e.g., gather resources and avoid being shut off. Unfortunately, highly capable systems are realistic, economically valuable, and can be expected to be built at some point. Agentic systems are unfortunately also being built, in part because a common machine learning paradigm produces (almost?) agents and because they can be expected to be economically valuable.
My current model does not directly give quantitative estimates, but at least it makes my key uncertainties explicit. These are:
In order to expect real world consequences because of instrumental convergence:
How strong and general are the required capabilities?
How much agency is required?
How much knowledge about the environment and real world is required?
How likely are emergent sub-agents in non-agentic systems?
How quickly and how far will capabilities of AI systems improve in the foreseeable future (next few months / years / decades)?
How strong are economic incentives to build agentic AI systems?
What are concrete economically valuable tasks that are more easily solved by agentic approaches than by (un)supervised learning?
How hard is it to design increasingly intelligent systems?
How (un)likely is an intelligence explosion?
Apart from these rather concrete and partly empirical questions, I also want to state a few more general and maybe more confused questions:
How high-dimensional is intelligence? How much sense does it make to speak of general intelligence?
How independent are capabilities and agency? Can you have arbitrarily capable non-agentic systems?
How big is the type of risk explained in this post compared to more fuzzy AI risk related to general risk from Moloch, optimization, and competition?
How do these two problems differ in scale, tractability, neglectedness?
What does non-existential risk from AI look like? How big of a problem is it? How tractable? How neglected?
[1] Some authors seem to prefer the term "consequentialist" over agent, even though I have the impression it refers to (almost?) the same thing. I do not understand why this is the case.
[2] Also, it seems plausible that AI systems will help with the design of future, more capable AI systems, which can be expected to increase the pace of development. This is arguably already starting to happen with e.g. GitHub Copilot and the use of AI for chip design.
[3] It might require ω(X) insights to build a system of intelligence X, which can itself only generate insights at pace O(X) or even o(X).