Hinges and crises
Crossposted from the EA Forum. This second post in the sequence covers the importance of crises, argues for crises as opportunities, and makes the claim that this community is currently better at acting on longer-timescale OODA loops but lacks the skills and capabilities to act on short ones.
We often talk about the hinge of history, a period of high influence over the whole future trajectory of life. If we grant that our century is such a hinge, it’s unlikely that the “hinginess” is distributed uniformly across the century; instead, it seems much more likely to be concentrated in particular decades, years, and months, which will have much larger influence. It also seems likely that some of these “hingy” periods will look eventful and be understood as crises at the time. So understanding crises, and the ability to act during crises, may be particularly important for influencing the long-term future.
The first post in this sequence mentioned my main reason to work on COVID: it let me test my models of the world, and so informed my longtermist work. This post presents some other reasons, related to the above argument about hinges. None of these reasons would have been sufficient for me personally on their own, but they still carry weight, and should be sufficient for others in the next crisis.[1]
An exemplar crisis with a timescale of months
COVID has commonalities with some existential risk scenarios. (See Krakovna.) Lessons from it could transfer to risks in which:
the crisis unfolds over a similar timescale (weeks to years, rather than seconds or hours),
governments have some role,
the risk is at least partially visible,
the general population is engaged in some way.
This makes COVID a more useful comparison for versions of continuous AI takeoff where governments are struggling to understand an unfolding situation, but in which they have options to act and/or regulate. Similarly, it is a useful model for versions of any x-risk where a large fraction of academia suddenly focuses on a topic previously studied by a small group, and resources spent on the topic increase by many orders of magnitude. This emergency research push is likely in scenarios with a warning shot or sufficiently loud fire alarm that gets noticed by academia.
On the other hand, lessons learned from COVID will be correspondingly less useful for cases where few of the above assumptions hold (e.g. “an AI in a box bursts out in an intelligence explosion on the timescale of hours”).
Crisis and opportunity
Crises often bring opportunities to change the established order: for example, policy options that were outside the Overton window can suddenly become viable. (This was noted pre-COVID by Anders Sandberg.) There can also be rapid developments in relevant disciplines and technologies.
Some examples of Overton shifts during COVID include: total border closures (in the West), large-scale and prolonged stay-at-home orders, mask mandates, unconditional payouts to large fractions of the population, and automatic data-driven control policies.
Technological developments include the familiar new vaccine platforms (mRNA, DNA) going into production, massive deployment of rapid tests, and the unprecedented use of digital contact tracing.
(Note that many other opportunities which opened up were not acted on.)
Taking advantage of such opportunities may depend on factors such as “do we have a relevant policy proposal in the drawer?”, “do we have a team of experts able to advise?” or “do we have a relevant network?”. These can be prepared in advance.
Default example for humanity thinking about large-scale risk
COVID will likely become the go-to example of a large-scale, seemingly low-probability risk we were unprepared for. The ability to shape narratives and attention around COVID could be important for the broader problem of how humanity should deal with other such risks.
While there is a clear philosophical distinction between existential risks and merely catastrophic risks, 1) in practice it may be difficult to tell the ultimate scale of some risks, and 2) most people will not understand the distinction between global catastrophic risks (GCRs) and x-risks in an intuitive way (perceiving both as merely “extremely large”). So narratives and research surrounding GCRs are important for work on x-risk.
Conclusion
These are the reasons it made sense to pay attention to COVID, even if the pandemic’s direct impact on the trajectory of humanity is small. (In some ways it still makes sense to pay attention.)
The broader conclusion is that longtermists’ ability to observe, orient themselves, decide and act during crises may be critical to influencing long-term outcomes.
The usual ontology of longtermist interventions partitions the space according to “cause areas” or “risks”, leaving room for the unknown “cause X”. An alternative, almost orthogonal view partitions interventions according to the timescale of the OODA loop (i.e. the decision and action process) they implement: picture a table with cause areas as columns and OODA-loop timescales as rows.
On this view, longtermism has so far focussed on actions in the top row, which have OODA loops on the horizon of years and decades. Typical examples might be writing books that fix the basic framing of a field, basic research, or community building.
While there is a lot of commonality among actions along a column (e.g. at all timescales, the AI risk field will want to do AI research), there is also a lot that interventions along a row have in common (e.g. all cause areas will need to know how governments may pass emergency regulation on a timescale of days).
The skills and capabilities needed to act on a scale of months, weeks, or days seem relatively undeveloped. The following posts will make specific suggestions for what to improve in this regard, based on our experience with COVID—in particular the rather obvious suggestion of creating a longtermist “emergency response team” devoted to fast action.
At the same time, I suggest taking this framing as a prompt: what else are we not doing? Where else is the table less filled in than it should be?
[1] I worked on the COVID crisis at the expense of working directly on AI alignment and macrostrategy at FHI, which is a very high bar.