I broadly agree with this on most points of disagreement with Eliezer, and also agree with many of the points of agreement.
A few points where I sort of disagree with both, although this is sometimes unclear:
1.
Even if there were consensus about a risk from powerful AI systems, there is a good chance that the world would respond in a totally unproductive way. It’s wishful thinking to look at possible stories of doom and say “we wouldn’t let that happen;” humanity is fully capable of messing up even very basic challenges, especially if they are novel.
I literally agree with this, but at the same time, in contrast to Eliezer’s original point, I also think there is a decent chance the world would respond in a somewhat productive way, and this is a major point of leverage.
For people who doubt this, I’d point to the variance in initial governmental-level responses to COVID-19, which ranged from “highly incompetent” (e.g., the early US) to “quite competent” (e.g., Taiwan). (I also have some intuitions around this based on non-trivial amounts of first-hand experience with how governments actually work internally and make decisions—which you certainly don’t need to trust, but if you are highly confident in governments’ inability to act or to do reasonable things, you should at least be less confident.)
2.
AI systems will ultimately be wildly superhuman, and there probably won’t be strong technological hurdles right around human level. Extrapolating the rate of existing AI progress suggests you don’t get too much time between weak AI systems and very strong AI systems, and AI contributions could very easily go from being a tiny minority of intellectual work to a large majority over a few years.
While I do agree there likely won’t be strong technological hurdles, I think “right around human level” is the point where it seems most likely that some regulatory hurdles can be erected, or the human coordination landscape can change, or resources spent on alignment research could grow extremely fast, or, generally, weird things can happen. While I generally agree weird bad things can happen, I also think weird good things can happen, and this also seems like a potential period of increased leverage.
3.
There are strong social and political pressures to spend much more of our time talking about how AI shapes existing conflicts and shifts power. This pressure is already playing out and it doesn’t seem too likely to get better. I think Eliezer’s term “the last derail” is hyperbolic but on point.
I do agree that the pressures exist, and it would be bad if they caused many people working on the pessimistic-assumptions side to switch to work on e.g. corporate governance; on the other hand, I don’t agree it’s just a distraction. Given the previous two points, I think the overall state of power / coordination / conflict can have significant trajectory-shaping influence.
Also, this dynamic will likely bring many more people to work on alignment-adjacent topics, and I think there is some chance to steer part of this attention toward productive work on important problems; I think this is more likely if at least some alignment researchers bother to engage with this influx of attention (as opposed to ignoring it as a random distraction).
This response / increase in attention seems, in some sense, like the normal way humanity solves problems, and it may be easier to steer it than to, e.g., find and convince random people to care about technical alignment problems.
It sounds like we are broadly on the same page about 1 and 2 (presumably partly because my list doesn’t focus on my spiciest takes, which might have generated more disagreement).
Here are some extremely rambling thoughts on point 3.
I agree that the interaction between AI and existing conflict is a very important consideration for understanding or shaping policy responses to AI, and that you should be thinking a lot about how to navigate (and potentially leverage) those dynamics if you want to improve how well we handle any aspect of AI. I was trying to mostly point to differences in “which problems related to AI are we trying to solve?” We could think about technical or institutional or economic approaches/aspects of any problem.
With respect to “which problem are we trying to solve?”: I also think potential undesirable effects of AI on the balance of power are real and important, both because it affects our long-term future and because it will affect humanity’s ability to cope with problems during the transition to AI. I think that problem is at least somewhat less important than alignment, but will probably get much more attention by default. I think this is especially true from a technical perspective, because technical work plays a totally central role for alignment, and a much more unpredictable and incidental role in affecting the balance of power.
I’m not sure how alignment researchers should engage with this kind of alignment-adjacent topic. My naive guess would be that I (and probably other alignment researchers) should:
Try to have reasonable takes on other problems (and be appropriately respectful/deferential when we don’t know what we’re talking about).
Feel comfortable “staying in my lane” even though it does inevitably lead to lots of people being unhappy with us.
Be relatively clear about my beliefs and prioritization with EA-types who are considering where to work, even though that will potentially lead to some conflict with people who have different priorities. (Similarly, I think people who work on different approaches to alignment should probably be clear about their positions and disagree openly, even though it will lead to some conflict.)
Generally be respectful, acknowledge legitimate differences in what people care about, acknowledge differing empirical views without being overconfident and condescending about it, and behave like a reasonable person (I find Eliezer is often counterproductive on this front, though I have to admit that he does a better job of clearly expressing his concerns and complaints than I do).
I am somewhat concerned that general blurring of the lines between alignment and other concerns will tend to favor topics with more natural social gravity. That’s not enough to make me think it’s clearly net negative to engage, but is at least enough to make me feel ambivalent. I think it’s very plausible that semi-approvingly citing Eliezer’s term “the last derail” was unwise, but I don’t know. In my defense, the difficulty of talking about alignment per se, and the amount of social pressure to instead switch to talking about something else, is a pretty central fact about my experience of working on alignment, and leaves me protective of spaces and norms that let people just focus on alignment.
(On the other hand: (i) I would not be surprised if people on the other side of the fence feel the same way, (ii) there are clearly spaces—like LW—where the dynamic is reversed, though they have their own problems, (iii) the situation is much better than a few years ago and I’m optimistic that will continue getting better for a variety of reasons, not least that the technical problems in AI alignment become increasingly well-defined and conversations about those topics will naturally become more focused.)
I’m not convinced that the dynamic “we care a lot about who ends up with power, and more important topics are more relevant to the distribution of power” is a major part of how humanity solves hard human vs nature problems. I do agree that it’s an important fact about humans to take into account when trying to solve any problem though.
Not a very coherent response to #3. Roughly:
Caring about visible power is a very human motivation, and I’d expect it will draw many people to care about “who are the AI principals”, “what are the AIs actually doing”, and a few other topics that have significant technical components.
Somewhat wild datapoints in this space: nuclear weapons, the space race. In each case, salient motivations such as “war” led some of the best technical people to work on hard technical problems. In my view, the problems the technical people ended up working on were often “vs. nature” and distant from the original social motivations.
Another take on this: some people want to work on technically interesting and important problems, but some of them want to work on “legibly important” or “legibly high-status” problems.
I do believe there are some opportunities in steering some fraction of this attention toward some of the core technical problems (not toward all of them, at this moment).
This can often depend on framing; while my guess is that, e.g., you probably shouldn’t work on this, my guess is that some people who understand the technical alignment problems should.
This can also depend on social dynamics; your “naive guess” seems like a good starting point.
Also: it seems there is a lot of low-hanging fruit among low-difficulty problems which someone should work on—e.g., at this moment, many humans should be spending a lot of time trying to get an empirical understanding of what types of generalization LLMs are capable of (see the sketch after this list).
With prioritization, I think it would be good if someone made some sort of curated list of “who is working on which problems, and why”—my concern with part of the “EAs figuring out what to do” process is that many people are doing some sort of expert aggregation on the wrong level. (Like, if someone basically averages your and Eliezer Yudkowsky’s conclusions, giving 50% weight to each, I don’t think the result is a useful or coherent model.)
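Picking up the low-hanging-fruit example above, here is a minimal sketch of what an empirical generalization probe could look like. This is purely illustrative: `query_model` is a hypothetical stand-in for whatever model API you use, and the rule/range choices are my own assumptions, not anyone’s actual evaluation setup.

```python
# Minimal sketch of an empirical probe of LLM generalization.
# Assumption: query_model() is a hypothetical stand-in for a real LLM call.

def query_model(prompt: str) -> str:
    """Hypothetical wrapper around a real LLM API; replace with your own."""
    raise NotImplementedError("plug in a real model here")

def make_prompt(demos, query):
    """Few-shot prompt: demonstrations of a rule, then an unanswered query."""
    lines = [f"{x} -> {y}" for x, y in demos]
    lines.append(f"{query} ->")
    return "\n".join(lines)

def probe_extrapolation(rule, demo_inputs, test_inputs):
    """Fraction of test inputs, chosen outside the demonstrated range,
    that the model maps according to the demonstrated rule."""
    demos = [(x, rule(x)) for x in demo_inputs]
    correct = 0
    for x in test_inputs:
        answer = query_model(make_prompt(demos, x)).strip()
        correct += (answer == str(rule(x)))
    return correct / len(test_inputs)

# Example: demonstrate doubling on 1..10, then test whether the rule
# transfers to a disjoint range (extrapolation rather than interpolation).
# score = probe_extrapolation(lambda x: 2 * x, range(1, 11), range(100, 121))
```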
For people who doubt this, I’d point to the variance in initial governmental-level responses to COVID-19, which ranged from “highly incompetent” (e.g., the early US) to “quite competent” (e.g., Taiwan).
Seems worth noting that Taiwan is an outlier in terms of the average IQ of its population. Given this, I find it pretty unlikely that the typical governmental response to AI would be more akin to Taiwan’s than to the US’s.
I doubt that’s the primary component that makes the difference. Other countries which did mostly sensible things early include, e.g., Australia, Czechia, Vietnam, New Zealand, and Iceland.
My main claim isn’t about what a median response would be, but something like “the difference between the median early COVID governmental response and an actually good early COVID response was somewhere between 1 and 2 sigma; this suggests a bad response isn’t over-determined, and sensible responses are within human reach”. Even if Taiwan was an outlier, it’s not like it’s inhabited by aliens or run by a friendly superintelligence.
Empirically, the median governmental response to a novel crisis is copycat policymaking from some other governments.
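To make the sigma framing concrete, here is a quick sketch, assuming (purely for illustration) that the quality of governmental responses is roughly normally distributed: a response 1-2 sigma above the median sits at roughly the 84th-98th percentile, i.e., well within the observed range of actual governments rather than requiring anything superhuman.

```python
from math import erf, sqrt

def normal_percentile(sigma: float) -> float:
    """Standard normal CDF: fraction of responses below `sigma` deviations."""
    return 0.5 * (1 + erf(sigma / sqrt(2)))

for s in (1, 2):
    print(f"{s} sigma above median -> ~{normal_percentile(s):.1%} of responses below")
# 1 sigma -> ~84.1%, 2 sigma -> ~97.7%
```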
I doubt that’s the primary component that makes the difference. Other countries which did mostly sensible things early include, e.g., Australia, Czechia, Vietnam, New Zealand, and Iceland.
What do you think is the primary component? I seem to recall reading somewhere that previous experience with SARS makes a big difference. I guess my more general point is that if the good COVID responses can mostly be explained by factors that predictably won’t be available to the median AI risk response, then the variance in COVID response doesn’t help to give much hope for a good AI risk response.
My main claim isn’t about what a median response would be, but something like “the difference between the median early COVID governmental response and an actually good early COVID response was somewhere between 1 and 2 sigma; this suggests a bad response isn’t over-determined, and sensible responses are within human reach”.
This seems to depend on the response to AI risk being of similar difficulty to the response to COVID. I think people who updated towards “bad response to AI risk is overdetermined” did so partly on the basis that the former is much harder. (In other words, if the median government has done this badly against COVID, what chance does it have against something much harder?) I wrote down a list of things that make COVID an easier challenge, which I now realize may be a bit of a tangent if that’s not the main thing you want to argue about, but I’ll put it down here anyway so as to not waste it.
it’s relatively intuitive for humans to think about the mechanics of the danger and possible countermeasures
previous human experiences with pandemics, including very similar ones like SARS
there are very effective countermeasures that are much easier / less costly than comparable countermeasures for AI risk, such as distributing high-quality masks to everyone and sealing one’s borders
COVID isn’t agenty and can’t fight back intelligently
potentially divisive issues in AI risk response seem to be a strict superset of politically divisive issues in COVID response (additional issues include: how to weigh very long term benefits against short term costs, the sentience, moral worth, and rights of AIs, what kind of values do we want AIs to have, and/or who should have control/access to AI)
I asked myself for an example of a country whose initial pandemic response was unusually poor, settled on Brazil, and found that Brazil’s IQ was lower than I expected at 87. So that’s one data point that supports your hypothesis.
I suspect that cultural homogeneity is at least as important.
What do you think is the primary component? I seem to recall reading somewhere that previous experience with SARS makes a big difference. I guess my more general point is that if the good COVID responses can mostly be explained by factors that predictably won’t be available to the median AI risk response, then the variance in COVID response doesn’t help to give much hope for a good AI risk response.
What seemed to make a difference:
1. someone with good models of what to do getting into an advisory position when the politicians freak out
2. previous experience with SARS
3. the ratio of “trust in institutions” vs. “trust in your neighbors’ wisdom”
4. raw technological capacity
5. the ability of the government to govern (i.e., execute many things in a short time)
In my view, 1. and 4. could go better than in COVID, 2. is irrelevant, and 3. and 5. seem like broad parameters which can develop in different directions. Imagine you somehow become the main advisor to the US president when the situation becomes really weird, and she follows your advice closely—my rough impression is that in most situations you would be able to move the response to be moderately sane.
it’s relatively intuitive for humans to think about the mechanics of the danger and possible countermeasures
Empirically, this often wasn’t true. Humans had mildly confused ideas about the micro-level, but often highly confused ideas about the exponential macro-dynamics. (We created a whole educational game on that, and have some feedback that for some policymakers it was the thing that helped them understand… after a year in the pandemic)
previous human experiences with pandemics, including very similar ones like SARS
there are very effective countermeasures that are much easier / less costly than comparable countermeasures for AI risk, such as distributing high-quality masks to everyone and sealing one’s borders
COVID isn’t agenty and can’t fight back intelligently
potentially divisive issues in AI risk response seem to be a strict superset of politically divisive issues in COVID response (additional issues include: how to weigh very long term benefits against short term costs, the sentience, moral worth, and rights of AIs, what kind of values do we want AIs to have, and/or who should have control/access to AI)
One factor which may make governments more responsive to AI risk is that COVID wasn’t exactly threatening to states. COVID was pretty bad for individual people, and for some businesses, but in some cases the relative power of states even grew during COVID. In contrast, in some scenarios it may be clear that AI is an existential risk for states as well.
Australia seems to have suffered a lot more from the pandemic than the U.S., paying much more in the cost of lockdown than even a relatively conservative worst-case estimate would have been for the costs of an uncontrolled COVID pandemic. I don’t know about the others, but given that you put Australia on this list, I don’t currently trust the others to have acted sensibly.
I’m not sure if you actually read carefully what you are commenting on. I emphasized early response, or initial governmental-level response, in both comments in this thread.
Sure, multiple countries on the list made mistakes later, some countries sort of became insane, and so on. Later, almost everyone made mistakes with vaccines, rapid tests, investments in contact tracing, etc.
Arguing that the early lockdown was more costly than “an uncontrolled pandemic” would be a pretty insane position (cf. GDP costs; Italy had the closest thing to an uncontrolled pandemic). (Btw, the whole notion of “an uncontrolled pandemic” is deeply confused—unless you are a totalitarian dictatorship, you cannot just order people to “live as normal” during a pandemic when enough other people are dying; you get spontaneous “anarchic lockdowns” anyway, just later and in a more costly way.)
If Australia was pursuing a strategy of “lock down irrespective of cost”, then I don’t think it makes sense to describe the initial response as competent. It just happened to be right in this case, but in order for the overall response to be helpful, it has to be adaptive to the actual costs. I agree that the early response on its own would have indicated a potentially competent decision-making algorithm, but the later follow-up showed that the algorithm seems to have mostly been correct by accident, not on purpose.
I do appreciate the link to the GDP cost article. I would have to look into the methodology more to comment on that, but it certainly seems like an interesting analysis and a suggestive result.
I don’t think this is true at all. See: https://www.lesswrong.com/posts/r9gfbq26qvrjjA7JA/thank-you-queensland
I absolutely agree. Australia has done substantially better than most other nations regarding COVID from economic, health, and lifestyle points of view alike. The two largest cities did somewhat worse in lifestyle for some periods, but most other places had far fewer and less onerous restrictions than most other countries for nearly 2 years. I personally was very happy to have lived with essentially zero risk of COVID and essentially zero restrictions, both personal and economic, for more than a year and a half.
A conservative worst-case estimate for costs of an uncontrolled COVID outbreak in Australia was on the order of 300,000 deaths and about $600 billion direct economic loss over 2 years, along with even larger economic impacts from higher-order effects.
We did very much better than that, especially in health outcomes. We had 2,000 deaths up until giving up on elimination in December last year, which was about 0.08 deaths per thousand. Even after giving up on local elimination, we still only have 0.37 per thousand, compared with the United States at 3.0 per thousand.
Economic losses are also substantially less than the US’s in comparison with the pre-pandemic economy, but the attribution of causes there is much more contentious, as with everything to do with economics.
I know a good number of friends who were unable to continue their jobs, which required substantial in-person coordination abroad, since Australia prevented nationals from leaving their own country. I also talked to 2-3 Australians who thought that Australia had messed up pretty badly here.
Sure. I also talked to tens of Australians who thought that they did a great job. In Spain, the country where I am from, I personally know many people who were also unable to continue their jobs, and not because the country forbade its nationals to leave. There is going to be a lot of variance in individual opinions. The number of dead people, on the other hand, is a more objective measure of how successfully countries dealt with the pandemic.
Taking the # of dead people as the objective biases the question.
Fundamentally, there is a question of whether the benefits of lockdowns were worth the costs.
Measuring that only by # of dead people ignores the fundamental problems with the lockdowns.
Let me explicate.
I think I am in the minority position on this board (and Habryka might be too) in that I feel it is obvious that the relatively small number of elderly people counterfactually saved by lockdowns is not comparable to the enormous mental and economic losses, the dangerous precedent for civil liberties set by lockdowns, etc. It is clear to me that a “correct” utilitarian calculation will conclude that the QALYs lost by dead elderly people in the first world are absolutely swamped by the QALYs lost via the mental health of young people and via the millions of global poor thrown back into poverty.
(Moreover, this ignores the personal-liberty aspect: people are free to make their own safety/lifestyle tradeoffs, and it should require a superabundance of QALYs saved to impinge on this freedom.)
Bolstered by the apparent success of Taiwan, I supported a short lockdown followed by track & trace—but by mid-summer 2020 it was clear that this was never going to work. Actually, Taiwan had to revert to lockdowns later during the pandemic anyway. It was clear to me that further lockdowns were no longer worth it.
Even if you think the lockdowns were justified, one should note that Australia has gone much farther; it has continued severe COVID restrictions even after vaccination and in the absence of a long-term plan. It has made it almost completely impossible to go in or out of the country (even if one is an Australian citizen willing to undergo extensive testing). In my humble opinion this is completely crazy territory.
Speaking about completely crazy territory… if you measure a country’s COVID response by # of deaths from COVID, then the “bestest most responsible government” would be the Chinese government.
I hope you will agree with me that this would be a mistake.
My assessment is also that the health costs of the pandemic were small in comparison to the secondary effects of lockdown (which were mostly negative). Any analysis that primarily measures deaths seems to me to ignore the vast majority of the impact (which is primarily economic and social).
I know this is a sensitive topic and I probably won’t change your mind, but hear me out for a second. Re: China, I do agree with you that the response of the CCP (now) is not really a model of what an exemplary government should do. I also agree that up to a certain point you shouldn’t measure exclusively the number of dead people to judge how well a country fared. But it certainly is an important variable that we shouldn’t discount either. The number of dead people is closely correlated with other important factors, such as the number of people suffering long COVID, or human suffering in general. I do agree with you that lockdowns in many places have caused potentially more harm than they should have. The problem is that not all lockdowns are the same, and people keep treating them as equivalent. Another problem is that I see many people rationalizing that things couldn’t have been different, which is super convenient, especially for those in power.
So let me talk a bit about Australia (I was living there during the whole pandemic period).
The USA sits right now at 3015 dead people per 1M; Australia at 364.
I can guarantee you that everyone I spoke with who was living at the time in other places (I have many friends in different European countries: Spain, Italy, France, England, etc.) would have switched places with me without thinking about it for a second.
I follow the news in the USA very closely, and I know how extremely biased the coverage was (including some famous podcasters; I am looking at you, Joe Rogan). They focused a lot on the Australian border restrictions / lockdown in Melbourne and very little on the fact that for almost two years, most Australians enjoyed a mostly normal life while people abroad were facing repeatedly absurd government interventions/restrictions. It is not totally true that the borders were completely closed either: I have a friend who was allowed to leave the country to visit her dying father in Italy. She came back to Australia and she had to do a quarantine, true, but she was allowed back.
The lockdowns in Australia (at least in Queensland, where I lived) served a purpose: buy time for the contact tracers so that COVID cases could really be taken down to zero. In Queensland we had a long one at the beginning (2 months maybe?) but then we had a few more (I don’t remember how many, maybe 3?) that lasted only a few days. They understood very well that dealing with COVID should be a binary thing: either you have no cases, or you are facing repeated waves of COVID. This had to continue until everyone had an opportunity to get two shots of the vaccine. Once everyone had had a chance, the borders were opened again and most restrictions were lifted. So in this regard, I do think that the harsh Chinese government measures AT THE BEGINNING (i.e. closing the national borders, PCRs, selective lockdowns, contact tracing, etc.) made much more sense than everything that was happening in most of the Western world. Talking to a few Chinese friends, they considered it utterly outrageous that we were justifying the deaths of people by saying that they were old anyway, or that we shouldn’t stop the economy.
I still remember that at the very beginning of the pandemic, the POTUS was giving a press conference and he showed a hesitancy rarely seen in him: he swallowed and took a few seconds to say, stuttering a little bit, that if he hadn’t taken measures, there could be a hundred thousand Americans dying. Today the tally sits at more than 1M. Things could have been different.
Fair enough. Thank you for explaining where you are coming from.
I do agree that if an island is able to close the borders and thereby avoid severe domestic lockdowns this can be justified.
(364 vs 3015 is two orders of magnitude?)
Oooops! Corrected, thanks
Well, Australia did nearly an order of magnitude better than the USA, and in IQ they seem to be pretty close. I’m not sure that IQ is the right variable to look at.
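For reference, a quick arithmetic check of the per-million figures quoted above (3015 for the USA vs. 364 for Australia):

```python
import math

us_per_million = 3015  # US COVID deaths per 1M, as quoted above
au_per_million = 364   # Australian COVID deaths per 1M, as quoted above

ratio = us_per_million / au_per_million
print(f"ratio: {ratio:.1f}x")                           # ~8.3x
print(f"orders of magnitude: {math.log10(ratio):.2f}")  # ~0.92, just under one
```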