I think that very good RSPs would effectively require a much longer pause if alignment turns out to be extremely difficult.
I do not know whether this kind of conditional pause is feasible even given that evidence. That said, I think it’s much more feasible to get such a pause as a result of good safety standards together with significant evidence of hazardous capabilities and alignment difficulty, and the 10x risk reduction reflects the probability that you are able to get that kind of evidence in advance of a catastrophe (but conditioning on a very good implementation).
The point of this comment is to explain why I am primarily worried about implementation difficulty, rather than about the risk that failures will occur before we detect them. It seems extremely difficult to manage risks even once they appear, and almost all of the risk comes from our failure to do so.
(Incidentally, I think some other participants in this discussion are advocating for an indefinite pause starting now, and so I’d expect them to be much more optimistic about this step than you appear to be.)
(I’m guessing you’re not assuming that every lab in the world will adopt RSPs, though it’s unclear. And even if every lab implements them, presumably some will make mistakes in evals and/or protective measures.)
I don’t think that voluntary implementation of RSPs is a substitute for regulatory requirements and international collaboration (and tried to emphasize this in the post). In talking about a 10x risk reduction I’m absolutely imagining international coordination to regulate AI development.
In terms of “mistakes in evals” I don’t think this is the right picture of how this works. If you have noticed serious enough danger that leading developers have halted further development, and also have multiple years of experience with those systems establishing alignment difficulty and the nature of dangerous capabilities, you aren’t just relying on other developers to come up with their own independent assessments. You have an increasingly robust picture of what would be needed to proceed safely, and if someone claims that actually they are the one developer who has solved safety, that claim is going to be subject to extreme scrutiny.
I don’t really believe this argument. I guess I don’t think situations will be that “normal-ish” in the world where a $10 trillion industry has been paused for years over safety concerns, and in that regime I think we have more like 3 orders of magnitude of gap between “low effort” and “high effort”, which is actually quite large. I also think there are very likely ways to get several orders of magnitude of additional output with AI systems using levels of caution that are extreme but knowably possible. And even if we can’t solve the problem we could continue to invest in stronger understanding of risk, and with good enough understanding in hand I think there is a significant chance (perhaps 50%) that we could hold off on AI development for many years, such that other game-changing technologies or institutional changes could arrive first.
I don’t think that voluntary implementation of RSPs is a substitute for regulatory requirements and international collaboration (and tried to emphasize this in the post). In talking about a 10x risk reduction I’m absolutely imagining international coordination to regulate AI development.
Appreciate this clarification.
I think that very good RSPs would effectively require a much longer pause if alignment turns out to be extremely difficult.
(but conditioning on a very good implementation)
I’m still confused about the definition of “very good RSPs” and “very good implementation” here. If the evals/mitigations are defined and implemented in some theoretically perfect way by all developers, of course that will lead to drastically reduced risk, but “very good” has a lot of ambiguity. I was taking it to mean something like “~95th percentile of the range of RSPs we could realistically hope to achieve before doom”, but you may have meant something different. It’s still very hard for me to see how, under the definition I’ve laid out, we could get to a 10x reduction. Even just my priors on how large the effect sizes of interventions tend to be feel like they bring it under 10x unless more detailed arguments are given for 10x, but I’ll give some more specific thoughts below.
I think that very good RSPs would effectively require a much longer pause if alignment turns out to be extremely difficult.
In terms of “mistakes in evals” I don’t think this is the right picture of how this works. If you have noticed serious enough danger that leading developers have halted further development, and also have multiple years of experience with those systems establishing alignment difficulty and the nature of dangerous capabilities, you aren’t just relying on other developers to come up with their own independent assessments. You have an increasingly robust picture of what would be needed to proceed safely, and if someone claims that actually they are the one developer who has solved safety, that claim is going to be subject to extreme scrutiny.
I agree directionally with all of the claims you are making, but (a) I’d guess I have much less confidence than you that, even applying very large amounts of effort and accumulated knowledge, we will be able to reliably classify a system as safe or not (especially once it is getting close to and above human level), and (b) even if we could do this reliably after several years, if you have to do a many-year pause there are various other sources of risk, like countries refusing to join or pulling out of the pause, and risks from open-source models, including continued improvements via fine-tuning/scaffolding/etc.
I guess I don’t think situations will be that “normal-ish” in the world where a $10 trillion industry has been paused for years over safety concerns, and in that regime I think we have more like 3 orders of magnitude of gap between “low effort” and “high effort”, which is actually quite large. I also think there are very likely ways to get several orders of magnitude of additional output with AI systems using levels of caution that are extreme but knowably possible
Yeah, “normal-ish” was a bad way to put it. I’m skeptical that 3 marginal OOMs give significantly more than a ~5% probability of tipping the scales, but this is just intuition (if anyone knows of projects on the distribution of alignment difficulty, I would be curious). I agree that automating alignment is important, and that’s where a lot of my hope comes from.
[EDIT: After thinking about this more I’ve realized that I was to some extent conflating my intuition that it will be hard for the x-risk community to make a large counterfactual impact on x-risk % with the intuition that +3 OOMs of effort doesn’t cut more than ~5% of the risk. I haven’t thought much about exact numbers, but maybe ~20% seems reasonable to me now.]
[edited to remove something that was clarified in another comment]
Even just my priors on how large the effect sizes of interventions tend to be feel like they bring it under 10x unless more detailed arguments are given for 10x, but I’ll give some more specific thoughts below.
Hm, at the scale of “(inter-)national policy”, I think you can get quite large effect sizes. I don’t know how large the effect sizes of the following are, but I wouldn’t be surprised by 10x or greater for:
Regulation of nuclear power leading to reduction in nuclear-related harms. (Compared to a very relaxed regulatory regime.)
Regulation of pharmaceuticals leading to reduced side-effects from drugs. (Compared to a regime where people can mostly sell what they want, and drugs only get banned after people notice that they’re causing harm.)
Worker protection standards. (Wikipedia claims that the Netherlands has a ~17x lower rate of fatal workplace accidents than the US, which is ~22x lower than India.) I don’t know what’s driving the differences here, but the difference between the US and Netherlands suggests that it’s not all “individuals can afford to take lower risks in richer countries”.
Thanks for calling me out on this. I think you’re likely right. I will cross out that line of the comment, and I have updated toward the effect size of strong AI regulation being larger and am less skeptical of the 10x risk reduction, but my independent impression would still be much lower (~1.25x or something, whereas before I would have been at ~1.15x).
I still think the AI case has some very important differences from the examples provided, due to the general complexity of the situation and the potentially enormous difficulty of aligning superhuman AIs and preventing misuse (this is not to imply you disagree, just stating my view).
I don’t think you need to reliably classify a system as safe or not. You need to apply consistent standards that output “unsafe” in >90% of cases where things really are unsafe.
I think I’m probably imagining better implementation than you are, probably because (based on context) I’m implicitly anchoring to the levels of political will that would be required to implement something like a global moratorium. I think what I’m describing as “very good RSPs”, and the 10x cut in risk I’m imagining, still require significantly less political will than a global moratorium now (but I think this is a point that’s up for debate).
So at that point you obviously aren’t talking about 100% of countries voluntarily joining (instead we are assuming export controls implemented by the global community on straggling countries, which I don’t even think seems very unrealistic at this point and which IMO is totally reasonable for “very good”), and I’m not convinced open source models are a relevant risk (since the whole proposal is gating precautions on hazardous capabilities of models rather than size, and so again I think that’s fair to include as part of “very good”).
I would strongly disagree with a claim that +3 OOMs of effort and a many-year pause can’t cut risk by much. I’m sympathetic to the claim that >10% of risk comes from worlds where you need to pursue the technology in a qualitatively different way to avoid catastrophe, but again in those scenarios I do think it’s plausible for well-implemented RSPs to render some kinds of technologies impractical and therefore force developers to pursue alternative approaches.
I would strongly disagree with a claim that +3 OOMs of effort and a many-year pause can’t cut risk by much
This seems to be our biggest crux; as I said, I’d be interested in analyses of the distribution of alignment difficulty if any onlookers know of any. Also, a semantic point: under my current views I’d view cutting ~5% of the risk as a huge deal, at least an ~80th-percentile outcome for the AI risk community if it had a significant counterfactual impact on that cut, but yes, not much compared to 10x (some rough arithmetic relating these relative and absolute numbers is sketched below).
[EDIT: After thinking about this more I’ve realized that I was to some extent conflating my intuition that it will be hard for the x-risk community to make a large counterfactual impact on x-risk % with the intuition that +3 OOMs of effort doesn’t cut more than ~5% of the risk. I haven’t thought much about exact numbers, but maybe ~20% seems reasonable to me now.]
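(A minimal arithmetic sketch, just to make the scale of these numbers concrete: it converts relative risk-reduction factors like the 1.15x, 1.25x, and 10x figures above into absolute percentage-point cuts. The 25% baseline risk is purely an illustrative assumption, not a number anyone in this thread has committed to, so only the relative comparison matters.)

```python
# Illustrative sketch: converting relative risk-reduction factors into
# absolute percentage-point cuts. The baseline is an assumed number,
# chosen only to make the comparison concrete.
baseline_risk = 0.25  # assumed P(catastrophe) with no intervention

for factor in (1.15, 1.25, 10.0):
    residual = baseline_risk / factor              # risk remaining after a `factor`x reduction
    cut_points = (baseline_risk - residual) * 100  # absolute cut, in percentage points
    print(f"{factor:>5}x reduction: {baseline_risk:.0%} -> {residual:.1%} "
          f"({cut_points:.1f} percentage points removed)")
```

On this assumed baseline, a ~1.25x reduction removes about 5 percentage points of risk while a 10x reduction removes about 22.5, which is roughly the size of the gap being debated here.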
Quick thoughts on the less cruxy stuff:
You need to apply consistent standards that output “unsafe” in >90% of cases where things really are unsafe.
Fair, though I think 90% would be too low, and the more you raise it, the longer you have to maintain the pause.
(based on context) I’m implicitly anchoring to the levels of political will that would be required to implement something like a global moratorium
This might coincidentally be close to the 95th percentile I had in mind.
So at that point you obviously aren’t talking about 100% of countries voluntarily joining
Fair, I think I was wrong on that point. (I still think it’s likely there would be various other difficulties with enforcing either RSPs or a moratorium for an extended period of time, but I’m open to changing my mind.)
I’m not convinced open source models are a relevant risk (since the whole proposal is gating precautions on hazardous capabilities of models rather than size, and so again I think that’s fair to include as part of “very good”)
Sorry if I wasn’t clear: my worry is about open-source models getting better over time due to new post-training enhancements, not about their capabilities upon release.