Yup, that’s basically right. In the previous post, I mentioned that the goal of this sort of thing for X-risk purposes would be to incentivize AI companies to proactively look for safety problems ahead of time, and to actually hold back deployment and/or actually fix the problems in a generalizable way when they do come up. They wouldn’t be incentivized anywhere near the efficient level for avoiding X-risk, but they’d at least be incentivized to put in place safety processes with actual teeth at all.