It sounds like you may be assuming that people will roll out a technology when its reliability meets a certain level X, so that raising the reliability of AI systems has little or no effect on the reliability of deployed systems (it will just be X).
Yes, this is more or less my assumption. I think slower progress on OODR will delay release dates of transformative tech much more than it will improve quality/safety on the eventual date of release.
A more plausible model is that deployment decisions will be based on many axes of quality, e.g. suppose you deploy when the sum of reliability and speed reaches some threshold Y. If that’s the case, then raising reliability will improve the reliability and decrease the speed of deployed systems. If you think that increasing the reliability of AI systems is good (e.g. because AI developers want their AI systems to have various socially desirable properties and are limited by their ability to robustly achieve those properties) then this would be good.
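The multi-axis threshold model above can be sketched in a few lines. Everything here is a made-up illustration: the 0–100 scale per axis, the threshold value, and the two "worlds" are assumptions, not figures from the discussion.

```python
# Hypothetical multi-axis deployment model: deploy once the sum of
# reliability and speed scores reaches a threshold Y. All numbers are
# illustrative assumptions (integer scores on an arbitrary 0-100 scale).

DEPLOY_THRESHOLD_Y = 150  # combined score required to deploy

def would_deploy(reliability, speed, threshold=DEPLOY_THRESHOLD_Y):
    """True once combined quality crosses the deployment threshold."""
    return reliability + speed >= threshold

# World A: baseline allocation of development effort.
# World B: OODR research made reliability cheaper to achieve, so some
# effort shifts from speed to reliability; deployment still happens
# at the same combined threshold.
world_a = (60, 90)  # (reliability, speed)
world_b = (80, 70)

assert would_deploy(*world_a) and would_deploy(*world_b)
# Both worlds deploy, but World B's deployed system is more reliable
# (80 vs. 60): under a multi-axis threshold, OODR progress raises
# deploy-day reliability rather than only shifting the deploy date.
```

The design point of the sketch is just that when the deployment criterion aggregates several quality axes, improving one axis changes the composition of the deployed system, not merely its release date.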
I’m not clear on what part of that picture you disagree with or if you think that this is just small relative to some other risks.
Thanks for asking; I do disagree with this! I think reliability is a strongly dominant factor in decisions about deploying real-world technology, such that to me it feels roughly correct to treat it as the only factor. In this way of thinking, which you rightly attribute to me, progress in OODR doesn’t improve reliability on deployment-day, it mostly just moves deployment-day a bit earlier in time.
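The contrasting reliability-dominant model can be sketched the same way. The linear growth rates, starting point, and threshold below are all made-up assumptions for illustration, not claims from the discussion.

```python
# Hypothetical reliability-dominant model: deploy as soon as reliability
# alone reaches a fixed threshold X. Growth rates and threshold are
# illustrative assumptions (integer percentage points per year).

DEPLOY_THRESHOLD_X = 90  # reliability required to deploy, in percent

def deployment_day(growth_per_year, threshold=DEPLOY_THRESHOLD_X):
    """Years until reliability first reaches the deployment threshold,
    assuming linear reliability growth from zero."""
    years, reliability = 0, 0
    while reliability < threshold:
        reliability += growth_per_year
        years += 1
    return years

slow = deployment_day(10)  # slower OODR progress: deploys in year 9
fast = deployment_day(15)  # faster OODR progress: deploys in year 6
# Deploy-day reliability is ~X in both worlds; the extra OODR progress
# only moved the deployment date earlier, which is the claim above.
```

Under this model, raising the rate of reliability progress changes *when* the threshold is crossed but not the reliability of the system that crosses it.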
That’s not to say I’m advocating being afraid of OODR research because it “shortens timelines”, only that I think contributions to OODR are not particularly directly valuable to humanity’s long-term fate. As the post emphasizes, if someone cares about existential safety and wants to devote their professional ambition to reducing x-risk, I think OODR is of high educational value for them to learn about, and as such I would be against “censoring” it as a topic to be discussed here.