More like:
(P1) Currently there is a lot of investment in AI.
(P2) I cannot currently imagine a good roadblock for RSI.
(C) Therefore, I have more reasons to believe RSI will not be entail atypically difficult roadblocks than I do to believe it will.
This is obviously a high-level overview, and a more in-depth response might cite supporting claims, such as RSI likely being an effective strategy for achieving most goals, or mention counterarguments like Robin Hanson’s, which holds that RSI is unlikely given the observed behavior of existing >human systems (e.g. corporations).
In (P2) you talk about a roadblock for RSI, but in (C) you talk about RSI as a roadblock; is that intentional?
This was a typo.
By “difficult”, do you mean something like many hours of human work or many dollars spent? If so, then I don’t see why the current investment level in AI is relevant. The investment level partially determines how quickly RSI will arrive, but not how difficult it is to produce.
In most contexts, the primary implication of a capability problem’s difficulty for safety is when said capability will arrive. I didn’t mean to imply that the amount of investment determines the difficulty of the problem, but that investing additional resources into a problem makes it more likely to be solved faster than it otherwise would be. As a result, the desired effect of RSI being a difficult hurdle to overcome (increasing the window to AGI) wouldn’t be realized.