I think it’s an interesting framing of the systems that exist today, although I expect the key crux between your views and the more traditional MIRI views is that MIRI-folks seem to expect recursive self-improvement at some point when the system suddenly replaces its learned heuristics with the ability to optimise in real-time.
Update: After discussing with Dragongod, I agree with his point that explicit thought is slow, so powerful AIs would likely want to use heuristics for most decisions, reserving explicit thought for novel situations where it works best. I would also emphasise the importance of explicit thought for updating heuristics.