Develop AIs which are very dumb within a forward pass, but which are very good at using natural language reasoning such that they are competitive with our current systems. Demonstrate that these AIs are very unlikely to be scheming due to insufficient capacity outside of natural language (if we monitor their chains of thought). After ruling out scheming, solve other problems which seem notably easier.
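As a rough illustration of what the chain-of-thought monitoring in this setup could look like, here is a minimal sketch under assumed interfaces; `query_model` and `query_monitor` are hypothetical stand-ins for whatever model and monitor are in use, not any particular API:

```python
# Hypothetical sketch: the model is weak within a forward pass, so any
# substantive reasoning must pass through the visible transcript, where a
# separate monitor can inspect every step before it is committed.
from typing import Callable, List

def monitored_reasoning(
    task: str,
    query_model: Callable[[str], str],     # one forward pass: transcript -> next step
    query_monitor: Callable[[str], bool],  # True if a step looks like scheming
    max_steps: int = 50,
) -> List[str]:
    transcript: List[str] = [f"Task: {task}"]
    for _ in range(max_steps):
        step = query_model("\n".join(transcript))
        if query_monitor(step):
            raise RuntimeError(f"Monitor flagged step: {step!r}")
        transcript.append(step)  # only monitored steps become context
        if step.strip().endswith("DONE"):
            break
    return transcript
```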
Pursue a very different AI design which is much more modular and more hand-constructed (as in, more GOFAI-style). This can involve using many small, dumb neural components, but the system needs to be sufficiently interpretable in aggregate, which might be hard. This can be done by having the AIs apply huge amounts of labor.
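A minimal sketch of what this modular, hand-constructed style could look like: small components with narrow interfaces, wired together by hand, with every intermediate value logged so the aggregate dataflow is inspectable even when individual components are neural. All names here are illustrative, not a reference to any existing framework:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List, Tuple

@dataclass
class Component:
    name: str
    fn: Callable[[Any], Any]  # a small neural net or plain interpretable code

@dataclass
class Pipeline:
    components: List[Component]
    trace: List[Tuple[str, Any, Any]] = field(default_factory=list)

    def run(self, x: Any) -> Any:
        for c in self.components:
            y = c.fn(x)
            self.trace.append((c.name, x, y))  # every intermediate is logged
            x = y
        return x
```

The point of the trace is that interpretability lives at the level of the wiring and the logged intermediates, rather than inside any one component.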
These are two of the main ideas I’m excited about. I’d quickly flag:
1) For the first one, “Demonstrate that these AIs are very unlikely to be scheming due to insufficient capacity outside of natural language” → I imagine that in complex architectures, these AIs would also be unlikely to scheme because of other limitations. A complex composite system involves many LLM calls, and each call can be given very tight information and capability restrictions (the first sketch after point 2 below illustrates this). Also, we might ensure that each call is optimized only for its specific sub-request, rather than the LLM aiming to optimize what the entire system does.
2) On the second, I expect that some of this will happen pretty naturally. “LLMs writing code” is already happening, and it seems easy to build creative combinations of LLM agents that write code they know will be useful for their own reasoning later on. In theory, any function that could run either via an LLM or via interpretable code should run via interpretable code (see the second sketch below). As LLMs get very smart, they might find cleverer ways to write interpretable code covering a lot of what LLMs are currently used for. Over time, composite architectures would rely more and more on this code for their reasoning processes. (Even better would be interpretable and formally verified code.)
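A minimal sketch of the information/capability restriction in point 1, assuming some generic `call_llm` function (the name and the whole interface are illustrative placeholders, not a real API): each call sees only a minimal, task-specific context and must answer within a constrained schema, so no single call has the information or the output channel needed to optimize system-wide behavior.

```python
from typing import Callable

def restricted_call(
    call_llm: Callable[[str], str],  # hypothetical stand-in for the model API
    subtask: str,
    minimal_context: str,            # only the facts this subtask needs
    allowed_answers: set,            # capability restriction on the output
) -> str:
    prompt = (
        f"Context (nothing else is available to you): {minimal_context}\n"
        f"Subtask: {subtask}\n"
        f"Answer with exactly one of: {sorted(allowed_answers)}"
    )
    answer = call_llm(prompt).strip()
    if answer not in allowed_answers:
        raise ValueError(f"Out-of-schema answer rejected: {answer!r}")
    return answer
```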
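And a minimal sketch of the dispatch idea in point 2: prefer a vetted, interpretable code implementation whenever one exists, and treat the LLM call as a fallback to be migrated away from over time. Again, all names are hypothetical:

```python
from typing import Any, Callable, Dict

class Dispatcher:
    def __init__(self, llm_fallback: Callable[[str, Any], Any]):
        self.code_impls: Dict[str, Callable[[Any], Any]] = {}
        self.llm_fallback = llm_fallback

    def register(self, name: str, impl: Callable[[Any], Any]) -> None:
        """Install a reviewed, interpretable implementation for `name`."""
        self.code_impls[name] = impl

    def run(self, name: str, arg: Any) -> Any:
        if name in self.code_impls:          # interpretable path, preferred
            return self.code_impls[name](arg)
        return self.llm_fallback(name, arg)  # opaque path, to be phased out
```

On this picture, LLM agents would gradually populate the registry themselves, shrinking the set of functions that still route through the opaque fallback.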
I expect substantially more integrated systems than you do at the point when AIs are obsoleting (almost all) top human experts, so I don’t expect these things to happen by default, and indeed I think it might be quite hard to get them to work.