You really think humans are terrible at building AGI after the sudden success of LLMs? I think success builds on success, and (neural net-based) intelligence is actually turning out to be a lot easier than we thought.
I have been involved in two major projects of hooking up different components of cognitive architectures. It was a nightmare, as you say. Yet there are already rapid advances in hooking up LLMs to different systems in different roles, for the reasons Nathan gives: their general intelligence makes them better at controlling other subsystems and taking in information from them.
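The "LLM as central controller" pattern being described can be sketched roughly as below. This is an illustrative toy, not any particular project's architecture: the routing function stands in for a real LLM call, and all names are hypothetical.

```python
# Toy sketch of an LLM-as-controller architecture: a general-purpose model
# routes each observation to a specialized subsystem and collects the result.

def llm_choose_subsystem(observation: str) -> str:
    """Stand-in for an LLM call that decides which subsystem should handle
    an observation. A real system would prompt a model; this toy uses
    keyword rules purely for illustration."""
    if "image" in observation:
        return "vision"
    if "compute" in observation or "number" in observation:
        return "calculator"
    return "memory"

# Hypothetical subsystems, stubbed out as simple functions.
SUBSYSTEMS = {
    "vision": lambda obs: f"vision subsystem processed: {obs}",
    "calculator": lambda obs: f"calculator subsystem processed: {obs}",
    "memory": lambda obs: f"memory subsystem stored: {obs}",
}

def controller_step(observation: str) -> str:
    """One step of the control loop: the LLM picks a subsystem, the
    subsystem handles the input, and its output flows back to the
    controller for the next step."""
    choice = llm_choose_subsystem(observation)
    return SUBSYSTEMS[choice](observation)
```

The point of the pattern is that the hard integration work (deciding what goes where, and interpreting what comes back) is delegated to the general model rather than hand-engineered glue code, which is why these hookups have gone faster than earlier cognitive-architecture efforts.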
Perhaps I should qualify what I mean by "easy". Five years is well within my timeline. That's not a lot of time to work on alignment. And less than five years for scary capabilities is also quite possible. It could be longer, which would be great, but shouldn't at least a significant subset of us be working on the shortest realistic timeline scenarios? Giving up on them makes no sense.
I’m not convinced that LLM agents are useful for anything.
Me either!
I’m convinced that they will be useful for a lot of things. Progress happens.
Eventually, yes, but agency is not sequence prediction plus a few hacks. The remaining problems are hard. Massive compute, investment, and enthusiasm will lead to faster progress. I objected to five-year timelines after ChatGPT, but now it's been a couple of years. I think five years is still too soon, but I'm not sure.
Edit: After Nathan offered to bet that my claim is false, I bet no at 82% on his market claiming (roughly) that inference compute is as valuable as training compute for GPT-5: https://manifold.markets/NathanHelmBurger/gpt5-plus-scaffolding-and-inference. I expect this will be difficult to resolve because o1 is the closest we will get to a GPT-5, and it presumably benefits from both more training (including RLHF) and more inference compute. I think it's perfectly possible that well-thought-out reinforcement learning can be as valuable as pretraining, but for practical purposes I expect that scaling inference compute on a base model will not yield qualitative improvements. I will reach out about more closely related bets.