I disagree with this post for one reason:
Amdahl’s law limits how much cyborgism will actually work, and IMO is the reason agents are more effective than simulators.
On Amdahl’s law, John Wentworth’s post on the long tail is very relevant here, as it limits the use of cyborgism here:
https://www.lesswrong.com/posts/Nbcs5Fe2cxQuzje4K/value-of-the-long-tail
For context, Amdahl’s law states that the overall speedup of a process is bottlenecked by its serial parts. E.g. you can have 100 people help make a cake really quickly, but it still takes ~30 minutes to bake.
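Concretely, Amdahl’s law says speedup = 1 / ((1 − p) + p/n), where p is the fraction of the work that parallelizes and n is the number of workers. A minimal sketch (the cake numbers here are my own illustrative assumption: a 60-minute total, so the 30-minute bake makes the serial fraction half the work):

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Overall speedup when only parallel_fraction of the work parallelizes.

    The (1 - p) term is the serial part, which no number of workers
    can shrink; it dominates as n_workers grows.
    """
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_workers)

# Cake example (assumed): 30 min of parallelizable prep + 30 min serial bake,
# so p = 0.5. Even 100 helpers give less than a 2x speedup:
print(amdahl_speedup(0.5, 100))  # ~1.98, capped at 2.0 as n -> infinity
```

The point for cyborgism: if the human-in-the-loop is the serial term, adding more AI parallelism runs into the same hard cap.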
I’m assuming here that the human component is the serial component we will be bottlenecked on, so it will be outcompeted by agents?
If so, we should try to build the tools and knowledge to keep humans in the loop as far as we can. I agree it will eventually be outcompeted by full AI agency alone, but it isn’t set in stone how far human-steered AI can go.
Basically yes. My point here is that we are in approximately the worst case, and this option is probably not going to accelerate things very much compared to an agentic architecture.
I think a crux here is that the long tail bites hard, and thus I don’t think this approach provides much progress compared to the traditional approach of alignment research.
My guess is that it will speed things up only a little: I’d be very surprised if it improves research rates by even 50%.
I’m unsure how alt-history, and point (2) (that history is hard to change and predictable), relate to cyborgism. Could you elaborate?
I might want to remove that point for now.