Sorry, that was an off-the-cuff example meant to gesture towards the main idea; I didn’t mean to imply it’s a working instance (it’s not). The idea I’m going for is:
I’m expecting future AIs to be less single LLMs (like Llama) and more loops, search, and scaffolding (like o1)
Those AIs will be composed of individual pieces
Maybe we can try making the AI pieces mutually dependent in such a way that it’s a pain to get the AI working at peak performance unless you include the safety pieces
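To make the mutual-dependence idea concrete, here is a minimal toy sketch. Everything in it (the `Monitor` and `SearchAgent` names, the length-based scoring, the string-match safety check) is invented purely for illustration: the point is only the shape of the dependency, where a single component both filters unsafe actions and supplies the scores the search loop needs, so stripping out the safety piece also cripples the search.

```python
# Hypothetical sketch of "mutually dependent pieces": the monitor is
# dual-use, acting as both (a) the safety filter and (b) the only
# source of the value estimates the search uses to rank candidates.
# Remove it and the agent still runs, but degrades to unguided choice.
import random

class Monitor:
    """Dual-use piece: safety filter AND the only source of scores."""
    def evaluate(self, action):
        score = len(action)            # stand-in for a learned value estimate
        safe = "unsafe" not in action  # stand-in for a real safety check
        return score, safe

class SearchAgent:
    def __init__(self, monitor=None):
        self.monitor = monitor

    def act(self, candidates):
        if self.monitor is None:
            # Safety piece stripped out: no scores, so no informed search.
            return random.choice(candidates)
        scored = [(s, a) for a in candidates
                  for s, safe in [self.monitor.evaluate(a)] if safe]
        return max(scored)[1]  # best candidate that passed the safety check

candidates = ["ok-short", "ok-much-longer-plan", "unsafe-but-longest-plan!"]
with_safety = SearchAgent(Monitor()).act(candidates)
without_safety = SearchAgent(None).act(candidates)
print(with_safety)  # "ok-much-longer-plan": best action among the safe ones
```

In a real system the "score" would be whatever expensive signal the loop depends on (a value model, a verifier, a reranker); the design question is whether that signal can be entangled with the safety machinery so the two aren’t cheaply separable.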