Did you make any progress on choosing a course? My brief pitch is this: LLM agents are our most likely route to AGI, and particularly likely in short timelines. Aligning them is not the same as aligning the base LLMs. Yet almost no one is working on bridging that gap.
That’s what I’m working on. More can be found in my user profile.
I do think this work has high prospective impact. I'm not sure what you mean by low prospective risk. The work has good odds of being at least somewhat useful, since the area is so neglected and it's widely agreed that language model agents (also called foundation model agents or LLM cognitive architectures) are a likely path to the first AGI.
I’m happy to talk more. I meant to respond here sooner.