There’s a reason I haven’t done any projects at the intersection of PL and AI, despite my huge comparative advantage at it.
What’s PL? Programming languages?
As context (not really disagreeing), afaik those meetings are between DeepMind’s AGI safety team and FHI. Pushmeet is not on that team and so probably doesn’t attend those meetings.
I guess I was imagining that people in the AGI safety team must know about the “AI for science” project that Pushmeet is heading up, and Pushmeet also heads up the ML safety team, which he says collaborates “very, very closely” with the AGI safety team, so they should have a lot of chances to talk. Perhaps they just talk about technical safety issues, and not about strategy.
Specification learning is also explicitly called out in Towards Robust and Verified AI: Specification Testing, Robust Training, and Formal Verification (AN #52).
Do you know if there are any further details about it somewhere, aside from just the bare idea of “maybe we can learn specifications from evaluative feedback”?
Yes, sorry for the jargon.
Not to my knowledge.