My current working take is that it is at the level of a median-but-dedicated undergraduate at a top university who is interested in and enthusiastic about AI safety. But Deep Research can do in 10 minutes what would take that undergraduate about 20 hours.
Happy to try a prompt for you and see what you think.
How about “Please summarise Eliezer Yudkowsky’s views on decision theory and its relevance to the alignment problem”.
People have said that to get a good prompt it’s better to first have a discussion with a model like o3-mini, o1, or Claude to clarify various details about what you are imagining, and then give the whole conversation as a prompt to OA Deep Research.
Here you go: https://chatgpt.com/share/67a34222-e7d8-800d-9a86-357defc15a1d
Thanks, it seems pretty good on a quick skim. I’m a bit less certain about the corrigibility section, and more issues might become apparent if I read through it more slowly.