Yeah, I read those lines, and also “Want to use your engineering skills to push the frontiers of what state-of-the-art language models can accomplish”, and remain skeptical. I think OpenAI tends to equivocate on how they use the word “alignment” (or: they use it consistently, but not in a way that I consider obviously good). Like, I think the people working on RLHF a few years ago probably contributed to ChatGPT being released earlier, which I think was bad.*
*I like the part where the world feels like it’s actually starting to respond to AI now, but, I think that would have happened later, with more serial-time for various other research to solidify.
(I think this reflects a broader difference in guesses about which research/approaches are good, which I’m not actually very confident about, esp. compared to habryka, but it’s where I’m currently coming from.)
*I like the part where the world feels like it’s actually starting to respond to AI now, but, I think that would have happened later, with more serial-time for various other research to solidify.
And with less serial-time for various policy plans to solidify and gain momentum.
If you think we’re irreparably far behind on the technical research, and that advocacy / political action is relatively more promising, you might prefer to trade years of timeline for earlier, more widespread awareness of the importance of AI, and a relatively long period of people pushing on policy plans.