Thanks for sharing your experiences from the field.
Do you feel like it gave you any insights into AGI timelines, or informed any views you have about safety/alignment or transparency/interpretability?
See my reply to Yitz, tl;dr I am hedgy about alignment being an issue because I am hedgy about algorithms getting “too powerful”, and I am hedgy about whether transparency or interpretability is even a coherent concept for any algorithm complex enough to not already be transparent (barring some edge cases).
But it’s not like I’m the best person to say, far from it.