Wow! Glad good things are already coming out of this!
Thanks for sharing your experiences and the warning that came with them (this is the type of post I'd like to promote!). Still, I predict I'll do well in this program for the reason TurnTrout gave in the other comment: I enjoy a lot of what I'm doing! *actually considers each item*... yep! This is honestly what I'd rather be doing than a lot of other things, so I feel like Nate Soares in that regard (in the post of his I linked).
Regarding my why/motivation/someone to protect, I'm going to leave that for a separate post. I wanted this one to be a short & to-the-point intro. My "why" post will be much more poetic and wouldn't fit here, and to keep the separation clean, I'm treating it as a terminal goal here.
Though I would love to clarify my instrumental goals for achieving that terminal goal! Those are the three bullet points: better self-model, feedback, & self-improvement.
Better self-model: I would like to ~maximize my usefulness, which would require working hard for several years (so this is closest to "productivity/biological limits"). Getting the most bang for my buck from those years means finding a sustainable sprint/jog pace, so I'm making predictions and testing them to build a more accurate self-model.
Self-improvement: I feel lacking in math and in technical knowledge of open problems in AI safety (as well as how progress has been made on them so far).