I see on your profile that you've already done a PhD in machine learning, which gives you a kind of context I'm happy to see. I'd love to talk synchronously at some point — do you have time for a voice call or a focused text chat about your research background? I'm just an independent researcher, so I'm not asking because I can fund you; I'm simply interested in exchanging perspectives with people who have experience.
I'm most interested in fixing the QACI plan and fully understanding what it would make sense to mean, in all edge cases, by the word "agency". Some great progress has been made on that — e.g., check out the two papers mentioned in the comments of this post: https://www.lesswrong.com/posts/JqWQxTyWxig8Ltd2p/relative-abstracted-agency
a lot of what we're worried about is AI reaching simulation-grade agency over us before we've reached simulation-grade agency in return. this could happen for any of a variety of reasons. curious about your thoughts!
Thanks!
DM’d