I see on your profile that you already have a PhD in machine learning, which gives you a kind of context I'm happy to see. I'd love to talk synchronously at some point; do you have time for a voice call or a focused text chat about your research background? I'm just an independent researcher, so I'm not asking because I have any ability to fund you; I'm simply interested in exchanging perspectives with people who have experience.
I'm most interested in fixing the QACI plan and in pinning down what it would make sense for the word "agency" to mean in all edge cases. Some good progress has been made on that; see, e.g., the two papers mentioned in the comments of this post: https://www.lesswrong.com/posts/JqWQxTyWxig8Ltd2p/relative-abstracted-agency
A lot of what we're worried about is AI reaching simulation-grade agency over us before we reach simulation-grade agency in return; this could occur for any of a variety of reasons. Curious about your thoughts!
Thanks
> key problems
Is there a blog post listing the key problems?
> sharing their plans
Where is the best place to share? Once I come up with a plan I’m happy with, is there value in posting it on this site?
There are a number of intro posts floating around. https://stampy.ai/ is one major project to make an organized list of topics. I keep meaning to look up more and then getting distracted, so I'm sending this instead of nothing.
edit: here are some more https://www.lesswrong.com/tag/ai-alignment-intro-materials
Thanks!
DM’d