Paul Christiano might still be active in funding stuff. (There are a few more links to funding opportunities in the comments of that post.)
Thanks.
I’ll look into those possibilities. However, while my proposed work relates to AI alignment, it is not focused on that issue, and I’d consider it “outside the dominant paradigm” of AI alignment work.
Edited to add: I was going to write a separate post about those possibilities, but it appears that this website is a reasonably up-to-date summary of all the funding sources linked from that post, so repeating that work myself would be redundant.