Have you thought about what the currently most neglected areas of x-risk are, and how to encourage more activities in those areas specifically? Some neglected areas that I can see are:
1. metaphilosophy in relation to AI safety
2. economics of AI risk
3. human-AI safety problems
4. better coordination / exchange of ideas between different groups working on AI risk (see this question; I also have a draft post about this)
Maybe we do need some sort of management layer in x-risk, where some people specialize in looking at the big picture and saying “hey, here’s an opportunity that seems to be neglected, how can we recruit more people to work on it?”, instead of the current situation where we just wait for people to notice such opportunities on their own (which might not be where their comparative advantage lies) and then apply for funding. Maybe this management layer is something that LTFF could help fund, organize, or grow into (since you’re already thinking about similar issues while making grant decisions)?
My second question is: do you do post-evaluations of your past grants to see how successful they were?
(Edit: Added links and reformatted in response to comments.)
Yeah, I have a bunch of thoughts on that. I think I am hesitant about a management layer for a variety of reasons, including loss of viewpoint diversity, the corrupting effects of power, and people not doing as good work when they are told what to do as when they figure out what to do themselves.
My current perspective on this is that I want to solicit from the best people in the field which projects are missing, and then do public writeups for the LTF-Fund where I summarize that and add my own perspective. Trying to improve the current situation on this axis is one of the big reasons why I am investing so much time in writing things up for the LTF-Fund.
Re the second question: I expect I will do at least some post-evaluation, but probably nothing super formal, mostly because of time constraints. I wrote some more in response to the same question here.
Perhaps it’d be useful if there were a group that took more of a dialectical approach, as in a philosophy class? For example, it could collect different perspectives on what needs to happen for AI to go well, and help people understand the assumptions under which the project they are considering would be valuable.
Can you say more about what you mean by item 1, metaphilosophy in relation to AI safety? Thanks.
How strongly do you think improving human metaphilosophy would improve computational metaphilosophy?
Minor – I found the formatting of that comment slightly hard to read. I would have preferred more paragraphs, and possibly breaking the numbered items into separate lines.