Yeah, I have a bunch of thoughts on that. I am hesitant about a management layer for a variety of reasons, including reduced viewpoint diversity, the corrupting effects of power, and people tending to do worse work when they are told what to do rather than figuring it out themselves.
My current perspective is that I want to solicit from the best people in the field which projects are missing, and then write public summaries for the LTF-Fund where I aggregate those views and add my own perspective. Trying to improve the current situation on this axis is one of the big reasons why I am investing so much time in writing things up for the LTF-Fund.
Re: the second question: I expect I will do at least some post-evaluation, but probably nothing very formal, mostly because of time constraints. I wrote some more in response to the same question here.
Perhaps it’d be useful if there were a group that took more of a dialectical approach, as in a philosophy class? For example, it could collect different perspectives on what needs to happen for AI to go well, and help people understand the assumptions under which the project they are considering would actually be valuable.