Surely you could work for free as an engineer at an AI alignment org or something and then shift into discussions w/ them about alignment?
To be clear: his motivation isn’t “I want to contribute to alignment research!” He’s aiming to actually solve the problem. If he works as an engineer at an org, he won’t be pursuing his own project, and he’d be approximately 0% as useful.
So am I. So are a lot of would-be researchers. There are many people who think they have a shot at doing this. Most are probably wrong. I’m not saying an org is a good solution for him or me. It would have to be an org willing to encompass and support the things he has in mind. Same with me. I’m not sure such orgs exist for either of us.
With a convincing track record, one can apply for funding to found or co-found a new org based on one’s ideas. That’s a very high bar to clear, though.
FAR AI might be an adequate solution? It’s an organization for coordinating independent researchers.
Oh, I see! That makes a lot more sense. But he should really write up/link to his project then, or his collaborator’s project.
I have this description, but it’s not that good because it’s very unfocused. That’s why I didn’t link it in the OP. The LessWrong dialog linked at the top of the post is probably the best thing in terms of describing the motivation and what the project is about at a high level.
I can’t see a link to any LW dialog at the top.
At the top of this document.
Thanks!
He linked his extensive research log on the project above, and has made LW posts about some of their progress. That said, I don’t know of any good legible summary of it. It would be good to have one. I don’t know if that’s one of Johannes’ top priorities, however. It’s never obvious from the outside what somebody’s top priorities ought to be.