Alas, I think it’s quite unlikely that this article will make somebody fund me. It’s just that I noticed how extremely slow I am (without collaborators) at putting together a proper grant application.
IDGI. Why don’t you work w/ someone to get funding? If you’re 15x more productive, then you’ve got a much better shot at finding/filling out grants and then getting funding for you and your partner.
EDIT: Also, you’re a game dev and hence good at programming. Surely you could work for free as an engineer at an AI alignment org or something and then shift into discussions w/ them about alignment?
To be clear: his motivation isn’t “I want to contribute to alignment research!” He’s aiming to actually solve the problem. If he works as an engineer at an org, he’s not pursuing his project, and he’d be approximately 0% as usefwl.
So am I. So are a lot of would-be researchers. There are many people who think they have a shot at doing this. Most are probably wrong. I’m not saying an org is a good solution for him or me. It would have to be an org willing to encompass and support the things he had in mind. Same with me. I’m not sure such orgs exist for either of us.
With a convincing track record, one can apply for funding to found or co-found a new org based on one’s ideas. That’s a very high bar to clear, though.
The FAR AI org might be an adequate solution? They are an organization for coordinating independent researchers.
Oh, I see! That makes a lot more sense. But he should really write up/link to his project then, or his collaborator’s project.
I have this description, but it’s not that good because it’s very unfocused. That’s why I did not link it in the OP. The LessWrong dialog linked at the top of the post is probably the best description of the motivation and what the project is about at a high level.
I can’t see a link to any LW dialog at the top.
At the top of this document.
Thanks!
He linked his extensive research log on the project above, and has made LW posts of some of their progress. That said, I don’t know of any good legible summary of it. It would be good to have. I don’t know if that’s one of Johannes’ top priorities, however. It’s never obvious from the outside what somebody’s top priorities ought to be.
Yeah, so… I find myself feeling like I have some things in common with the post author’s situation.

I don’t think “work for free at an alignment org” is really an option? I don’t know of any alignment orgs offering unpaid internships. An unpaid worker isn’t free for an org: you still need to coordinate them, assess their output, etc. The issues of team bloat and of how much to try to integrate a volunteer are substantial.
I wish I had someone I could work with on my personal alignment agenda, but it’s not easy to find someone who is interested enough in the same topic, and trustworthy enough, that I’d want to commit to working with them.
Which brings up another issue. Research with potential capabilities side effects is always going to be a temptation to some degree. How can potential collaborators or grant makers trust that researchers will resist the temptation to cash in on powerful advances, and will also keep the ideas from leaking? If the ideas are unsafe to publish, they can’t contribute piecemeal to the field of alignment research; they have to be valuable on their own. That sets a much higher bar for success, which makes it seem like a riskier bet from the perspective of funders.

One way to partially ameliorate this trust issue is orgs/companies. They can thoroughly investigate a person’s competence and trustworthiness, and once onboarded, the person can potentially contribute to a variety of different projects. Management can supervise and ensure that individual contributors are acting in alignment with the company’s values and rules. That’s a hard thing for a grant-making institution to do: they can’t afford that level of initial evaluation, much less the ongoing supervision and technical guidance. So… yeah. Tougher problem than it seems at first glance.