There are two basic ways to increase the number of AI Safety researchers. 1) Take mission-aligned people (usually EA undergraduates) and help them gain the skills. 2) Take a skilled AI researcher and convince them to join the mission.
I think these two types of growth may have very different effects.
A type 1 new person might take some time to get any good, but will be mission aligned. If that person loses sight of the real problem, I am very optimistic about just reminding them what AI Safety is really about, and they will get back on track. Furthermore, these people already exist and are already trying to become AI Safety researchers. We can help them, ignore them, or tell them to stop. Ignoring them will produce more noise than helping them, since the normal pressure of building academic prestige is currently not very aligned with the mission. So do we support them or tell them to stop? Actively telling people not to try to help with AI Safety seems very bad; it is something I would expect to have bad cultural effects beyond just regulating how many people are doing AI Safety research.
A type 2 new person who is converted to AI Safety research because they actually care about the mission is not too dissimilar from a type 1 new person, so I will not write more about that.
However, there is another type of type 2 person who will be attracted to AI Safety as a side effect of AI Safety being cool and interesting. I think there is a risk that these people take over the field and divert the focus completely. I'm not sure how to stop this, though, since it is a direct side effect of gaining respectability, and AI Safety will need respectability. And we can't just work in the shadows until the time is right, because we don't know the timelines. The best plan I have for keeping global AI Safety research on course is to put as many of “our” people into the field as we can. We have a founder-effect advantage, and I expect this to get stronger the more truly mission-aligned people we can put into academia.
I agree with alexflint that there are bad growth trajectories and good growth trajectories, but I don't think the good ones are as hard to hit as they suggest. I think part of what is wrong is the model of AI Safety as a single company; I don't think this is a good intuition pump. Noise is a real thing, but it is much less intrusive than this metaphor suggests. Someone at MIRI told me that, to a first approximation, he doesn't read other people's work, so at least for this person it doesn't matter how much noise is published. I think this is a normal situation, especially for people interested in deep work.
What mostly keeps people in academia from doing deep work is the pressure to constantly publish.
I think focusing on growth vs. no growth is the wrong question, but focusing on deep work is the right question. So let's help people do deep work. Or at least, that's what I aim to do. And I'm happy to discuss this with anyone.
Thank you for this thoughtful comment, Linda. Writing this reply has helped me to clarify my own thinking on growth and depth. My basic sense is this:
If I meet someone who really wants to help out with AI safety, I want to help them to do that, basically without reservation, regardless of their skill, experience, etc. My sense is that we have a huge and growing challenge in navigating the development of advanced AI, and there is just no shortage of work to do, though it can at first be quite difficult to find. So when I meet individuals, I will try to help them find out how to really help out. There is no need for me to judge whether a particular person really wants to help out or not; I simply help them see how they can help out, and those who want to help out will proceed. Those who do not want to help out will not proceed, and that’s fine too—there are plenty of good reasons for a person to not want to dive head-first into AI safety.
But it’s different when I consider setting up incentives, which is what @Larks was writing about:
My basic model for AI safety success is this:
Identify interesting problems. As a byproduct this draws new people into the field through altruism, nerd-sniping, apparent tractability.
Solve interesting problems. As a byproduct this draws new people into the field through credibility and prestige.
I’m quite concerned about “drawing people into the field through credibility and prestige” and even about “drawing people into the field through altruism, nerd-sniping, and apparent tractability”. The issue is not the people who genuinely want to help out, whom I consider to be a boon to the field regardless of their skill or experience. The issue is twofold:
Drawing people who are not particularly interested in helping out into the field via incentives (credibility, prestige, etc).
Tempting those who do really want to help out and are already actually helping out to instead pursue incentives (credibility, prestige, etc).
So I’m not skeptical of growth via helping individuals, I’m skeptical of growth via incentives.
Ok, that makes sense. Seems like we are mostly on the same page then.
I don't have strong opinions on whether drawing in people via prestige is good or bad. I expect it is probably complicated. For example, there might be people who want to work on AI Safety for the right reasons, but are too agreeable to do it unless it reaches some level of acceptability. So I don't know what the effects will be on net. But I think it is an effect we will have to handle, since prestige will be important for other reasons.
On the other hand, there are lots of people who really do want to help, for the right reason. So if growth is the goal, helping these people out seems like just an obvious thing to do. I expect there are ways funders can help out here too.
I would not update much on the fact that currently most research is produced by existing institutions. It is hard to do good research, and even harder without the colleagues, salary, and other support that come with being part of an org. So I think there is a lot of room for growth, just by helping the people who are already involved and trying.
I very much agree with these two:

On the other hand, there are lots of people who really do want to help, for the right reason. So if growth is the goal, helping these people out seems like just an obvious thing to do.

So I think there is a lot of room for growth, just by helping the people who are already involved and trying.