Have you guys noticed that, while the notion of AI x-risk is gaining credibility thanks to some famous physicists, there is no mention of Eliezer and only a passing mention of MIRI? Yet Irving Good, who pointed out the possibility of recursive self-improvement without linking it to x-risk, is right there. Seems like a PR problem to me. Either the people raising the profile of the issue don't associate it with EY/MIRI, or they consider him too low-status to mention publicly. Both possibilities are clearly detrimental to MIRI's fundraising efforts.
See also this old post where Robin Hanson basically predicted that this would happen:

The contrarian will have established some priority with these once-contrarian ideas, such as being the first to publish on or actively pursue related ideas. And he will be somewhat more familiar with those ideas, having spent years on them.
But the cautious person will be more familiar with standard topics and methods, and so be in a better position to communicate this new area to a standard audience, and to integrate it in with other standard areas. More important to the “powers that be” hoping to establish this new area, this standard person will bring more prestige and resources to this new area.
If the standard guy wins the first few such contests, his advantage can quickly snowball into an overwhelming one. People will prefer to cite his publications as they will be in more prestigious journals, even if they were not quite as creative. Reporters will prefer to quote him, students will prefer to study under him, firms will prefer to hire him as a consultant, and journals will prefer to publish him, as he will be affiliated with more prestigious institutions. And of course the contrarian may have a worse reputation as a “team player.”
I think this is fine. Convincing people that this is a Real Thing and then specifically making them aware of Eliezer and MIRI should be done separately anyway. Doing the second thing too soon may make the first thing harder, while doing the second thing late makes the first thing easier (because then AI x-risk can be put in a mental category other than “that weird thing that those weird people care about”).