Not speaking for multi, but in any x-risk area (blowing up asteroids, stabilizing nuclear powers, global warming, catastrophic viral outbreak, climate change of whatever sort, FAI, whatever), there are degrees of realism among those working on the problem:
“I am working on a project that may have a massive effect on future society. While the chance that I specifically am a key person on the project is remote, given the fine minds at (Google/CDC/CIA/whatever), I still might be, and that’s worth doing.”—Probably sane, even if misguided.
“I am working on a project that may have a massive effect on future society. I am the greatest mind in the field. Still, many other smart people are involved. The specific risk I am worried about may or may not occur, but efforts to prevent its occurrence are valuable. There is some real possibility that I will be the critical person on the project.”—Possibly sane, even if misguided.
“I am working on a project that will save a near-infinite number of universes. In all likelihood, only I can achieve it. All of the people—even people perceived as having better credentials, intelligence, and ability—cannot do what I am doing. All critics of me are either ignorant, stupid, or irrational. If I die, the chance of multiverse collapse is radically increased; no one can do what I do. I don’t care if other people view this as crazy, because they’re crazy if they don’t believe me.”—Clinical diagnosis.
You’re doing direct, substantial harm to your cause, because you and your views appear irrational. The smart, moneyed, well-connected people who hear about SIAI as the lead dog in this effort will mostly conclude that the effort must not be worth anything.
I believe you had some language for Roko on the wisdom of damaging the cause in order to show off how smart you are.
I’m a little uncomfortable with the heat of my comment here, but my other efforts have not been read by you the way I intended them (others appeared to understand). I am hopeful this is clear—and let me once again clarify that I had these views before multi’s post. Before. Don’t blame him again; blame me.
I’d like existential risk generally to be better received. In my opinion—and I may be wrong—you’re actively hurting the cause.
--JRM
I don’t think Eliezer believes he’s irreplaceable, exactly. He thinks, or I think he thinks, that any sufficiently intelligent AI which has not been built to the standard of Friendliness (as he defines it) is an existential risk. And the only practical means for preventing the development of UnFriendly AI is to develop superintelligent FAI first. The team to develop FAI needn’t be SIAI, and Eliezer wouldn’t necessarily be the most important contributor to the project, and SIAI might not ultimately be equal to the task. But if he’s right about the risk and the solution, and his untimely demise were to doom the world, it would be because no-one else tried to do this, not because he was the only one who could.
Not that this rules out your interpretation. I’m sure he has a high opinion of his abilities as well. Any accusation of hubris should probably mention that he once told Aubrey de Grey “I bet I can solve ALL of Earth’s emergency problems before you cure aging.”
There may be multiple different projects, each necessary to save the world, and each having a key person who knows more about the project, is more driven, and/or is more capable than anyone else. Each such person has weirdly high expected utility, and could accurately make a statement like EY’s and still not be the person with the highest expected utility. Their actual expected utility would depend on the complexity of the project and the surrounding community, and how much the success of the project alters the value of human survival.
This is similar to the idea that responsibility is not a division of 100%.
http://www.ranprieur.com/essays/mathres.html
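To make the arithmetic concrete (my numbers, purely illustrative): suppose a future worth V requires two independent projects to both succeed, and each project has one key person whose presence raises its success probability from 0.1 to 0.9. Each key person’s counterfactual contribution is then (0.9 × 0.9 − 0.1 × 0.9) V = 0.72 V, so the two “shares” sum to 1.44 V even though the whole prize is only V. Each of them could honestly say that their death would slash the expected value of the future, without either being the single most important person alive.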
What you say sounds reasonable, but I feel it’s unwise for me to worry about such things. If I were to sound such a vague alarm, I wouldn’t expect anyone to listen to me unless I’d made significant contributions in the field myself (I haven’t).