I feel that perhaps you haven’t considered the best way to maximise your chance of developing Friendly AI if you were Eliezer Yudkowsky; your perspective is very much focussed on how things look from the outside. Consider for a moment that you are in a situation where you think you can make a huge positive impact upon the world, and have founded an organisation to help you act upon that.
Your first, and biggest, problem is getting paid. You could take time off to try to amass a fortune through some other means, but this is not a certain bet, and it would waste years that you could be spending on the problem instead. Your best bet is to find already-wealthy people who can be convinced that you can change the world, that it’s for the best, and that they should donate significant sums of money to you; unless you believe this is even less certain than making a fortune yourself, seeking donations is the more rational path. There are already plenty of people in the world with the requisite money to spare.
Now, you need to persuade people of the importance of a brilliant new idea that no one has really considered before, and that isn’t at all obvious to most people. Is the better fund-seeking strategy to admit that you’re uncertain whether you’ll accomplish it, compounding that on top of their own doubts? Not really. Confidence is a very strong signal that will help persuade people you’re worth taking seriously. Asking Eliezer to be more publicly doubtful probably puts him in an awkward situation. I’d be very surprised if he doesn’t have some doubts; maybe he even agrees with you. But to admit to those doubts would be to lower donors’ confidence in him, which would in turn lower the chance of him actually accomplishing his goal.
Having confidence in himself is probably also important, incidentally. Talking about doubts tends to reinforce them, and when you’re embarking upon a large and important undertaking, you want to spend as much of your time and mental effort as possible on increasing the chances that you’ll bring the project about, rather than dwelling on your doubts and wasting energy motivating yourself to keep working.
So how do you mitigate the risk that you might be wrong without running into these problems? Well, he seems to have done fairly well here. The SIAI has now grown beyond just him, giving him further perspectives to draw upon and mitigating any shortcomings in his own analyses. He has laid down a large body of work explaining the mental processes his approach is based on, which should help both with recruitment for SIAI and with letting people point out flaws or weaknesses in the work he is doing. It seems to me he has laid the groundwork quite well so far, and it remains to be seen where he and the SIAI go from here. Importantly, the SIAI has grown to the point where, even if he is not weighing his doubts strongly enough, even if he becomes a kook, there are others there who may be able to carry on the same work. And if not there, his reasoning has been laid out well enough that others can pursue their own take on what needs to be done.
That said, as an outsider it’s obviously wise to consider the possibility that the SIAI will never meet its goals. Luckily, it doesn’t have to be an either/or question. Too few people consider existential risk at all, but those of us who do can spread ourselves across the different risks we see. To the degree that you think Eliezer and the SIAI are on the right track, you can donate a portion of your disposable income to them. To the extent that you think other kinds of existential risk prevention matter, you can donate a portion of that money to the Future of Humanity Institute, or another relevant organisation fighting existential risk.
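Purely to make the “spread your donations” idea concrete, here is a minimal sketch in Python; the credence weights and budget figure are illustrative assumptions on my part, not recommendations:

    # Split a donation budget across organisations in proportion to your
    # credence that each one is on the right track. All numbers here are
    # hypothetical placeholders.
    credences = {
        "SIAI": 0.6,
        "Future of Humanity Institute": 0.3,
        "other existential risk work": 0.1,
    }
    budget = 1000.0  # disposable income set aside for donations

    total = sum(credences.values())  # normalise in case weights don't sum to 1
    for org, weight in credences.items():
        print(f"{org}: {budget * weight / total:.2f}")

Run as written, this splits the budget 600/300/100; the point is only that partial confidence in the SIAI still justifies a partial donation rather than none at all.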
A display of confidence is a good way of getting people on your side if you are right. It is also a good way of coming to overestimate how likely you are to be right.