Or is it that AGI is more likely to come before WBE, and so we should focus our efforts on making sure that the AGI is Friendly?
Isn’t it strange that the people smart enough to build AGI would be stupid enough not to even TRY to make sure it’s Friendly? I’m not saying that strange things don’t happen, but to me it seems like they would have a lower probability. I mean, one of the goals of SIAI is to make sure that anybody who would have the means to create AGI knows about the risks involved. But who are the people who are likely to develop AGI? That seems to be a key question, and I haven’t seen it discussed in very many places. If we can identify those people, then SIAI could make sure that they are informed about the risks. We could also try to tell them to hold off until WBE comes along.
This doesn’t work reliably enough. It takes just one failure, and actually convincing people (as opposed to merely eliciting an ostensible admission of having been convinced) is really difficult. A serious complication is that it’s not possible to simply “make AGI Friendly”: most AGI designs can’t be fixed without essentially discarding everything, so people won’t be moved deeply enough to kill their mind baby. They will instead raise defenses against the offending arguments, failing to understand the point and coming up with rationalizations claiming that whatever they are already doing happens to be Friendly (perhaps with minor modifications). Just look at Goertzel (see my comment).
Good point. Do you know if SIAI is planning on trying to build the first AGI? Isn’t the only other option to try to persuade others?
Also, I don’t really know too much about the specifics of AGI designs. Where could I learn more? Can you back up the claim that “most AGI designs can’t be fixed without essentially discarding everything”?