Either way, I think that building toward an FAI team is good for AI risk reduction, even if we decide (later) that an SI-hosted FAI team is not the best thing to do.
I question this assumption. I think that building an FAI team may damage your overall goal of AI risk reduction for several reasons:
By setting yourself up as a competitor to other AGI research efforts, you strongly decrease the chance that they will listen to you. It will be far easier for them to write off your calls for consideration of friendliness issues as self-serving.
You risk undermining your credibility on risk reduction by tarring yourselves as crackpots. In particular, looking for good mathematicians to work out your theories comes off as “we already know the truth, now we just need people to prove it.”
You’re a small organization. Splitting your focus is not a recipe for greater effectiveness.
On the other hand, SI might get taken more seriously if it is able to demonstrate that it actually does know something about AGI design and isn’t just a bunch of outsiders to the field doing idle philosophizing.
Of course, this requires that SI is ready to publish part of its AGI research.
I agree, but as I’ve understood it, they’re explicitly saying they won’t release any AGI advances they make. What will it do to their credibility to be funding a “secret” AI project?
I honestly worry that this could kill funding for the organization, which doesn’t seem optimal in any scenario.
Potential Donor: I’ve been impressed with your work on AI risk. Now, I hear you’re also trying to build an AI yourselves. Who do you have working on your team?
SI: Well, we decided to train high schoolers since we couldn’t find any researchers we could trust.
PD: Hm, so what about the project lead?
SI: Well, he’s done brilliant work on rationality training and wrote a really fantastic Harry Potter fanfic that helped us recruit the high schoolers.
PD: Huh. So, how has the work gone so far?
SI: That’s the best part, we’re keeping it all secret so that our advances don’t fall into the wrong hands. You wouldn’t want that, would you?
PD: [backing away slowly] No, of course not… Well, I need to do a little more reading about your organization, but this sounds, um, good...
That also requires that SI really isn’t just a bunch of outsiders to the field doing idle philosophizing about infinitely powerful, fully general-purpose minds that would be so general-purpose as to be naturally psychopathic (treating psychopathy as a kind of intelligent behaviour that a fully general intelligence would naturally exhibit).
If SI is exactly that, then its best course of action is to claim that the research it does (or would have to do) is so powerful that publishing it would risk mankind’s survival, and that to protect mankind it therefore restricts itself to philosophizing.
Indeed.
“Wish You Were Here”—R. Waters, D. Gilmour