On the other hand, SI might get taken more seriously if it is able to demonstrate that it actually does know something about AGI design and isn’t just a bunch of outsiders to the field doing idle philosophizing.
Of course, this requires that SI is ready to publish part of its AGI research.
I agree, but as I’ve understood it, they’re explicitly saying they won’t release any AGI advances they make. What will it do to their credibility to be funding a “secret” AI project?
I honestly worry that this could kill funding for the organization, which doesn’t seem optimal in any scenario.
Potential Donor: I’ve been impressed with your work on AI risk. Now, I hear you’re also trying to build an AI yourselves. Who do you have working on your team?
SI: Well, we decided to train high schoolers since we couldn’t find any researchers we could trust.
PD: Hm, so what about the project lead?
SI: Well, he’s done brilliant work on rationality training and wrote a really fantastic Harry Potter fanfic that helped us recruit the high schoolers.
PD: Huh. So, how has the work gone so far?
SI: That’s the best part, we’re keeping it all secret so that our advances don’t fall into the wrong hands. You wouldn’t want that, would you?
PD: [backing away slowly] No, of course not… Well, I need to do a little more reading about your organization, but this sounds, um, good...
That also requires that SI really isn’t just a bunch of outsiders to the field, idly philosophizing about infinitely powerful, fully general-purpose minds that would be so general-purpose they’d be naturally psychopathic (treating psychopathy as a kind of intelligent behaviour that a fully general intelligence ought to exhibit).
If SI is that, its best course of action is to claim that it does (or would have to do) research so awesome that publishing it would risk mankind’s survival, and so, to protect mankind, it only philosophizes.
Indeed.
“Wish You Were Here”—R. Waters, D. Gilmour