If you want to offer a concrete proposal for verifying the trustworthiness of nine people in a basement, offer it. Otherwise you’re just giving people an excuse to get lost in thought and implement the do-nothing option instead of implementing the best time-sensitive policy proposal offered so far.
Pay independent experts to peer-review your work.
Make the finances of the SIAI easily accessible.
Openly explain why and for what you currently need more money.
Publish progress reports so that people can assess how close you are to running a fooming AI.
Publish a roadmap, set concrete goals, and openly announce success or failure.
Devise a plan that lets independent experts examine a possible seed AI before you run it.
I came up with the above in about 1 minute. You shouldn’t even have to ask me how one could verify the trustworthiness of a charity. There are many more obvious ways to approach that problem.
Those sound like good ideas (except the first one), but apart from the last one they aren’t ideas for allaying your fears that SIAI will make an evil AI; they are ideas for allaying your fears that SIAI won’t put your donation to good use.
Yes, they’d show SIAI is doing something, but not that it’s doing the right thing. And a 99%-competent SIAI could well be worse than a 0%-competent one, if it creates a fooming UFAI a few years earlier.
It seems hard to think of anything that would verify that the nine are doing the right thing without risking AGI knowledge leaking out. I’d much sooner take my chances with a bunch of dudes in a basement who at least know there’s a problem than with an IBM team who just want moar awesum.
If Friendliness turns out to be largely independent of the AGI bit, I suppose it could be usefully published, both for feedback and to raise awareness; LW etc. could then critique it.
The realistic outcomes for humanity are uFAI foom, FAI foom, or extinction by some other means. “Soon” doesn’t matter all that much; the only significant question is the probability of an eventual Friendly foom. Those “few years earlier” only matter if someone else would have run a Friendly AGI in those few intervening years.
EDITED TO ADD: None of this changes the substance of your article, but just to pick a few nits:
“Foom” refers to a scenario in which we reach superintelligence rapidly enough to take humanity by surprise. That isn’t certain—it’s imaginable that we could have, say, several years of moderately superhuman intelligence.
Also, while these may be the realistic long-term outcomes, in the short term another possible outcome is a global catastrophe short of extinction, which would slow things down somewhat.
I don’t think any of that changes the substance of my argument.
Sorry, should have been clearer that I was just nitpicking. Will edit.
They are ideas for allaying fears that SIAI is incompetent or worse; since SIAI is devoted to building an AI, allaying those fears would also tend to allay fears that it is building an evil one.
Basically incompetent organizations that try to build AI just won’t do anything.
Openly explain why and for what you currently need more money.
I’m especially interested in this. I’m open to the idea that SIAI is the maximally useful charity, but since I don’t know why they need the money, I’m currently giving it to Village Reach.
He has (see the list of proposals above).
I’m not sure I would personally endorse those possibilities, but let it not be said that he complains without offering solutions.