Building a friendly AGI isn’t our main objective; our main objective is to stop unfriendly AGI from being built. I say that until we are 100% sure an AGI would be friendly, we shouldn’t build AGI at all.
And the only justification you seem to give for “they’re gonna kill us” is “powers not involved in developing it will be unhappy”.
Why should powers be involved at all? Why not make it an international, nonprofit, open-source program? And why is it a bad idea to reach the consciousness of the public and impart to them a sense of clear and present danger regarding this project, so that they democratically force the necessary institutions into existence?
We can never become 100% certain of anything. Even if you just mean “really really sure”, that’s still quite contentious. Whoever first got in a position to launch would have to weigh up the possibility that they’ve made a mistake against the possibility that someone else will make a UFAI while they’re still checking.
This isn’t a race. Why “release my FAI before anyone releases a UFAI”?
...
Have we even given thought to how a clash between a FAI and a UFAI might develop?
At a guess, first mover wins. If foom is correct then even a small head start in self-improvement should lead to an easy victory, suggesting that this is, in fact, a race.
If things are a bit slower, like days or weeks rather than minutes or seconds, access to human-built infrastructure might still be a factor.
I didn’t want to give time lengths, since there’s a great deal of uncertainty about this, but I was thinking in terms of days or weeks rather than minutes or seconds when I wrote that. I would consider it quite a strange coincidence if two AIs were finished in the same week despite no AI having been built prior to that.
Well, if there’s an open-source project, multiple teams could race to put the finishing touches on, and some microchip factory could grant access to the team with the best friendliness-checking rather than the fastest results.
It might be possible to organise an open-source project in such a way that those who take part are not racing each other, but they must still deal with the possibility of other projects which may not be as generous in sharing all their data.
Wouldn’t the UFAI’s possible amorality give it an advantage over a morally fettered FAI? Also, friendliness or unfriendliness doesn’t dictate the order of magnitude of the AI’s development speed (though I suspect proper ethics could really slow a FAI down). It’d be down to the one written to develop faster, not necessarily the first, if the other can quickly catch up.
But yeah, race elements are undeniable.
Wouldn’t the UFAI’s possible amorality give it an advantage over a morally fettered FAI?
Probably not enough to overcome much of a head start, especially since a consequentialist FAI could and would do anything necessary to win without fear of being corrupted by power in the process.
It’d be down to the one written to develop faster, not necessarily the first if the other can quickly catch up.
True, to a limited extent. Still, if the theory about foom is correct, the time-lengths involved may be very short, to the point where, barring an unlikely coincidence of development, the first one will take over the world before the second one is even fully coded. Even if that’s not the case, there will always be some sort of ‘launch before this time or lose’ cut-off point. You always have to weigh up the chance that that cut-off is in the near future, bearing in mind that the amount of cleverness and effort needed to build an AGI will be decreasing all the time.
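To put that weighing in toy numbers (the probabilities below are purely illustrative assumptions, not estimates of anything):

```python
# Toy comparison of "launch now" vs. "keep checking" for a would-be FAI team.
# All probabilities are invented for illustration; they are not estimates.

p_flaw_now = 0.05      # chance our AI is fatally flawed if launched today
p_flaw_later = 0.01    # chance of a fatal flaw after another round of checking
p_scooped = 0.10       # chance someone else launches a UFAI while we keep checking

def p_good_outcome(p_flaw, p_scooped_first):
    """Probability of a friendly outcome: we are not scooped AND our AI is not flawed."""
    return (1 - p_scooped_first) * (1 - p_flaw)

print("P(good | launch now)    =", p_good_outcome(p_flaw_now, 0.0))          # 0.95
print("P(good | keep checking) =", p_good_outcome(p_flaw_later, p_scooped))  # 0.891
```

Under these made-up numbers, waiting to check more carefully actually lowers the chance of a good outcome; shrink the chance of being scooped and the conclusion flips, which is exactly the cut-off judgement described above.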
Wouldn’t the UFAI’s possible amorality give it an advantage over a morally fettered FAI?
That’s what the SIAI is for: creating a way to code friendliness now, so that when it comes down to building an AGI, an FAI is just as easy to build as a UFAI.
And the only justification you seem to give for “they’re gonna kill us” is “powers not involved in developing it will be unhappy”.
By “they’re gonna kill us” I assume you mean our potential adversaries. Well, by “powers” I essentially meant other nations, the general public, religious institutions and perhaps even corporations.
You are of course right when you say that I can’t prove that the public reaction towards AGI development will be highly negative, but I think I did give a sensible justification: self-improving AGI poses a higher threat than nuclear warheads, and when people realize this (I suppose they will within ~30 years), I confidently predict that their reaction will be highly negative.
I’ll also add that I didn’t pose any specific scenarios like public lynchings. There are numerous other ways to repress and shut down AGI research, and nowhere did I speculate that an angry mob would kill the researchers.
Why not make self-improving AGI research open source, you ask? Essentially for the same reasons why biological weapons don’t get developed in open-source projects. Someone could simply steal the code and release an unsafe AI that may kill us all. (By the way, at the current stage of AGI development an open-source project may be a terrific way to move things along, but once things get more sophisticated you can’t put self-improving AGI code “out there” for the whole world to see and modify; that’s just madness.) As for how likely a worldwide democratic consensus on developing self-improving AGI is, I think I made my point and don’t need to elaborate further.
People were quite enthusiastic about nukes when they were first introduced. It’s all a matter of perception and timing.
nowhere did I speculate that an angry mob would kill the researchers
I know you didn’t, I was speaking figuratively. My bad.
for the same reasons why biological weapons don’t get developed in open-source projects
AFAIK, biological weapons don’t get developed at all, mostly because of how incredibly dangerous and unreliable they are. There’s a lot of international scrutiny over this, with states monitoring each other and themselves. Perhaps the same policy can and should be imposed on AGI?
that’s just madness
Blasphemy! Why would that be so?
I think I made my point
You explained your opinion, but you haven’t justified it to my satisfaction. A lot of your argument is implicit, and I suspect that if we made it explicit we’d find it’s based on unwarranted heuristics, i.e. prejudice. Please don’t take this personally: you’re suggesting an important update to my beliefs, and I want to be thorough before adopting it.