We can never become 100% certain of anything. Even if you just mean “really, really sure”, that’s still quite contentious. Whoever first got into a position to launch would have to weigh up the possibility that they’d made a mistake against the possibility that someone else would make a UFAI while they were still checking.
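To make the weighing concrete, here is a back-of-the-envelope sketch of that trade-off; every number and parameter name in it is a made-up assumption for illustration, not anyone’s estimate:

```python
# Back-of-the-envelope sketch of the trade-off described above. Every
# number here is an invented assumption for illustration, not an estimate.

P_FLAW_NOW = 0.10              # assumed chance the design is flawed if launched today
FLAW_HALVING_PER_MONTH = 0.5   # assumed: each month of checking halves that chance
P_RIVAL_PER_MONTH = 0.03       # assumed chance a rival UFAI launches in any given month

def p_good_outcome(months_of_checking: int) -> float:
    """Probability of launching a working FAI before any rival UFAI appears."""
    p_no_rival_first = (1 - P_RIVAL_PER_MONTH) ** months_of_checking
    p_not_flawed = 1 - P_FLAW_NOW * FLAW_HALVING_PER_MONTH ** months_of_checking
    return p_no_rival_first * p_not_flawed

for months in (0, 1, 3, 6, 12, 24):
    print(f"check for {months:2d} months: P(good outcome) = {p_good_outcome(months):.3f}")
```

Under these invented numbers the best policy is a short checking period, after which the rival risk dominates; the only point is that “keep checking indefinitely” has a cost of its own.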
This isn’t a race. Why “release my FAI before anyone releases a UFAI”?
...
Have we even given thought to how a clash between a FAI and a UFAI might develop?
At a guess, first mover wins. If foom is correct, then even a small head start in self-improvement should lead to an easy victory, suggesting that this is, in fact, a race.
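A minimal sketch of why a head start compounds if foom is right, assuming (purely for illustration) that an AI’s growth rate rises with its current capability; the rate and the one-week gap are invented numbers:

```python
# Toy model of "foom": growth rate rises with capability (dc/dt = RATE * c^2),
# so improvement is super-exponential and a head start compounds.
# The rate and the one-week gap are illustrative assumptions.

DT = 0.001      # Euler integration step, in days
RATE = 0.05     # assumed self-improvement coefficient

def capability(elapsed_days: float) -> float:
    """Euler-integrate dc/dt = RATE * c^2 from c = 1 at launch."""
    c = 1.0
    for _ in range(round(elapsed_days / DT)):
        c += RATE * c * c * DT
    return c

HEAD_START = 7.0  # leader launched one week earlier (assumption)
for day in (10.0, 15.0, 19.0):
    ratio = capability(day) / capability(day - HEAD_START)
    print(f"day {day:4.0f}: leader is {ratio:.1f}x more capable than the chaser")
```

With plain exponential growth at equal rates, the leader’s advantage would stay a constant factor (about 1.4x here); it is the recursive term, growth rate rising with capability, that turns a one-week head start into an unbounded lead as the leader approaches takeoff (day 20 in this toy run).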
If things are a bit slower (days or weeks rather than minutes or seconds), access to human-built infrastructure might still be a factor.
I didn’t want to give time lengths, since there’s a great deal of uncertainty about this, but I was thinking in terms of days or weeks rather than minutes or seconds when I wrote that. I would consider it quite a strange coincidence if two AIs were finished in the same week despite no AI having been created prior to that.
Well, if there’s an open-source project, multiple teams could race to put the finishing touches on, and some microchip factory could grant access to the team with the best friendliness-checking rather than the fastest results.
It might be possible to organise an open-source project in such a way that those who take part are not racing each other, but they must still deal with the possibility of other projects which may not be as generous in sharing all their data.
Wouldn’t the UFAI’s possible amorality give it an advantage over a morally fettered FAI? Also, friendliness or unfriendliness doesn’t dictate the order of magnitude of the AI’s development speed (though I suspect proper ethics could really slow a FAI down). It’d be down to the one written to develop faster, not necessarily the first, if the other can quickly catch up.
But yeah, race elements are undeniable.
Wouldn’t the UFAI’s possible amorality give it an advantage over a morally fettered FAI?
Probably not enough to overcome much of a head start, especially since a consequentialist FAI could and would do anything necessary to win without fear of being corrupted by power in the process.
It’d be down to the one written to develop faster, not necessarily the first, if the other can quickly catch up.
True, to a limited extent. Still, if the theory about foom is correct, the time lengths involved may be very short, to the point where, barring an unlikely coincidence of development, the first one will take over the world before the second is even fully coded. Even if that’s not the case, there will always be some cut-off point: launch before this time or lose. You always have to weigh up the chance that that cut-off is in the near future, bearing in mind that the amount of cleverness and effort needed to build an AGI will be decreasing all the time.
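One way to make “true, to a limited extent” precise, under a plain-exponential, non-foom assumption (the symbols are illustrative, not from the discussion above):

```latex
% Plain exponential growth (no foom), purely illustrative: the leader
% launches at t = 0 with rate r_1, the chaser at t = d with rate r_2 > r_1.
\[
  c_1(t) = e^{r_1 t}, \qquad c_2(t) = e^{r_2 (t - d)}
\]
% Setting c_2(t^*) = c_1(t^*) gives the overtake time:
\[
  r_2 (t^* - d) = r_1 t^* \quad\Longrightarrow\quad t^* = \frac{r_2\, d}{r_2 - r_1}.
\]
```

So a strictly faster second project does overtake eventually, but the overtake time t* stretches linearly with the head start d; the first mover still wins if it can convert its lead into something decisive before t*, which is exactly the cut-off point described above.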
Wouldn’t the UFAI’s possible amorality give it an advantage over a morally fettered FAI?
That’s what the SIAI is for: creating a way to code friendliness now, so that when it comes down to building an AGI, a FAI is just as easy to build as a UFAI.