Wouldn’t the UFAI’s possible amorality give it an advantage over a morally fettered FAI? Also, friendliness or unfriendliness doesn’t dictate the order of magnitude of the AI’s development speed (though I suspect proper ethics could really slow a FAI down). It’d be down to the one written to develop faster, not necessarily the first if the other can quickly catch up. But yeah, race elements are undeniable.
Wouldn’t the UFAI’s possible amorality give it an advantage over a morally fettered FAI?
Probably not enough to overcome much of a head start, especially since a consequentialist FAI could and would do anything necessary to win without fear of being corrupted by power in the process.
It’d be down to the one written to develop faster, not necessarily the first if the other can quickly catch up.
True, to a limited extent. Still, if the theory about foom is correct, the time-lengths involved may be very short, to the point where, barring an unlikely coincidence of development, the first one will take over the world before the second one is even fully coded. Even if that’s not the case, there will always be some ‘launch before this time or lose’ cut-off point. You always have to weigh up the chance that that cut-off is in the near future, bearing in mind that the amount of cleverness and effort needed to build an AGI will be decreasing all the time.
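To make that weighing-up concrete, here’s a minimal toy expected-value sketch in Python (every probability, name, and number here is an invented assumption for illustration, not anything from the thread):

```python
# Toy model: launch a rushed FAI now, or finish safety work first,
# given some chance a rival UFAI launches during the delay.
# All numbers are made-up assumptions, purely for illustration.

def expected_value(p_rushed_fai_works: float,
                   p_ufai_launches_first: float,
                   p_careful_fai_works: float) -> tuple[float, float]:
    """Return (EV of launching now, EV of waiting), scoring a win as 1 and a loss as 0."""
    ev_launch_now = p_rushed_fai_works
    # Waiting only pays off if no UFAI beats us to the cut-off.
    ev_wait = (1 - p_ufai_launches_first) * p_careful_fai_works
    return ev_launch_now, ev_wait

# Example: a rushed launch works 40% of the time; waiting raises that to 90%,
# but gives the rival a 60% chance of launching first.
now, wait = expected_value(0.4, 0.6, 0.9)
print(f"launch now: {now:.2f}, wait: {wait:.2f}")  # launch now: 0.40, wait: 0.36
```

On these made-up numbers, launching early wins even though the careful FAI is far more likely to work, which is exactly the pressure that cut-off creates.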
Wouldn’t the UFAI’s possible amorality give it an advantage over a morally fettered FAI?
That’s what the SIAI is for: creating a way to code friendliness now, so that when it comes down to building an AGI, FAI is just as easy to build as UFAI.