I would suggest working on practical software instead, such as software that searches for the best transistor designs (gradually generalized), and perhaps for more optimal code for doing that search. An AGI would have to out-foom such software to beat it, and probably won't: the assumption that 'general == good' is just the halo effect (and software self-improvement is likely to be of very limited help in the automated design of better computers). And if it can't out-foom such software, then we don't get the scenario of an AGI massively overpowering mankind, and the whole risk issue is much lower. (A toy sketch below illustrates the kind of software I mean.)
This forestalls a boatload of nasty scenarios, including "the leader of the FAI team is a psychopath who actually sets himself up as a dictator, legalizes rape, etc.", which should be assigned a risk of at least 2–3% if the FAI team is to be technically successful (about 2–3% of people are psychopaths, and those people have an edge over normals when it comes to talking others into handing them money and/or control).
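To make this concrete, here is a minimal sketch of the kind of narrow optimizer I have in mind. Everything in it is made up for illustration: the two design parameters, the figure of merit, and the search loop. A real EDA tool would be vastly more sophisticated, but it shares the property that matters for the argument: a fixed search loop over a fixed design space, with no machinery for improving itself.

```python
# Toy illustration (not a real EDA tool) of a narrow design optimizer:
# hill-climb over two hypothetical transistor parameters against a
# hand-written figure of merit. All numbers here are invented.
import random

def merit(width_nm: float, length_nm: float) -> float:
    """Hypothetical figure of merit: favour drive strength (width/length)
    but penalise leakage, which grows as the channel shortens."""
    drive = width_nm / length_nm
    leakage_penalty = 1.0 / length_nm ** 2
    return drive - 50.0 * leakage_penalty

def hill_climb(steps: int = 10_000) -> tuple[float, float, float]:
    w, l = 100.0, 45.0                       # starting design, in nm
    best = merit(w, l)
    for _ in range(steps):
        # Propose small random tweaks to the parameters,
        # clamped to a minimum feature size of 10 nm.
        nw = max(10.0, w + random.gauss(0, 1))
        nl = max(10.0, l + random.gauss(0, 1))
        m = merit(nw, nl)
        if m > best:                         # keep only improvements
            w, l, best = nw, nl, m
    return w, l, best

if __name__ == "__main__":
    w, l, m = hill_climb()
    print(f"best design: W={w:.1f}nm L={l:.1f}nm merit={m:.2f}")
```

The point of the sketch is that nothing in such software benefits from generality: it gets better by a better merit function and a better search strategy, not by becoming more 'general'.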
I would suggest working on practical software instead, such as software that searches for the best transistor designs (gradually generalized), and perhaps for more optimal code for doing that search. An AGI would have to out-foom such software to beat it, and probably won't [...]
A bizarre optimisation problem to make that claim about.