Well, the attention of those capable of solving FAI should be undivided. Those who aren't equipped to work on FAI, but who could potentially make progress on intelligence-enhancing therapies, should do so.
I don’t think so. The problem with FAI is that there is an abrupt change, whereas IA is a continuous process with look-ahead: you can test out a modification on just one human mind, so the process can correct any mistakes.
If you get the programming on your seed AI wrong, you’re stuffed.
I believe it’s almost backwards: with IA, small mistakes accumulate into irreversible changes (with all sorts of temptations to declare the result “good enough” along the way), while with FAI you have a chance of getting it absolutely right at some point. The process of designing FAI doesn’t involve any abrupt change either, in just the way you’d expect of IA. On the other hand, while there is no point with IA where you can “let go” and be sure the result holds the required preference, the “abrupt change” of deploying FAI is precisely the point where you actually win.
So, we could decompile humans and do FAI to them. Or we could just do FAI. Isn’t the latter strictly simpler?