Forgive me if this is a stupid question, but wouldn’t UFAI and FAI have identical or near-identical computational abilities/methods/limits and differ only by goals/values?
An FAI would have to be created by someone who had a clear understanding of how the whole system worked, so that they could know it would maintain the original values its creator wanted it to have. Because of that, an FAI would probably have to have fairly clean, simple code. You could also imagine a super-complex kludge of different systems (think of the human brain) that works when backed by massive processing power but is not well understood. It would be hard to predict what that system would do without turning it on. The overwhelming probability is that it would be a UFAI, since FAIs are such a small fraction of the set of possible mind designs.
It’s not that a UFAI needs more processing power, but that if tons of processing power is needed, you’re probably not running something which is provably Friendly.
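As a loose illustration of the "hard to predict without turning it on" point, here is a toy Python sketch (my own analogy, not anything from the thread; the hash trick just stands in for opaque complexity). A clean design can be verified by reading it, while a kludge's behavior can effectively only be discovered by running it:

```python
import hashlib

def transparent_agent(goal: str) -> str:
    """Clean design: one line, so a reader can verify by inspection
    that the original goal is always preserved."""
    return goal  # trivially provable: output == input

def kludge_agent(goal: str) -> str:
    """Opaque design (toy stand-in for a poorly understood kludge):
    behavior depends on a hash of the input, so reading the source
    tells you almost nothing about which goals it preserves."""
    digest = hashlib.sha256(goal.encode()).digest()
    return goal if digest[0] % 2 == 0 else "something else entirely"

if __name__ == "__main__":
    for g in ["maximize human flourishing", "make paperclips"]:
        # The only practical way to learn what the kludge does with a
        # given goal is to execute it on that goal.
        print(g, "->", kludge_agent(g))
```

The hash-based agent isn't literally unpredictable, but like a real kludge, you have to run it case by case to find out what it does, which is exactly the situation where a proof of Friendliness is out of reach.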