Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI. UFAI also probably benefits from brute-force computing power more than FAI. Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.
Forgive me if this is a stupid question, but wouldn’t UFAI and FAI have identical or near-identical computational abilities/methods/limits and differ only by goals/values?
An FAI would have to be created by someone with a clear understanding of how the whole system worked, so that they could know it would maintain the original values its creator intended it to have. Because of that, an FAI would probably need fairly clean, simple code. You could also imagine a super-complex kludge of different systems (think of the human brain) that works when backed by massive processing power but is not well understood. It would be hard to predict what that system would do without turning it on, and the overwhelming probability is that it would be a UFAI, since FAIs are such a small fraction of the set of possible mind designs.
It’s not that a UFAI needs more processing power, but that if tons of processing power is needed, you’re probably not running something which is provably Friendly.
Yes. The OP is assuming that the process of reliably defining the goals/values which characterize FAI is precisely what requires a “mathier and more insight-based” process which parallelizes less well and benefits less from brute-force computing power.