Given that SAI is possible, regulation of AI is necessary to prevent people from making a UFAI. Alternatively, an SAI which is not fully aligned but has no goals directly conflicting with ours might be used to prevent the creation of UFAI.
This seems like one potential path, but for it to work, you would need a government structure that can survive for a billion years without a single successful pro-AI revolution. You also need law enforcement good enough to stop anyone trying to make UFAI, with not a single failure in a billion years. As for an SAI that will help us stop UFAI, can you explain 1) how it would help and 2) how it would be easier to build than FAI?
You also need to say what happens with evolution. Given this kind of time and non-ancestral selection pressures, evolution will produce beings not remotely human in mind or body. Either argue that this evolution is in a morally okay direction and that your government structure works with these beings, or stop evolution through selective breeding, frozen samples, or genetic modification towards some baseline. Then you just need to say how all human populations get this, or why any population that doesn't won't be building UFAI.
I have no comment on how plausible either of these scenarios is. I'm only making the observation that long-term good futures not featuring friendly AI require some other mechanism preventing UFAI from happening. Either SAI in general would have to be implausible to create at all, or some powerful actor, such as a government or a limited AI, would have to prevent it.