I think this is an overly simplistic and binary way of looking at it:
Maybe FAI is so hard that we can only get FAI with a large team of IQ 200+ humans...I doubt (1) is true. I think IQ 130–170 humans could figure out FAI in 50–150 …
Whether or not it’s possible isn’t, IMHO, the only question. A better question may be: “What are the odds that a team of IQ 150 humans thinks they have developed a FAI and are correct, versus the odds that they think they have developed a FAI and are wrong? Are those odds better or worse for a team of IQ 200+ individuals?”
I think that a group of normal people could probably develop a FAI. But I also think that a group of intelligence-augmented (IA) people is more likely to do so correctly on the first try, without missing any vital details, given that in practice you may only get one shot at it.
I would also say that if something does go badly wrong, a group of IA people (or people with other augmentations, like brain-computer interfaces) probably has a better shot at figuring it out in time and responding properly (not necessarily good odds, but probably better odds than anyone else’s). They’re also probably less likely to fail at other, related AI-danger problems: building a GAI designed not to self-improve (at least until the team is convinced it is friendly), maintaining control over some kind of oracle AI, or keeping control of a narrow AI with significant destructive capability.
Note that I’m not necessarily saying those are good ideas, but either way, AI risk is probably lower if IA comes first. Very smart people may still intentionally build uFAI for whatever reason, but at least they’re less likely to attempt FAI and mess it up.