I think your arguments would make sense if there were a general “let’s deal with existential risks” program; I see SIAI concentrating specifically on the imminent possibility of uFAI. They feel they already have enough researchers for that specific problem, and they have enough funding coming in that they don’t need to spend effort courting the general public. They would rather use the resources they have to attack the problem itself. You may argue with where they have drawn that line, but it is not illogical.
It-just-so-happens that “solving” uFAI risk would most likely solve all other problems by triggering a friendly Singularity, but that does not make SIAI a general existential-risk fighting unit.
It-just-so-happens that “solving” uFAI risk would most likely solve all other problems by triggering a friendly Singularity
This seems unlikely to me. Even if you completely solve the problem of Friendly AI, you might lack the processing power to implement it. Or it might turn out that there are fundamental limits which prevent a Singularity event from taking place. The first problem seems particularly relevant given that, to someone concerned about uFAI, the goal presumably is to solve the Friendliness problem well before we’re anywhere near actually having functional general AI. No one wants this to be cut close, and there’s no a priori reason to think it would be cut close. (Indeed, if it did seem to be getting cut close, one could arguably use that as evidence that we’re in a simulation and that this is a semifictionalized account with a timeline specifically engineered to create suspense and drama.)