The problem is that the risks involved with creating roughly human-level AI like GPT-4, and the risks involved with creating superintelligence, are quite different.
With human-level AI, we have some idea of what we are doing. With superintelligence, we're a bit like a chimp breaking into a medical clinic. You might find something there that you can wield as a useful tool, but in general you are surrounded by phenomena and possibilities that are completely beyond your comprehension, and you'll easily end up doing the equivalent of poisoning yourself, injuring yourself, or setting the place on fire.
So I need another clarification. In the hypothetical, is your magic AI protocol capable of creating an intelligence much, much greater than human intelligence, or should we only be concerned with the risks that could come from an entity with a human level of intelligence?
Do you have a best guess, though? Deferring is forbidden in the hypothetical.