Please excuse me if I’m missing something, but why is the question of whether an Oracle AI can be safe considered so important in the first place? One of the main premises behind treating unfriendly AI as a major existential risk is that someone will eventually build one if nothing is done to stop it. Oracle AI doesn’t seem to address that: one particular AGI that doesn’t itself destroy the world doesn’t automatically save the world. Or is the intention to ask the Oracle how best to stop unfriendly AI and/or build friendly AI? In that case it would be important to determine whether those questions and their sub-questions can be asked safely, but why would comparatively unimportant other questions, e.g. ones that merely save a few million lives, even matter?