What if asking the Oracle for the sum of 1+1 causes it to devote as many resources as possible to searching for an inconsistency arising from the Peano axioms?
If the Oracle we are talking about were, for the sake of the thought experiment, specifically designed to do that, then yes. But I don’t see why it would make sense to build such a device, or that it is even likely to be possible at all.
If Apple were going to build an Oracle, it would anticipate that other people would also want to ask it questions. It therefore can’t just waste all of its resources looking for an inconsistency arising from the Peano axioms when asked to solve 1+1. Nor would it devote additional resources to questions whose answers are already known to be correct with high probability. I just don’t see how it would be economically useful to take over the universe to answer simple questions.
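To make that resource policy concrete, here is a minimal Python sketch of the kind of behaviour I have in mind. Everything in it is hypothetical: the names, the confidence threshold, and the per-query budget are illustrative assumptions, not a claim about how any real Oracle would be implemented.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    value: object
    confidence: float  # subjective probability that value is correct

# Hypothetical, illustrative values -- not a proposal for real thresholds.
CONFIDENCE_THRESHOLD = 0.999  # "already known to be correct with high probability"
PER_QUERY_BUDGET = 5.0        # compute budget per question, in seconds

def respond(question, cache, solve_within_budget):
    """Return a cached answer when it is already trusted; otherwise
    spend at most a fixed budget -- never open-ended resources."""
    cached = cache.get(question)
    if cached is not None and cached.confidence >= CONFIDENCE_THRESHOLD:
        return cached  # no additional resources spent on settled questions
    fresh = solve_within_budget(question, PER_QUERY_BUDGET)
    cache[question] = fresh
    return fresh

# "1+1" is answered straight from the cache; nothing about the query
# triggers a search for inconsistencies in the underlying axioms.
cache = {"1+1": Answer(value=2, confidence=0.9999)}
print(respond("1+1", cache, lambda q, budget: Answer(None, 0.0)).value)  # -> 2
```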
I further do not think that it would be rational to look for an inconsistency arising from the Peano axioms while solving 1+1. To answer questions, an Oracle needs a good amount of general intelligence, and concluding that being asked to solve 1+1 implies searching for an inconsistency arising from the Peano axioms does not seem reasonable. It also does not seem reasonable to assume that humans want the answers to their questions to approach infinite certainty. Why would someone build such an Oracle in the first place?
I think a reasonable Oracle would quickly yield good solutions by trying to find, within a reasonable time, answers that are with high probability just 2–3% away from the optimal solution. I don’t think anyone would build an answering machine that throws the whole universe at the first sub-problem it encounters.
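A sketch of that stopping rule, again in Python and again purely illustrative: an anytime loop that keeps improving a candidate answer until its estimated distance from the optimum falls below a target (say 3%) or a time budget runs out, whichever comes first.

```python
import time

def anytime_solve(improve, initial, gap_estimate,
                  target_gap=0.03, time_budget=2.0):
    """Refine a candidate until it is estimated to be within target_gap
    of optimal, or until time_budget (seconds) expires -- whichever
    comes first. Never throws unbounded resources at the problem."""
    candidate = initial
    deadline = time.monotonic() + time_budget
    while gap_estimate(candidate) > target_gap and time.monotonic() < deadline:
        candidate = improve(candidate)
    return candidate

# Toy usage: refine an estimate of sqrt(2) until it is within 3% of exact.
newton_step = lambda x: 0.5 * (x + 2.0 / x)   # one Newton iteration
rel_gap = lambda x: abs(x * x - 2.0) / 2.0    # proxy for distance from optimum
print(anytime_solve(newton_step, 1.0, rel_gap))  # ~1.4167 after a few steps
```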