I respectfully suggest that you are not thinking Weirdly enough. Notice that the evolved circuit still has structure that the laws of physics understand! An AI needn’t operate directly at that level to make intuitive leaps beyond the capability of any human; and needn’t operate by trial and error, precisely, (although, notice, we don’t know the internal structure of generating an insight in human brains; for all we know it involves subconsciously trying a hundred different paths and discarding most of them) to generate stuff that’s very different from the code it starts with.
I respectfully suggest that you are not thinking Weirdly enough.
Thinking Weirdly has nothing to do with it. I expect an AI not to use programming techniques it doesn’t expect to be able to use effectively, and I expect the AI’s expectations about which techniques it can use effectively to be accurate. So, given that an AI is using a technique, even a Weird one, I expect the AI to use it effectively. If you have an argument that a certain technique, like random guessing and checking, has insurmountable problems, then you have an argument that an AI will not use that technique. Given that an AI is using a Weird technique, I expect the AI to be advanced enough to cope with, and benefit from, the Weirdness.
Notice that the evolved circuit still has structure that the laws of physics understand!
The laws of physics can only “understand” the structure in a poetic sense. When I say that an AI understands the structure of its code, I mean that it has a map of the code, organized into logical components, with information (not required to actually run the program) about the high-level properties each component has and how other components rely on those properties, and that this information is available and useful when modifying the code in good ways.
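To make the picture concrete, here is a minimal sketch of the kind of map I mean. Everything here is illustrative, not a claim about any actual AI’s internals: components are annotated with high-level properties and with which properties of other components they rely on, and that annotation (never needed to run the program) is what makes a proposed modification checkable.

```python
from dataclasses import dataclass, field


@dataclass
class Component:
    name: str
    # High-level properties this component guarantees, e.g. "output_sorted".
    properties: set[str] = field(default_factory=set)
    # Properties of *other* components that this one relies on,
    # keyed by the other component's name.
    relies_on: dict[str, set[str]] = field(default_factory=dict)


@dataclass
class CodeMap:
    components: dict[str, Component] = field(default_factory=dict)

    def add(self, c: Component) -> None:
        self.components[c.name] = c

    def safe_to_drop_property(self, name: str, prop: str) -> bool:
        # A modification that stops `name` from guaranteeing `prop` is safe
        # only if no other component relies on that guarantee.
        return not any(
            prop in c.relies_on.get(name, set())
            for c in self.components.values()
        )


# Illustrative example: a sorter guarantees sorted output,
# and a binary search relies on that guarantee.
m = CodeMap()
m.add(Component("sorter", properties={"output_sorted"}))
m.add(Component("search", relies_on={"sorter": {"output_sorted"}}))
print(m.safe_to_drop_property("sorter", "output_sorted"))  # False: search depends on it
```

The point of the sketch is that the dependency information is what lets a modifier verify a change is good without re-deriving everything from raw code, which is exactly what the laws of physics do not have.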
An AI needn’t operate directly at that level to make intuitive leaps beyond the capability of any human; and needn’t operate by trial and error, precisely, (although, notice, we don’t know the internal structure of generating an insight in human brains; for all we know it involves subconsciously trying a hundred different paths and discarding most of them) to generate stuff that’s very different from the code it starts with.
It doesn’t matter if the AI makes leaps beyond the capability of any human, as long as it doesn’t make leaps beyond its own capability. You seem much more eager to apply Weirdness to the difficulty of the problem than to the capability of solving it. It doesn’t matter that understanding the AI’s Weird code is too hard for humans, because it’s not too hard for the Weird AI. The AI may “generate stuff that’s very different from the code it starts with”, but it won’t generate anything so different that it can’t verify the change is a good one.