Only in the sense that any working Oracle can be trivially transformed into a Genie. The argument doesn't claim it's difficult to construct a non-Genie Oracle and use it as an Oracle if that's what you want; that difficulty arises for other reasons.
Nick Bostrom takes Oracles seriously, so I dust off the concept every year and take another look at it. It's been looking slightly more solvable lately, though I'm not sure it would be solvable enough even if the trend continued.
A clarification: my point was that denying orthogonality requires denying the possibility of Oracles being constructed; your post seemed a rephrasing of that general idea (that once you have a machine that can solve some things abstractly, you need only connect that abstract ability to some implementation module).
Ah, OK. It does seem to me like "you can construct it as an Oracle and then turn it into an arbitrary Genie" sounds weaker than "denying the Orthogonality thesis means superintelligences cannot know 1, 2, and 3." The sort of person who denies OT is liable to deny Oracle construction on the grounds that the Oracle itself would be converted to the true morality, but would find it much more counterintuitive that an SI could not know something. Also, we want to focus on the general shortness of the gap from epistemic knowledge to a working agent.
Possibly. I think your argument needs to be developed a bit more to show that one can extract the knowledge usefully, which is not a trivial claim for a general AI. So your argument is stronger in the end, but needs more work to establish.