« I don’t think that “reflective coherence” or “reflective consistency” should be considered as a desideratum in itself. » It is not a terminal value, but I still consider it a very useful “intermediate value”. The reason is that interacting with reality is often costly (in terms of time, resources, energy, risks, …), so doing an internal consistency check before going to experiment is a very useful heuristic. If your hypothesis/theory is not coherent or consistent with itself, it is very likely not to be true. If it is coherent, then it may be true or not, and you have to check against reality.
If a map of a city includes an Escher-like always-ascending staircase, I don’t even need to go to the place to say “hey, there is a problem”. If a designer claims to have made a perpetual motion machine, I don’t even need to build it to say “it won’t work”. So it would appear a good thing to build into an initial AI the constraint that it won’t perform costly/dangerous tests of a hypothesis that simply lacks reflective coherence.
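The Escher-staircase example can be made concrete: encode the map’s staircases as a directed “ascends-to” graph, and a staircase loop that always goes up becomes a cycle in that graph, detectable purely internally, without ever visiting the city. This is only an illustrative sketch of the heuristic, under the made-up assumption that the map is given as a list of (lower, upper) landmark pairs:

```python
def has_ascending_cycle(edges):
    """Return True if the directed 'ascends-to' graph contains a cycle.

    A cycle means an always-ascending loop (an Escher staircase):
    the map is internally inconsistent and can be rejected without
    any costly check against reality.
    """
    graph = {}
    for lower, upper in edges:
        graph.setdefault(lower, []).append(upper)

    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / done
    color = {}

    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            state = color.get(nxt, WHITE)
            if state == GRAY:                  # back edge: cycle found
                return True
            if state == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and visit(n) for n in list(graph))

# Consistent map: stairs ascend A -> B -> C. May still be wrong; go and check.
consistent = [("A", "B"), ("B", "C")]
# Escher map: stairs ascend A -> B -> C -> A, always going up. Reject outright.
escher = [("A", "B"), ("B", "C"), ("C", "A")]

print(has_ascending_cycle(consistent))  # False
print(has_ascending_cycle(escher))      # True
```

Note the asymmetry the comment describes: a `True` result settles the question for free, while a `False` result only earns the hypothesis the right to a real (costly) experiment.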