Isn’t there a “telomere shortening” effect, where an optimizer using a strong theory can’t prove good behavior of successors that use the same theory, and so will pick a successor with a weaker theory? Using logical inductors could help with that, but you’d need to spell it out in detail.
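For concreteness, here is a minimal sketch of the effect I have in mind, in the standard proof-based-successor setup (the notation, and the choice of consistency rather than a full reflection schema as the trust condition, are mine, not the post's). Suppose agent $k$ reasons in theory $T_k$ and only hands off to a successor whose theory it can verify:

\[
T_k \vdash \mathrm{Con}(T_{k+1}), \qquad \text{but} \qquad T_k \nvdash \mathrm{Con}(T_k) \quad \text{(G\"odel's second incompleteness theorem),}
\]

so each verified handoff forces a strictly weaker theory,

\[
T_0 \supsetneq T_1 \supsetneq T_2 \supsetneq \cdots .
\]

With a concrete tower such as $T^{(0)} = \mathrm{PA}$, $T^{(j+1)} = T^{(j)} + \mathrm{Con}(T^{(j)})$, an agent that starts at $T^{(n)}$ can verify at most $n$ successive handoffs before the chain bottoms out at the base theory. Presumably logical inductors avoid this because their self-trust guarantees are asymptotic and probabilistic rather than proof-theoretic, but that is exactly the step I'd want spelled out.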