You will be right about it being genuine recursive self-modification when genetics advances sufficiently that a scientist discovers a gene therapy that confers a significant intelligence advantage, and she takes it herself so that she can more effectively discover even more powerful gene therapies. We’re not there yet, not even remotely close, and we’re even further away when it comes to epigenetics.
Your football example is not recursive self-modification, but the genetics examples would be if they actually came to pass. You're right that if it happened, it would happen without a proof of correctness. The point is not that it's impossible without a proof of correctness, but that it's irresponsibly dangerous without one. If a single individual recursively self-improved his intelligence to the point that he could easily and thoroughly dominate the entire world economy, how much more dangerous would it be for a radically different kind of intelligence to reach that level at a rate of increase orders of magnitude greater? It depends on the kind of intelligence. In particular, unless we want to just "hope for the best" and see what happens, it depends on what we can prove about that particular kind of intelligence. Wanting a proof is just a way of saying that we want to actually know how it will turn out, rather than hope and pray or rely on vague, gap-filled arguments that may or may not turn out to be correct. That's the point.