Your examples are all missing either the ‘self’ aspect or the ‘recursive’ aspect. See Intelligence Explosion for an actual example of recursive self-modification, or for a longer explanation of recursive self-improvement, this post.
I found those links posted above interesting.
I concede that the human learning process is nowhere near as explosive as the self-modifying AI processes of the future will be, but I was speaking to a different point:
Eliezer said: “I’d be pretty doubtful of any humans trying to do recursive self-modification in a way that didn’t involve logical proof of correctness to start with.”
I am arguing that humans do recursive self-modification all the time, without “proofs of correctness to start with” - even to the extent of developing gene therapies that modify our own hardware.
I fail to see how human learning is not recursive self-modification. All human intelligence can be thought of as deeply recursive. A playFootBall() function certainly calls itself repeatedly until the game is over. A football player certainly improves at football by repeatedly playing football. As skill sets develop, human software (and its instantiation) is being self-modified through the development of new neural networks and muscles (e.g., marathon runners have physically larger hearts). Arguably, hardware is being modified via epigenetics (phenotypes changing within narrow ranges of potential expression). As a species, we are definitely exploring genetic self-modification. A scientist who injects himself with a gene-based therapy is self-modifying his own hardware.
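To make the analogy concrete, here is a minimal sketch (the function names and the 10% growth rate are my own illustrative assumptions, nothing standard): a function that merely calls itself, next to a loop in which each pass changes the player doing the playing.

```python
def play_football(plays_left):
    # Plain recursion: the function calls itself until the game is over,
    # but its own definition never changes between calls.
    if plays_left == 0:
        return
    play_football(plays_left - 1)

def gain(skill):
    # Hypothetical assumption: each season's improvement scales with
    # current skill, which is what makes the process compound.
    return 0.1 * skill

def train(skill, seasons):
    # Self-modification in the sense argued above: each season changes
    # the player (new neural wiring, a bigger heart), and the improved
    # player is the one who plays the next season.
    for _ in range(seasons):
        skill += gain(skill)
    return skill

play_football(4)           # runs and terminates; the function never improved
print(train(1.0, 10))      # ~2.59 after ten seasons of compounding 10% gains
```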
We do all these things while foregoing proofs of correctness, and yet we still make improvements. I don’t think we should ignore the possibility of an AI that destroys the world, and I am very happy that some people are pursuing a guarantee that it won’t happen. But it is worth noting that the process that will lead to provably friendly AI seems very different from the one that leads to not-necessarily-so-friendly humans and human society.
You will be right about it being genuine recursive self-modification when genetics advances sufficiently that a scientist discovers a gene therapy that confers a significant intelligence advantage, and she takes it herself so that she can more effectively discover even more powerful gene therapies. We’re not there yet, not even remotely close, and we’re even further away when it comes to epigenetics.
Your football example is not recursive self-modification, but the genetics examples would be if they actually came to pass. You’re right that if it happened, it would happen without a proof of correctness. The point is not that it’s impossible without a proof of correctness, but that it’s irresponsibly dangerous. If a single individual recursively self-improved his intelligence to the point that he could easily and thoroughly dominate the entire world economy, how much more dangerous would it be for a radically different kind of intelligence to reach that level at a rate of increase that is orders of magnitude greater? It depends on the kind of intelligence; in particular, unless we want to just “hope for the best” and see what happens, it depends on what we can prove about that particular kind of intelligence. Wanting a proof is just a way of saying that we want to actually know how it will turn out, rather than hope and pray or rely on vague, gap-filled arguments that may or may not turn out to be correct. That’s the point.
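As a toy illustration of why the rate matters (my numbers, not anything from the thread): if capability feeds back into its own growth, the doubling time shrinks in direct proportion to the feedback constant,

\[
\frac{dc}{dt} = k\,c \quad\Rightarrow\quad c(t) = c_0 e^{kt}, \qquad t_{\text{double}} = \frac{\ln 2}{k},
\]

so an optimizer whose k is a thousand times a human’s compresses roughly a decade of compounding into a few days.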