I talked to one fellow about Go-playing AI last night and mentioned these Restricted Boltzmann Machines. If the Go problem can be cast as an image-processing problem, RBMs might be worth looking into: http://www.youtube.com/watch?v=AyzOUbkUf3M
Here is a more recent Google Tech Talk by Hinton on RBMs: http://www.youtube.com/watch?v=VdIURAu1-aU
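For what it's worth, here is a minimal sketch of what "treating a board position as an image" could look like. It assumes scikit-learn's BernoulliRBM and a made-up binary encoding of a 19x19 board (one plane for black stones, one for white); none of it comes from Hinton's talks, it is just to show the shape of the idea.

```python
# Toy sketch: feed Go positions to an RBM as flat binary "images".
# The board encoding and the random placeholder data are assumptions,
# not anything from the linked talks.
import numpy as np
from sklearn.neural_network import BernoulliRBM

BOARD = 19
N_FEATURES = 2 * BOARD * BOARD          # black plane + white plane, flattened

# Placeholder data: in practice these would be real game positions.
rng = np.random.default_rng(0)
positions = (rng.random((1000, N_FEATURES)) < 0.1).astype(np.float64)

rbm = BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20,
                   random_state=0)
rbm.fit(positions)

# Hidden-unit activations could then serve as learned features of a position.
features = rbm.transform(positions[:5])
print(features.shape)                   # (5, 256)
```

The hidden-unit activations would then play the role of learned features of a board position, which is roughly how RBM layers are used for images.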
Scott_Jackisch
I found those links posted above interesting.
I concede that the human learning process is nowhere near as explosive as the self-modifying AI processes of the future will be, but I was speaking to a different point:
Eliezer said: “I’d be pretty doubtful of any humans trying to do recursive self-modification in a way that didn’t involve logical proof of correctness to start with.”
I am arguing that humans do recursive self-modification all the time, without “proofs of correctness to start with” - even to the extent of developing gene therapies that modify our own hardware.
I fail to see how human learning is not recursive self-modification. All human intelligence can be thought of as deeply recursive. A playFootBall() function certainly calls itself repeatedly until the game is over. A football player certainly improves at football by repeatedly playing football. As skill sets develop, human software (and its instantiation) is being self-modified through the development of new neural networks and muscles (e.g., marathon runners have physically larger hearts). Arguably, hardware is being modified via epigenetics (phenotypes changing within narrow ranges of potential expression). As a species, we are definitely exploring genetic self-modification. A scientist who injects himself with a gene-based therapy is self-modifying his own hardware.
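To make the playFootBall() analogy concrete, here is a toy sketch (the function name and numbers are purely hypothetical, nothing more than an illustration): a routine that calls itself until the game is over while nudging its own "skill" state on every pass, i.e. recursion plus self-modification.

```python
# Hypothetical illustration of the analogy above: recursion that also
# modifies its own state as it goes, with no proof of correctness anywhere.
import random

def play_football(skill, plays_left=100):
    """Recursively play until the game is over, adjusting skill along the way."""
    if plays_left == 0:
        return skill                          # game over: return the improved skill
    success = random.random() < skill         # noisy outcome of a single play
    skill = min(1.0, skill + (0.01 if success else 0.002))  # practice effect
    return play_football(skill, plays_left - 1)

print(play_football(skill=0.3))               # skill level after one game of practice
```

Nothing here is proved correct in advance; the improvement just accumulates from repetition, which is the point being made above.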
We do all these things while foregoing proofs of correctness, and yet we still make improvements. I don’t think we should ignore the possibility of an AI that destroys the world, and I am very happy that some people are pursuing a guarantee that it won’t happen. I think it is worth noting, though, that the process that will lead to provably friendly AI seems very different from the one that leads to not-necessarily-so-friendly humans and human society.
We might say that humans as individuals do recursive self-modification when they practice a skilled task such as playing football or riding a bike. Coaches and parents might or might not be conscious of logical proofs of correctness when teaching those tasks. Arguably, a logical proof of (their definition of) correctness could be derived. But I am not sure that is what you mean.
Humans as a species do recursive self-modification through evolution. Correctness in that context is survival, and the part under human control is selecting mates. I would like to have access to those proofs. They might come in handy when dating.
“NS can at best only transfer information from the environment to the genome.” Does this statement mean to suggest that the environment is not complex?