I wonder if the distinction between self-modification and recursive self-improvement is one of those things that requires a magic gear to get, and otherwise can’t be explained by any amount of effort.
I understand there is a distinction. Would you agree that RSI systems are conceptually a subset of self-modifying (SM) systems? One where we don’t understand exactly what properties make an SM system one that will RSI. Could you theoretically say why EURISKO didn’t RSI?
I was interested in how big a subset it is. The bigger it is, the more dangerous, and the more easily we will find it.
Sure. In fact, some of the Lenat quotes on LW even tell you why.
As a hack to defeat ‘parasitic’ heuristics, Lenat (& co.?) put into Eurisko a ‘protected kernel’ which couldn’t be modified. This core was not good enough to get everything going, dooming Eurisko from a seed AI perspective, and the heuristics never got anywhere near the point where they could bypass the kernel. Eurisko was inherently self-limited.
It seems to me that for SM to become RSI, the SM has to be able to improve all the parts of the system that are used for SM, without leaving any “weak links” to slow things down. Then the question is (slightly) narrowed to what exactly is required to have SM that can improve all the needed parts.
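A minimal toy sketch of that “weak link” point, assuming nothing about Eurisko’s actual code (all names and numbers below are made up): the heuristics can improve, but the scoring and tweaking machinery lives in a fixed kernel, so improvement only goes as far as that fixed machinery allows.

```python
# Toy sketch, hypothetical names only; NOT Lenat's actual architecture.
# A "self-improving" system whose scoring and tweaking machinery lives in a
# fixed, protected kernel: the heuristics improve, the kernel never does.
import random

random.seed(0)

# --- protected kernel: the system cannot rewrite these two functions ---
def kernel_score(heuristic):
    return -abs(heuristic["param"] - 42)       # fixed, fairly dumb objective

def kernel_tweak(heuristic):
    # fixed, tiny random steps
    return {"param": heuristic["param"] + random.uniform(-1, 1)}

# --- modifiable part: the pool of heuristics the kernel is allowed to change ---
heuristics = [{"param": 0.0}]

for _ in range(2000):
    best = max(heuristics, key=kernel_score)
    candidate = kernel_tweak(best)             # only the kernel can modify anything
    if kernel_score(candidate) > kernel_score(best):
        heuristics.append(candidate)

print(max(kernel_score(h) for h in heuristics))
# The heuristics do get better, but the rate and ceiling of improvement are set
# by kernel_tweak and kernel_score, which never improve: the unimprovable weak link.
```

The sketch is only meant to show that self-modification routed through any fixed component inherits that component’s limits, which seems to be the Eurisko situation described above.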
Does my edit make more sense now?
Sure, but the answer is very simple. Gene regulatory networks are not RSI because they are not optimization processes.
Intrinsically they aren’t optimization processes, but they seem computationally expressive enough for an optimization process to be implemented on them (in the same way that x86 computers aren’t intrinsically optimization processes, yet can implement one). And if you are a bacterium, it seems like it should be something that is evolutionarily beneficial, so I wouldn’t be surprised to find some optimization going on at the gene network level. Whether it is enough to be considered a full optimization process I don’t know, but if not, why not?
But they aren’t optimization processes. It doesn’t matter if they could implement one; they don’t. You might as well point to any x86 chip and ask why it doesn’t RSI.
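A toy sketch of the distinction being argued here (a made-up two-instruction “machine”, not any real ISA): the same substrate can host an optimizer, but whether anything gets optimized depends entirely on the program it actually runs.

```python
# Toy sketch of the expressiveness-vs-actuality distinction (hypothetical
# "machine", not a real ISA): the substrate can run an optimizer, but whether
# anything is optimized depends on the program it happens to be running.

def run(program, state, steps=100):
    """Tiny interpreter: each instruction is just a function from state to state."""
    for _ in range(steps):
        for instruction in program:
            state = instruction(state)
    return state

# Program A: a crude hill-climber on state["x"]; this is an optimization process.
def climb(state):
    for candidate in (state["x"] - 1, state["x"] + 1):
        if -(candidate - 7) ** 2 > -(state["x"] - 7) ** 2:
            state["x"] = candidate
    return state

# Program B: shuffles data back and forth; same substrate, optimizes nothing.
def swap(state):
    state["x"], state["y"] = state["y"], state["x"]
    return state

print(run([climb], {"x": 0, "y": 0}))   # {'x': 7, 'y': 0}: x climbed to the optimum
print(run([swap], {"x": 0, "y": 5}))    # {'x': 0, 'y': 5}: nothing improved, ever
```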
I’m not talking about any specific gene network, I’m talking about the number and variety of gene networks that have been explored throughout evolutionary history. Do you know that they all aren’t optimisation processes? That one hasn’t popped up at least once?
To my mind it is more like asking why a very, very large number of simple x86 systems (not just chips; they have storage), each with a different program whose details you don’t know, hasn’t RSI’d. Which I don’t think is an unreasonable question.
How many distinct bacterial genomes do you think there have been since the beginning of life? Consider that people estimate 10 million+ bacterial species are alive today.
Some people have talked about the possibility of brute forcing AGI through evolutionary means, I’m simply looking at a previous evolutionary search through computational system space to get some clues.
A gene network optimizes the use of resources to make more copies of that gene network. It senses the environment, and its own operations, and adjusts what it is doing in response. I think it is an optimization process.
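As a toy illustration of that sense-and-adjust loop (a single invented feedback rule, not a model of any real gene network; every constant below is made up):

```python
# Toy sketch of a sense-and-adjust loop; one invented feedback rule, not a
# model of any real gene network (all constants below are made up).

def regulate(metabolite, expression, set_point=1.0, gain=0.5):
    """Adjust expression of an importer gene based on the sensed metabolite level."""
    error = set_point - metabolite
    return max(0.0, expression + gain * error)   # induce when low, repress when high

metabolite, expression = 0.1, 0.2
for _ in range(50):
    expression = regulate(metabolite, expression)          # "sense and adjust"
    metabolite += 0.3 * expression - 0.2 * metabolite      # import minus consumption

print(round(metabolite, 3), round(expression, 3))   # settles near the set point
# The loop keeps the resource near a target level, but it never searches over or
# redesigns its own regulatory rule.
```

Whether a feedback loop like this deserves to be called a full optimization process is exactly the point in dispute above.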
Evolution is stupid and optimization processes are complicated. Do you not think that’s an adequate explanation?
The question is, Why did evolution get thus far and no further? Can you give an account that simultaneously explains both of the observed bounds? I suppose that some would be happy with “Sheer difficulty explains why evolution did no better, and anthropics explains why it did no worse.” But I don’t find that especially satisfying.
Evolution managed to make an optimisation process in our heads, but not one in anything’s genes. It had had a lot more time to work with genes as well. Why?
It is possibly worth noting that I am not talking about optimising proteins, but about the network that controls the activation of the genes. Protein folding is hard.
It may be that getting optimization into our heads was the easiest way to get it into our genes (eventually, when we master genetic engineering).
Possibly, but if you could link to your best efforts to explain it I’d be interested. I tried Google...
EDIT: D’oh! Thanks Cyan!
Shoulda tried the Google custom search bar: Recursive self-improvement.
You’re just lucky there’s no such thing as LMG(CSB)TFY. ;-)
Such things probably happen because effort spent on explaining quickly hits diminishing returns if the other person spends no effort on understanding.