I’ve been wondering what the existence of gene networks tells us about recursively self-improving systems. Edit: Not that self-modifying gene networks are RSIs, but the question is “Why aren’t they?” In the same way that failed attempts at flying machines tell us something, but not much, about what flying machines are not. End Edit
They are the equivalent of logic gates, and they have the potential for self-modification and reflection, what with DNA’s ability to make enzymes that chop it up, and to do so selectively.
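To make the logic-gate analogy concrete, here is a minimal sketch (hypothetical gene names, with expression reduced to a boolean approximation) of a gene regulatory network treated as a set of logic gates: each gene’s next state is a logic function of its regulators’ current states.

```python
# A gene regulatory network approximated as a boolean network:
# each gene is ON or OFF, and its next state is a logic function
# of its regulators' current states. Gene names are hypothetical.

def step(state):
    """One synchronous update of the whole network."""
    return {
        # geneA is induced by an external signal (held constant here)
        "geneA": state["signal"],
        # geneB behaves like an AND gate over geneA and geneC
        "geneB": state["geneA"] and state["geneC"],
        # geneC behaves like a NOT gate: geneB represses it
        "geneC": not state["geneB"],
        "signal": state["signal"],
    }

state = {"signal": True, "geneA": False, "geneB": False, "geneC": True}
for _ in range(4):
    state = step(state)
    print(state)
```

The network computes, but note that nothing in it rewrites the update rules themselves; that is the gap between implementing logic and self-modification.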
So you can possibly use them as evidence that low-complexity, low-memory systems are unlikely to RSI. How complex they get and how much memory they have, I am not sure.
It seems like in gene networks, every logic gate has to evolve separately, and those restriction enzymes you mention barely do anything but destroy foreign DNA. That’s less self-modification potential than the human brain has.
The inability to create new logic gates is what I meant by the systems having low memory: in this case, low memory for storing programs.
Restriction enzymes also play a role in the insertion of plasmids into genomes.
An interesting question is: if I told you about a computer model of evolution with things like plasmids and controlled mutation/recombination, would you expect it to be potentially dangerous?
I’m asking this to try to improve our thinking about what is and isn’t dangerous: to try to improve upon the knee-jerk “everything we don’t understand is dangerous” opinion that you have seen.
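For concreteness, a toy sketch of the kind of model in question; every parameter here is invented for illustration. A population of bit-string genomes evolves under selection, mutation, and occasional plasmid-style horizontal transfer of a segment between individuals.

```python
import random

random.seed(0)
GENOME_LEN = 32
POP_SIZE = 50

def fitness(genome):
    # Toy objective: count of 1-bits (a stand-in for any selective pressure).
    return sum(genome)

def mutate(genome, rate=0.01):
    # Flip each bit independently with a small, externally fixed probability.
    return [b ^ (random.random() < rate) for b in genome]

def plasmid_transfer(recipient, donor, length=8):
    # Horizontal transfer: copy a random segment from donor into recipient.
    start = random.randrange(GENOME_LEN - length)
    return recipient[:start] + donor[start:start + length] + recipient[start + length:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP_SIZE // 2]
    children = []
    for parent in survivors:
        child = mutate(parent)
        if random.random() < 0.1:  # occasional plasmid exchange
            child = plasmid_transfer(child, random.choice(survivors))
        children.append(child)
    pop = survivors + children

print("best fitness:", fitness(max(pop, key=fitness)))
```

One thing the sketch makes visible: the mutation and transfer rates are fixed from outside the system, so nothing in the loop improves its own search machinery. That seems relevant to the danger question.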
Well, I’m not familiar enough with controlled mutation to be able to say anything useful about it.
I wonder if the distinction between self-modification and recursive self-improvement is one of those things that requires a magic gear to get, and otherwise can’t be explained by any amount of effort.
I understand there is a distinction. Would you agree that RSI systems are conceptually a subset of self-modifying (SM) systems, a subset where we don’t yet understand exactly which properties make an SM system one that will RSI? Could you theoretically say why EURISKO didn’t RSI?
I was interested in how big a subset it is. The bigger it is, the more dangerous, because the more easily we will find it.
Sure. In fact, some of the Lenat quotes on LW even tell you why.
As a hack to defeat ‘parasitic’ heuristics, Lenat (& co.?) put into Eurisko a ‘protected kernel’ which couldn’t be modified. This core was not good enough to get everything going, dooming Eurisko from a seed-AI perspective, and the heuristics never got anywhere near the point where they could bypass the kernel. Eurisko was inherently self-limited.
It seems to me that for SM to become RSI, the SM has to be able to improve all the parts of the system that are used for SM, without leaving any “weak links” to slow things down. Then the question is (slightly) narrowed to what exactly is required to have SM that can improve all the needed parts.
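A toy illustration of the weak-link point (purely schematic; nothing here is Eurisko’s actual architecture): if every round of self-modification is mediated by an unmodifiable kernel, the rate of improvement is throttled by the kernel’s fixed quality, while a fully self-modifiable system compounds on its own gains.

```python
# Schematic model of self-improvement with and without a fixed kernel.
# The quality scale and rates are invented; only the shape matters.

def improve(q, kernel_cap=None):
    """One round of self-modification. The gain scales with the quality
    of the machinery doing the modifying; a fixed kernel caps that quality."""
    machinery = q if kernel_cap is None else min(q, kernel_cap)
    return q + 0.1 * machinery * (1.0 - q)

with_kernel = without_kernel = 0.1
for _ in range(100):
    with_kernel = improve(with_kernel, kernel_cap=0.3)  # 'protected kernel'
    without_kernel = improve(without_kernel)            # every part improvable
print(round(with_kernel, 3), round(without_kernel, 3))
```

The capped run still improves, but past the cap its gains stop feeding back into the improver, which is the self-limitation described above.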
Does my edit make more sense now?
Sure, but the answer is very simple. Gene regulatory networks are not RSI because they are not optimization processes.
Intrinsically they aren’t optimization processes, but they seem computationally expressive enough for an optimization process to be implemented on them (in the same way that x86-arch computers aren’t intrinsically optimization processes, yet can implement one). And if you are a bacterium, that seems like something that would be evolutionarily beneficial, so I wouldn’t be surprised to find some optimization going on at the gene-network level. Whether it is enough to be considered a full optimization process, I don’t know; but if not, why not?
But they aren’t optimization processes. It doesn’t matter if they could implement one, they don’t. You might as well point to any X86 chip and ask why it doesn’t RSI.
I’m not talking about any specific gene network; I’m talking about the number and variety of gene networks that have been explored throughout evolutionary history. Do you know that none of them are optimisation processes? That one hasn’t popped up at least once?
To my mind it is like asking why a very, very large number of simple x86 systems (not just chips: they have storage), each with a different program whose details you don’t know, hasn’t RSI’d. Which I don’t think is an unreasonable question.
How many distinct bacterial genomes do you think there have been since the beginning of life? Consider that people estimate 10 million+ bacterial species alive today.
Some people have talked about the possibility of brute-forcing AGI through evolutionary means; I’m simply looking at a previous evolutionary search through computational-system space to get some clues.
A gene network optimizes the use of resources to make more copies of that gene network. It senses the environment, and its own operations, and adjusts what it is doing in response. I think it is an optimization process.
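Here is a sketch of that sense-and-adjust behaviour (all constants invented; this is the generic negative-feedback pattern, not a model of any particular organism): expression of an uptake gene is regulated so that an internal nutrient level is steered toward a setpoint.

```python
# Negative-feedback gene regulation as a tiny control loop.
# Constants are invented for illustration.

nutrient = 0.0     # internal nutrient level
expression = 0.5   # how strongly the uptake gene is expressed
SETPOINT = 1.0     # target nutrient level

for t in range(20):
    nutrient += 0.2 * expression      # expressed enzymes import nutrient
    nutrient *= 0.9                   # metabolism consumes some of it
    error = SETPOINT - nutrient
    # Regulation: raise expression when short, lower it when over target.
    expression = max(0.0, min(1.0, expression + 0.3 * error))

print(round(nutrient, 2), round(expression, 2))
```

Whether steering a variable toward a setpoint like this deserves the label “optimization process” is, of course, exactly the point under dispute.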
Evolution is stupid and optimization processes are complicated. Do you not think that’s an adequate explanation?
The question is: why did evolution get thus far and no further? Can you give an account that simultaneously explains both of the observed bounds? I suppose that some would be happy with “Sheer difficulty explains why evolution did no better, and anthropics explains why it did no worse.” But I don’t find that especially satisfying.
Evolution managed to make an optimisation process in our heads, but not one in anything’s genes, even though it has had a lot more time to work with genes. Why?
It is possibly worth noting that I am not talking about optimising proteins, but about the network that controls the activation of the genes. Protein folding is hard.
It may be that getting optimization into our heads was the easiest way to get it into our genes (eventually, when we master genetic engineering).
Possibly, but if you could link to your best efforts to explain it I’d be interested. I tried Google...
EDIT: D’oh! Thanks Cyan!
Shoulda tried the Google custom search bar: Recursive self-improvement.
You’re just lucky there’s no such thing as LMG(CSB)TFY. ;-)
Such things probably happen because effort spent on explaining quickly hits diminishing returns if the other person spends no effort on understanding.
They already are RSIs, if you believe in the evolution of evolvability, which you probably should. The probable evolution of DNA from RNA, of introns, and of sex are examples of the evolution of evolvability.
There are single-celled organisms that act intelligently despite not having (or being) neurons. The slime mold, for example.
A gene network is a lot like the brain of an insect in which the exact connectivity of every neuron is predetermined. However, its switching frequency is much slower.
More advanced brains have algorithms that can use homogeneous networks. That means that you can simply increase the number of neurons made, and automatically get more intelligence out of them.
Organisms have 600 to 20,000 genes. A honeybee has about a million neurons.
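A sketch of the scaling point (hypothetical sizes, and a random untrained layer standing in for a real learning rule): in a homogeneous architecture, capacity is a single parameter, whereas a circuit of individually evolved gates has no such knob.

```python
import random

def make_layer(n_neurons, n_inputs=4):
    """A homogeneous layer: every neuron runs the same rule, so scaling
    up is just a larger n_neurons. Weights are random stand-ins here."""
    return [[random.uniform(-1.0, 1.0) for _ in range(n_inputs)]
            for _ in range(n_neurons)]

def activate(layer, stimulus):
    # The same threshold rule is applied uniformly to every neuron.
    return [sum(w * s for w, s in zip(weights, stimulus)) > 0.0
            for weights in layer]

small = make_layer(10)
large = make_layer(10_000)   # same code; only the neuron count changes
print(sum(activate(small, [1, 0, 1, 0])),
      sum(activate(large, [1, 0, 1, 0])))
```

A gene network, by contrast, is more like hand-wired hardware: adding a “neuron” means evolving a new gene and its regulatory connections individually.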
What does “intelligence” mean here?
For context, some more about slime moulds. In that thesis is a detailed model of the whole life-cycle of the slime mould, using biochemical investigations and computer modelling to show how all the different stages and the transitions between them happen.
What does it mean, to say that this system is “intelligent”? The word is used for a very wide range of things, from slime moulds (and perhaps even simpler systems?) to people and beyond. What is being claimed when the same word is applied to all of these things?
Put in practical terms, does a detailed knowledge of exactly how the slime mould works help in constructing an AGI? Does it help in constructing more limited sorts of AI? Does it illuminate the investigation of other natural systems that fall within the concept of “intelligence”?
I am not seeing a reason to answer “yes” to any of these questions.
Yes, to all of those questions. I don’t think we currently have the AI technology needed to produce something with the intelligence of a slime mold. (Yes, we might be able to, if we gave it magical sensors and effectors, so that it just had to say “go this way” or “go that way”. Remember that the slime mold has to do all this by directing an extremely complex sequence of modifications to its cytoskeleton.) Therefore, having a detailed knowledge of how it did this, and the ability to replicate it, would advance AI.