Again thanks for the link. Just a note that “Memetic Computing”, while it might accept articles on memetics proper, is mostly about “memetic algorithms” which, as Edmonds says, have little to do with memetics.
Memetic algorithms have “little to do with memetics”?!? Huh? Of course they do. Memetic algorithms are cultural evolution in silico. As I say in my book:
Memetic algorithms will probably prove to be the most important application of memetics in the long term.
It sounds odd, but it’s true. So-called memetic algorithms are just genetic algorithms with extra local search, and some variations. The difference between genetic algorithms and memetic algorithms doesn’t capture any of the difference between genes and memes. That is my opinion and the opinion of quite a few researchers I know who work in evolutionary computation and, obviously, Edmonds.
The link to memetics comes directly from the field’s founder, in the paper that started it all:

Pablo Moscato (1989) On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts—Towards Memetic Algorithms.

The paper discusses at some length how Dawkins and memes inspired the approach in section 5 (“Towards Memetic Algorithms”). I don’t really know what you are talking about by apparently denying the link.
Memetic algorithms are important, since the process that leads to superintelligent machines will inevitably work more like cultural evolution than plain organic evolution. Memetic computing is one of the main fields that study such algorithms.
I’ve read some memetic algorithm papers, but I had never read this one. You did me a favour by pointing me to the right section of the 68 pages :)
Now, I don’t deny that Moscato and other MA people were inspired by memetics and Dawkins. What I’m saying is the difference between GAs and MAs fails to correspond to the difference between genes and memes. I’m not alone: p. 19 says “The GA community would like to say that MA are only a special kind of GA with local hill-climbing.” That is what I said before.
The essential difference between MAs and GAs is the extra local search. This is supposed to be analogous to the ability of a martial arts master to make directed, rather than random, changes to his/her memes (genetic mutations are random). There are two problems with this:
In MAs, the hill-climbing does, in all cases I have seen, boil down to using random mutations and discarding the bad ones. (How else would we do hill-climbing, on a black-box fitness function?) So the changes are not really directed any more than GA mutations are directed, when the GA uses selection.
I think there is a lot more to the idea of memes than just directed changes. What about their non-particulate nature, in Dawkins’ phrase? That has no analogue in MAs that is not already in GAs. What about their weird method of combination, which is more like compounding than crossover? That has no analogue in MAs, as far as I know.
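To make the first problem concrete, here is a minimal sketch of the kind of local search MAs typically bolt onto a GA: random mutation plus rejection of worsening moves. The function names and the toy fitness function are illustrative, not taken from any MA library.

```python
import random

def hill_climb(solution, fitness, mutate, steps=100):
    """Local search as typically used in MAs: propose a random
    mutation, keep it only if fitness does not get worse."""
    best, best_fit = solution, fitness(solution)
    for _ in range(steps):
        candidate = mutate(best)        # random, undirected change
        cand_fit = fitness(candidate)
        if cand_fit >= best_fit:        # discard the bad ones
            best, best_fit = candidate, cand_fit
    return best

# Toy run: maximise -(x - 3)^2 over real x, starting from 0.
random.seed(0)
result = hill_climb(
    0.0,
    fitness=lambda x: -(x - 3.0) ** 2,
    mutate=lambda x: x + random.gauss(0.0, 0.5),
)
```

Nothing in the loop is any more “directed” than a GA’s own mutate-and-select cycle; the only difference is that selection happens within one individual rather than across the population.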
Are there some other features that distinguish MAs from GAs? Moscato mentions non-“genetic” representations, meaning non-linear ones. Those have been common in GAs for a long time: I use tree-based, grammatical, and real-valued representations all the time. Even if (to use the example in the paper) two-dimensional representations were common in MAs and unheard of in GAs, memes are not more accurately represented by matrices of bits than by lists of bits. Neither representation is adequate for the almost unrepresentable space occupied by memes.
Any other distinguishing features? Cooperative versus competitive coevolution (p. 20)? Both are common in GAs and, more importantly, common in real-world genetic evolution.
The essential difference between MAs and GAs is the extra local search. This is supposed to be analogous to the ability of a martial arts master to make directed, rather than random, changes to his/her memes (genetic mutations are random). There are two problems with this:
In MAs, the hill-climbing does, in all cases I have seen, boil down to using random mutations and discarding the bad ones. (How else would we do hill-climbing, on a black-box fitness function?)
That is an easy question. A “black-box fitness function” doesn’t mean the function is completely unknown. One is allowed to presume Occam’s razor. That means that a range of techniques are likely to work when designing the next generation of trials: linear interpolation, extrapolation, Fourier analysis of the fitness landscape, and so on. You can also keep historical records of notable past successes and failures to help guide your search, use inductive inference, and take advantage of the rest of the standard scientific toolkit.
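As one cartoon of using past records rather than blind mutation: fit a parabola through the three best evaluations seen so far and jump straight to its predicted optimum. This is only a toy sketch of surrogate-model search; the function name and setup are mine, not from any MA paper, and it assumes three distinct x values in the history.

```python
def model_guided_step(history, fitness):
    """One 'directed' trial: fit a quadratic through the three best
    past evaluations (x, f(x)) and move to its predicted vertex,
    instead of mutating at random."""
    (x1, y1), (x2, y2), (x3, y3) = sorted(
        history, key=lambda p: p[1], reverse=True)[:3]
    # Coefficients of the parabola y = a*x^2 + b*x + c through the points.
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3 ** 2 * (y1 - y2) + x2 ** 2 * (y3 - y1) + x1 ** 2 * (y2 - y3)) / denom
    x_new = -b / (2 * a) if a != 0 else x1  # vertex, if the fit is curved
    history.append((x_new, fitness(x_new)))
    return x_new
```

When the underlying landscape really is locally smooth, a single step like this lands where pure random mutation would take many trials to reach.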
I think there is a lot more to the idea of memes than just directed changes. What about their non-particulate nature, in Dawkins’ phrase? That has no analogue in MAs that is not already in GAs.
Well, that’s because there are already analog genetic algorithms—or at least real-valued ones—which are about as “non-particulate” as you can get while remaining inside a digital computer.
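For concreteness, here is what the two kinds of variation look like side by side; a real-valued genome drifts by small continuous shifts rather than discrete flips. An illustrative sketch, not taken from any particular GA library:

```python
import random

def mutate_bits(genome, rate=0.05):
    """Particulate variation: each gene is a discrete bit that
    either flips whole or stays put."""
    return [1 - g if random.random() < rate else g for g in genome]

def mutate_reals(genome, sigma=0.1):
    """Real-valued variation: every gene shifts by a small continuous
    amount, about as 'non-particulate' as a digital computer allows."""
    return [g + random.gauss(0.0, sigma) for g in genome]
```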
To simulate cultural evolution some of the more important things you need are individual learning and social learning in a population. Much depends on how good your learning algorithms are. Yes, there are other aspects of cultural evolution—but if we knew how to reproduce them all in machines, we would have advanced machine intelligence by now. Today’s memetic algorithms are necessarily a work in progress. However, the goal of simulating cultural evolution—and taking advantage of its obvious power—was the aim from the very beginning.
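A minimal sketch of that population loop, with both kinds of learning in place (all names are illustrative; real systems differ mainly in the quality of the learning algorithms, which is where the difficulty lives):

```python
import random

def evolve_culturally(population, fitness, mutate, generations=50):
    """Toy cultural-evolution loop combining:
    - individual learning: each agent tries a private variation and
      keeps it only if it is an improvement;
    - social learning: each agent may imitate a better-scoring agent."""
    for _ in range(generations):
        # Individual learning (trial and error).
        for i, agent in enumerate(population):
            trial = mutate(agent)
            if fitness(trial) > fitness(agent):
                population[i] = trial
        # Social learning (imitation of a randomly met, fitter agent).
        for i, agent in enumerate(population):
            model = random.choice(population)
            if fitness(model) > fitness(agent):
                population[i] = model
    return max(population, key=fitness)
```

The imitation step is what spreads each agent’s private discoveries through the population: cultural evolution’s power, in miniature.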