You are missing the forest for the trees, over-focusing on simplistic binary distinctions.
Organism death results in loss of all brain information that wasn’t transmitted, so the emergence of sapiens and civilization (techno-culture) requires surpassing a key criticality where enough novel information about the world is transmitted to overcome the dissipative loss of death.
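To make that criticality concrete, here is a minimal toy recurrence (my own sketch, nothing load-bearing; `retained` and `innovation` are made-up parameters): knowledge only accumulates open-endedly once per-generation transmission gets close to lossless.

```python
# Toy model of the techno-cultural criticality: each generation keeps a
# fraction `retained` of its parents' knowledge (the rest dies with their
# brains) and adds `innovation` units of new knowledge.
def knowledge_trajectory(retained, innovation, generations=50, k0=1.0):
    k = k0
    history = [k]
    for _ in range(generations):
        k = retained * k + innovation  # lossy transmission, then discovery
        history.append(k)
    return history

# With lossy transmission, knowledge stalls at innovation / (1 - retained);
# that ceiling explodes as `retained` approaches 1 (e.g. once symbolic
# language lets most of what a generation learns actually get passed on).
print(knowledge_trajectory(0.5, 1.0)[-1])   # ~2.0: stuck at a low fixed point
print(knowledge_trajectory(0.99, 1.0)[-1])  # ~40 and still climbing toward 100
```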
Non-genetic communication channels to the next generation include epigenetics, parental teaching / imitation learning, vertical transmission of symbionts, parameters of the prenatal environment, hormonal and chemical signaling, bio-electric signals, and transmission of environmental resources or modifications created by previous generations, all of which can shape the conditions experienced by future generations.
Absent symbolic language, none of these are capable of transmitting significant general-purpose world knowledge, and thus are irrelevant to the techno-cultural criticality.
Regardless, the sharp left turn argument is wrong for an entirely different, more fundamental reason.
Humans are the most successful new species in history according to the utility function of evolution. The argument that there is some inner/outer alignment mismatch is obviously false for this simple reason. It is completely irrelevant that some humans use contraception, or whatever, because it has zero impact on the fact that humanity is off the charts successful according to the true evolutionary utility function.
Humans are the most successful new species in history according to the utility function of evolution.
Have you covered this before in more detail? This seems probably false to me at first glance (I’d expect some insects and plants to be more successful). Last I checked, fertility rates in industrialized nations were also sub-replacement. But I also haven’t thought about this that much.
Yes. I said “most successful new species in history according to the utility function of evolution”.
For example, there are about 100k to 300k chimpanzees (and similar numbers for gorillas, orangutans, etc.), compared to ~8 billion humans. So we are over 4 OOM more successful than related lineages.
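That OOM figure is just the log of the population ratio; a quick check using the numbers above (taking the upper end of the chimpanzee estimate):

```python
import math

humans = 8e9   # ~8 billion humans
chimps = 3e5   # upper end of the 100k-300k chimpanzee estimate

ratio = humans / chimps
print(f"{ratio:,.0f}x")            # ~26,667x more individuals
print(f"{math.log10(ratio):.1f}")  # ~4.4 orders of magnitude
```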
We are far and away the most successful recent species, and probably the most successful mammal ever. There are almost as many humans as there are bats (summing over all 1,400 known bat species). There are slightly more humans in the world than rats (summing over all 60 rat species), a much older, ultra-successful lineage.
By biomass, humans alone (a single species) account for almost half of the land mammal biomass, and our domesticated food sources account for the other half. Biomass is perhaps a better estimate of genetic replicator success, as larger animals have more cells and thus more gene copies.

We are the anomaly.
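For a rough sanity check on the biomass claim, here are ballpark figures in the spirit of the published global biomass census (Bar-On et al. 2018); treat the exact numbers as approximate, since estimates vary and put the human share anywhere from roughly a third to roughly half of land mammal biomass:

```python
# Approximate mammal biomass in gigatonnes of carbon (very rough figures).
humans    = 0.06
livestock = 0.10   # cattle, pigs, and other domesticated food sources
wild      = 0.007  # all wild mammals combined

total = humans + livestock + wild
print(f"humans:    {humans / total:.0%}")     # ~36%
print(f"livestock: {livestock / total:.0%}")  # ~60%
print(f"wild:      {wild / total:.0%}")       # ~4%
```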
Your argument indicates that humans are successful (by said metric) among mammals, but doesn’t address how that compares to insects. As I understand it, some insect species have both many more individuals and much more biomass than humans.
I think I get the issue here. You seem to be aggregating over the IGF of every human gene in the human population.

The sheer stupidity of our civilization and the rate at which we are hurtling towards extinction do imply that we are not ‘aligned’ to IGF, though, so I disagree.

I agree that conditional on humanity going extinct, the seeming success of our species by a genetic metric would only be a false success.

The unit of selection is the gene, not the species. Aggregating over a species is not a proxy for the success of a gene replicator; why not aggregate over apes, or over European-origin races, instead?
I’d also like to hear more about why “evolution” is modeled as HAVING a utility function, rather than just being the name we give to the results of variation and selection. And only then a discussion of what that function might be.
I don’t see how decision theory or VNM rationality applies to evolution, let alone what “success” might mean for a species as opposed to an individual conscious entity with semi-coherent goals.
The entire argument of the sharp left turn presupposes evolution has a utility function for the analogy to work, so arguing about that is tangential, but it’s pretty obvious you can model genetic evolution as optimizing for replication count (or inclusive fitness over a species). We have concrete computational models of genetic optimization already, so there is really no need to bring in rationality or agents; it’s just a matter of optimization functions.
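As a minimal sketch of what “optimization without agents” means here, a bare selection loop already optimizes replication count; the fitness values are arbitrary placeholders:

```python
import random

# Genotypes that replicate more take over the population. There is no agent,
# no utility function in the VNM sense, and no decision theory anywhere in
# this loop, yet it reliably optimizes for replication count.
fitness = {"A": 1.0, "B": 1.1, "C": 0.9}  # expected offspring per copy (made up)
population = ["A", "B", "C"] * 100

for _ in range(200):
    weights = [fitness[g] for g in population]
    population = random.choices(population, weights=weights, k=len(population))

print({g: population.count(g) for g in fitness})  # "B" dominates almost every run
```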
I think there’s a terminological mismatch here. Dagon was asking about a “utility function” as specifically being something satisfying the VNM axioms. But I think you’re using it (in this comment and the one Dagon was replying to) synonymously with the more general concept of an “optimization function”, i.e. a function returning some output that somehow gets optimized for?
Number of individuals is not a conserved quantity. If you’re going to score contests of homeostasis, do it by something like biomass (plants win), or how much space it takes up in a random encounter table, or how much attention aliens would need to pay to it to predict the planet’s future (humans win).
Absent symbolic language, none of these are capable of transmitting significant general-purpose world knowledge, and thus are irrelevant to the techno-cultural criticality.
It’s likely not literally true, but if it were … this proves my point, doesn’t it?
“Symbolic language” is exactly the type of innovation which can be discontinuous, is more of the type “code” than “data quantity”, and unlocks many other things. For example, more rapid and robust horizontal synchronization of brains (e.g. when hunting). Or, yes, a jump in the effective quantity of information transmitted via the other signals over time.
At the same time, the transition could be clearly discontinuous: you can teach actual apes sign language, and it seems plausible this would make them more fit, if done in the wild.
(It’s actually somewhat funny that Eric Drexler has a hundred-page report based exactly on the premise “AI models using human language is obviously stupid inefficiency, and you can make a jump in efficiency with a more native-architecture-friendly format”.
This premise does not seem obviously stupid: e.g., right now, if you want one model to transfer some implicit knowledge it learned, the way to do it is to have that model generate a shitload of human natural language examples and train the other model on them, building the native representation up again on the other side.)
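A pseudocode sketch of that round-trip, purely to illustrate the inefficiency being pointed at; `teacher`, `student`, `finetune`, and `prompts` are hypothetical placeholders, not any real API:

```python
def transfer_via_language(teacher, student, prompts):
    # 1. Squeeze the teacher's implicit, weight-space knowledge through the
    #    bottleneck of generated natural-language text (hypothetical API).
    corpus = [teacher.generate(p) for p in prompts]
    # 2. The student then grinds that text back into its own native,
    #    weight-space representation by training on it.
    return finetune(student, corpus)
```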
Along with what Raemon said (though I expect us to probably grow far beyond any Earth species eventually): if we’re characterizing evolution as having a reasonable utility function, then I think there’s the issue of other possibilities that it would find preferable.
Like, evolution would, if it could, choose humans to be far more focused on reproducing, and we would expect that, if we didn’t put in counter-effort, our partially-learned approximations (‘sex is enjoyable’, ‘having a family is good’, etc.) would get increasingly tuned for the common environments.
Similarly, if we end up with an almost-aligned AGI that has some value which extends to ‘filling the universe with as many squiggles as possible’ (because that value doesn’t fall off quickly), but also another, more easily saturated value of ‘caring for humans’, then we end up with some resulting tradeoff between them: (for example) a dozen solar systems with a proper utopia set up.
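A toy version of that tradeoff (the coefficients are arbitrary): give the AGI a saturating log term for ‘caring for humans’ and a linear term for squiggles, and the optimum hands humans a fixed, finite slice no matter how large the reachable universe is.

```python
def best_human_allocation(a, b, total_resources):
    """Maximize a*log(1 + h) + b*(total_resources - h) over 0 <= h <= total_resources.

    The first-order condition a/(1+h) = b gives h = a/b - 1, independent of
    the total; a and b are arbitrary illustrative weights.
    """
    return max(0.0, min(total_resources, a / b - 1))

for total in (1e2, 1e6, 1e12):
    print(total, best_human_allocation(a=13.0, b=1.0, total_resources=total))
# Humans get ~12 units (a "dozen solar systems") in every case; everything
# else goes to squiggles.
```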
This is better than the case where we don’t exist, similar to how evolution ‘prefers’ humans compared to no life at all. It is also maybe preferable to the worlds where we lock down enough to never build AGI, similar to how evolution prefers humans reproducing across the stars to never spreading. It isn’t the most desirable option, though. Ideally, we get everything, and evolution would prefer space algae to reproduce across the cosmos.
There’s also room for uncertainty in there, where even if we get the agent loosely aligned internally (which is still hard...), it can have a lot of room between ‘nothing’, ‘a planet’, and ‘the entirety of the available universe’ to give us. Similar to how humans have a lot of room between ‘negative utilitarianism’, ‘basically no reproduction past some point’, and ‘reproduce all the time’ to choose from / end up in. There’s also the perturbations of that, where we don’t get a full utopia from a partially-aligned AGI, or where we design new people from the ground up rather than them being notably genetically related to anyone.
So this is a definite mismatch—even if we limit ourselves to reasonable bounded implementations that could fit in a human brain. It isn’t as bad a mismatch as it could have been, since it seems like we’re on track to ‘some amount of reproduction for a long period of time → lots of people’, but it still seems to be a mismatch to me.