Along with what Raemon said: though I expect we'll probably grow far beyond any Earth species eventually, if we're characterizing evolution as having a reasonable utility function, then I think there's the issue of other possibilities it would prefer even more.
Like, evolution would, if it could, choose for humans to be far more focused on reproducing, and we would expect that, if we didn't put in counter-effort, our partially-learned approximations ('sex is enjoyable', 'having a family is good', etc.) would get increasingly tuned for the common environments.
Similarly, if we end up with an almost-aligned AGI that has some value which extends to 'filling the universe with as many squiggles as possible' (because that value doesn't fall off quickly), but also has another, more easily saturated value of 'caring for humans', then we end up with some resulting tradeoff between the two: (for example) a dozen solar systems with a proper utopia set up.
This is better than the case where we don't exist, similar to how evolution 'prefers' humans to no life at all. It is also perhaps preferable to the worlds where we lock down enough to never build AGI, similar to how evolution prefers humans reproducing across the stars to never spreading. It isn't the most desirable option, though. Ideally, we get everything, just as evolution would prefer space algae reproducing across the cosmos.
There's also room for uncertainty in there: even if we get the agent loosely aligned internally (which is still hard...), it has a lot of room between 'nothing', 'a planet', and 'the entirety of the available universe' to give us. That's similar to how humans have a lot of room between 'negative utilitarianism', 'basically no reproduction past some point', and 'reproduce all the time' to choose from / end up in. There are also perturbations of that, where we don't get a full utopia from a partially-aligned AGI, or where we design new people from the ground up rather than having them be notably genetically related to anyone.
So this is a definite mismatch—even if we limit ourselves to reasonable bounded implementations that could fit in a human brain. It isn’t as bad a mismatch as it could have been, since it seems like we’re on track to ‘some amount of reproduction for a long period of time → lots of people’, but it still seems to be a mismatch to me.