By analogy, I’d ask you to consider why it doesn’t make sense to try to “cooperate” with the process of evolution. Evolution can be thought of as an optimizer, with a “goal” of maximizing inclusive reproductive fitness. Why do we just try to help actual conscious beings, rather than doing some compromise between “helping conscious beings” and “maximizing inclusive reproductive fitness” in order to be more fair to evolution?
A few reasons:
The things evolution “wants” are terrible. This isn’t a case of “vanilla or chocolate?”; it’s more like “serial killing or non-serial-killing?”.
(The links I gave above argue that the same is true for a random optimizer.)
Evolution isn’t a moral patient: it isn’t a person, it doesn’t have experiences or emotions, etc.
(A paperclip maximizer might be a moral patient, but it’s not obvious that it would be; and there are obvious reasons for us to deliberately design AGI systems to not be moral patients, if possible.)
Evolution can’t use threats or force to get us to do what it wants.
(Ditto a random optimizer, at least if we’re smart enough to not build threatening or coercive systems!)
Evolution won’t reciprocate if we’re nice to it.
(Ditto a random optimizer. This is still true after you build an unfriendly optimizer, though not for the same reasons: an unfriendly superintelligence is smart enough to reciprocate, but there’s no reason to do so relative to its own goals, if it can better achieve those goals through force.)
I generally agree with Rob here (and I think it's more useful for ai-crotes to engage with Rob and read the relevant sequence posts). My comment here assumes some sophisticated background, including having read the posts Rob suggested.
But I'm not sure I agree with this paragraph as written. Some caveats:
I know at least one person who has made a conscious commitment to dedicate some of their eventual surplus resources (i.e. somewhere on the order of 1% of their post-singularity resources) to "try to figure out what evolution was trying to do when it created me, and do some of it" (e.g. create a planet covered in piles of DNA, create copies of themselves, etc.).
This is not because you can cooperate with evolution-in-particular, but as part of a general strategy of maximizing your values across universes, including simulations (i.e. Beyond Astronomical Waste). For example: "be the sort of agent that, if an engineer were whiteboarding out your decision-making, they could see that you robustly cooperate in appropriate situations, including if the engineers failed to give you the values they were trying to give you."
By being the sort of person who tries to understand what your creator was intending, and to help said creator as best you can, you get access to more multiverse resources (across all possible creators).
[My own current position is that this sounds reasonable, but I have tons of philosophical uncertainty about it, and my own current commitment is something like "I promise to think hard about these issues if given more resources/compute, and do the right thing." But the hope is that by committing to that explicitly rather than incidentally, you can show up earlier in lower-resolution simulations.]
I wasn't trying to make the case that one should try to cooperate with evolution, simply pointing out that alignment with evolution is reproduction, and we as a species are living proof that it's possible for intelligent agents to "outgrow" the optimizer that brought them into being.
I wasn’t bringing up evolution because you brought up evolution; I was bringing it up separately to draw a specific analogy.
Ah okay, I see now, my apologies. Gonna read the posts you linked in the reply above. Thanks for discussing (explaining, really) this with me.
Sure! :) Sorry if I came off as brusque; I was multi-tasking a bit.
No worries, thank you for clearing things up. I may reply again once I've read/digested more of the material you posted!