I can try. This is new thinking for me, so tell me if this isn’t convincing.
If a future is deep with respect to human progress so far, but not as deep with respect to all possible incompressible origins, then we are selecting for futures that in a sense make use of the computational gains of humanity.
These computational gains include such unique things as:
human DNA, which encodes our biological interests relative to the global ecosystem;
details, at unspecified depth, about the psychologies of human beings;
political structures, sociological structures, etc.
I’ve left largely unspecified which aspects of humanity should constitute the h term, but my point is that, to the extent that they represent the computationally costly process of biological and cultural evolution, including them makes them a precious endowment of high D(u/h) / D(u) futures. So at the very least they will be preserved in the ongoing computational dynamism.
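To make the notation explicit, here is a minimal sketch of the objective as I understand it, assuming D is something like Bennett’s logical depth and h is the humanity term:

```latex
% Minimal sketch of the proposed objective; D is assumed to be
% (something like) Bennett's logical depth, h the humanity term.
\[
  f(u) \;=\; \frac{D(u/h)}{D(u)}
\]
```

Here D(u) is the depth of a future state u computed from an incompressible description, D(u/h) is its depth with h supplied as an input, and the function favors futures with high f(u): deep with respect to human progress so far, but not correspondingly deep from scratch.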
Further, the kinds of computations that would increase that ratio are the sorts of things that look like the continuation of human history in a non-catastrophic way. To be concrete, consider an implementation that runs many Monte Carlo simulations of human history from now on, with differences in the starting conditions based on the granularity of the h term and with simulations of exogenous shocks. Cases where large sections of humanity have been wiped out or had no impact would be less desirable than those in which the full complexity of human experience was taken up and expanded on.
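For illustration only, here is a toy sketch of that implementation. Every name in it is a hypothetical stand-in: simulate_history is a placeholder for whatever world model is actually available, and because logical depth is uncomputable, the compression-based proxies below are crude surrogates for D(u) and D(u/h), not the real quantities.

```python
import random
import zlib

def depth_proxy(u: bytes) -> int:
    """Crude surrogate for D(u): the compressed size of the state.
    (Real logical depth is uncomputable; this is a complexity proxy.)"""
    return len(zlib.compress(u))

def conditional_proxy(u: bytes, h: bytes) -> int:
    """Crude surrogate for D(u/h): the extra compressed size u adds
    once h is already available. Noisy, but cheap to compute."""
    return len(zlib.compress(h + u)) - len(zlib.compress(h))

def simulate_history(seed: int, h_term: bytes, shock_rate: float) -> bytes:
    """Placeholder world model: evolve a state whose starting
    conditions are drawn from the h term, with random exogenous shocks."""
    rng = random.Random(seed)
    state = bytearray(h_term)
    for _ in range(10_000):
        i = rng.randrange(len(state))
        state[i] = (state[i] + rng.randrange(256)) % 256  # endogenous change
        if rng.random() < shock_rate:                     # exogenous shock
            state[rng.randrange(len(state))] = 0
    return bytes(state)

def monte_carlo_score(h_term: bytes, n_runs: int = 100) -> float:
    """Average the proxy ratio for D(u/h) / D(u) over many simulated
    histories with varied seeds and exogenous shocks."""
    ratios = []
    for seed in range(n_runs):
        u = simulate_history(seed, h_term, shock_rate=1e-3)
        ratios.append(conditional_proxy(u, h_term) / depth_proxy(u))
    return sum(ratios) / len(ratios)
```

Under these (strong) assumptions, a call like monte_carlo_score(b"stand-in for the h term" * 64) would give a single number with which to compare candidate encodings of h; the scoring of individual runs is where the penalty for wiped-out or no-impact histories would have to live.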
A third argument is that something like coherent extrapolated volition or indirect normativity is exactly the kind of thing that is favored by depth with respect to humanity but not by absolute depth. That’s a fairly weak claim, but one that I think could motivate friendly amendments to the original function.
Lastly, I am drawing on some other ethical theory here that is beyond the scope of this post. My own view is shaped heavily by Simone de Beauvoir’s The Ethics of Ambiguity, whose text can be found here:
http://www.marxists.org/reference/subject/ethics/de-beauvoir/ambiguity/
I think the function I’ve proposed is a better expression of existentialist ethics than consequentialist ethics.
“Further, the kinds of computations that would increase that ratio are the sorts of things that look like the continuation of human history in a non-catastrophic way.”
This is not obvious to me. I concur with Manfred’s point that “any solution that doesn’t have very good evidence that it will satisfy human values, will very likely not do so (small target in a big space).”
“To be concrete, consider an implementation that runs many Monte Carlo simulations of human history from now on, with differences in the starting conditions based on the granularity of the h term and with simulations of exogenous shocks.”
Why couldn’t they just scan everyone’s brain, then store the information on a big hard drive in a maximum-security facility while the robots wipe out every living person and start anew? Perhaps by doing that you would vastly increase resilience to exogenous shocks, making it preferable. And as for ‘using the computational gains of humanity’, that could just as easily be achieved by doing the opposite of what humans would have done.
Non-catastrophic with respect to existence, not with respect to “human values.” I’m leaving values out of the equation for now, focusing only on the problem of existence. If species suicide is on the table as something that might be what our morality ultimately points to, then this whole formulation of the problem has way deeper issues.
My point is that by starting anew without taking the computational gains into account, you are increasing D(u) efficiently and D(u/h) inefficiently, which is not favored by the objective function.
If there’s something that makes humanity very resilient to exogenous shocks until some later time, that seems roughly analogous to cryogenic freezing of the ill until future cures are developed. I think that still qualifies as maintaining human existence.
Doing the opposite of what humans would have done is interesting. I hadn’t thought of that.