I don’t really understand your greater argument. Inaction (e.g. sitting on Earth, not pursuing AI, not pursuing growth) is not morally neutral. By failing to act, we’re risking suffering in various ways: insufficiency of resources on the planet, political and social problems, or a Singularity perpetrated by actors who are not acting in the interest of humanity’s values. All of these could potentially result in the non-existence of all the future actors we’re discussing. That’s got to be first and foremost in any discussion of our moral responsibility toward them.
We can’t opt out of shaping the universe, so we ought to do as good a job as we can, as per our values. The more powerful humanity is, the more options are open to us, and the better placed our descendants are to re-evaluate our choices and further steer our future.
The argument is about action. We forbid inbreeding because it causes suffering in future generations. Now, if there is no way that the larger future could be desirable, i.e. if suffering prevails, then I ask: how many entities have to suffer before we forbid humanity to seed the universe? What is your expected number of entities born after 10^20 years who’ll face an increasing lack of resources until the end at around 10^100 years? All of them are doomed to face a future that might be shocking and undesirable. That is not a small part of the future but most of it.
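To put a rough number on “most of it”, here is a minimal back-of-envelope sketch in Python. The 10^20 and 10^100 year figures are the ones assumed above; the implicit assumption that minds exist at anything like a constant rate over that span is mine, added purely for illustration.

from fractions import Fraction

# Figures from the thread, taken at face value (assumptions, not facts):
growth_era_end = Fraction(10) ** 20    # years until resources start running out
heat_death     = Fraction(10) ** 100   # years until the end

# Exact rational arithmetic: with floats, 1 - 1e-80 would round to 1.0.
pre_decline_share = growth_era_end / heat_death
print(float(pre_decline_share))        # 1e-80: share of the timeline before decline

# So all but ~10^-80 of the timeline lies after the 10^20-year mark; if
# entities are born at anything like a constant rate, "most of it" is an
# understatement.

The point of the sketch is only that, on these assumed figures, the resource-declining era dominates the timeline by eighty orders of magnitude.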
The more powerful humanity is, the more options are open to us, and the better for our descendants to re-evaluate our choices and further steer our future.
But what grounds are there for believing in our future ability to stop entropy?
If we can’t stop entropy, then we can’t stop entropy, but I still don’t see why our descendants should be less able to deal with this fact than we are. We appreciate living regardless, and so may they.
Surely posthuman entities living at the 10^20 year mark can figure out much more accurately than us whether it’s ethical to continue to grow and/or have children at that point.
As far as I can tell, the single real doomsday scenario here requires four conditions: (1) posthumans are no longer free to commit suicide; (2) they nevertheless continue to breed; (3) heat death is inevitable; and (4) life in a world with ever-decreasing resources is a fate worse than death. That would be pretty bad, but the first and last seem to me unlikely enough, and all four conditions are inscrutable enough from our limited perspective, that I don’t see a present concern.