I’m mildly shocked by Olle Häggström’s comment.
He contends that after we, these babies with a detonator in hand, miraculously got it all right, created an FAI, made it correctly understand morality, and spread moral stuff across over 99.9% of the universe, generating more good than was thought imaginable in all of human history by many, many orders of magnitude, we would be monsters to ask to keep our tiny galaxy for our own purposes.
What does that amount to? Saying that we can’t let the combination of (1) uncertainty over the whole project, (2) the possibility that the universe is infinite, so that our actions add only a finite amount of good and were not really morally relevant anyway, and (3) negotiating with the future, in the sense that we believe it more likely that we will correctly create FAI if our descendants permit us to take some resources from our galaxy to explore our posthuman potential, justify keeping that one galaxy for ourselves. If the universe is infinite, or if we still have moral uncertainty after the AI has thought a lot about this, or if we really are babies with an H-bomb in our hands who somehow managed to get it right, then it feels to me profoundly uncaring, even inhuman, not to give us the posthuman opportunity to play and celebrate in our garden.
I’m not so sure. I feel very uncertain about the question of how to aggregate utilities at such a scale. By your logic, if the universe is infinite, wouldn’t ANY finite action fail to matter?
Let me spell that out.
Either the universe is infinite, or it isn’t. Either morality is aggregative, or it isn’t. If the universe is infinite and morality is aggregative, then finite actions don’t make a difference.
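To make the arithmetic behind that explicit, here is a minimal sketch assuming one particular formalization (aggregative total value summed in the extended reals; other ways of handling infinite aggregation exist). Write $V$ for the total value of the universe without our action and $\Delta$ for the finite amount of good the action adds. If $V = \infty$, then

$$V' = V + \Delta = \infty + \Delta = \infty = V,$$

so under that formalization no action that adds only a finite amount of good ever changes the total, which is exactly the worry.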