By the time of the singularity, no human will have any instrumental value. Everything any human can do, a nanotech robot AI can do better. No one will be able to signal usefulness or have anything to invest; we will all be instrumentally worthless.
If the singularity goes well at all, though, humanity will get its shit together and save everyone anyways, because people are intrinsically valuable. There will be no concern for the cost of maintaining or uplifting people, because it will be trivially small next to the sheer power we would have, and the value of saving a friend.
Don’t assume that everyone else will stay uncaring, once they have the capacity to care. We would save you, along with everyone else.
Downvotes for being unreasonably dramatic.
Rather than downvoting, how about trying to explain why “caring” is a universal value to someone who’s never experienced “caring”? How about trying to explain why, in all the design-space of posthuman optimization processes, I should bet that the one that gets picked is the one where “caring” applies to my sorry ass?
We have enough resources to feed and shelter the world right now, and we don’t. So saying that “once we have the resources to care, we will” seems like the sort of BS that our esteemed host warns us about—the assumption that just because something is all-powerful and all-wise, it will be all-good.
I grew up worshipping a Calvinist dick of a deity, so pull the other one.
And another thing:
people are intrinsically valuable
It’s all well and good for YOU to claim that people are intrinsically valuable; you probably have enough resources to avoid getting spit on and lectured about “bootstraps” when you say it. Some of us aren’t so lucky.
If whatever it is that gets to do the deciding is evaluating people based on their use as raw materials, things have gone horribly wrong. In fact, that’s basically the exact definition of “horribly wrong” that seems to be in common use around here. As a corollary, there’s a lot that’s wrong with the current state of affairs.
By the time of the singularity, no human will have any instrumental value. ... humanity will get its shit together and save everyone anyways, because people are intrinsically valuable.
It’s a tricky point: I expect humans (in their present form) will also have insignificant terminal value compared to other things that could be created. Whether contemporary humans remain in some form depends on how bad it is to discard the old humans, weighed against how much value gets lost to inefficiency by keeping (and improving) them. Given how much value could be created de novo, even a slight inefficiency might be more significant than any terminal value present humanity contributes. (Discarding humans won’t be a wrong outcome if it happens, because it will only happen if it turns out to be the better outcome, assuming an FAI.)
You’re wrong.