If accepting this level of moral horror is truly required to save the human race, then I for one prefer paperclips. The status quo is unacceptable.
Perhaps we could upload humans and a few cute fluffy species humans care about, then euthanize everything that remains? That doesn’t seem to add too much risk?
I agreed up until the “euthanize everything that remains” part. If we actually get to the stage of having aligned ASI, there are probably other options with the same or better value. The “gradients of bliss” that I described in another comment may be one.
I think we should do what we can now (conservation efforts, wildlife reserves with rangers and veterinarians, etc.), build AGI and then ASI with as low an x-risk as we can, advance our civilization’s technology, and then address this problem once we have appropriate technology and ASI advice. If things go FOOM, this could be a soluble problem fairly soon, post-Singularity. Or if (as I currently suspect) takeoff takes rather longer than that, then our descendants can deal with this ethical problem once they have the appropriate technology. Nature has been red in tooth and claw (even under the restricted definition of sentience I initially propose in the post) at least since multicellular animals first evolved nervous systems, teeth, and claws back in the Precambrian. The moral horror is huge, but also extremely complex and longstanding.
The point of my post wasn’t to argue that we shouldn’t attempt this once we can; it’s that we shouldn’t expect our first superintelligence to be able to deal with it immediately without killing us all as a side effect. That’s why it says “Alas, Not Yet” in the title. This moral horror is the sort of task that very high-tech civilizations take on.
I would not enjoy living as a wild animal. While there would almost certainly be good days, some of the things that can happen are pretty horrendous. Still, when I encounter wild animals (fairly often, as I choose to live in a forest), they generally seem to be doing OK. Modern civilization is definitely a good thing (including painkillers); but if the life of a wild animal was my best available option, I wouldn’t want to be euthanized: I’d take my chances, as my ancestors have for hundreds of millions of years. As I discuss in a reply above to Shiroe, euthanasia is for things like hospital pain scale level 8+ for the rest of your life: the average utility of a typical wild animal’s life is better than that, so still net-positive under a well-calibrated Utilitarian utility scale, and euthanizing them because we can’t yet save them from a state of nature isn’t appropriate or proportionate.