> Therefore, certainty that we can do that would eliminate much existential risk.
It seems to me that you are making a map-territory confusion here. Existential risks are in the territory. Our estimates of existential risks are our map. If we were to build a self-replicating spacecraft, our estimates of existential risks would go down some. But the risks themselves would be unaffected.
Additionally, if we were to build a self-replicating spacecraft and became less worried about existential risks, that decreased worry might itself make the risks greater, because people would become less cautious. And if the filter is early, we have no anthropic evidence regarding future existential risks: given an early filter, the sample of civilizations that reach our ability level is small, and you cannot make strong inferences from a small sample. So it is possible that people would become less cautious for no good reason.
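To make the small-sample point concrete, here is a toy calculation with entirely invented counts, using a Beta posterior for a binomial survival rate (a standard conjugate-prior setup; nothing here is real data):

```python
from scipy import stats

# Invented numbers: suppose we could somehow observe 3 civilizations
# that reached our ability level, of which 2 survived long-term.
survived, perished = 2, 1

# Uniform Beta(1, 1) prior over the survival rate, updated on the counts.
posterior = stats.beta(1 + survived, 1 + perished)

# With n = 3 the 95% credible interval is enormous...
lo, hi = posterior.ppf([0.025, 0.975])
print(f"n = 3:   survival rate in ({lo:.2f}, {hi:.2f})")  # roughly (0.19, 0.93)

# ...and it only narrows with far more observations than an early filter
# would ever let us collect.
posterior = stats.beta(1 + 200, 1 + 100)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"n = 300: survival rate in ({lo:.2f}, {hi:.2f})")  # roughly (0.61, 0.72)
```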
> If we were to build a self-replicating spacecraft, our estimates of existential risks would go down some. But the risks themselves would be unaffected.
To take an extreme example: building a self-replicating spacecraft, making a few million copies of it, and sending people to other galaxies would, if successful, reduce existential risk. I agree that merely making theoretical arguments constitutes making maps, not changing the territory. I also tentatively agree that just building a prototype spacecraft and not actually using it probably won’t reduce existential risk.
> It seems to me that you are making a map-territory confusion here. Existential risks are in the territory.
If I understand the reasoning correctly, it is that we only know the map. We do not know the territory. The territory could be of many different kinds, so long as each is consistent with the map. Adding a self-replicating spacecraft (SRS) to the map rules out some of the less safe territories, i.e. it reduces our (estimated) existential risk. It is a Bayesian-type argument.
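If I have that right, the mechanics can be sketched in a few lines. Every prior and likelihood below is invented purely for illustration; the point is only that observing a successful SRS shifts posterior weight away from the hypotheses ("territories") in which such a feat is unlikely:

```python
# Two candidate "territories": hypotheses about how dangerous reality is.
# Every number here is made up for illustration.
prior = {"safe": 0.5, "unsafe": 0.5}

# Assumed likelihood of successfully deploying a self-replicating
# spacecraft under each hypothesis: easier if the territory is safe.
likelihood = {"safe": 0.8, "unsafe": 0.2}

# Bayes' rule: P(territory | SRS observed) is proportional to
# P(SRS | territory) * P(territory).
unnormalized = {t: likelihood[t] * prior[t] for t in prior}
total = sum(unnormalized.values())
posterior = {t: p / total for t, p in unnormalized.items()}

print(posterior)  # {'safe': 0.8, 'unsafe': 0.2} -- unsafe territories downweighted
```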