You do not update anthropic reasoning based on self-generated evidence. That’s bad logic. Making a space-faring self-replicating machine gives you no new information.
It is also incredibly dangerous. Actual robust self-replicating machines are basically an AGI-complete problem. You can’t solve one without the other. What you are making is a paperclip maximizer, just with blueprints of itself instead of paperclips.
Self-replication need not be autonomous, or use AGI. Factories run by humans self-replicate but are not threatening. Plants self-replicate but are not threatening. An AGI might increase performance but is not required or desirable. Add in error-checking to prevent evolution if that’s a concern.
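As a rough illustration of the error-checking point, here is a minimal sketch (Python; the names and the hashing scheme are my own, purely hypothetical) of a replicator that refuses to fabricate from any blueprint that differs from the canonical one:

```python
import hashlib

# Purely illustrative: the replicator carries the canonical blueprint's digest,
# fixed at launch, and only fabricates copies from a blueprint that matches it
# exactly. Without copy-to-copy variation there is nothing for selection to act on.

def blueprint_digest(blueprint: bytes) -> str:
    return hashlib.sha256(blueprint).hexdigest()

CANONICAL_BLUEPRINT = b"...full machine specification..."   # placeholder content
CANONICAL_DIGEST = blueprint_digest(CANONICAL_BLUEPRINT)    # fixed at launch

def replicate(candidate_blueprint: bytes) -> None:
    if blueprint_digest(candidate_blueprint) != CANONICAL_DIGEST:
        raise ValueError("blueprint altered or corrupted; refusing to replicate")
    # ...fabrication proceeds only from the verified, unmodified blueprint...
```

Applied at every generation, the same check keeps every machine identical to the original design, which is the sense in which evolution is prevented.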
Building a self-replicating lunar mining & factory complex is one thing. Building a self-replicating machine that can operate effectively in any situation it encounters while expanding into the cosmos is another story entirely. Without knowing the environment in which it will operate, it will have to adapt to whatever circumstances it encounters in order to achieve its replication goal. That’s the definition of an AGI.
Bacteria perform quite well at expanding into an environment, and they are not intelligent.
I would argue they are, for some level of micro-intelligence, but that’s entirely beside the point. A bacterium doesn’t know how to create tools, self-modify, or purposefully engineer its environment to make things more survivable.
I disagree. You don’t disregard evidence because it is “self-generated”. Can you explain your reasoning?
In this case: can we build self-replicating machines? Yes. Is there any specific reason to think that the great filter might lie between now and deployment of the machines? No, because we’ve already had the capability for 35+ years, just not the political will or economic need. We could have made it already in an alternate history. So since we know the outcome (the universe permits self-replicating space-faring machines, and we have had the capability to build them for sufficient time), we can update based on that evidence now. Actually building the machines therefore provides zero new evidence.
In general: anthropic reasoning involves assuming that we are randomly selected from the space of all possible universes, according to some typically unspecified prior probability. If you change the state of the universe, that changed state is not a random selection against a universal prior. It’s no longer anthropic reasoning.
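To put the “zero new evidence” point in toy numbers (made-up priors, Python used only as a calculator): the observation “we built the machine” has the same likelihood whether or not a filter lies ahead, because building is a matter of our choice once the capability exists, and evidence with equal likelihood under both hypotheses moves the posterior by nothing.

```python
# Toy numbers, purely illustrative; not an endorsement of any particular prior.
prior_filter_ahead = 0.5

# Likelihood of each observation under "filter ahead" vs "filter behind/none".
# The capability has existed for decades under either hypothesis, and whether
# we *choose* to exercise it is likewise independent of which hypothesis is true.
likelihood = {
    "capability_exists": {"ahead": 1.0, "behind": 1.0},
    "we_built_it":       {"ahead": 1.0, "behind": 1.0},
}

def update(prior: float, obs: str) -> float:
    """One Bayesian update step on a binary hypothesis."""
    num = prior * likelihood[obs]["ahead"]
    den = num + (1 - prior) * likelihood[obs]["behind"]
    return num / den

p = update(prior_filter_ahead, "capability_exists")  # 0.5 -> 0.5
p = update(p, "we_built_it")                         # 0.5 -> 0.5: no shift
print(p)
```

Whatever update was available came from learning that the capability exists at all; actually exercising it adds nothing the likelihoods can distinguish.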
A paperclip maximizer decides for itself how to maximize paperclips; it can ignore human instructions. This SRS network can’t: it receives instructions and updates and follows them deterministically. Hence the question about secure communication between the SRS and colonies: a paperclip maximizer doesn’t need that.
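For concreteness, the kind of gating I have in mind is just signature verification before dispatch. A minimal sketch (Python, using the `cryptography` package; the key handling and command strings are hypothetical):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Demo only: the keypair is generated inline here; in the scenario discussed,
# the probe would carry only the public key, provisioned before launch.
_mission_key = Ed25519PrivateKey.generate()
PROBE_TRUSTED_PUBLIC_KEY = _mission_key.public_key()

def handle_message(command: bytes, signature: bytes) -> None:
    """Act on a command only if it verifies against the trusted key."""
    try:
        PROBE_TRUSTED_PUBLIC_KEY.verify(signature, command)
    except InvalidSignature:
        return  # unauthenticated traffic is dropped, never acted on
    print(f"executing verified instruction: {command!r}")  # stand-in for deterministic dispatch

cmd = b"survey target body, transmit assay"
handle_message(cmd, _mission_key.sign(cmd))                      # verified -> executed
handle_message(b"rewrite goal system", _mission_key.sign(cmd))   # signature mismatch -> ignored
```

The crypto details aren’t the point; the point is that the execution path is gated on verification, so the machine never originates its own goals.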
What is your distinction between “self-generated” evidence and evidence I can update anthropic reasoning on?
Would using the spacefaring machine give new evidence? Presumably X-risk becomes lower as humanity disperses.