You aren’t going to get designs for specific nanotech; you’re going to get designs for generic nanotech fabricators.
Why isn’t it possible to check whether those nanobots are dangerous beforehand? In biotech we already do that. For instance, if someone tried to synthesise DNA sequences from certain bacteria, all alarms would go off.
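(For illustration, here is a minimal sketch of how that kind of synthesis screening can work, assuming exact matching against a small made-up blocklist of flagged subsequences; real screening protocols rely on homology search against curated databases of sequences of concern, so this only conveys the idea.)

```python
# Toy illustration of DNA-synthesis order screening: flag any order whose
# sequence contains a 20-base window matching a blocklist of flagged sequences.
# The blocklist entries and window size are invented for this example.

FLAGGED_SEQUENCES = {
    "ATGGCTAGCTAGGCTAAGCC",  # placeholder "sequence of concern"
    "TTGACCTGAAGCGTATCGGA",
}
WINDOW = 20

def screen_order(sequence: str) -> bool:
    """Return True if the ordered sequence should raise an alarm."""
    sequence = sequence.upper()
    for i in range(len(sequence) - WINDOW + 1):
        if sequence[i:i + WINDOW] in FLAGGED_SEQUENCES:
            return True
    return False

if __name__ == "__main__":
    print(screen_order("CCGG" + "ATGGCTAGCTAGGCTAAGCC" + "TTAA"))  # True: alarm
    print(screen_order("CCGGTTAACCGGTTAACCGG"))                    # False: passes
```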
Can you reread what I wrote?
Sorry, I might not have been clear enough. I understand that the machine would give us the instructions to build those fabricators, though maybe not the designs for what gets made in them. But what makes you think those factories won’t have controls on what’s being produced in them?
Controls written by whom? How good is our current industrial infrastructure at protecting against human-level exploitation, either via code or otherwise?
How do the fabricators work? We can verify their inputs, too, right?
Can you verify code to be sure there’s no virus in it? It took years of trial and error to patch up some semblance of internet security. A single flaw in your nanotech factory is all a hostile AI would need.
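(To make the “single flaw” point concrete, a minimal sketch, with an invented signature and payload, of why naive signature-style checking is brittle: the same payload passes the check after a trivial re-encoding.)

```python
# Toy illustration of why "check the code for a virus" is harder than it sounds:
# a signature scanner looks for known byte patterns, and a trivial transformation
# of the same payload slips past it. Signature and payload are invented.

import base64

KNOWN_BAD_SIGNATURES = [b"rm -rf /"]  # pretend this is our "virus database"

def scan(blob: bytes) -> bool:
    """Return True if the blob matches a known-bad signature."""
    return any(sig in blob for sig in KNOWN_BAD_SIGNATURES)

payload = b"rm -rf /"
print(scan(payload))                    # True: caught
print(scan(base64.b64encode(payload)))  # False: same payload, re-encoded, missed
```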
We’ll have advanced AI by then that we could use to help verify the inputs or the design, or, as I said, we could apply stricter standards if nanotechnology is recognized as potentially dangerous.
A single flaw and then all humans die at once? I don’t see how. Or, better put, I can conceive of many reasons why this plan fails. Also, I don’t see how we build those factories in the first place and yet can’t use that time window to make the AGI produce explicit results on AGI safety.
Then could you produce a few of the main ones, to allow for examination?
What’s the time window in your scenario? As I noted in a different comment, I can agree with “days” as you initially stated. That’s barely enough time for the EA community to notice there’s a problem.