...because it still won’t be aligned?
That’s ok because it won’t have human-killing capabilities (just following your example!). Why can’t the AGI find the solution to the alignment problem?
An AGI doesn’t have to kill humans directly for our civilization to be disrupted.
Why would the AGI not have the capabilities to pursue this if needed?
Please read my post carefully, because I think I have been very clear about what I am arguing against. If you think that EY is just saying that our civilization can be disrupted, you are not paying attention.
I am just following the example that they gave me to show that things are in fact more complicated than what they are suggesting. To be clear, in the example, the AGI looks for a way to kill humans using nanotech, but it first needs to build those nanotech factories.