I’m not sure what you mean by “how high the relative tech capabilities are of the AGI”.
I think the general capabilities of the AGI itself, not “tech” capabilities specifically, are plenty dangerous on their own.
The general danger seems more like ‘a really powerful but unaligned optimizer’ that’s ‘let loose’.
I’m not sure that ‘agent-ness’ is necessary for catastrophe; ‘strong enough optimization’, combined with our inability to predict the consequences of running the AGI, seems sufficient.
I do agree with this:
It’s not supposed to be a self-contained cinematic universe; it’s supposed to be “we have little/no reason to expect it to not be at least this weird”, given his background assumptions (which he almost always goes into more detail on anyway).