However, I doubt another paranoid schizophrenic would be able to provide very good or effective therapy.
I don’t see a reason why being a paranoid schizophrenic would make a person unable to lead another person through a CBT process.
As an engineering professional, I find it extremely unlikely that an AI could successfully achieve hard take-off on the first try.
The assumption that an AGI achieves hard take-off on the first try is not required for the main arguments that AGI risk is a serious problem.
The fact that the AGI initially doesn’t engage in a particular harmful action X doesn’t imply that, after you let it self-modify extensively, it still won’t engage in action X.
We are clearly talking past each other and I’ve lost the will to engage further, sorry.