I think you’re referring to narrowness of an AI’s goals, but Rossin seems to be referring to narrowness of the AI’s capabilities.
Do I understand you correctly as endorsing something like: it doesn't matter how narrow an optimization process is; if it becomes powerful enough and is not well aligned, it still ends in disaster?