If you don’t understand other models, you don’t know whether they have other bad failure modes. If you only understand one model, and know that you only understand one model, you shouldn’t be generalising from it. If the literature isn’t “up to it”, no conclusions should be drawn until it is.
I think that’s a decent argument about what models we should build, but not an argument that AI isn’t dangerous.
“Dangerous” is a much easier target to hit than “existentially dangerous”, but “existentially dangerous” is the topic.