There’s the underlying rationality of the predictor and the second-order rationality of the simulacra. Rather like humans: highly rational intuitive reasoning, modulo some bugs, paired with much less rational high-level thought.
I am not disagreeing with you in any of my comments, and I’ve strong-upvoted your post; your point is very good. I’m disagreeing with fragments to add detail, but I agree with the bulk of it.
> There’s the underlying rationality of the predictor and the second order rationality of the simulacra. Rather like the highly rational intuitive reasoning of humans modulo some bugs, and much less rational high level thought.
Okay, sure. But those “bugs” are probably something the AI risk community should take seriously.
> I am not disagreeing with you in any of my comments and I’ve strong upvoted your post; your point is very good. I’m disagreeing with fragments to add detail, but I agree with the bulk of it.
Ah okay. My apologies for misunderstanding.