I’d actually say most of this post is true, but some of it is blatantly false, and if you’re going to write a post dismissing the entire community’s body of work, you had better not include such parts.
The sentence that jumps out is this:
Because it was not trained using reinforcement learning and doesn’t have a utility function, which means that it won’t face problems like mesa-optimisation and infinitely increasing expected utility.
which asserts two implications, both of which are false as far as I know. Mesa-optimisation is a concern for any sufficiently powerful learned optimizer, regardless of whether the base training procedure is reinforcement learning, and a system doesn’t need an explicit utility function for worries about goal-directed, expected-utility-maximising behaviour to apply to it.