There is a difference between the brain encoding concepts a certain way and concepts themselves being a certain way (or best studied at a certain level of abstraction, or best characterized in terms of necessary and sufficient conditions, etc.). Analogously, when I think of the number 2, I might associate it with certain typical memories, perceptions, other mathematical ideas, and so on. None of this has anything (well, almost anything) to do with the number 2 itself, but rather merely with my way of grasping it.
Concepts, like the number 2, are so-called “abstract objects”. They do not have spatio-temporal location. If your philosophical view implies that the question “Where is the number 2?” has a definite answer (i.e. isn’t a category mistake), then there is something wrong with your view. The question is, what are concepts meant to explain? The “necessary and sufficient conditions” view you criticize is most closely associated with Gottlob Frege (AKA the inventor of Mathematical Logic). Frege was trying to explain things like how we can understand each other, how we have knowledge of Mathematics given that it deals with objects that must be abstract, why certain arguments are truth-preserving solely in virtue of their logical form, etc. Frege’s book The Foundations of Arithmetic (1884) is still widely regarded as one of the greatest works of Philosophy ever produced and as the beginning of so-called “analytic” Philosophy.
Anyway, Frege wanted to understand the nature of Mathematics and put Mathematical reasoning on a firm foundation. So, he considered uncontroversially true or false statements in Mathematics (like “2+2=4”) and reflected on what contributes to the truth or falsity of such statements. He was interested in the objective, sharable content of statements we make and arguably thought of such content as given by the conditions under which such a statement is/would be true. This view has been adopted wholeheartedly in contemporary semantics, so far as I know. Concepts (what Frege called “Sinn”, which we often translate as “senses”) were then characterized in terms of the contribution they make to the truth-conditions of entire declarative sentences. This also allowed Frege to explain logical consequence, i.e. truth-preservation solely in virtue of the logical form of the content of the statements in the argument. That’s where the so-called “classical” view comes from. So, your criticisms seem to miss the point.
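To give a quick illustration of the “solely in virtue of logical form” point (my example, not one of Frege’s): an argument like “every F is G; a is F; therefore a is G” is truth-preserving no matter what F, G, and a mean:

```latex
\[
  \forall x\,(Fx \rightarrow Gx),\ Fa \ \models\ Ga
\]
```

Any reinterpretation of F, G, and a that makes the premises true also makes the conclusion true; the validity depends only on the form, not on the subject matter.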
For instance, Frege himself was already well aware that humans often grasp Mathematics via metaphor. But he took pains to argue that this doesn’t imply that Mathematics itself is somehow metaphorical, but rather that, at most, we as human beings must use metaphor to understand difficult Mathematical concepts. Again, just because we grasp some concepts via metaphor doesn’t imply that the concepts themselves are essentially metaphorical any more than the fact that I mostly relate things around me spatially via perception implies that space itself (or the physical object I’m looking at) is somehow perceptual.
It depends on what you mean by “simple”. The Diagonal Lemma is extremely easy to state and prove (by which I mean that the proof itself has very few steps), but the proof looks like magic. That is to say, the standard proof doesn’t really reveal how the Lemma was discovered in the first place.
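To show what I mean, here is the usual statement and the compressed construction, under the standard assumption that the theory T (say, one extending Robinson arithmetic) can represent the relevant syntactic functions, writing ⌜ψ⌝ for the numeral of ψ’s Gödel number:

```latex
\textbf{Diagonal Lemma.} For every formula $\varphi(x)$ with one free
variable there is a sentence $\psi$ such that
\[
  T \vdash \psi \leftrightarrow \varphi(\ulcorner \psi \urcorner).
\]
\textit{Proof sketch.} Let $\mathrm{sub}$ be the computable function
sending the G\"odel number of a formula $\theta(x)$ to the G\"odel
number of $\theta(\ulcorner \theta \urcorner)$. Define
$\delta(x) := \varphi(\mathrm{sub}(x))$ and put
$\psi := \delta(\ulcorner \delta \urcorner)$. Then
$\mathrm{sub}(\ulcorner \delta \urcorner) = \ulcorner \psi \urcorner$,
and representability of $\mathrm{sub}$ yields
$T \vdash \psi \leftrightarrow \varphi(\ulcorner \psi \urcorner)$. $\square$
```

Every step is trivial to verify, but nothing in the proof explains why anyone would think to define δ that way in the first place. That’s the sense in which it looks like magic.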
Gödel Numbering, on the other hand, isn’t too difficult to understand, but actually proving the Incompleteness Theorems (or whatever) usually requires pages and pages of boring, combinatorial proofs that one’s Numbering works the way one wants it to. Conceptually, however, Gödel Numbering was a massive leap forward. As I understand it, before Gödel’s paper in 1931, no one had really realized that such techniques were possible (germs of the idea go back at least to Leibniz, though), nor that one could in fact use such a technique to make metatheoretical claims about one’s object-level theory in the language of that theory itself (so that the theory could, in a sense, “prove things about itself”), nor what the implications of this would be.
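If you want to see the core mechanism, here’s a toy version of the prime-power coding in Python (the symbol table is an arbitrary choice of mine, not Gödel’s original assignment, and a real proof would need all the tedious combinatorial verification I mentioned on top of this):

```python
# Toy Gödel numbering: encode a string of symbols as a single integer
# via prime powers, in the style of Gödel's 1931 paper.
# The symbol-to-code table is an arbitrary choice for illustration.

SYMBOLS = {'0': 1, 'S': 2, '+': 3, '*': 4, '=': 5, '(': 6, ')': 7, 'x': 8}
DECODE = {v: k for k, v in SYMBOLS.items()}

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for toy inputs)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def godel_number(formula: str) -> int:
    """Encode: the i-th symbol, with code c, contributes p_i ** c."""
    g = 1
    for p, ch in zip(primes(), formula):
        g *= p ** SYMBOLS[ch]
    return g

def decode(g: int) -> str:
    """Recover the formula by reading off each prime's exponent."""
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(DECODE[e])
    return ''.join(out)

n = godel_number('0=0')   # 2**1 * 3**5 * 5**1 = 2430
assert decode(n) == '0=0'
print(n)
```

The design point is unique prime factorization: distinct strings get distinct numbers, and both encoding and decoding are computable, which is exactly what lets a theory of arithmetic “talk about” its own formulas.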
Another thing to note is that Gödel’s numbering technique inspired Alan Turing’s work in 1936, and arguably was an absolutely necessary conceptual breakthrough for the invention of computers.
Oh, and I wouldn’t recommend studying provability logic until you have already mastered a sufficient amount of Mathematical Logic, by which I mean roughly the understanding you would ideally gain from an advanced undergraduate course on the subject in Mathematics or in Philosophy (assuming the Philosophy course was sufficiently technical/rigorous).