I agree that good use of vague ideas is important, but someone else here recently made the point that a large part of using vague ideas well is being good at figuring out which vague ideas are not promising and skipping them.
I don’t think there are a lot of high-level ideas about learning, so I don’t see a problem of choosing between ideas. Note that “vague idea about neural nets’ math” and “(vague) idea about learning” are two different things.
Again, my problem with it is that it severely underspecifies the math, and in order to make use of your idea I would myself have to go read those papers I suggest you look at.
Maybe if you tried to discuss the idea I could change your opinion.
Your idea here, however, seems to underspecify which math it describes, and to the degree I can see ways to convert it into math, it appears to describe math which is false.
That would mean that my idea is wrong on the non-math level too, and you could explain why (or at least explain why you can’t explain). I feel that you don’t think in terms of the levels of the problem and how they correspond.
Your English description is too vague to turn into useful math.
I don’t think “vagueness” is even a meaningful concept. An idea may be identical to other ideas, or unclear, but not “vague”. If you see that an idea is different from some other idea and you understand what the idea says (about anything), then it’s already specific enough. Maybe you jump into the math of neural nets too early.
I think you can turn my idea into statements precise enough that they aren’t tied to the math of neural nets. Then you can see what implications the idea has for neural nets.