Thank you for taking the time to answer. But I think this is an irrational attitude: not yours specifically, but a broader, irrational tradition of analyzing ideas and proposals. It’s incompatible with rational analysis of evidence and possibilities. (I wrote a post about this.)
my point is that you are effectively on an early step of figuring out how to precisely specify your idea, and it won’t be worth others’ time to use your idea until it becomes at least a reasonable summary of the current state of the research or a clear specification of how to go a different direction.
Imagine that one day you suddenly got personal evidence about the way your brain learns. You “saw” your brain’s learning process. Not at the lowest level, but still, you saw a lot of things that you didn’t know were true or hadn’t even imagined as possibilities.
Did you gain any information?
If you saw some missed possibilities, can you explain them to other people?
Are you more likely to find a way to “go in a different direction” in your research than before?
If “yes”, then what stops us from discussing ideas not formulated in math? Neural nets aren’t disconnected from reality; they try to model learning, and you should be able to discuss what “learning” means and how it can happen outside of math. (I mean, if you want, you can analyze neural nets as pure math with zero connection to reality.)
If even professional researchers can’t easily understand the papers, it means they don’t have high-level ideas about “learning”[1]. So it’s strange to encounter a rare high-level idea and say that it’s not worth anyone’s time if it’s not math. Maybe it’s worth your time precisely because it’s not math. Maybe you just rejected thinking about one of the few high-level ideas about abstract learning that you know of.
My idea is applicable to human learning too. You could also imagine situations or objects for which this way of learning is the best one. (A thought experiment that could help you understand the idea. But you don’t let me argue with you or explain anything.)
This is wrong; I don’t know how to fix it in English. It would learn all of those things smoothly-ish in parallel; this is why, in my previous post, I suggested watching several different training animations to get a sense of what a training process looks like.
It likely doesn’t affect the point of my post; it’s just a nitpick. (I watched Grant’s series and some of the animations you linked.)
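To make “learning smoothly-ish in parallel” concrete, here is a minimal, made-up sketch (a toy two-output network with arbitrary targets and hyperparameters, not anything from the linked animations): it prints the loss of each output as training goes on, and both fall together rather than one after the other, which is the kind of picture the training animations give.

```python
# Toy illustration only: a tiny two-output network trained with plain
# gradient descent. Per-output losses are printed so you can watch both
# targets being learned "smoothly-ish in parallel" rather than one at a time.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(256, 1))
Y = np.hstack([np.sin(X), np.cos(X)])          # two targets learned at once

W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 2)); b2 = np.zeros(2)
lr = 0.05

for epoch in range(2001):
    H = np.tanh(X @ W1 + b1)                   # hidden layer
    P = H @ W2 + b2                            # predictions for both targets
    err = P - Y
    if epoch % 400 == 0:
        per_output = (err ** 2).mean(axis=0)   # loss of each output separately
        print(f"epoch {epoch:4d}  sin-loss {per_output[0]:.4f}  cos-loss {per_output[1]:.4f}")
    # backprop through the two layers
    dP = 2 * err / len(X)
    dW2 = H.T @ dP; db2 = dP.sum(axis=0)
    dH = dP @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

Neither loss waits for the other to finish; they shrink together from the first epochs.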
If even professional researchers can’t easily understand the papers, it means they don’t have high-level ideas about “learning”[1]. So it’s strange to encounter a rare high-level idea and say that it’s not worth anyone’s time if it’s not math. Maybe it’s worth your time precisely because it’s not math. Maybe you just rejected thinking about one of the few high-level ideas about abstract learning that you know of.
This will be my last comment on this post, but for what it’s worth, math vs. not-math is primarily a question of vagueness. Your English description is too vague to turn into useful math. Precise math can describe reality incredibly well, if it’s actually the correct model. Being able to understand the fuzzy version of precise math is in fact useful; you aren’t wrong, and I don’t think your sense that intuitive reasoning can be useful is wrong. Your idea here, however, seems to underspecify which math it describes, and to the degree I can see ways to convert it into math, it appears to describe math which is false. The difficulty of understanding papers isn’t because the researchers don’t understand learning; it’s simply because writing understandable scientific papers is really hard, and most papers do a bad job of explaining themselves. (It’s fair to say they don’t understand it as well as they ideally would, of course.)
I agree that good use of vague ideas is important, but someone else here recently made the point that a lot of what needs to be done to use vague ideas well is to be good at figuring out which vague ideas are not promising and skip focusing on them. Unfortunately, vagueness makes it hard to avoid accidentally paying too much attention to less promising ideas, and equally hard to avoid accidentally paying too little attention to highly promising ones.
In machine learning, it is very often the case that someone tried an idea before you thought of it, but tried it poorly, and their version can be improved. If you want to make an impact on the field, I’d strongly suggest finding ways to rephrase this idea so that it is more precise; again, my problem with it is that it severely underspecifies the math, and in order to make use of your idea I would have to go read, myself, the papers I’m suggesting you look at.
I agree that good use of vague ideas is important, but someone else here recently made the point that a lot of what needs to be done to use vague ideas well is to be good at figuring out which vague ideas are not promising and skip focusing on them.
I don’t think there are many high-level ideas about learning, so I don’t see a problem of choosing between them. Note that a “vague idea about neural nets’ math” and a “(vague) idea about learning” are two different things.
again, my problem with it is that it severely underspecifies the math, and in order to make use of your idea I would have to go read, myself, the papers I’m suggesting you look at.
Maybe if you tried to discuss the idea I could change your opinion.
Your idea here, however, seems to underspecify which math it describes, and to the degree I can see ways to convert it into math, it appears to describe math which is false.
That would mean that my idea is wrong on a non-math level too, and you could explain why (or at least explain why you can’t explain). I feel that you don’t think in terms of the levels of the problem and the way they correspond.
Your English description is too vague to turn into useful math.
I don’t think “vagueness” is even a meaningful concept. An idea may be identical to other ideas, or unclear, but not “vague”. If you see that an idea is different from some other idea and you understand what the idea says (about anything), then it’s already specific enough. Maybe you jump into the math of neural nets too early.
I think you can turn my idea into statements that are precise enough without tying them to the math of neural nets. Then you can see what implications the idea has for neural nets.
[1] I don’t mean any disrespect here. Just saying they have no other context to work with.