This post clearly helped a lot of other people, but it follows a pattern that many other posts on Less Wrong also follow, and which I consider a negative one. The valuable contribution here is not the formalisation, but the generator behind the formalisation. The core idea appears to be the following:
“Human brains contain two forms of knowledge: explicit knowledge, and the weights that are used in implicit knowledge (admittedly the former is hacked on top of the latter, but that isn’t relevant here). Mary doesn’t gain any extra explicit knowledge from seeing blue, but her brain changes some of her implicit weights so that when a blue object activates in her vision, a sub-neural network can connect this to the label “blue”.”
Unfortunately, there is a wall of maths that you have to wade through before this is explained to you. I feel it is much better when you provide your readers with a conceptual understanding of what is happening and only then include the formal details.
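To make that summary concrete, here is a minimal toy sketch of my own (an illustration under simplistic assumptions, not anything taken from the post): the explicit knowledge is a fixed set of propositions, the implicit knowledge is the weights of a tiny linear classifier, and “seeing blue” is nothing more than a weight update.

```python
# Toy illustration only (my own, not from the post): "explicit knowledge" is a
# fixed set of propositions; "implicit knowledge" is the weights of a tiny
# linear classifier. Seeing blue changes the weights, never the propositions.

# Explicit knowledge: everything physical Mary could learn inside the room.
explicit_facts = {
    "blue light has a wavelength of roughly 450-495 nm",
    "colour percepts are processed in visual cortex",
}

# Implicit knowledge: weights mapping a visual activation pattern to "blue".
# Before any visual experience of blue, the connection simply isn't there.
weights = [0.0, 0.0, 0.0]
bias = 0.0

def connects_to_blue(percept):
    """True if the implicit sub-network links this percept to the label 'blue'."""
    return sum(w * x for w, x in zip(weights, percept)) + bias > 0

def see_blue(percept, steps=20, lr=0.5):
    """'Seeing blue': perceptron-style weight updates; no new facts are added."""
    global bias
    for _ in range(steps):
        if not connects_to_blue(percept):
            for i, x in enumerate(percept):
                weights[i] += lr * x
            bias += lr

blue_percept = [0.9, 0.1, 0.05]          # stand-in for the retinal/cortical input
facts_before = set(explicit_facts)

print("links percept to 'blue' before:", connects_to_blue(blue_percept))  # False
see_blue(blue_percept)                    # Mary steps outside the room
print("links percept to 'blue' after: ", connects_to_blue(blue_percept))  # True
print("explicit facts unchanged:", explicit_facts == facts_before)        # True
```

Nothing hangs on the particular numbers; the point is only that the “after” state differs from the “before” state entirely in the weights, while the set of explicit facts is untouched.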
“The valuable contribution here is not the formalisation, but the generator behind the formalisation.”
I disagree; I’d already thought of the “core idea” before seeing the post, but the valuable contribution to me was seeing why the core idea has to be true and how it works mechanically, rather than its being just a plausible-seeming sentence. Technical explanation vs. verbal explanation.
I don’t necessarily see that as a “versus”. A good verbal explanation can provide enough information for you to simulate a formal model in your head. And obviously it’ll never be as reliable as working through a formal description step by step, but often that level of reliability isn’t required.
Upvoted for the useful comment, but my mind works in completely the opposite way: only by seeing the math does the formalism make sense to me. I suspect many LessWrongers are similar in that respect, but it’s interesting to see that not all are.
(also, yes, I could make my posts easier to follow, I admit that; one day, when I have more time, I will work on that)
FWIW, I bounced off the post the first couple of times I looked at it and was glad for Chris’ comment doing a good distillation of it; I’m now more likely to read through the whole thing at some point.
Thanks for commenting that!
Hmm, interesting. Now that you’re stating the opposite, it’s pretty clear to me that there are very particular assumptions underlying my claim that “the valuable contribution here is not the formalisation, but the generator behind the formalisation”, and maybe I should be more cautious about generalising to other people.
One of my underlying assumptions was my model of becoming good at maths—focusing on what ideas might allow you to generate the proof yourself, rather than trying to remember the exact steps. Of course, it is a bit parochial for me to act as though this is the “one true path”.