I grew up in Russia, not in Silicon Valley, so I didn’t know the other “people in our cluster”, and unfortunately I didn’t like to read, so I’m not familiar with many of the obvious background facts. Five years ago I read HPMoR, but unfortunately not the Sequences; I read those only a couple of years ago, and then only the part that had been translated into Russian, since I couldn’t read English fluently enough. Later I noticed that Google Translate had begun to produce much better translations than before, readable text from English into Russian in most cases, so I could finally read the Sequences to the end and generally begin to read and write on LessWrong.
Now I post in Shortform any thoughts that I haven’t seen anyone else express. Since I haven’t read many books, many of these concepts have probably been expressed by someone before me and I simply haven’t seen it; in that case I’d appreciate a link in the comments. Unfortunately, I have many thoughts that I wrote down even before LessWrong, but rather than re-reading and editing them, it’s easier for me to write them anew, so many such thoughts lie unpublished. And since I was far from recording my thoughts from the start, even more of them exist nowhere but in my head. Still, if I stumble upon them again, I will try to write them down and publish them.
I saw that a lot of people are confused by “what does Yudkowsky mean by this difference between deep causes and surface analogies?”. I didn’t have this problem: an interpretation of what he means came to me immediately.
I took “deep” versus “surface” as referring to the black-box metaphor. It’s the difference between searching for correlations between similar inputs and outputs, and building a structure of hidden nodes, rewarding the ones whose predictions come true, and penalizing the complexity of the internal structure. The difference between making a single step from inputs to outputs and having a model; between looking only at visible things and reasoning about invisible ones; between looking only at experimental results and building theories from them.
Just like the difference between deep neural networks and neural networks with no hidden layers: the former are much more powerful.
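As a concrete illustration of that power gap (my own sketch, not something from the original discussion): a network with no hidden layer is just a linear model over its inputs, and no linear threshold can represent XOR, while a single hidden layer of two units already can. The weights below are hand-picked for the illustration.

```python
import numpy as np

# XOR truth table: no single linear threshold over (x1, x2) separates it,
# so a network with no hidden layer cannot represent it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

def step(z):
    """Hard threshold activation."""
    return (z > 0).astype(int)

# One hidden layer with two units suffices:
W1 = np.array([[1.0, 1.0],   # hidden unit 1: fires on OR(x1, x2)
               [1.0, 1.0]])  # hidden unit 2: fires on AND(x1, x2)
b1 = np.array([-0.5, -1.5])  # thresholds making the units OR and AND
W2 = np.array([1.0, -1.0])   # output: OR minus AND ...
b2 = -0.5                    # ... thresholded, i.e. XOR

hidden = step(X @ W1.T + b1)  # shape (4, 2): the "invisible" internal structure
out = step(hidden @ W2 + b2)  # shape (4,)
print(out)  # [0 1 1 0] -- matches XOR
```

The hidden units here are exactly the “invisible things” the surface view never posits: neither unit is an input or an output, yet the mapping is impossible without them.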
I am really unsure that this is right, because if it were, why wouldn’t he just say so? But I write it here just in case.