I’m glad to see the large amount of sincere discussion here, and thanks to Luke and Pei for doing this.
Although most people are not guilty of this, I would like to personally plead that people keep references to Pei civil; insulting him or belittling his ideas without taking the time to genuinely respond to them (linking to a sequence post doesn’t count) will make people less likely to want to hold such discussions in the future, which will be bad for the community, whichever side of the argument you are on.
My small non-meta contribution to this thread: I suspect that some of Pei’s statements that seem wrong at face value result from his lacking the language to state things in a way that would satisfy most LWers. Can someone try to charitably translate his arguments into such terms? In particular, his points about goal adaptation are somewhat similar in spirit to jtaylor’s recent posts on learning utility functions.
This is inevitable. When someone talks in technical terms, people tend to conclude either that he is a person with good training and background, or that he is a clown spouting nonsense. With the postmodernists, the latter is the case; in the LW community, it may be the former.
I’ve been reading the comments and the sequences so far, and it seems to me that on LW the language one uses is an important signal of one’s level of rationality. Ambiguity is not a good thing, precision is, and when someone gets confused, it’s because they lack some important knowledge. But academia doesn’t use the LW idiom: some try to be precise, others don’t. This is clear in the dialogue, where a great part of it revolves around definitions. It was the same with Goertzel.