Lest anyone get the idea that no one thinks LW should be more phygish or more exclusive, let me hereby register that I, for one, would like us all to enforce a little more strongly that people read the Sequences, and even agree with them in a horrifying manner.
I haven’t read most of the Sequences yet, and I agree with most of what those LW members you’d like to see more of are saying.
Most of the criticisms I voice are actually rephrased and forwarded arguments and ideas from people much smarter and more impressive than me, including big names like Douglas Hofstadter. Quite a few of them have read all of the Sequences too.
Here is an example from yesterday. I told an AI researcher about a comment made on LW (don’t worry about possible negative influence; they are already well aware of everything and have read the Sequences). Here is part of the reply:
I don’t need to justify myself. Rather, those who claim to be taking risks from AI seriously need to justify why they themselves aren’t researchers in AI.
...
I’d argue that further researching and extending a formal framework like AIXI is one of the best ways to reduce the risk of AI. There are plenty of other ways to make progress that are far less amenable to analysis... those are the ones we should really be concerned about. Actually, it’s quite surprising that nobody who (publicly) cares about AI risk has, to the best of my knowledge, even tried to extend the AIXI framework to incorporate some notion of friendliness...
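For those who haven’t seen the formalism he is referring to: Hutter’s AIXI picks each action by an expectimax over all computable environments, weighted by simplicity. The following is the standard form from Hutter’s papers, reproduced from memory, so treat it as a sketch rather than a citation:

$$a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[\, r_k + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Here $U$ is a universal monotone Turing machine, $q$ ranges over environment programs, $\ell(q)$ is the length of $q$, and $m$ is the horizon. The reward sum $r_k + \cdots + r_m$ is hard-coded into the definition, so “incorporating some notion of friendliness” would presumably mean replacing or constraining that term.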
I would usually rephrase this at some point and post it as a reply.
And this is just one of many people who simply don’t bother to get into incredibly exhausting debates with a LessWrong mob.
Without me, your impression that everyone agrees with you would be even worse. And by making this community even more exclusive, you will get even more out of touch with reality.
It is relatively easy to believe that the only people who would criticize your beloved beliefs are some idiots like me who haven’t even read your scriptures. Guess again!
Actually, it’s quite surprising that nobody who (publicly) cares about AI risk has, to the best of my knowledge, even tried to extend the AIXI framework to incorporate some notion of friendliness...
UDT can be seen as just such an attempt. It was partly inspired/influenced by AIXI anyway, if not exactly an extension of it. Edit: It doesn’t incorporate a notion of friendliness yet, but unlike AIXI it is structured so that, at least in principle, such a notion could be incorporated. See the last paragraph of Towards a New Decision Theory for some idea of how to do this.
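To sketch what that last paragraph proposes (my own paraphrase and notation, not Wei Dai’s exact formalism): the agent $S$, given input $X$, returns the output $Y$ that maximizes expected utility over vectors of execution histories of the world-programs $P_1, \ldots, P_n$:

$$S(X) \;:=\; \arg\max_{Y} \sum_{\langle E_1, \ldots, E_n \rangle} M\big(S(X)\!=\!Y\big)\big(\langle E_1, \ldots, E_n \rangle\big) \cdot U\big(\langle E_1, \ldots, E_n \rangle\big)$$

where each $E_i$ is an execution history of $P_i$, $M$ is the “mathematical intuition” that assigns probabilities to those history vectors conditional on the statement $S(X)=Y$, and $U$ is a utility function over history vectors. The contrast with AIXI is that $U$ is a free slot: a notion of friendliness could in principle be plugged in there, whereas AIXI’s reward channel is fixed.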
It is relatively easy to believe that the only people who would criticize your beloved beliefs are some idiots like me who haven’t even read your scriptures. Guess again!
That post is part of the reason I made this post. Shit like this from the OP there:
I happen to value technological progress as an intrinsic good, so classifying a Singularity as “positive” or “negative” is not easy for me.
!!!
I don’t expect that, if everyone made more of an effort to become deeply familiar with the LW materials, there would be no disagreement with them. There is, and would be, much more interesting disagreement, and far fewer of the default mistakes.
Um, you seem to me to be saying that someone (davidad) who is in fact familiar with the Sequences, and who left AI to achieve things well past most of LW’s participants, is a perfect example of who you don’t want here. Is that really what you meant to put across?
I don’t expect that, if everyone made more of an effort to become deeply familiar with the LW materials, there would be no disagreement with them. There is, and would be, much more interesting disagreement, and far fewer of the default mistakes.
Can you provide some examples of interesting disagreement with the LW materials that were acknowledged as such by those who wrote the content or who believe it is correct?
My default stance has always been that people disagree, even when informed; otherwise I’d have a lot more organizations and communities to choose from, and there’d be no way I could make it onto SI’s top donor list.