LessWrong has a dual nature. On one hand, it’s a place where anyone can post, and where almost any idea can get a hearing.
On the other hand, LessWrong promotes the ideas of Eliezer Yudkowsky. This is inevitable, and fair, since it was originally based on Eliezer’s posts. This is also intentional: no post makes it onto the home page unless Eliezer endorses it, and he has, to my knowledge, never endorsed a post that disagreed with or questioned things he has said in the past.
I’m not complaining. I applaud Eliezer for opening up top-level posting to everyone; he could have just kept it as his blog. But LessWrong shouldn’t simultaneously be Eliezer’s place and a base from which to build an entire discipline, if you want that discipline to be well-built. That’s like trying to build a school of journalism at Fox News.
Could LessWrong become such a place, if Eliezer relinquished control of the coveted green button? I don’t know. There’s more memetic homogeneity here than I would prefer for such a venture. But I don’t see any more likely candidates at present.
The other dual nature of LessWrong is that it’s about rationality, and it’s about Friendly AI. The groupthink exists mainly within the FAI aspect of LessWrong. Perhaps someday these two parts should split into separate websites?
(Or perhaps, before that happens, we will develop a web service interface enabling two websites to interact so seamlessly that the notion of “separate websites” will dissolve.)
Here’s one example of a post that criticized Eliezer and others associated with SIAI but nevertheless got promoted to the home page: http://lesswrong.com/lw/2l8/existential_risk_and_public_relations/
I think there have been others, though I don’t remember any specific ones off the top of my head.
Off the top of my head, Abnormal Cryonics.
Sometimes there are right answers, and smart people will mostly agree. I suspect your perception of “memetic homogeneity” results from your insistence on disagreeing with some obviously (at least obviously after the discussions we’ve had) right answers, e.g. persistence of values as an instrumental value.
What? Someone disagrees with that? But, but… how?
Ask Phil
If I understand what you are talking about, I have expressed disagreement with it a couple of times. My disagreement has to do with the values expressed by a coalition (which will be some kind of bargained composite of the values of the individual members of that coalition).
But then when the membership in that coalition changes, the ‘deal’ must be renegotiated, and the coalition’s values are no longer perfectly persistent—nor should they be.
This is not just a technical quibble. The CEV of mankind is a composite value representing a coalition with a changing membership.
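To make this concrete, here is a minimal sketch in Python, assuming the “bargained composite” is just a weighted average of member utility functions; the members, weights, and world-state are hypothetical illustrations, not a claim about how the actual bargaining would go.

```python
# Toy model: a coalition's "values" as a bargained composite of its members' values.
# Members, weights, and the world-state are all hypothetical.

def composite_utility(members, world):
    """Weighted average of member utilities: one crude stand-in for a bargained deal."""
    total_weight = sum(weight for _, weight in members)
    return sum(weight * utility(world) for utility, weight in members) / total_weight

# Two members with different values over the same world.
alice = (lambda w: w["freedom"], 1.0)
bob = (lambda w: w["safety"], 1.0)

world = {"freedom": 0.9, "safety": 0.4}

coalition_v1 = [alice, bob]
print(composite_utility(coalition_v1, world))  # the original 'deal': 0.65

# Membership changes: carol joins, and the deal must be renegotiated.
carol = (lambda w: w["safety"], 2.0)
coalition_v2 = [alice, bob, carol]
print(composite_utility(coalition_v2, world))  # a different composite: 0.525
```

The composite changes as soon as the membership does: whatever the coalition optimizes is itself a function of who belongs to it, so it cannot be perfectly persistent while the membership keeps changing.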
The case of agents in conflict: keep your values and be destroyed, or change them and get a world partially optimized for your initial values (a toy payoff comparison is sketched below).
The case of an unknown future: you know the class of worlds you want to be in. What you don’t know yet is that to reach them you must make choices incompatible with your values. And, to make things worse, all the choices you can make ultimately lead to worlds you definitely don’t want to be in.
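Here is a minimal sketch of the first case, with made-up numbers; the utilities are hypothetical and only illustrate the structure of the trade-off, scored throughout by the agent’s initial values.

```python
# Toy payoff comparison for the "agents in conflict" case, scored in both branches
# by the agent's ORIGINAL utility function. All numbers are made up for illustration.

U_ORIGINAL_IF_DESTROYED = 0.0  # keep your values, lose the conflict, get nothing
U_ORIGINAL_IF_MODIFIED = 0.4   # change your values, survive, and the resulting world
                               # still scores 0.4 by your *initial* utility function

def best_choice():
    if U_ORIGINAL_IF_MODIFIED > U_ORIGINAL_IF_DESTROYED:
        return "modify values"  # even the original values prefer this branch
    return "keep values"

print(best_choice())  # -> "modify values"
```

By the initial utility function’s own lights, the self-modification branch wins; that is the sense in which persistence of values can fail to be instrumentally useful in this case.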
Yes. That is the general class that includes ‘Omega rewards you if you make your decision irrationally’. It applies whenever the specific state of your cognitive representation interacts significantly with the environment by means independent of your behaviour.
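A minimal sketch of that class, assuming a hypothetical Omega that can inspect the agent’s decision procedure directly rather than merely observe its choices:

```python
# Toy version of the class where the environment reacts to the agent's internal
# state (here, its decision procedure) independently of its behaviour.
# Omega, the agents, and the payoffs are all hypothetical.

def omega_payoff(agent):
    # Omega inspects the agent's internals, not the action it takes.
    if agent["procedure"] == "irrational":
        return 1_000_000
    return 1_000

rational_agent = {"procedure": "rational", "action": "one-box"}
irrational_agent = {"procedure": "irrational", "action": "one-box"}

# Same behaviour, different payoff, because the payoff keys on cognitive state.
print(omega_payoff(rational_agent), omega_payoff(irrational_agent))
```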
No. You don’t need to edit yourself to make unpleasant choices. Whenever you wish you were a different person than who you are, so that you could make a different choice, you just make that choice.
That works for a pure consequentialist, but if one’s values have a deontological component in the mix, then your suggestion effectively requires changing one’s values.
And I doubt that an instrumental value that will change terminal values can be called instrumental. An agent that adopts this value (persistence of values) will end up with different terminal values than an agent that does not.
No, it’s the red button that makes the biggest difference.
The Sequences shouldn’t simultaneously be a slowly laid out, baby-steps introduction to rationality and the main resource to learn about EY’s ideas for domain specialists. They are trying to do contradictory things.