LesserWrong is dead to me. Trust nothing here; it is not truth-tracking.
PDV
They don’t have intellectual progress as a goal.
The social incentives favor authors doing it more, and are ambivalent for the mods. Though I don’t trust them either, particularly after such a massive failure of judgment as proposing this change.
Calling out obvious groupthink and bullshit, which is needed with depressing regularity.
You are wrong about your own motivations in a way trivially predictable by monkey dynamics.
I expect content’s prominence on LesserWrong to be the result of political dynamics and filter bubbles, not insight or value. I do not expect it to be truth-tracking.
In line with this, I have given up on LesserWrong. It’s clearly not going to be a source of insight I can trust for much longer, and I doubt it has been any time recently.
I am in the process of taking everything I posted here and putting it back on my personal blog. After that’s been done, I don’t know whether I will interact with this site at all, since the main contribution I feel is needed is banned and the mods have threatened to ban me as well.
The Hamming Problem of Group Rationality
Fix the links, not the limit.
So scale it to... the size it already is? Maybe double that? I don’t think that requires any change. If you wanted a 10x increase in user count, that probably would, but I don’t think those 10x potential users even exist. Unless and until round 3 of “Eliezer writes something that has no business getting a large audience into his preferred cause areas, but somehow works anyway” occurs.
I am also extremely skeptical that any discussion platform can do the third thing you mention. I don’t think any discussion platform that has ever existed has both absorbed significant quantities of new people well and filtered effectively for quality. Those goals, in point of fact, seem directly opposed in most contexts: in order to judge people in any detail, the number to be judged must be kept small.
Are you sure you’re not building for scale because that’s the default thing you do with a web app made in the SF Bay Area?
Hmm, related question: Assuming this revival works, how long do you expect the site to be actively used before a 3.0 requiring a similar level of effort as this project becomes necessary? 5 years? 10?
(My prediction is 5 years.)
Why do you think that LessWrong can or should scale?
Many if not most people are Goodharting in most aspects of their lives. Why not this one?
I acknowledge your claim that you value feeling good over and above the things that cause you to feel good. I agree that many people implicitly endorse this claim about themselves. I think you and they are very likely mistaken about this preference, and that ceasing to optimize for it would improve your life significantly according to your other preferences.
I said that already? “Something great is something that increases your utility significantly.” This is a property of timelines, not of world-states, and so can’t be directly queried, but better approximations can be built up by retrospecting on which times feeling great was accurate and which times it was not.
Unreal, in a subthread above, claims that it is possible to realign System 1 such that feeling great coincides with being great. This seems wrong to me, but is the kind of thing that could be right. Your description does not seem to be the kind of thing that could be right.
It is strategically necessary to assume that social incentives are the true reason, because social incentives disguise themselves as any acceptable reason, and the corrosive effect of social incentives is the Hamming Problem for group epistemics. (I went into more detail here.)
I agree that desiring to hide traces is evidence of such a desire, but it’s simply not my motivation.
Irrelevant. Stated motivation is cheap talk, not reliable introspectively, let alone coming from someone else.
Or, in more detail:
1) Unchecked, this capability being misused will create echo chambers.
2) There is a social incentive to misuse it; lack of dissent increases perceived legitimacy and thus status.
3) Where social incentives to do a thing for personal benefit exist, basic social instincts push people to do that thing for personal benefit.
4) These instincts operate at a level below and before conscious verbalization.
5) The mind’s justifier will, if feasible, throw up more palatable reasons why you are taking the action.
6) So even if you believe yourself to be using a capability for good reasons, if there is a social incentive to misuse it, you are very likely misusing it a significant fraction of the time.
7) Even doing this a fraction of the time will create an echo chamber.
8) For good group epistemics, preventing the descent into echo chambers is of utmost importance.
9) Therefore no given reason can be an acceptable reason.
10) Therefore this capability should not exist.
People absolutely are silenced by this, and the core goal is to get high-quality discussion, for which comments are at least as important as posts.
Writing a rebuttal on your personal page, if you are low-status, is still being silenced. To be able to speak, you need not just a technical ability to say things, but an ability to say them to the audience that cares.
Under this moderation scheme, if I have a novel, unpopular view dissenting from a belief that is important to the continuing power of the popular, they can costlessly prevent me from getting any traction.
I do not have a good understanding of what is meant by “ontology”.
Others could, if they are unwise. But they should not. There is no shame in deleting low-effort comments and so no reason to hide the traces of doing so. There is shame in deleting comments for less prosocial reasons, and therefore a reason to hide the traces.
The fact that you desire to hide the traces is evidence that the traces being hidden are of the type it is shameful to create.
I claim that as a general principle, “something feeling great is by itself a type of greatness to me” is a category error. What feels great is a map, and being great is the territory. There is a fact of the matter with regard to what is great for PDV, and what is great for Kaj. They are not identical, and they are not directly queryable, but there is a fact of the matter. Something great is something that increases your utility significantly. (Non-utilitarian ethics: translate that into language your system permits.)
What feels great is a separate fact. It is directly queryable, and correlates with being great, but it is only an approximation, and can therefore be Goodharted. The distinction between the true utility and the approximation is a general property of human minds, with some regularities (superstimuli), but it is also not identical between people.
So when you say “for me that’s a subcategory”, I conclude that you have a) misunderstood my claim, and b) mistaken the map for the territory.
And I don’t believe that many people are out to manipulate.
I think that would be a crux. Virtually everyone is out to manipulate almost everyone else, at all times. Much of the manipulation is subconscious, and observing that it is present is harshly socially punished. (cf. ialdabaoth/frustrateddemiurge/the living incarnation of David Monroe, PBUH).
If that’s the case then it’s your duty to be better at modelling them than they are at surprising you.
Doing that in full generality is literally impossible; it’s anti-inductive. It’s entirely a matter of what tolerances are acceptable. Treating most people as not giving a shit about me or anyone else, until clearly demonstrated otherwise, has predicted the world accurately up to this point.
That is complete. I’m out.