LessWrong team member / moderator. I’ve been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I’ve been interested in improving my own epistemic standards and helping others to do so as well.
Raemon
This post seems important-if-right. I get a vibe from it of aiming to persuade more than explain, and I’d be interested in multiple people gathering/presenting evidence about this, preferably at least some of them who are (currently) actively worried about China.
I’ve recently made a pull request (not quite ready to merge yet) that gives LessWrong Fatebook hoverovers (which are different from embeds; I’m considering also making embeds, although I think the embed UI takes up a bit too much space by default).
I am into “more Fatebook integration everywhere”.
(I think individual Fatebook questions can toggle whether to show/hide predictions before you’ve made your own)
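To illustrate the shape of the hoverover idea, here’s a minimal sketch (not the actual PR): a component that fetches a Fatebook question and renders a small summary when you hover a link to it. The endpoint URL, response shape, and component name are assumptions for illustration, not the real Fatebook or LessWrong APIs.

```tsx
import React, { useEffect, useState } from "react";

// Hypothetical response shape for a Fatebook question preview.
type FatebookQuestionPreview = {
  title: string;
  resolveBy: string;
  // May be withheld until the viewer has predicted, if the question is
  // set to hide predictions before you've made your own.
  communityForecast?: number;
};

// Sketch of a hoverover body; a real integration would render this inside
// whatever tooltip/popper component the site already uses.
export function FatebookHoverPreview({ questionId }: { questionId: string }) {
  const [question, setQuestion] = useState<FatebookQuestionPreview | null>(null);

  useEffect(() => {
    // Placeholder URL: the real PR would call whatever preview endpoint
    // Fatebook actually exposes.
    fetch(`https://fatebook.io/api/question-preview/${questionId}`)
      .then((res) => (res.ok ? res.json() : null))
      .then(setQuestion)
      .catch(() => setQuestion(null));
  }, [questionId]);

  if (!question) return <span>Loading Fatebook question…</span>;

  return (
    <div className="fatebook-hover-preview">
      <strong>{question.title}</strong>
      <div>Resolves by {question.resolveBy}</div>
      {question.communityForecast !== undefined && (
        <div>Community forecast: {Math.round(question.communityForecast * 100)}%</div>
      )}
    </div>
  );
}
```

An embed would render roughly the same information inline in the post body rather than in a tooltip, which is where the extra space comes from.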
This seems right to me, but the discussion of “scaling will plateau” feels like it usually comes bundled with “and the default expectation is that this means LLM-centric-AI will plateau”, which seems like the wrong-belief-to-have, to me.
Noting: this doesn’t really engage with any of the other particular claims in the previous comment’s link; it just makes a general assertion.
Curated. This was one of the more inspiring things I read this year (in a year that had a moderate number of inspiring things!)
I really like how Sarah lays out the problem and desiderata for neutrality in our public/civic institutional spaces.
LessWrong’s strength is being a fairly opinionated “university”[1] about how to do epistemics, which the rest of the world isn’t necessarily bought into. Trying to make LW a civic institution would fail. But, this post has me more excited to revisit “what would be necessary to build good, civic infrastructure” (where “good” requires both “be ‘good’ in some kind of deep sense,” but also “be memetically fit enough to compete with Twitter et al.” One solution might be convincing Musk of specific policies rather than building a competitor).
[1] I.e., a gated community with epistemic standards, a process for teaching people, and a process for some of those people going on to do more research.
You can make a post or shortform discussing it and see what people think. I recommend front-loading the main arguments, evidence, or takeaways so people can easily get a sense of it; people often bounce off long worldview posts from newcomers.
Fwiw I didn’t find the post hostile.
I’m assuming “natural abstraction” is also a scalar property. Reading this paragraph, I refactored the concept in my mind to “some abstractions tend to be cheaper to abstract than others. Agents will converge to using cheaper abstractions. Many cheapness properties generalize reasonably well across agents/observation-systems/environments, but all of those could in theory come apart.”
And the Strong NAH would be “cheap-to-abstract-ness will be very punctuated, or something” (i.e. you might expect less of a smooth gradient of cheapnesses across abstractions)
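A toy way to write down that refactoring (my own sketch, not anything from the NAH posts themselves): treat cheapness as a cost function over abstractions, observation systems, and environments.

```latex
% Toy formalization (my own sketch): c(A; O, E) is the cost for an agent with
% observation system O, in environment E, to compute and use abstraction A.
\[
  c : \mathcal{A} \times \mathcal{O} \times \mathcal{E} \to \mathbb{R}_{\ge 0}
\]
% Weak version: agents converge on low-cost abstractions, and c varies little
% as O and E vary, so different agents end up with similar abstractions
% (though in principle these cheapness properties could come apart).
% Strong version: costs are sharply separated, i.e. a few abstractions are far
% cheaper than nearby alternatives, giving a punctuated rather than smooth
% gradient of cheapness across abstractions.
```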
How would you solve the example legal situation you gave?
Thanks, this gave me the context I needed.
Put another way: this post seems like it’s arguing with someone but I’m not sure who.
I think I care a bunch about the subject matter of this post, but something about the way this post is written leaves me feeling confused and ungrounded.
Before reading this post, my background beliefs were:
Rationality doesn’t (quite) equal Systemized Winning. Or, rather, focusing on this seems to lead people astray more than it helps them.
There’s probably some laws of cognition to be discovered, about what sort of cognition will have various good properties, in idealized situations.
There’s probably some messier laws of cognition that apply to humans (but those laws are maybe more complicated).
Neither set of laws necessarily has a simple unifying framework that accomplishes All the Things (although I think the search for simplicity/elegance/all-inclusiveness is probably a productive search, i.e. it tends to yield good stuff along the way. “More elegance” is usually achievable on the margin.)
There might be heuristics that work moderately well for humans much of the time, which approximate those laws.
There are probably Very Rough heuristics you can tell an average person without lots of dependencies, and somewhat better heuristics you can give to people who are willing to learn lots of subskills.
Given all that… is there anything in particular I am meant to take from this post? (I have only skimmed it right now; it felt effortful to comb for the novel bits.) I can’t tell whether the few concrete bits are particularly important, or just illustrative examples.
This is not very practically useful to me but dayumn it is cool
An individual Social Psychology lab (or loose collection of labs) can choose who to let in.
Frontier Lab AI companies can decide who to hire, and what sort of standards they want internally (and maybe, in a loose alliance with other Frontier Lab companies).
The Immoral Mazes sequence outlines some reasons you might think large institutions are dramatically worse than smaller ones (see Recursive Middle Manager Hell for a shorter intro, although there I don’t spell out the part of the argument about how mazes are sort of “contagious” between large institutions).
But the simpler argument is “the fewer people you have, the easier it is for a few leaders to basically make personal choices based on their goals and values,” rather than selection effects resulting in the largest institutions being better modeled as “following incentives” than as “pursuing goals on purpose.” (If an organization didn’t follow the incentives, it’d be outcompeted by one that does.)
This claim looks like it’s implying that research communities can build better-than-median selection pressures but, can they? And if so why have we hypothesized that scientific fields don’t?
I’m a bit surprised this is the crux for you. Smaller communities have a lot more control over their gatekeeping because, like, they control it themselves, whereas the larger field’s gatekeeping is determined via open-ended incentives in the broader world that thousands (maybe millions?) of people have influence over. (There are also things you could do in addition to gatekeeping. See Selective, Corrective, Structural: Three Ways of Making Social Systems Work.)
(This doesn’t mean smaller research communities automatically have good gatekeeping or other mechanisms, but figuring out how to do better doesn’t feel like a very confusing or mysterious problem.)
Curated. This was a practically useful post. A lot of the advice here resonated with stuff I’ve tried and found valuable, so insofar as you were like “well I’m glad this worked for Shoshannah but I dunno if it’d work for me”, well, I personally also have found it useful to:
have a direction more than a goal
do what I love but always tie it back
try random things and see what affordances they give me
Yeah, I didn’t read this post and come away with “and this is why LessWrong works great”; I came away with a crisper model of “here are some reasons LW performs well sometimes”, but more importantly “here is an important gear for what LW needs to work great.”
Nod.
One of the things we’ve had a bunch of internal debate about is “how noticeable should this be at all, by default?” (with opinions ranging from “it should be about as visible as the current green links are” to “it’d be basically fine if jargon-terms weren’t noticeable at all by default.”)
Another problem is just variety in monitors and/or “your biological eyes.” When I do this:
Turn your screen brightness up a bunch and the article looks a bit like Swiss cheese (because the contrast between the white background and the black text increases, the relative contrast between the white background and the gray text decreases).
What happens to me when I turn my MacBook brightness to the max is that I stop being able to distinguish the grey and the black (rather than the contrast between white and grey seeming to decrease). I… am a bit surprised you had the opposite experience (I’m on a ~modern M3 MacBook. What are you using?)
I will mock up a few options soon and post them here.
For now, here are a couple random options that I’m not currently thrilled with (rough style sketch after the list):
1. The words are just black, not particularly noticeable, but use the same little ° that we use for links.
2. Same, but the circle is green.
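A rough sketch of how those two options might be expressed as styles (the class names, the ::after degree-mark trick, and the particular green are my assumptions for illustration, not the actual LessWrong implementation):

```ts
// Sketch only: JSS-style objects for the two mock-up options above.
const jargonTermStyles = {
  // Option 1: the term text stays plain black; only the small degree mark
  // (the same ° used on links) is appended after the term.
  optionPlainMarker: {
    color: "inherit",
    "&::after": {
      content: '"°"',
      color: "inherit",
    },
  },
  // Option 2: identical, except the degree mark is green, so it reads as
  // interactive without recoloring the term itself.
  optionGreenMarker: {
    color: "inherit",
    "&::after": {
      content: '"°"',
      color: "#5f9b65", // placeholder green
    },
  },
};

export default jargonTermStyles;
```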
This feels like you have some way of thinking about responsibility that I’m not sure I’m tracking all the pieces of.
1. Who literally meant the individual definitions? No one (or, some random alien mind).
2. Who should take action if someone flags that an unapproved term is wrong? The author, if they want to be involved, and site admins (or me-in-particular) if the author does not want to be involved.
3. Who should be complained to if this overall system is having bad consequences? Site admins, me-in-particular or habryka-in-particular (Habryka has more final authority, I have more context on this feature; you can start with me and then escalate, or tag both of us, or whatever).
4. Who should have Some Kind of Social Pressure Leveraged At them if reasonable complaints seem to be falling on deaf ears and there are multiple people worried? Also the site admins, and habryka-and-me-in-particular.
It seems like you want #1 to have a better answer, but I don’t really know why.
Oh, to be clear, I don’t think it was bad for you to post this as-is. Just that I’d like to see more followup.