RobertM
LessWrong dev & admin as of July 5th, 2022.
“10x engineers” are a thing, and if we assume they’re high-agency people always looking to streamline and improve their workflows, we should expect them to be precisely the people who get a further 10x boost from LLMs. Have you observed any specific people suddenly becoming 10x more prolific?
In addition to the objection from Archimedes, another reason this is unlikely to be true is that 10x coders are often much more productive than other engineers because they’ve heavily optimized around specific problems and skills that bottleneck other engineers, and most of those optimizations don’t readily admit of having an LLM suddenly inserted into the loop.
Not at the moment, but it is an obvious sort of thing to want.
Thanks for the heads up, we’ll have this fixed shortly (just need to re-index all the wiki pages once).
Eliezer’s Lost Alignment Articles / The Arbital Sequence
Arbital has been imported to LessWrong
Curated. This post does at least two things I find very valuable:
Accurately represents differing perspectives on a contentious topic
Makes clear, epistemically legible arguments on a confusing topic
And so I think that this post both describes and advances the canonical “state of the argument” with respect to the Sharp Left Turn (and similar concerns). I hope that other people will also find it helpful in improving their understanding of e.g. objections to basic evolutionary analogies (and why those objections shouldn’t make you very optimistic).
Yes:
My model is that Sam Altman regarded the EA world as a memetic threat, early on, and took actions to defuse that threat by paying lip service / taking openphil money / hiring prominent AI safety people for AI safety teams.
In the context of the thread, I took this to suggest that Sam Altman never had any genuine concern about x-risk from AI, or, at a minimum, that any such concern was dominated by the social maneuvering you’re describing. That seems implausible to me given that he publicly expressed concern about x-risk from AI 10 months before OpenAI was publicly founded, and possibly several months before it was even conceived.
Sam Altman posted Machine intelligence, part 1[1] on February 25th, 2015. This is admittedly after the FLI conference in Puerto Rico, which is reportedly where Elon Musk was inspired to start OpenAI (though I can’t find a reference substantiating his interaction with Demis as the specific trigger), but there is other reporting suggesting that OpenAI was only properly conceived later in the year, and Sam Altman wasn’t at the FLI conference himself. (Also, it’d surprise me a bit if it took nearly a year, i.e. from Jan 2nd[2] to Dec 11th[3], for OpenAI to go from “conceived of” to “existing”.)
I think it’s quite easy to read as condescending. Happy to hear that’s not the case!
I hadn’t downvoted this post, but I’m not sure why OP is surprised, given that the first four paragraphs, rather than explaining what the post is about, instead celebrate tree murder and insult their (imagined) audience:
so that no references are needed but those any LW-rationalist is expected to have committed to memory by the time of their first Lighthaven cuddle puddle
I don’t think much has changed since this comment. Maybe someone will make a new wiki page on the subject, though if it’s not an admin I’d expect it to mostly be a collection of links to various posts/comments.
re: the table of contents, it’s hidden by default but becomes visible if you hover your mouse over the left column on post pages.
I understand the motivation behind this, but there is little warning that this is how the forum works. There is no warning that trying to contribute in good faith isn’t sufficient, and you may still end up partially banned (rate-limited) if they decide you are more noise than signal. Instead, people invest a lot only to discover this when it’s too late.
In addition to the New User Guide that gets DMed to every new user (and is also linked at the top of our About page), we:
Show this comment above the new post form to new users who haven’t already had some content approved by admins. (Note that it also links to the new user’s guide.)
Open a modal when a new, unreviewed user clicks into a comment box to write a comment for the first time. Note how it’s three sentences long, explicitly tells users that they start out rate limited, and also links to the new user’s guide.
Show new, unreviewed users this moderation warning directly underneath the comment box.
Now, it’s true that people mostly don’t read things. So there is a tricky balance to strike between providing “sufficient” warning, and not driving people away because you keep throwing annoying roadblocks/warnings at them[1]. But it is simply not the case that LessWrong does not go out of its way to tell new users that the site has specific (and fairly high) standards.
[1] On the old internet, you didn’t get advance notice that you should internalize the norms of the community you were trying to join. You just got told to lurk more, or banned without warning if you were unlucky.
Apropos of nothing, I’m reminded of the “<antthinking>” tags originally observed in Sonnet 3.5’s system prompt, and this section of Dario’s recent essay (bolding mine):
In 2024, the idea of using reinforcement learning (RL) to train models to generate chains of thought has become a new focus of scaling. Anthropic, DeepSeek, and many other companies (perhaps most notably OpenAI who released their o1-preview model in September) have found that this training greatly increases performance on certain select, objectively measurable tasks like math, coding competitions, and on reasoning that resembles these tasks.
When is the “efficient outcome-achieving hypothesis” false? More narrowly, under what conditions are people more likely to achieve a goal (or harder, better, faster, stronger) with fewer resources?
The timing of this quick take is of course motivated by recent discussion about deepseek-r1, but I’ve had similar thoughts in the past when observing arguments against e.g. hardware restrictions: that they’d motivate labs to switch to algorithmic work, which would speed up timelines (rather than just reducing the naively expected slowdown). Such arguments propose that labs are following predictably inefficient research directions. I don’t want to categorically rule out such arguments; from the perspective of a person with good research taste, everyone else with worse research taste is “following predictably inefficient research directions”. But the people I saw making those arguments were generally not people who might conceivably have an informed inside view on novel capabilities advancements.
I’m interested in stronger forms of those arguments, not limited to AI capabilities. Are there heuristics about when agents (or collections of agents) might benefit from having fewer resources? One example is the resource curse, though the state of the literature there is questionable, and if the effect exists at all, it’s either weak or depends on other factors to materialize with a meaningful effect size.
We have automated backups, and should even those somehow find themselves compromised (which is a completely different concern from getting DDoSed), there are archive.org backups of a decent percentage of LW posts, which would be much easier to restore than paper copies.
I learned it elsewhere, but his LinkedIn confirms that he started at Anthropic sometime in January.
I know I’m late to the party, but I’m pretty confused by https://www.astralcodexten.com/p/its-still-easier-to-imagine-the-end (I haven’t read the post it’s responding to, but I can extrapolate). Surely the “we have a friendly singleton that isn’t Just Following Orders from Your Local Democratically Elected Government or Your Local AGI Lab” is a scenario that deserves some analysis...? Conditional on “not dying” that one seems like the most likely stable end state, in fact.
Lots of interesting questions in that situation! Like, money still seems obviously useful for allocating rivalrous goods (which is… most of them, really). Is a UBI likely when you have a friendly singleton around? Well, I admit I’m not currently coming up with a better plan for the cosmic endowment. But then you have population ethics questions—it really does seem like you have to “solve” population ethics somehow, or you run into issues. Most “just do X” proposals seem to fall totally flat on their face—“give every moral patient an equal share” fails if you allow uploads (or even sufficiently motivated biological reproduction), “don’t give anyone born post-singularity anything” seems grossly unfair, etc.
And this is really only scratching the surface. Do you allow arbitrary cognitive enhancement, with all that that implies for likely future distribution of resources?
I was thinking the same thing. This post badly, badly clashes with the vibe of Less Wrong. I think you should delete it, and repost to a site in which catty takedowns are part of the vibe. Less Wrong is not the place for it.
I think this is a misread of LessWrong’s “vibes” and would discourage other people from thinking of LessWrong as a place where such discussions should be avoided by default.
With the exception of the title, I think the post does a decent job at avoiding making it personal.
Well, that’s unfortunate. That feature isn’t super polished and isn’t currently in the active development path, but I’ll try to see if it’s something obvious. (In the meantime, I’d recommend subscribing to fewer people, or seeing if the issue persists in Chrome. Other people on the team are subscribed to 100-200 people without obvious issues.)
No, such outputs will almost certainly fail this criterion (since they will by default be written in the typical LLM “style”).