LessWrong Team
I have signed no contracts or agreements whose existence I cannot mention.
I think the guide should be 10x more prominent in this post.
You should see the option when you click on the triple dot menu (next to the Like button).
So the nice thing about karma is that if someone thinks a wikitag is worthy of attention for any reason (article, tagged posts, importance of concept), they’re able to upvote it and make it appear higher.
Much of the current karma comes from Ben Pace and me, who did a pass. Rationality Quotes didn’t strike me as a page I particularly wanted to boost up the list, but if you disagree with me you’re able to Like it.
In general, I don’t think having a lot of tagged posts should mean a wikitag should be ranked highly. It’s a consideration, but I like it flowing via people’s judgments about whether or not to upvote it.
The categorization is an interesting question. Indeed currently only admins can do it and that perhaps requires more thought.
Interesting. Doesn’t replicate for me. What phone are you using?
It’s a compass rose, thematic with the Map and Territory metaphor for rationality/truthseeking.
The real question is why does NATO have our logo.
Curated! I like this post for the object-level interestingness of the cited papers, but also for pulling in some interesting models from elsewhere and generally reminding us that this is something we can do.
In times of yore, LessWrong venerated the neglected virtue of scholarship. And well, sometimes it feels like it’s still neglected. It’s tough because indeed many domains have a lot of low quality work, especially outside of hard sciences, but I’d wager on there being a fair amount worth reading, and I appreciate Buck pointing at a domain where that seems to be the case.
Was there the text of the post in the email or just a link to it?
Curated. I was reluctant to curate this post because I found myself bouncing off it somewhat due to length – I guess in pedagogy there’s a tradeoff between explaining at length (you convey enough info but lose people) and keeping it brief (people read it but don’t get enough). Based on a private convo, Raemon thinks the length is warranted.
I’m curating because I do think this kind of project is valuable. Every day it feels easier to lose our minds entirely to AI, and I think it’s important to remember we can think better or worse, and we should be trying to do the former.
I have mixed feelings about Raemon’s project overall. Parts of it feel good, something feels missing (I think I’m partial to John Wentworth’s claim elsewhere that you need a bunch of technical study in the recipe), but I expect that engaging with the stuff Raemon is developing will be helpful for anyone trying to get better at thinking.
This doesn’t seem right. Suppose there are two main candidates for how to get there, I-5 and J-6 (but who knows, maybe we’ll be surprised by a K-7), and I don’t know which Alice will choose. Suppose I know there’s already a Very General Helper and a Kinda Decent Generalizer; then I might say “I assign 65% chance that Alice is going to choose the I-5 and will try to contribute having conditioned on that”. This seems like a reasonable thing to do. It might be for naught, but I’d guess in many cases the EV of something that’s definitely helpful if we go down I-5 is better than the EV of finding something that’s helpful no matter the choice.
One should definitely track the major route they’re betting on and make updates and maybe switch, but it seems okay to say your plan conditions on some bigger plan.
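To put toy numbers on that EV comparison (only the 65% is from above; the other values are invented purely for illustration):

```python
# Toy EV comparison: a contribution that only helps if Alice picks I-5,
# versus a smaller contribution that helps on any route. All values invented.
p_i5 = 0.65            # my credence that Alice goes down I-5
value_if_i5 = 10       # value of the I-5-specific contribution, if I-5 is chosen
value_general = 5      # value of the route-agnostic contribution

ev_conditional = p_i5 * value_if_i5   # 6.5 -- worth nothing if she picks J-6 or K-7
ev_general = value_general            # 5.0 -- helpful no matter the route

print(ev_conditional > ev_general)    # True: conditioning can still win in EV
```

Flip the numbers and the general contribution wins, of course; the point is just that conditioning on the likelier route isn’t automatically dominated.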
Edit: we are not going to technically curate this post since it’s an EA Forum crosspost and for boring technical reasons that breaks the curation email. I will leave this notice up though.
Curated. This piece definitely got me thinking. If we grant that some people are unusually altruistic, empathetic, etc., it stands to reason that there are others on the other end of various distributions. And then we should also expect various selection effects on where they end up.
It was definitely a puzzle piece clicking for me that these traits can coexist with [genuine] moral conviction and that the traits are egodystonic. This rings true but somehow hasn’t been an explicit model for me, but yes. Combine with this the difficulty of detecting these traits and the resultant behaviors...and yeah, there’s stuff here to think about.
I appreciate that the authors were thorough in their research but don’t especially love the format. This was pretty dense and I think a post that pulled out the most key pieces of info and argued for some conclusions would be a better read, but I much prefer this to no post.
To the extent I should add my own opinions to curation notices, my thought is this makes me update against “benefit of the doubt” when witnessing concerning behaviors. I don’t know that everyone beginning to scrutinize everyone else for having big-D vibes would be good, but I do think scrutinizing behaviors for being high-integrity, cooperative, transparent, etc. might actually be a good direction – with the understanding that good norms around acceptable behaviors prevent the abuses that anyone (however much D) is tempted towards. Something like: we want to build “robust-to-malevolence” orgs and communities that make it impractical or too costly to manipulate, etc.
Welcome! Don’t be too worried; you can try posting some stuff and see how it’s received. Based on how you wrote this comment, I think you won’t have much trouble. The New User Guide and other stuff gets worded a bit sternly because of the people who tend not to put in much effort at all and expect to be well received – which doesn’t sound like you at all. It’s hard to write one document that’s stern to those who need it and more welcoming to those who need that, unfortunately.
duplicate with Hyperstitions
Curated! It strikes me that asking “how would I update in response to...?” is both a sensible and straightforward thing to be asking and yet not a form of question I see asked much. I think we could be asking the same about slow vs fast takeoff and similar questions.
The value and necessity of this question also isn’t just about not waiting for future evidence to come in, but about realizing that “negative results” require interpretation too. I also think there’s a nice degree of “preregistration” here as well that seems neat and maybe virtuous. Kudos and thank you.
I’m curious why the section on “Applying Rationality” in the About page you cited doesn’t feel like an answer.
Applying Rationality
You might value Rationality for its own sake, however, many people want to be better reasoners so they can have more accurate beliefs about topics they care about, and make better decisions.
Using LessWrong-style reasoning, contributors to LessWrong have written essays on an immense variety of topics on LessWrong, each time approaching the topic with a desire to know what’s actually true (not just what’s convenient or pleasant to believe), being deliberate about processing the evidence, and avoiding common pitfalls of human reason.
Beyond that, The Twelve Virtues of Rationality includes “scholarship” as the 11th virtue, and I think that’s a deep part of LessWrong’s culture and aims:
The eleventh virtue is scholarship. Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. If you are gluttonous you will become vaster than mountains. It is especially important to eat math and science which impinge upon rationality: evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study. The Art must have a purpose other than itself, or it collapses into infinite recursion.
I would think it strange, though, if one could get better at reasoning and believing true things without actually trying to do that on specific cases. Maybe you could sketch out more what you expect LW content to look like.
Errors are my own
At first blush, I find this caveat amusing.
1. If there are errors, we can infer that those providing feedback were unable to identify them.
2. If the author was fallible enough to have made errors, perhaps they are fallible enough to miss errors in input sourced from others.
What purpose does it serve? Given it’s often paired with “credit goes to...<list of names>”, it seems like an attempt to ensure that people providing feedback/input on a post are exposed only to upside from doing so, while the author takes all the downside reputational risk if the post is received poorly or exposed as flawed.
Maybe this works? It seems that as a capable reviewer/feedback-haver, I might agree to offer feedback on a poor post written by a poor author, perhaps pointing out flaws; my having given feedback on it might reflect poorly on my time allocation, but the bad output shouldn’t be assigned to me. Whereas if my name is attached to something quite good, it’s plausible that I contributed to that – I think because it’s easier to help a good post be great than to save a bad post.
But these inferences seem like they’re there to be made and aren’t changed by what an author might caveat at the start. I suppose the author might want to remind the reader of them rather than make them true through an utterance.
Upon reflection, I think (1) doesn’t hold. The reviewers/input makers might be aware of the errors but be unable to save the author from them. (2) That the reviewers made mistakes that have flowed into the piece seems all the more likely the worse the piece is overall, since we can update that the author wasn’t likely to catch them.
On the whole, I think I buy the premise that we can’t update too negatively on reviewers and feedback givers for having deigned to give feedback on something bad, though their time allocation is suspect. Maybe they’re bad at saying no, maybe they’re bad at telling people their ideas aren’t that good, maybe they have hope for this person. Unclear. Upside I’m more willing to attribute.
Perhaps I would replace the “errors are my own[, credit goes to]” with a reminder or pointer that these are the correct inferences to make. The words themselves don’t change them? Not sure, just musing here.
Edited To Add: I do think “errors are my own” is a very weird kind of social move that’s being performed in an epistemic context, and I don’t like it.
This post is comprehensive, but I think “safetywashing” and “AGI is inherently risky” come far too close to the end and get too little treatment, as I think they’re the most significant reasons against.
This post also makes no mention of race dynamics and how contributing to them might outweigh the rest, and, as RyanCarey says elsethread, it doesn’t talk about other temptations and biases that push people towards working at labs and that would apply even if doing so were on net bad.
Curated. Insurance is a routine part of life, whether it be the car and home insurance we necessarily buy, the Amazon-offered protection one reflexively declines, or the insurance we know doctors and businesses must have, and so on.
So it’s pretty neat when someone comes along and (compellingly) says “hey guys, you (or at least most people) are wrong about when insurance makes sense to buy, the reasons you have are wrong, here’s the formula”.
While the assumptions can be questioned, e.g. the infinite badness of going bankrupt, and other factors can be raised, this is just a neat technical treatment of a very practical, everyday question. I expect that I’ll be thinking in terms of this myself when making various insurance choices. Kudos!
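For concreteness, here’s a minimal sketch of the kind of expected-log-wealth comparison the post’s formula builds on (my rough paraphrase under made-up numbers, not the author’s exact formulation):

```python
import math

# Made-up numbers for illustration.
wealth = 100_000
premium = 1_500
# Possible losses over the policy period and their probabilities.
loss_dist = {0: 0.95, 20_000: 0.04, 60_000: 0.01}

# Expected log wealth if we self-insure and bear any loss ourselves...
ev_log_uninsured = sum(p * math.log(wealth - loss) for loss, p in loss_dist.items())
# ...versus paying the premium and having losses covered.
ev_log_insured = math.log(wealth - premium)

print("buy insurance" if ev_log_insured > ev_log_uninsured else "skip it")
```

The log makes large losses weigh disproportionately, which is roughly why insuring small annoyances rarely pencils out while insuring ruinous losses can.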
Curated. The wiki pages collected here, despite being written in 2015–2017, remain excellent resources on concepts and arguments for key AI alignment ideas (both those still widely used and those lesser known). I found that even for concepts/arguments like the orthogonality thesis and corrigibility, I felt a gain in crispness from reading these pages. Concepts I didn’t have before, e.g. epistemic and instrumental efficiency, feel useful in thinking about the rise of increasingly powerful AI.
Of course, there’s also non-AI content that got imported. The Bayes guide likely remains the best resource for building Bayes intuition, and the same goes for the extremely thorough guide on logarithms.