Announcing AlignmentForum.org Beta
We’ve just launched the beta for AlignmentForum.org.
Much of the value of LessWrong has come from the development of technical research on AI Alignment. In particular, having those discussions happen in an accessible place has allowed newcomers to get up to speed and get involved. But the alignment research community has at least some needs that are best met with a semi-private forum.
For the past few years, agentfoundations.org has served as a space for highly technical discussion of AI safety. But some aspects of the site design have made it a bit difficult to maintain, and have made onboarding new researchers harder. Meanwhile, as the AI landscape has shifted, it seemed valuable to expand the scope of the site. Agent Foundations is one particular paradigm with respect to AGI alignment, and it seemed important for researchers in other paradigms to be in communication with each other.
So for several months, the LessWrong and AgentFoundations teams have been discussing the possibility of using the LW codebase as the basis for a new alignment forum. Over the past couple of weeks we’ve gotten ready for a closed beta test, both to iron out bugs and (more importantly) to get feedback from researchers on whether the overall approach makes sense.
The current features of the Alignment Forum (subject to change) are:
A small number of admins can invite new members, granting them posting and commenting permissions. This will be the case during the beta—the exact mechanism of curation after launch is still under discussion.
When a researcher posts on AlignmentForum, the post is shared with LessWrong. On LessWrong, anyone can comment. On AlignmentForum, only AF members can comment. (AF comments are also crossposted to LW). The intent is for AF members to have a focused, technical discussion, while still allowing newcomers to LessWrong to see and discuss what’s going on.
AlignmentForum posts and comments on LW will be marked as such.
AF members will have a separate karma total for AlignmentForum (so AF karma will more closely represent what technical researchers think about a given topic).
On AlignmentForum, only AF Karma is visible. (note: not currently implemented but will be by end of day)
On LessWrong, AF Karma will be displayed (smaller) alongside regular karma.
If a commenter on LessWrong is making particularly good contributions to an AF discussion, an AF Admin can tag the comment as an AF comment, which will then be visible on the AlignmentForum. The LessWrong user will then have voting privileges (but not necessarily posting privileges), allowing them to start to accrue AF karma and to vote on AF comments and threads. (A rough sketch of these rules appears below.)
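To make the visibility and karma rules above concrete, here is a minimal sketch in TypeScript. Everything in it (the names, the types, and the exact vote-counting rule) is an assumption made for illustration; none of it is taken from the actual codebase.

```typescript
// Sketch of the crossposting rules described above (illustrative only).

type Site = "LessWrong" | "AlignmentForum";

interface Comment {
  authorIsAFMember: boolean; // posted by an Alignment Forum member
  taggedByAFAdmin: boolean;  // a LW comment promoted by an AF admin
}

// Everything on an AF post shows up on LessWrong; the AlignmentForum
// view shows only AF members' comments and admin-tagged LW comments.
function isVisible(site: Site, comment: Comment): boolean {
  if (site === "LessWrong") return true;
  return comment.authorIsAFMember || comment.taggedByAFAdmin;
}

// Karma as two separate totals. Assumed rule: votes from AF members
// count toward both totals, while all other votes count only toward
// LW karma, so AF karma tracks what technical researchers think.
interface Karma {
  lw: number;
  af: number;
}

function applyVote(karma: Karma, strength: number, voterIsAFMember: boolean): Karma {
  return {
    lw: karma.lw + strength,
    af: voterIsAFMember ? karma.af + strength : karma.af,
  };
}

// Example: an AF member's vote raises both totals; a LW-only vote
// raises only the LW total.
let karma: Karma = { lw: 0, af: 0 };
karma = applyVote(karma, 2, true);  // { lw: 2, af: 2 }
karma = applyVote(karma, 1, false); // { lw: 3, af: 2 }
```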
We’ve currently copied over some LessWrong posts that seemed like a good fit, and invited a few people to write posts today. (These don’t necessarily represent the long-term vision of the site, but they seemed like a good way to begin the beta test.)
This is a fairly major experiment, and we’re interested in feedback on the overall approach and the integration with LessWrong, both from AI alignment researchers (whom we’ll be reaching out to individually over the next two weeks) and from LessWrong users.
On one hand, I hate that there’s now even more explicit status stratification on LW. Only special moderator-approved people get the omega tags and votes. This sucks. Please at least hide our lack of worth from us rather than putting it in our face.
From another view, I kind of like the vision (if it becomes that) of LW as an aggregate of a number of special-purpose topics—AI alignment, consciousness philosophy, EA, community-building, etc. Being able to select the topics one is interested in and see them all in one place in one system would be nifty.
I personally don’t really have an interest in contributing to technical AI discussion (it’s not my skill set and I won’t develop it). As a result, it would be great to have an option to hide posts from the Alignment Forum.
GreaterWrong now has an Alignment Forum view.
Awesome, thank you so much!
To elaborate a bit—note that Alignment Forum posts are marked, in post listings, with a blue “AF” icon (and you can click that icon to see the view of all Alignment Forum posts).
<3
I have a dream that someday, everyone will be able to invent their own karma scores and LW perspectives. The alignment forum would merely be what users can access by applying the “restrict LW to this set of people” tool to the set Ω.
(Ideally, deleted posts being hidden would be merely another disableable filter.)
Why are there suddenly so many posts today? Is this due to the imports or is this representative of the volume on Alignment Forum?
It’s neither. The MIRI Summer Fellows programme is currently running, and the participants spent today writing up various ideas that they’d been thinking about and posting them to the forum. (I visited, chatted with them about what to blog about, goals for the new forum, selfish and pro-social benefits to blogging, etc.) It was an exciting one-off thing; we may try more such writing days in future. I enjoyed reading all the posts, from the longer, puzzling walks to the short-and-sweet nuggets of technical insight.
The goal for the participants was to gain the affordance to just sit down and turn an idea into a blogpost. I think it was successful.
I think it would have been better to space out the posts more, even if it made sense to write them all up on the same day. That way each post could receive more attention and discussion, and the authors would get more positive feedback for their efforts.
I think that they didn’t predict that so many of us would actually write blog posts.
This is correct. Last year we got approximately two.
I agree with this. One of the considerations pushing towards having them all posted at the same time, is that we wanted to have people comment on each other’s posts after they had written their own. I think this ended up happening less, but was definitely a goal.
Seconding this. Posting so much at once makes it very hard to digest.
It’s good that AF comments are crossposted to LW. What happens if someone replies on LW—does it end up in the AF author’s inbox, or should we expect AF authors to mostly ignore LW comments?
However the website evolves, going by the typical mind heuristic and the overlap between the two communities, I predict that most Alignment Forum authors will be interested in and will check LessWrong comments, although maybe less so than Alignment Forum comments.
Note: Off the cuff
Figuring out the best approach here is something I think we’re still evaluating. Ultimately I think this is going to depend on the average volume of comments on AF posts, and average quality of LW comments on AF posts.
Current feedback was that alignment forum posters wanted it to be easier to find the LW comments. (As Ray says, still evaluating, and this may change with time.)
Not sure whether this is the right place to voice technical complaints, but. I am unhappy about the handling of LaTeX macros (on which I rely heavily). Currently it seems like you can add macros either as inline equations or as block equations, and these macros are indeed available in the following equations. However, if an equation object contains only macros, it is invisible and seems to be impossible to edit after creation. As a workaround, I can add some text + macros into the same equation object, but this is very hacky. It would be nice if either equation objects with macros would remain visible, or (probably better) there would be a special “header” in each post where I can put the macros. It would be even more amazing if you could load LaTeX packages in that header, but that’s supererogatory.
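For concreteness, a macro-only block of the kind I mean looks something like this (the macro names here are just examples):

```latex
% A block equation containing only macro definitions. Once saved,
% this object renders as empty and can no longer be selected to edit.
\newcommand{\E}{\mathbb{E}}
\newcommand{\KL}[2]{D_{\mathrm{KL}}\!\left(#1 \,\middle\|\, #2\right)}

% The hacky workaround: put some visible text into the same object,
% so that it still renders and remains editable.
\text{(macro definitions)} \quad \newcommand{\R}{\mathbb{R}}
```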
Another, very serious issue with LaTeX support: When you copy/paste LaTeX objects, the resulting objects are permanently linked. Editing the content of one of them changes the content of another, which is not visible when editing but becomes visible once you save the post. This one made me doubt my sanity for a moment.
Woah. Thanks for pointing that out!
Huh, I never ran into that problem. This might turn out to not be super easy to fix since we are using an external LaTeX library, but we can give it a try.
Unsure about whether a header is a good idea, since the vast majority of posts on LW don’t have LaTeX, and so for them the header field would just be distracting, but we could add something like that only to agentfoundations, which would be fine. I can look into it. Also curious whether other people have similar problems.
And another problem: if an inline LaTeX object sits at the end of a paragraph, there seems to be no easy way to place the cursor right after the object unless the cursor is already there (neither the mouse nor the arrow keys help). So I have to either delete the object and recreate it, or write some text in the next paragraph and then use backspace to join the two paragraphs. This second workaround fails if there is also a block LaTeX object right after the end of the first paragraph, since backspace would then delete that equation object.
As a more general solution, we now support LaTeX in markdown formatted posts and comments. So if you run into a lot of problems like this, it might make sense to go to your user settings and activate the comment markdown editor.
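For illustration, the source of a markdown-formatted post with LaTeX would then look something like the following (this assumes the usual dollar-sign delimiters for inline and display math; the editor’s exact syntax may differ):

```latex
Inline math like $f \colon \mathbb{R} \to \mathbb{R}$ sits in the
running text, while display math gets its own block:

$$\mathbb{E}_{x \sim P}[f(x)] = \int f(x) \, \mathrm{d}P(x)$$
```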
Does that mean you don’t want any more bug reports regarding the WYSIWYG LaTeX editor? Not criticism, just asking.
Definitely still interested in bug reports for the WYSIWYG editor.
Another issue is that it seems impossible to find or find/replace strings inside the LaTeX.
Also, a “meta” issue: in IAFF, the source of an article was plain text in which LaTeX appeared as “$...$” or “$$...$$”. This allowed me to write essays in an external LaTeX editor and then copy them into IAFF with only a mild amount of effort. Here, the source seems to be inaccessible. This means that the native editor has to be good, because there are no alternatives. Maybe improving the native editor is indeed the best and easiest solution, but an alternative would be to somehow enable working with the source.
Yeah, we are working on improving the markdown editor to support LaTeX. It isn’t ready yet, but should be possible at some point in the next few weeks. (You can turn on the Markdown editor in your account settings)
That’s nice. Another reason it seems important: some of the content of these essays will eventually make its way into actual papers, and it will be much easier if you can copy-paste big chunks and lightly reformat afterwards, compared to having to copy-paste each LaTeX object by hand.
Another issue with LaTeX support: when I select a block of text that contains LaTeX objects and copy-paste it, the LaTeX turns into useless plain text. I can copy the contents of a particular LaTeX object by editing it, but sometimes it is very convenient to copy entire blocks.
This is a bit of a silly bug, but you can work around it by copying two whole blocks of text that contain LaTeX, in which case the content gets properly copy-pasted (and then you can just delete one of them). It’s a silly bug in the MathJax framework we are using, having to do with how copy-pasting multiple blocks is handled differently from copy-pasting individual lines.
Thank you, that’s very helpful.
Yes, I was only talking about alignmentforum, naturally.
Another issue: it seems impossible to delete anything, whether a comment or a draft (and I guess that goes for posts too?).
You can always move posts back to drafts. We have a plan to add a delete button, but want to make sure there is no way to click it accidentally. If you ping us on Intercom we are also happy to delete posts.
Not deleting comments is intentional, because completely deleting them would make it hard to display the children. You can just edit the content out of them. We are planning to make it so that you can delete your comments that don’t have children, but haven’t gotten around to it.