To be clear: now having read the post and comments, you do not consider it more closed?
I feel like we should taboo “closed”.
Both before and after reading the post, I think that AIAF caused AI alignment discussion to be much more publicly readable (relative to academia).
After reading the post / comments, I think that the AIAF is more publicly writable than academia. Before reading the post / comments, I did not think this—I wouldn’t have said that writing a post on LW was “submitting” to the AIAF, since it didn’t seem to me like there were people making a considered decision on whether to promote LW posts to AIAF.
Both before and after reading the post, I think that the AIAF will not be perceived (at least by academics) to be more publicly writable than academia.
It seems to me like the main problem is that the AIAF currently does a bad job of giving a naive visitor an idea of how its setup works. Do you think a better explanation on AIAF’s side would solve the issue, or do you believe the problem is deeper?
I’m still confused by half the comments on this post. How can people be confused by a setup that is explained in detail in the only post that is always pinned on the AF, which is a FAQ?
I think most people just don’t read the manual? And I think good user interfaces don’t assume they do.
Speaking personally, I’m an Alignment Forum member and have read a bunch of posts there, but I never even noticed that post existed.
I want to push back on that. I agree that most people don’t read the manual, but I think that if you’re confused about something and then don’t read the manual, it’s on you. I also don’t think they could make it much more obvious than having it always on the front page.
Maybe the main criticism is that this FAQ/intro post has a bunch of info about the first AF sequences that is probably irrelevant to most newcomers.
It would, for example, be possible to have a notice at the bottom of Alignment Forum pages, shown to users who aren’t logged in, that says: “If you aren’t a member of the Alignment Forum and want to comment on this post, you can do so at [link to LessWrong post]. Learn more [link to FAQ].”
Such a link would strengthen the association between LessWrong and the AIAF for a naive user who reads an AIAF post. There might be drawbacks to strengthening that association, but it would help the naive user understand that the way for non-members to interact with AIAF posts is through LessWrong.
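A minimal sketch of how such a notice could be generated, assuming a hypothetical helper in the forum codebase (the function, interface, and URLs are illustrative, not AIAF’s actual code):

```typescript
// Hypothetical helper: build the footer notice shown to logged-out
// readers of an AIAF post, pointing them at the LessWrong mirror.
// All names and URLs here are illustrative assumptions.

interface PostInfo {
  lessWrongUrl: string; // the LessWrong mirror of the AIAF post
  faqUrl: string;       // the pinned FAQ explaining the AF/LW setup
}

function crossPostNotice(isLoggedInMember: boolean, post: PostInfo): string | null {
  // Members can comment directly, so they don't need the notice.
  if (isLoggedInMember) return null;
  return (
    `If you aren't a member of the Alignment Forum and want to comment ` +
    `on this post, you can do so at <a href="${post.lessWrongUrl}">LessWrong</a>. ` +
    `<a href="${post.faqUrl}">Learn more</a>.`
  );
}

// Example: a logged-out visitor sees the notice; a member sees nothing.
const examplePost: PostInfo = {
  lessWrongUrl: "https://www.lesswrong.com/posts/example",
  faqUrl: "https://www.alignmentforum.org/faq",
};
console.log(crossPostNotice(false, examplePost));
console.log(crossPostNotice(true, examplePost)); // null
```

The point of gating on login state is that the notice only appears to exactly the users who would otherwise be confused about how to comment.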
The goal of the AIAF is to be well accepted by the AI field. If people from that field come to the AIAF and form a lower opinion of it because they don’t really understand how it works, you can say that’s on them, but it’s still bad for the AIAF.
Yeah, I was proposing something like this in this comment in response to Peter.
I think responsibility is the wrong framing here. There are empirical questions: what proportion of users will try engaging with the software? How many users will feel confused? How many will be frustrated and quit, or leave with a bad impression? I think the Alignment Forum should be (in part) designed with these questions in mind. If there’s a post on the front page that people “could” think to read, but in practice don’t, then I think this matters.
I disagree. I think the right way to do user interfaces is to present the relevant information to the user at the appropriate time. E.g., when they try to sign up, give a pop-up explaining how that process works (or linking to the relevant part of the FAQ). Ditto when they try making a comment, or making a post. I expect this would expose many more users to the right information at the right time, rather than requiring them to think to look at the stickied post and filter through it for the information they want.
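The "right information at the right time" idea can be sketched as a simple mapping from user actions to the FAQ section to surface contextually. The action names and FAQ anchors below are hypothetical, not the forum’s actual routes:

```typescript
// Sketch of contextual help: instead of one stickied FAQ, surface the
// relevant FAQ section at the moment a user attempts each action.
// Action names and anchor fragments are illustrative assumptions.

type UserAction = "sign-up" | "comment" | "post";

const contextualHelp: Record<UserAction, string> = {
  "sign-up": "faq#membership",  // how membership and promotion work
  "comment": "faq#commenting",  // commenting via the LessWrong mirror
  "post":    "faq#submissions", // how LW posts get promoted to the AF
};

// When a non-member triggers an action, the UI would link or pop up
// the matching explainer instead of failing silently.
function helpFor(action: UserAction): string {
  return contextualHelp[action];
}

console.log(helpFor("comment")); // faq#commenting
```

The design choice here is that the user never has to know the FAQ exists; each confusing interaction carries its own pointer to the explanation.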
I think part of the problem is that it’s not always obvious that you’re confused about something.
If you don’t know that the UI has led you to make wrong assumptions about the way it works, you won’t even know to go look at the manual.
(Also, as someone who has designed lots of UIs: for many types of UIs, if the user has to go look at the manual, it means I’ve got something to improve in the UI.)
I think giving people better beliefs about how the AIAF works would probably solve the issue, though that doesn’t necessarily come from better explanations, e.g. I much prefer things like your suggestion here, where you’re providing some info at exactly the time it is relevant, so that people actually read it. (Perhaps that’s what you mean by “better explanations”.)