If I put on my startup hat, I hear this proposal as “Have you considered scaling your product by 10x?” A startup is essentially a product that can (and does) scale by multiple orders of magnitude, becoming useful to massive numbers of people and producing significant value for them. If you share attributes of a startup, that’s a good question to ask yourself.
That said, many startups scale before their product is ready. I have had people boast to me about how much funding they’ve gotten for their startup, without giving me a story for how they can actually turn that funding into people using their product. Remember when Medium fired a third of its staff? There are many stories of startups getting massive amounts of funding and then crashing. So you don’t want to scale prematurely.
To pick something very concrete, one question you could ask is “If I told you that LW had gotten 10x comments this quarter, do you update that we’d made 10x or even 3x progress on the art of human rationality and/or AI alignment (relative to the amount of progress we made on LW the quarter before)?” I think that isn’t implausible, but it’s not obvious, and I think there are other things to focus on. To give a very concrete example that’s closer to work we’ve done lately: if you heard that “LessWrong had gotten 10x answers to questions of >50 karma this quarter”, I’d be marginally more confident that core intellectual progress had been made, but that metric is still obviously very goodhart-able.
A second and related reason to be skeptical of focusing on moving comments from 19 to 179 at the current stage (especially if I put on my ‘community manager hat’) is a worry about wasting people’s time. In general, LessWrong is a website where we don’t want many core members of the community to be using it 10 hours per day. Becoming addictive and causing all researchers to be on it all day could easily be a net negative contribution to the world. While none of your recommendations were about addictiveness, there are related ways of increasing the number of comments, such as showing a user’s karma score on every page, like LW 1.0 did.
Anyway, those are some arguments against. I overall feel like we’re in the ‘figuring out the initial idea and product’ stage rather than the ‘execute’ stage, and that’s where my thoughts are focused presently. I’m interested in things like creating basic intro texts in AI alignment, creating new ways of identifying what ideas the site needs, and focusing generally on the end of the pipeline of intellectual progress right now, before focusing on getting more people to spend their time on the site. I do think I’d quickly change my mind if net engagement with the site were decreasing, but my current sense is that it is slowly increasing.
Thoughts?
I agree with this worry, though I have a vague feeling that LW is capturing and retaining less of the rationalist core than is ideal — (EDIT: for example,) I feel like I see LW posts linked/discussed on social media less than is ideal. Not for the purpose of bringing in new readers, but just for the purpose of serving as a common-knowledge hub for rationalists. That’s just a feeling, though, and might reflect the bubbles I’m in. E.g., maybe LW is more of a thing on various Discords, since I don’t use Discord much.
If we’re getting fewer comments than we’d expect and desire given the number of posts or page visits, then that might also suggest that something’s wrong with the incentives for commenting.
An opt-in way to give non-anonymous upvotes (either publicly visible, or visible to the upvoted poster, or both) feels to me like it would help with issues in this space, since it’s a very low-effort way to give much more contentful/meaningful feedback than an anonymous upvote (“ah, Wei Dai liked my post” is way more information and reinforcement than “ah, my post has 4 more karma”, while being a lot less effort than Wei Dai writing more comments). Also separating out “I like this / I want to see more stuff like this” votes from “I agree with this” votes (where I think “I agree with this” votes should only publicly display when they’re non-anonymous). I feel like this helps with making posting more rewarding, and also just makes the site as a whole feel more hedonic and less impersonal.
You made me think of a feature I think could be great: when someone downvotes a post, the site automatically prompts them to comment. Another idea is that you’d only be able to make a strong downvote if you comment, but I’m not too sure about that.
I like this idea. I can’t find it now, but I remember a recent comment suggesting that any post/comment which ends up with negative karma should have someone commenting on it as to why they downvoted it, so that the feedback is actionable.
To encourage commentors (and posters) without cluttering up the comments thread:
Non-substantive comments, collapsed by default, where voters can leave a couple of words as to why they voted the way they did.
Yeah, I do think having a simple non-anonymous upvoting option is promising. I wonder whether we can make it a simple extension of the strong-upvoting system (maybe have some kind of additional button show up when you strong-upvoted something, that allows you to add your name to it, though I can imagine that getting too complicated UI-wise).
Idea: if someone hovers over the karma number, a tooltip shows number of voters plus who non-anonymously upvoted; and if you click the karma number, it gives you an option to make your vote non-anonymous (which results in a private notification, plus a public notification if it’s an upvote).
This seems better to me than giving the “<” or “>” more functionality, since those are already pretty interactive and complex; whereas the number itself isn’t really doing much.
It seems to me that there are straightforward interventions:
(1) Provide share buttons. Most websites use share buttons to encourage readers to share content, and there’s no reason why it wouldn’t work for us.
Share buttons also provide a way to recognize which users share articles.
To build on your existing example, having the information “25 people came to this article because Wei Dai shared it on Facebook” would be motivating. It would also provide a way for people to follow the backlink to Facebook and read the comments that happened there.
For spam-fighting reasons, you might require a minimum amount of karma for a user’s shares to be tracked in this fashion.
(2) Automatically push newly curated posts to Twitter and a Facebook page.
I personally would prefer everything to do with Facebook, Twitter, etc., to stay as far away from LW as possible. Also, adding social-media sharing buttons seems to be asking to have more of the discussion take place away from LW, which is the exact opposite of what I thought was being discussed here.
If I write an article, I care about it getting read as widely as possible. I care about engagement happening.
If an article I write on LessWrong gets shared on Facebook or Twitter I would enjoy knowing that it’s shared.
I give less weight to linkposts, because the discussion/comments are split in an annoying way. It would be worse with Facebook.
Sure: the author of a particular article may just want it to be read and shared as widely as possible. But what’s locally best for them is not necessarily the same as what’s globally best for the LW community.
Put yourself in a different role: you’re reading something of the sort that might be on LW. Would you prefer to read and discuss it here or on Facebook? For me, the answer is “definitely here”. If your answer is generally “Facebook” then it seems to me that you want your writings discussed on Facebook, you want to discuss things on Facebook, and what would suit you best is for Less Wrong to go away and for people to just post things on Facebook. Which is certainly a preference you’re entitled to have, but I don’t think Less Wrong should be optimizing for people who feel that way.
I do prefer to read and discuss on LW over discussing on Facebook. As a reader of a post on LW I don’t think it harms me much when a post gets linked on Facebook.
I don’t think this will result on average in fewer comments on LW.com. If people click on the link to LW within Facebook they can both comment on LW and on Facebook. Many of the people who see the post on Facebook would have never read the post otherwise or engaged with it.
External links also increase page-rank which means that posts show up more prominently on Google and additional people will discover LessWrong.
As far as optimization goes, I would prefer LW to optimize to motivate people to write great posts over organizing it in a way that optimizes the reading experience.
I do like the idea of karma-limited share buttons.
I think most of the problems with incentives for commenting are due to network effects, i.e. not everyone is here, or I don’t have evidence that they’re here, so I still feel like more people will see discussion on FB.
I think social proof is going to turn out to be pretty important. I’m slightly wary of it because it pushes against the ideal that “LW is a place where you can talk about ideas, as much as possible without having social status play into it”, but like it or not, “High Profile User liked my comment” or “My Friend liked my comment” is way more motivating.
I’m currently thinking about how to balance those concerns.
As a contrary data point, I prefer LW to Facebook because identified voting makes the social part of my brain nervous. I’m much more hesitant both to “like” things (for fear of signaling the wrong thing) and also to post/comment (if a post/comment lacks identified likes, that seems to hurt more than a lack of anonymous upvotes, while the presence of identified likes doesn’t seem to be much more rewarding than anonymous upvotes for me).
ETA: If LW implemented optional identified voting (which I’ll call “like”), I’d probably use it very sparingly, because 1) I’m afraid I might “like” something that turns out to be wrong and 2) I feel like if I did use it regularly, then when I don’t “like” something that people can reasonably predict me to endorse they would wonder why I didn’t “like” it. So I’ll probably end up “liking” something only when it seems really important to put my name behind something, but at that point I might as well just write a comment.
The above updates me toward being more uncertain about whether it’s a good idea to add an ‘optional non-anonymized upvoting’ feature. I’ll note that separating out ‘I agree with this’ from ‘I want to see more comments like this’ is potentially extra valuable (maybe even necessary) for a healthy non-anonymized upvoting system, because it’s more important to distinguish those things if your name’s on the line. Also, non-anonymized ‘I factually disagree with this’ is a lot more useful than non-anonymized ‘I want to see fewer comments/posts like this’.
Can you expand on what exactly you mean with “without having social status come into play”?
Social status is a prime way human beings are motivated to do things. The prospect that I might get social status by writing a great article that people find valuable sets good incentives for me to provide quality content.
I meant in the other direction, where people judge ideas as better because higher status people said them.
This seems like the thing that happens by default and we can’t really stop it, but I’m wary of UX paradigms that might reinforce it even harder.
Thanks for the reply! I see what you’re saying, but here are some considerations on the other side.
Part of what I was trying to point out here is that 179 comments would not be “extraordinary” growth, it would be an “ordinary” return to what used to be the status quo. If you want to talk about startups, Paul Graham says 5-7% a week is a good growth rate during Y Combinator. 5% weekly growth corresponds to 12x annual growth, and I don’t get the sense LW has grown 12x in the past year. Maybe 12x/year is more explosive than ideal, but I think there’s room for more growth even if it’s not explosive. IMO, growth is good partially because it helps you discover product-market fit. You don’t want to overfit to your initial users, or, in the case of an online community, over-adapt to the needs of a small initial userbase. And you don’t want to be one of those people who never ships. Some entrepreneurs say if you’re not embarrassed by your initial product launch, you waited too long.
“that metric is obviously very goodhart-able”

One could easily goodhart the metric by leaving lots of useless one-line comments, but that’s a little beside the point. The question for me is whether additional audience members are useful on the current margin. I think the answer is yes, if they’re high-quality. The only promo method I suggested which doesn’t filter heavily is the Adwords thing. Honestly, I brought it up mostly to point out that we used to do that and it wasn’t terrible, so it’s a data point about how far it’s safe to go.
What if we could make AI alignment research addictive? If you can make work feel like play, that’s a huge win, right?
See also Giving Your All. You could argue that I should either be spending 0% of my time on LW or 100% of my time on LW. I don’t think the argument fully works, because time spent on LW is probably a complementary good with time spent reading textbooks and so on, but it doesn’t seem totally unreasonable for me to see the number of upvotes I get as a proxy for the amount of progress I’m making.
I want LW to be more addictive on the current margin. I want to feel motivated to read someone’s post about AI alignment and write some clever comment on it that will get me karma. But my System 1 doesn’t have a sufficient expectation of upvotes & replies for me to experience a lot of intrinsic motivation to do this.
I’d suggest thinking in terms of focus destruction rather than addictiveness. Ideally, I’d find LW enjoyable to use without it hurting my ability to focus.
I think instead of restricting the audience, a better idea is making discussion dynamics a little less time-driven.
If I leave a comment on LW in the morning, and I’m deep in some equations during the afternoon, I don’t want my brain nagging me to go check if I need to defend my claims on LW while the discussion is still on the frontpage.
Spreading discussions out over time also serves as spaced repetition to reinforce concepts.
I think I heard about research finding that brainstorming for 5 minutes on 5 different days, instead of 25 minutes on a single day, is a better way to generate divergent creative insights. This makes sense to me because the effect of being anchored on ideas you’ve already had is lessened.
See also the CNN effect.
Re: intro texts, I’d argue having Rohin’s value learning sequence go by without much of an audience to read & comment on it was a big missed opportunity. Paul Christiano’s ideas seem important, and it could’ve been really valuable to have lively discussions of those ideas to see if we could make progress on them, or at least share our objections as they were rerun here on LW.
Ultimately, it’s the idea that matters, not whether it comes in the form of a blog post, journal article, or comment. You mods have talked about the value of people throwing ideas around even when they’re not 100% sure about them. I think comments are a really good format for that. [Say, random idea: what if we had a “you should turn this into a post” button for comments?]
Just wanted to say I agree regarding the problems with conversation being “time driven” (I’ve previously noted a similar problem with Q&A).
One idea that occurs to me is to personalise Recent Discussion on the homepage. If I’ve read a post and even more if I’ve upvoted it then I’m likely to be interested in comments on that thread. If I’ve upvoted a comment then I’m likely to be interested in replies to that comment.
If Recent Discussion worked more like a personal recommendation section than a rough summary section then I think I’d get more out of it and probably be more motivated to post comments, knowing that people may well read them even if I’m replying to an old post.