First of all, I appreciate all the work the LessWrong / Lightcone team does for this website.
The Good
I was skeptical of the agree/disagree voting. After using it, I think it was a very good decision. Well done.
I haven’t used the dialogue feature yet, but I have plans to try it out.
Everything just works. Spam is approximately zero. The garden is gardened so well I can take it for granted.
I love how much you guys experiment. I assume the reason you don’t do more is just engineering capacity.
And yet…
Maybe there are a lot of boiling feelings out there about the site that never get voiced?
I tend to avoid giving negative feedback unless someone explicitly asks for it. So…here we go.
Over the past 1.5 years, I’ve been less excited about LessWrong than at any time since I discovered this website. I’m uncertain to what extent this is because I changed or because the community did. Probably a bit of both.
AI Alignment
The most obvious change is the rise of AI Alignment writings on LessWrong. There are two things that bother me about AI Alignment writing.
It’s effectively unfalsifiable. Even betting markets don’t really work when you’re betting on the apocalypse.
It’s highly political. AI Alignment became popular on LessWrong before AI Alignment became a mainstream political issue. I feel like LessWrong has a double standard, where political writing is held to a high epistemic standard unless it’s about AI.
I have hidden the “AI Alignment” tag from my homepage, but there is still a spillover effect. “Likes unfalsifiable political claims” is the opposite of the kind of community I want to be part of. I think adopting lc’s POC || GTFO burden of proof would make AI Alignment dialogue productive, but I am pessimistic about that happening on a collective scale.
Weird ideas
When I write about weird ideas, I get three kinds of responses.
“Yes and” is great.
“I think you’re wrong because y” is fine.
“We don’t want you to say that” makes me feel unwelcome.
Over the years, I feel like I’ve gotten fewer “yes and” comments and more “we don’t want you to say that” comments. This might be because my writing has changed, but I think what’s really going on is that this happens to every community as it gets older. What was once radical eventually congeals into dogma.
I used to post my weird ideas immediately to LessWrong. Now I don’t, because I feel like the reception on LessWrong would bum me out.[1]
I wonder what fraction of the weirdest writers here feel the same way. I can’t remember the last time I’ve read something on LessWrong and thought to myself, “What a strange, daring, radical idea. It might even be true. I’m scared of what the implications might be.” I miss that.[2]
I get the basic idea
I have learned a lot from reading and writing on LessWrong. Eight months ago, I had an experience where I internalized something very deep about rationality. I felt like I graduated from Level 1 to Level 2.
According to Eliezer Yudkowsky, his target audience for the Sequences was 2nd grade. He missed and ended up hitting college-level. They weren’t supposed to be comprehensive. They were supposed to be Level 1. But after that, nobody wrote a Level 2. (The postrats don’t count.) I’ve been trying―for years―to write Level 2, but I feel like a sequence of blog posts is a suboptimal format in 2023. Yudkowsky started writing the Sequences in 2006, when YouTube was still a startup. That leads me to…
100×
The other reason I’ve been posting less on LessWrong is that I feel like I’m hitting a soft ceiling with what I can accomplish here. I’m nowhere near my personal skill cap, of course. But there would be a much larger potential audience (and therefore impact) if I shifted from writing essays to filming YouTube videos. I can’t think of anything LessWrong is doing wrong here. The editor already allows embedded YouTube links.
[1] Exception: I can usually elicit a positive response by writing fiction instead of nonfiction. But that takes a lot more work.

[2] This might be entirely in my head, due to hedonic adaptation.
Comments

“Over the years, I feel like I’ve gotten fewer ‘yes and’ comments and more ‘we don’t want you to say that’ comments.”

This is the part I’m most frustrated with. It used to be that you could say some wild stuff on this site and people would take you seriously. Now there’s a chorus of people who go “eww, gross” if you go too far past what they think should be acceptable. LessWrong culture originally had very high openness to wild ideas. At worst, if you reasoned well and people disagreed, they’d at least ignore you; now you’re more likely to get downvoted for saying controversial things simply because they are controversial, and that feels bad.
This was always a problem, but feels like it’s gotten worse.
Huh, I am surprised by this. I agree this is a thing across much of the internet, but do you have any examples? I feel like we really still have a culture of pretty extreme openness and of taking random ideas seriously (enough that sometimes I feel like wild-sounding bad ideas get upvoted too much, because people like being contrarian a bit too much).
Here’s part of a comment on one of my posts. The comment negatively impacted my desire to post deviant ideas on LessWrong.

“Bullshit. If your desire to censor something is due to an assessment of how much harm it does, then it doesn’t matter how open-minded you are. It’s not a variable that goes into the calculation.”

I happen to not care that much about the object-level question anymore (at least as it pertains to LessWrong), but on a meta level, this kind of argument should be beneath LessWrong. It actively frames any concern for unrestricted speech as poorly motivated, making it more difficult to have the object-level discussion.
The comment doesn’t represent a fringe opinion. It has +29 karma and +18 agreement.
I think I’m less open to weird ideas on LW than I used to be, and more likely to go “seems wrong, okay, next”. Probably this is partly a me thing, and I’m not sure it’s bad—as I gain knowledge, wisdom and experience, surely we’d expect me to become better at discerning whether a thing is worth paying attention to? (Which doesn’t mean I am better, but like. Just because I’m dismissing more ideas, doesn’t mean I’m incorrectly dismissing more ideas.)
But my guess is it’s also partly a LW thing. It seems to me that compared to 2013, there are more weird ideas on LW and they’re less worth paying attention to on average.
In this particular case… when you talk about “We don’t want you to say that” comments, it sounds to me like those comments don’t want you to say your ideas. It sounds like Habryka and other commenters interpreted it that way too.
But my read of the comment you’re talking about here isn’t that it’s opposed to your ideas. Rather, it doesn’t want you to use a particular style of argument, and I agree with it, and I endorse “we don’t want bad arguments on LW”. I downvoted that post of yours because it seemed to be arguing poorly. It’s possible I missed something; I admittedly didn’t do a close read, because while I’ve enjoyed a lot of your posts, I don’t have you flagged in my brain as “if lsusr seems to be making a clear mistake, it’s worth looking closely to see if the error is my own”.
(I am sad that the “avoid paying your taxes” post got downvoted. It does seem to me like an example of the thing you’re talking about here, and I upvoted it myself.)
I also endorse pretty much everything in this comment.
(Except for the bit about the “avoid paying your taxes” post, because I don’t even remember that one.)
To emphasize this point: in many cases, the problem with some “weird ideas” isn’t, like, “oh no, this is too weird, I can’t even, don’t even make me think about this weird stuff :(”. It’s more like: “this is straightforwardly dumb and wrong”. (Indeed, much of the time it’s not even interestingly wrong, so it’s not even worth my time to argue with it. Just: dumb nonsense, already very well known to be dumb nonsense, nothing new to see or say, downvote and move on with life.)
You don’t have to justify your updates to me (and also, I agree that the comment I wrote was too combative, and I’m sorry), but I want to respond to this because the context of this reply implies that I’m against weird ideas. I vehemently dispute this. My main point was that it’s possible to argue for censorship for genuine reasons (rather than because one is closed-minded). I didn’t advocate for censoring anything, and I don’t think I’m in the habit of downvoting things because they’re weird, at all.
This may sound unbelievable or seem like a warped framing, but I honestly felt like I was going against censorship by writing that comment. Like as a description of my emotional state while writing it, that was absolutely how I felt. Because I viewed (and still view) your post as a character attack on people-who-think-that-sometimes-censorship-is-justified, and one that’s primarily based on an emotional appeal rather than a consequentialist argument. And well, you’re a very high prestige person. Posts like this, if they get no pushback, make it extremely emotionally difficult to argue for a pro-censorship position regardless of the topic. So even though I acknowledge the irony, it genuinely did feel like you were effectively censoring pro-censorship arguments, even if that wasn’t the intent.
I guess you could debate whether censoring pro-censorship views is pro- or anti-censorship. But regardless, I think it’s bad. It’s not impossible for reality to construct a situation in which censorship is necessary. In fact, I think such situations already exist; if someone posts a trick that genuinely accelerates AI capabilities by 5 years, I want that to be censored. (Almost all examples I’d think of would relate to AI or viruses.) The probability that something in this class happens on LW is not high, but it’s high enough that we need to be able to talk about this without people feeling like they’re impure for suggesting it.
“And well, you’re a very high prestige person.”

I stumbled over this part. What makes someone high prestige? Their total LW karma? To me that doesn’t really make sense as a proxy for prestige.
Hmm, is LessWrong really so intolerant of being reminded of the existence of “deviant ideas”?
Social Dark Matter was pretty well received, with 248 karma, and was posted quite recently.
The much older Kolmogorov Complicity and the Parable of Lightning opened with a quote from the same Paul Graham essay you linked to (What You Can’t Say).
I was not personally offended by your example post and upvoted it just now. I probably at least wouldn’t have downvoted it had I seen it earlier, but I hadn’t.
People love deviant ideas in the abstract, but hate to deal with specific deviant ideas that attack beliefs they hold dear.
lsusr’s example post didn’t seem to be a specific deviant idea, though; it named no heresy in particular. To paraphrase one point: beware of banning apparent falsity, lest you inadvertently ban true heresies.
Many readers appeared to dislike my example post. IIRC, prior to mentioning it here, its karma (excluding my automatic hard upvote) was close to zero, despite it having about 40 votes.
Hi there, lsusr!
I read the post & comment which you linked, and indeed felt that the critical comment was too combative. (As a counterexample, I like this criticism of EY for how civil it is.) That being said, I think I understand the sentiment behind its tone: the commenter saw your post make a bunch of strong claims, felt that these claims were wrong and/or insufficiently supported by sources, and wrote the critical comment in a moment of annoyance.
To give a concrete example, “We do not censor other people more conventional-minded than ourselves.” is an interesting but highly controversial claim. Both because hardly anything in the world has a 100% correlation, and because it leads to unintuitive logical implications like “two people cannot simultaneously want to censor one another”.
Anyway, given that the post began with a controversial claim, I expected the rest of the post to support this initial claim with lots of sources and arguments. Instead, you took the claim further and built on it. That’s a valid way to write, but it puts the essay in an awkward spot with readers who disagree with the initial claim. For this reason, I’m also a bit confused about the purpose of the essay: was it meant to be a libertarian manifesto, or an attempt to convince readers, or what? EDIT: Also, the majority of LW readers are not libertarians. What reaction did you expect to receive from them?
If I were to make a suggestion, the essay might have worked better if it had been a dialogue between a pro-liberty and a pro-censorship character. Why? Firstly, if readers feel like an argument is insufficiently supported, they can criticize or yell at the character, rather than at you. And secondly, such a dialogue would’ve required making a stronger case in favor of censorship, and it would’ve given the censorship character the opportunity to push back against claims by the liberty character. This would’ve forestalled having readers make similar counterarguments. (Also see Scott’s Nonfiction Writing Advice, section “Anticipate and defuse counterarguments”.)
My best example of this comes from this post of mine on EAF (my LW examples are a bit more ambiguous). Multiple folks quickly jumped to making a Nazi argument, almost in parody of Godwin’s Law.
I don’t have an opinion on your post itself, but it is indeed disappointing that the comments immediately jumped to the Nazi comparison, which of course made all further discussion pointless.
“I can’t remember the last time I’ve read something on LessWrong and thought to myself, ‘What a strange, daring, radical idea. It might even be true. I’m scared of what the implications might be.’ I miss that.”

I thought Genesmith’s latest post fully qualified as that!
I totally didn’t think adult gene editing was possible, and had dismissed it. It seems like a huge deal if true, and it’s the kind of thing I don’t expect would have been highlighted anywhere else.
The post about not paying one’s taxes was pretty out there and had plenty of interesting discussion, but now it’s been voted down into the negatives. I wish it were a bit higher (at 0-ish karma, say), which might’ve happened if people could disagree-vote on it.
But yes, overall this criticism seems true, and important.
I’ve strong-upvoted it to −1, because I agree.
Another improvement I didn’t notice until right now is the “respond to a part of the original post” feature. I feel like it nudges comments away from nitpicking.
I didn’t quite parse that – which UI element are you referring to?
I meant side-comments. I never use them myself, but people often use them to comment on my posts. When they do, the comments tend to be constructive, especially compared to blockquotes.
Ah cool. That was my best guess but wasn’t sure.
“I can’t think of anything LessWrong is doing wrong here. The editor already allows embedded YouTube links.”

One thing that could help is automatic crossposting from your YouTube channel, like you can currently have from a blog. It would be even more powerful if it generated a transcript automatically (though that’s currently difficult and expensive).
“It would be even more powerful if it generated a transcript automatically (though that’s currently difficult and expensive).”

A few points on this:
Some YouTube videos already come with good captions.
For the rest, YouTube provides automatic captions. These are really bad (no punctuation or capitalization), but even at that level of quality they could be used, e.g., to pinpoint where something was said.
Transcription via OpenAI Whisper is cheap ($0.006 per minute, i.e. $0.36 per hour of audio) and quite decent if there’s only one speaker. For interviews and podcasts, the raw output is not good enough (to create this podcast transcript at the beginning of the year, I used Whisper as a base, but still had to put in many, many hours of editing), because it doesn’t, e.g., do speaker diarisation or insert paragraph breaks. But I’m pretty sure that by now there are hybrid services out there which can do even the things Whisper is bad at. This still won’t yield a professional-level transcript, though doing an editing pass with GPT-4 might close the gap. My point is, these transcripts are not expensive relative to labor costs. (See the sketch after this list.)
The implementation of automatic AI transcripts has become surprisingly simple. E.g., as I mentioned here, I now get automatic transcripts for my voice notes by following a step-by-step video guide. It’s not yet consumer-level simple (though for those purposes one can just pay for an AI transcription app), but it’s definitely already hobbyist-simple.
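To make the captions-then-Whisper pipeline above concrete, here is a minimal sketch. It is an illustration under stated assumptions, not a definitive implementation: it assumes the third-party youtube-transcript-api and openai-whisper Python packages, and the video ID and audio filename are hypothetical placeholders.

```python
# pip install youtube-transcript-api openai-whisper
# Sketch: prefer a video's existing (possibly auto-generated) captions;
# fall back to transcribing the audio locally with Whisper.
from youtube_transcript_api import YouTubeTranscriptApi

def captions_or_none(video_id: str):
    """Fetch captions with timestamps, or None if the video has none."""
    try:
        # Returns a list of {'text': ..., 'start': ..., 'duration': ...} dicts.
        return YouTubeTranscriptApi.get_transcript(video_id)
    except Exception:
        return None

def whisper_transcript(audio_path: str) -> str:
    """Transcribe locally; works best with a single speaker."""
    import whisper  # the openai-whisper package
    model = whisper.load_model("base")
    return model.transcribe(audio_path)["text"]

segments = captions_or_none("dQw4w9WgXcQ")  # placeholder video ID
if segments:
    # The timestamps are what let you pinpoint where something was said.
    for seg in segments[:5]:
        print(f"[{seg['start']:7.1f}s] {seg['text']}")
else:
    print(whisper_transcript("episode.mp3"))  # placeholder filename
```

Neither path does speaker diarisation or inserts paragraph breaks, so, as noted above, the hours of editing are still where the real cost lies.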
“But there would be a much larger potential audience (and therefore impact) if I shifted from writing essays to filming YouTube videos.”

There are also writers with a very large reach. A recommendation I saw was to post where most of the people, and hence most of the potential readers, are, i.e. on the biggest social media sites. If you’re trying to have impact as a writer, the reachable audience on LW is much smaller. (Though of course there are other ways of having a bigger impact than just reaching more readers.)
“I can’t remember the last time I’ve read something on LessWrong and thought to myself, ‘What a strange, daring, radical idea.’”

Do you remember any examples from back in the day?
I enjoy your content here and would like to continue reading you as you grow into your next platforms.
YouTube grows your audience in the immediate term, among people who have the tech and time to consume videos. However, text is the lowest common denominator for human communication across longer time scales. Text handles copying and archiving in ways that I don’t think we can promise for video on a scale of hundreds of years, let alone thousands. Text handles search with an ease that we can only approximate for video by transcribing it. Transcription is tractable with AI, but still requires investment of additional resources, and yields a text of lower quality and intentionality than an essay crafted directly by its own author.
Plenty of people spend time in situations where they can read text but not listen to audio, and plenty of people spend time in situations where they can listen to audio but not read text. Compare the experience of listening to an essay via text-to-speech with the experience of reading a YouTube video’s auto-generated transcript. Which makes you feel like it’s improving how you think?
“Which makes you feel like it’s improving how you think?”

I’m learning how to film, light, and edit video. I’m learning how to speak better too, and getting a better understanding of how the media ecosystem works.
Making videos is harder than writing, which means I learn more from it.
Ah, that makes perfect sense. On the other side, watching videos is often easier than reading, so I often feel like I learn more from the latter =)