I formulate the ideas. And then I communicate them.
I see what you’re saying. However, I also think that the act of writing often helps you generate ideas, not just communicate ones you already had. Paul Graham argues for this in The Age of the Essay, and I agree with him.
Thinking and communicating are two separate processes, even if they often happen at the same time.
I think that in practice, content on LessWrong is basically all about communicating, not thinking out loud.
In theory, things like shortforms, open threads, personal blog posts, private messages, and meetups all help with the thinking part, but I think the social norms aren’t strong enough for it to catch on. Like, you could write a bunch of shortform posts where you’re thinking out loud, but since you don’t see others doing so, you don’t feel comfortable or compelled to do it yourself.
What I am doing right now, writing this essay, is, technically, a linear walk through the network of my ideas. That is what writing is: turning a net into a line. But it is also very concretely what I do, since I have externalized my ideas in a note-taking system where the thoughts are linked with hyperlinks. My notes are a knowledge graph, a net of notes. When I sit down to write, I simply choose a thought that strikes me as interesting and use that as my starting point. Then I click my way, linearly, from one note to the next until I have reached the logical endpoint of the thought-line I want to communicate.
Woah. That was really insightful.
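To make that concrete for myself, here’s a minimal sketch of the idea, assuming the notes form a simple linked graph. The note names and the link choices are my own invention, not the author’s actual system:

```python
# A minimal sketch of "turning a net into a line": notes form a graph
# (nodes with hyperlinks), and writing is a linear walk through it.
# The note names and links below are hypothetical illustrations.

notes = {
    "writing-generates-ideas": ["net-into-line"],
    "net-into-line": ["note-graph"],
    "note-graph": ["engelbart-zoom"],
    "engelbart-zoom": [],  # the logical endpoint of this thought-line
}

def walk(graph, start):
    """Follow links from a starting note, yielding a linear path."""
    path, current = [], start
    while current is not None:
        path.append(current)
        links = graph.get(current, [])
        # At each note you'd pick whichever link interests you most;
        # this sketch just takes the first one for simplicity.
        current = links[0] if links else None
    return path

print(" -> ".join(walk(notes, "writing-generates-ideas")))
# writing-generates-ideas -> net-into-line -> note-graph -> engelbart-zoom
```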
This would greatly reduce the cost of communicating ideas. And a lowered cost has the potential to unleash large amounts of knowledge that are now locked in minds that cannot communicate it, or that are too occupied doing more important things to take the time. (It will also, naturally, unleash an endless flood of misinformation.)
I worry not only about misinformation but also about low-quality content. There might be more high-quality content, but if it’s hard enough to find, the average quality of the content people actually encounter might end up being lower. But maybe we’d also be able to improve content discovery enough to make this risk acceptable.
These five to ten hours, when ideas are made human-readable, should be possible to outsource to GPT-3.
I worry about this making us dumber. As Paul Graham notes, “Observation suggests that people are switching to using ChatGPT to write things for them with almost indecent haste. Most people hate to write as much as they hate math. Way more than admit it. Within a year the median piece of writing could be by AI. I warn you now, this is going to have unfortunate consequences, just as switching to living in suburbia and driving everywhere did. When you lose the ability to write, you also lose some of your ability to think.”
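To be clear about what’s being proposed, here’s a hedged sketch of what that outsourcing step might look like with current tooling. The essay says GPT-3; the model name, the prompt wording, and the use of the OpenAI Python client are all my assumptions:

```python
# Hypothetical sketch of outsourcing the "make ideas human-readable" step
# to a language model. Model name and prompt are assumptions, not the
# essay's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_notes = """
- writing is turning a net into a line
- notes live in a hyperlinked graph
- writing = walking the graph linearly
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Turn these rough notes into readable prose:\n" + raw_notes,
    }],
)
print(response.choices[0].message.content)
```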
If you click on something that seems interesting, the essay meanders in that direction. If you feel the reading is becoming a bit of a slog, with too many irrelevant details, you zoom out with an Engelbart zoom, and get a summary of the content instead, at whatever level of abstraction suits you.
Wow! What a powerful thought.
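Going back to the zoom idea: here’s a toy sketch of how summaries at multiple levels of abstraction might be represented, assuming each section simply stores pre-written summaries. The data structure and field names are hypothetical:

```python
# A toy sketch of the "Engelbart zoom": each section carries summaries
# at several levels of abstraction, and the reader picks the level that
# suits them. The data structure here is a hypothetical illustration.

section = {
    "levels": [
        "Reading tools could let you zoom between summary and detail.",   # most zoomed out
        "If a passage becomes a slog, you zoom out to a summary; "
        "if a summary is interesting, you zoom in to the full text.",
        "Full text: if you click on something that seems interesting, "
        "the essay meanders in that direction; if the reading becomes a "
        "slog, you zoom out and get a summary at whatever level suits you.",  # most zoomed in
    ],
}

def zoom(sec, level):
    """Return the section's text at the requested abstraction level,
    clamped to the levels that actually exist."""
    levels = sec["levels"]
    return levels[max(0, min(level, len(levels) - 1))]

print(zoom(section, 0))  # one-line gist
print(zoom(section, 2))  # full detail
```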