I feel like I’ve heard this before, and can sympathize, but I’m skeptical.
I feel like this ascribes an almost magical quality to how many blog posts are produced. The phrase “original seeing” sounds much more profound than I’m comfortable with for such a discussion.
Let’s go through some examples:
Lots of Zvi’s posts are summaries of content, done in a way that’s fairly formulaic.
A lot of Scott Alexander’s posts read to me like, “Here’s an interesting area that blog readers like but haven’t investigated much. I read a few things about it, and have some takes that make a lot of sense upon some level of reflection.”
A lot of my own posts seem like things it wouldn’t be too hard to design a search process to create.
Broadly, I think that “coming up with bold new ideas” gets too much attention, while more basic things like “doing lengthy research” or “explaining to people the next incremental set of information they would be comfortable with, in a way that’s very well expressed” get too little.
I expect that future AI systems will get good at working through a long list of [hypotheses about what might make for interesting topics], [areas where a bit of research provides surprising insights], and similar. We don’t really have this yet, but it seems doable to me; a rough sketch of the kind of thing I mean is below.
(I similarly didn’t agree with the related post)
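A minimal sketch of that pipeline, assuming the `openai` Python client; the model name and prompts are placeholders, not a claim about how any of the blogs above are actually written:

```python
# Sketch only: topic hypotheses -> quick research -> "was this surprising?" filter.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """One-shot helper around the chat completions API."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# 1. Hypotheses about what might make for interesting topics.
topics = ask(
    "List 10 topics this blog's readers would find interesting but probably "
    "haven't investigated much. One topic per line, no numbering."
).splitlines()

# 2. A bit of research per topic, then a crude check for surprising insights.
worth_writing_up = []
for topic in topics:
    notes = ask(f"Summarize the three most counterintuitive findings about: {topic}")
    verdict = ask(
        "Would these findings surprise a well-read generalist? "
        f"Answer YES or NO, then one sentence of justification.\n\n{notes}"
    )
    if verdict.strip().upper().startswith("YES"):
        worth_writing_up.append((topic, notes))
```

The final filter is doing most of the interesting work here, but the overall structure is pretty formulaic.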
We probably don’t disagree that much. What “original seeing” means is just going and investigating things you’re interested in. So doing lengthy research is actually a much more central example of this than coming up with a bold new idea is.
As I say above: “There’s not any principled reason why an AI system, even a LLM in particular, couldn’t do this.”
Thanks for the clarification!
I think some of it is that I find the term “original seeing” to be off-putting. I’m not sure if I got the point of the corresponding blog post.
In general, going forward, I’d recommend people try to be very precise about what they mean here. I’m suspicious that “original seeing” will mean different things to different people. I’d expect that trying to clarify more precisely what tasks or skills are involved would make it easier to pinpoint which parts of it are good/bad for LLMs.
Doing lengthy research and summarizing it is important work, but not typically what I associate with “blogging”. But I think pulling that together into an attractive product uses much the same cognitive skills as original seeing does. The missing step in the process you describe is figuring out when the research did produce surprising insights, which might be a class of novel problems (unless a general formulaic approach works and someone scaffolds that in). To the extent it doesn’t require solving novel problems, I think it’s predictably easier than quality blogging that doesn’t rely on research for the novel insights.
“The missing step in the process you describe is figuring out when the research did produce surprising insights, which might be a class of novel problems (unless a general formulaic approach works and someone scaffolds that in).”
-> I feel optimistic about the ability to use prompts to get us fairly far with this. More powerful/agentic systems will help a lot in actually executing those prompts at scale, but the core technical challenge seems like it could be fairly straightforward. I’ve been experimenting with LLMs to try to detect what information they could come up with that would later surprise them. I think this is fairly measurable.
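A rough sketch of one way this could be measured, again assuming the `openai` Python client; the commit-to-a-prediction-then-rate-the-surprise design and the 0–10 scale are just placeholder choices, not a finished setup:

```python
# Sketch: have the model commit to a prediction first, then rate how surprised
# it is by the researched answer. Prompts and model name are placeholders.
import re
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def surprise_score(question: str, researched_answer: str) -> int:
    """0-10 rating of how surprising the researched answer is, relative to the
    model's own prior prediction."""
    prediction = ask(
        f"Without looking anything up, what do you expect the answer to be?\n\n{question}"
    )
    rating = ask(
        f"You predicted:\n{prediction}\n\n"
        f"The researched answer turned out to be:\n{researched_answer}\n\n"
        "On a 0-10 scale, how surprising is this relative to your prediction? "
        "Reply with a single integer."
    )
    match = re.search(r"\d+", rating)  # naive parsing of the model's reply
    return int(match.group()) if match else 0
```

Run over a batch of question/answer pairs, scores like this give a crude but quantitative read on which research directions actually produced surprises.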