And we’re trying to produce reliable answers to much harder questions by, what, writing better blog posts, and hoping that a few of the best ideas stick? This is not what a desperate effort to find the truth looks like.
It seems to me that maybe this is what a certain stage in the desperate effort to find the truth looks like?
Like, the early stages of intellectual progress look a lot like thinking about different ideas and seeing which ones stand up robustly to scrutiny. Then the best ones can be tested more rigorously and their edges refined through experimentation.
It seems to me like there needs to be some point in the desperate search for truth at which you’re allowing for half-formed thoughts and unrefined hypotheses, or else you simply never get to a place where the hypotheses you’re creating even brush up against the truth.
In the half-formed thoughts stage, I’d expect to see a lot of literature reviews, agendas laying out problems, and attempts to identify and question fundamental assumptions. I expect that, not blog-post-sized speculation, to be the hard part of the early stages of intellectual progress, and I don’t see it right now.
Perhaps we can split this into technical AI safety and everything else. Above I’m mostly speaking about the “everything else” that Less Wrong wants to solve, since AI safety is now a substantial enough field that its problems need to be solved in more systematic ways.
I would expect literature reviews and agendas later in the process. Agendas laying out problems and fundamental assumptions don’t spring from nowhere (at least for me); they come from conversations where I’m trying to articulate some intuition and I recognize some underlying pattern. The pattern and structure don’t emerge spontaneously; they come from picking around the edges of a thing, getting thoughts across, explaining my intuitions, and seeing where they break.
I think it’s fair to say that crystallizing these patterns into a formal theory is a “hard part”, but the foundation for making it easy is laid in the floundering and flailing that came before.