While the recent AI posts have certainly played a part, it's also been a more general impression. Moreover, it may well be more about me than about the contributions to LW. Both questions you identify are part of my thinking, but the core is really about valid logic/argument structure (formalism) versus reality/truths (with truth perhaps being a more complex item than mere facts). A valid argument that reaches false conclusions is not really helping us get less wrong in our thinking or actions; validity only guarantees the conclusion when the premises are true ("all birds fly; penguins are birds; therefore penguins fly" is valid but unsound).
I think the post on counterfactuals, thick and thin, also brings the question to mind for me. As I say, however, this might be more about me lacking the skills to fully follow and appreciate the formalized parts, and so missing how they help us get less wrong.
To express my thought a bit differently: do these highly formalized approaches shed more light on the underlying questions, getting us to the best answers we can given our state of knowledge, or would Occam suggest whittling them down a bit?
This last bit prompts me to think the answer will depend a great deal on the audience in question, so maybe my musings are really more about who the target audience of the posts is (perhaps not me ;-)
Pre-emptive response.
Concerning the AI Safety stuff, my understanding is that the focus on pure theory comes from the fact that it's potentially world-endingly disastrous to try to develop the field by experimentation. In most domains, knowledge has been built by centuries of interplay between trial-and-error experimentation and "impractical" theoretical work, and that's probably also a very effective approach for most new domains. But with AI safety we don't get trial and error, which makes the task far harder and leaves people busting their asses to develop the pure theory to the point where it eventually becomes the practical, applicable stuff.