At times, as I read through the posts and comments here, I find myself wondering if things are sometimes too wrapped up in formalization and “pure theory”. In some cases (all cases?) I suspect my lack of skills leads me to miss the underlying, important aspect and to see only the analytical tools/rigor. In such cases I find myself thinking of the old Hayek (free-market economist, classical liberal thinker) title: The Pretense of Knowledge.
From my Intro to Logic course many, many years ago, and a Discrete Math course some years later, I know there is a difference between a true conclusion and a valid conclusion (or perhaps better, a conclusion from a valid argument).
Am I missing the insights and discussion on that distinction, and on how to identify and avoid the error of confusing a valid conclusion with a true conclusion? It seems kind of important to the idea of getting Less Wrong.
There seem to be at least two questions here.
1. Are people too wrapped up in “pure theory”?
2. Are people making the mistake of confusing “A implies B” with “A and B are true”?
When I first read your comment, I assumed you were referring to the AI-related posts, though I’m now realizing you could have been thinking of LW content in general. Were there particular parts of LW you were referring to?
Pre-emptive response.
Concerning AI Safety stuff, my understanding is that the focus on pure theory comes from the fact that it’s potentially world-endingly disastrous to try to develop the field by experimentation. For most domains, knowledge has been built by centuries of interplay between trial-and-error experimentation and “impractical” theoretical work. That’s probably also a very effective approach for most new domains. But with AI safety we don’t get trial and error, which makes the task way harder and leaves people busting their asses to develop the pure theory to the point where it eventually becomes the practical, applicable stuff.
While the recent AI posts have certainly played a part, the feeling has been more general. Moreover, it may well be more about me than about the contributions to LW. Both questions you identify are part of my thinking, but the core is really the relationship between valid logic/argument structure (formalism) and reality/truths (with truths perhaps being a more complex item than mere facts). A valid argument that reaches a false conclusion does not really help us get less wrong in our thinking or actions.
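To make that concrete with the kind of stock illustration I have in mind (my own example, not taken from any particular post):

% Illustrative only: a valid inference resting on a false premise.
\[
\frac{\forall x\,\big(\mathrm{Bird}(x) \rightarrow \mathrm{Flies}(x)\big) \qquad \mathrm{Bird}(\mathrm{penguin})}{\mathrm{Flies}(\mathrm{penguin})}
\]

The inference itself is flawless (universal instantiation followed by modus ponens), yet the conclusion is false because the first premise is false. Validity only guarantees that no new error is introduced in the step from premises to conclusion; it says nothing about whether the premises track reality.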
I think the post on counterfactuals, thick and thin, also brings the question to mind for me. Like I say, however, this might be more about me lacking the skills to fully follow and appreciate the formalized parts, and so missing how they help us get less wrong.
To express my thought a bit differently: do these highly formalized approaches shed more light on the underlying questions, helping us get to the best answers we can given our state of knowledge, or would Occam suggest whittling them down a bit?
This last bit prompts me to think the answer will depend a great deal on the audience in question, so maybe my musings are really more about who the target audience of these posts is (perhaps not me ;-)