Let’s take the best computer programmer. Imagine he tries to write down all his important knowledge in a book: every statement he believes he can justify as true.
Then he gives the book to a person of equal IQ who has never programmed.
How much of the expert’s knowledge gets passed down through this process? I grant that some knowledge gets passed down, but I don’t think all of it does. The expert programmer has what’s commonly called “unconscious competence”.
Allen might call that kind of knowledge part of the best knowledge of our civilization. It’s crucial knowledge for our technological progress.
But to get back to the main point: accepting that the contemplative, logocentric approach has flaws is not simply a matter of focusing on the approach itself but of demonstrating alternatives.
This seems to be a complicated, abstruse way of saying “reading statements of knowledge doesn’t thereby convey practical skills”.
If I explain one paradigm in the concepts of another paradigm, that by its nature leads to complicated and abstruse ways of making a statement.
But in this case the claim is more general. There are cases where the programmer can describe a heuristic that he uses to make decisions without pointing to a statement that has justified veracity.
Google, for example, wants to give its managers good management skills. To do that, it gives them checklists of what they are supposed to do when faced with a new recruit. Laszlo Bock of Google’s People Operations credits the email containing that checklist with a 25% productivity improvement, due to new recruits coming up to speed faster.
You don’t need to understand the justification for a checklist item to be able to profit from following a ritual that goes through all the items on the checklist. Following a ritualistic checklist would be knowledge in the Chinese sense, where there’s a huge emphasis on following proper protocols, but it wouldn’t be seen as knowledge in the Western philosophical tradition.
But why does it matter? What harm can come from thinking that knowledge is about demonstrable truths? If generating knowledge is about generating demonstrable truths, you can use the patent system to effectively reward knowledge creation.
You don’t need to understand the justification for a checklist item to be able to profit from following a ritual that goes through all the items on the checklist. Following a ritualistic checklist would be knowledge in the Chinese sense, where there’s a huge emphasis on following proper protocols, but it wouldn’t be seen as knowledge in the Western philosophical tradition.
I don’t understand your point. The Western tradition is perfectly capable of talking about the knowledge that following this checklist results in a measurable 25% improvement. So you must mean something else, but I don’t know what.
Nobody knows everything at the same time. The knowledge is split between the person following the checklist and the one who designed it. That doesn’t make it a different kind of knowledge. And if the person who designed it just tested lots of random variations and has no idea why this one works, or if the designer is dead and didn’t pass on his ideas, then there is less knowledge, but it’s still the same kind of knowledge.
The programmer is a paradigm case. He works with very well defined logical or mathematical models of code execution. But he constantly relies on the correct functioning of a myriad other pieces of software and hardware. He doesn’t know in full detail why he has to talk to these other things the way he does; he just memorizes a great deal of API details which are neither arbitrary nor clearly self-evident, and trusts that the hardware designers knew what they were doing.
So when you say:
There are cases where the programmer can describe a heuristic that he uses to make decisions without pointing to a statement that has justified veracity.
It seems to me that almost everything the programmer ever does can be framed this way. Suppose I know that under high contention I should switch to a different lock implementation; but I don’t know how the two implementations actually work, so I don’t know why each one is better in a different case. I also don’t know where exactly the cutoff is, because it’s hard to measure, so there’s an indeterminate zone in between where I’m not sure what to use; so I have a heuristic that uses an arbitrary cutoff.
Is this a heuristic that has no “justified veracity”, or is this a kind of knowledge where I can prove (with benchmarks) that the heuristic leads to good results, with an underlying model (map) of ‘lock A has less contention overhead, but lock B takes less time to acquire’?
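The lock-choice heuristic described here can be made concrete. In this minimal Python sketch, the cutoff value and the two lock factories are illustrative assumptions (Python’s standard library offers only one mutex type, so in practice the two branches would come from a language like C++ or Java with genuinely different lock implementations); the point is that the decision rule itself is fully explicit even though its justification is only empirical:

```python
import threading

# Hypothetical cutoff: in practice this number would come from benchmarks
# on the target machine; here it just stands in for the "arbitrary cutoff"
# in the indeterminate zone where measurement is hard.
CONTENTION_CUTOFF = 8  # expected number of threads competing for the lock

def make_lock_a() -> threading.Lock:
    # Stand-in for "lock A has less contention overhead" (e.g. a futex-based
    # lock). Both factories return threading.Lock here because the stdlib
    # has only one mutex type.
    return threading.Lock()

def make_lock_b() -> threading.Lock:
    # Stand-in for "lock B takes less time to acquire" (e.g. a spinlock).
    return threading.Lock()

def choose_lock(expected_contenders: int) -> threading.Lock:
    """Pick a lock implementation based on expected contention.

    Encodes the heuristic verbatim: above the cutoff, prefer the lock with
    lower contention overhead; below it, the one that is cheaper to acquire.
    """
    if expected_contenders > CONTENTION_CUTOFF:
        return make_lock_a()
    return make_lock_b()
```

Whether this counts as “justified veracity” then hinges on the benchmarks behind the cutoff, not on the form of the rule.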
He doesn’t know in full detail why he has to talk to these other things the way he does; he just memorizes a great deal of API details which are neither arbitrary nor clearly self-evident, and trusts that the hardware designers knew what they were doing.
I don’t think knowing the API is sufficient for being a good programmer. The productivity difference Google sees between a 10x programmer and a normal programmer is not about the 10x programmer having memorized more API calls.
Simply teaching someone API calls and the specifics of the cutoff between different lock implementations isn’t going to make someone a 10x programmer.
Part of being a good programmer is knowing when to check in your code and when it makes sense to write additional tests for it. One check that some people at the local LW dojo use is to ask themselves: “Would I be surprised if my changes crash the program?” If their System I wouldn’t be surprised, they go and spend additional time writing tests or cleaning up the code.
That heuristic is knowledge that can be verbalized, but it moves farther away from justified veracity. You can go a step further and talk about how to pass down the System I sense of surprise from one programmer to another.
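The commit-time heuristic itself can even be written down as code, though that only captures the System II part; the felt sense of surprise stays outside the program. A sketch (the function and prompt wording are invented for illustration):

```python
def should_write_more_tests(ask) -> bool:
    """Encode the dojo commit heuristic as an explicit rule.

    `ask` stands in for the programmer's System I: a function that answers
    the question 'Would I be surprised if my changes crash the program?'
    The rule is verbalizable; the judgment it consults is not.
    """
    surprised_by_crash = ask(
        "Would I be surprised if my changes crash the program?"
    )
    # If a crash would NOT surprise you, spend time on tests/cleanup first.
    return not surprised_by_crash
```

Everything except the `ask` callable is transferable in writing; transferring `ask` itself is the hard part.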
If you go and study computer science you won’t find classes on developing an appropriate sense of surprise. It’s not the kind of knowledge that professors of computer science work to create.
I don’t understand your point. The Western tradition is perfectly capable of talking about the knowledge that following this checklist results in a measurable 25% improvement. So you must mean something else, but I don’t know what.
How many checklists have you used in the last week? How many of the things you do are governed by strict checklists?
How many serious philosophers deal with the issues surrounding checklists?
I think there is societal resistance toward adopting more checklists.
In Google’s case they have hard data to justify their checklist, but a lot of checklists in use don’t have hard data backing them up and are still useful.
To go meta: of course the ideas that I try to express on LW can be expressed in the English language. Trying to express ideas that can’t be expressed in English wouldn’t make any sense.
I don’t think knowing the API is sufficient for being a good programmer.
Of course it isn’t (but it is necessary). I didn’t mean to imply that it was. But I do think this example generalizes to almost all the other things that a very good programmer needs to do.
That heuristic is knowledge that can be verbalized, but it moves farther away from justified veracity. You can go a step further and talk about how to pass down the System I sense of surprise from one programmer to another.
That heuristic is knowledge that can be verbalized, but it moves farther away from justified veracity.
Why do you think so? To me (as a programmer) heuristics about when to check what feel perfectly knowable and verbalizable. To be sure, they would take a lot of words. Maybe more importantly, they’re highly entangled with many other things a programmer needs to know and do. But I don’t see what would make them less justified or less explicit, just more complex.
You can go a step further and talk about how to pass down the System I sense of surprise from one programmer to another.
It’s a truism that you can’t gain habits of thought, or mental heuristics, just by abstractly understanding and memorizing a bunch of facts; that’s just not how humans learn things.
That doesn’t necessarily mean there’s a lot of information in the heuristics that isn’t contained in the dry facts. You can’t get the heuristics by practicing without being aware of the facts. If you can’t explain why you act out the heuristics you do in terms of the facts you learned, or if you can’t verbalize what heuristics you’re acting on, that is more likely to be a failure of introspection, rather than evidence that your mind developed extra incommunicable knowledge the facts didn’t imply.
If you go and study computer science you won’t find classes on developing an appropriate sense of surprise. It’s not the kind of knowledge that professors of computer science work to create.
Because they’re studying computer science, not programming.
Yes, if you look at software engineering, its state of formal education is quite bad compared to some other engineering professions. I even have a good idea of the historical causes of this. But that doesn’t mean programming can’t be taught or even that nobody learns it well formally, just that most programmers don’t, as a social fact. They’re encouraged to experiment and self-teach; they start working as soon as someone will pay them, which is much earlier than ‘when they’ve mastered programming’; they influence one another; and the industry on average doesn’t have a lot of quality control, quality standards or external verification, just ‘ship it once it’s ready’.
How many checklists have you used in the last week? How many of the things you do are governed by strict checklists? How many serious philosophers deal with the issues surrounding checklists?
No checklists that I can think of. I have no idea what philosophers en masse spend their time on, serious or otherwise.
Checklists are a specific solution which need to be justified wrt. specific problems, most of which have alternative solutions. I don’t think ‘not using checklists’ is a good proxy for ‘not doing a job as well as possible’ without considering alternatives and the details of the job involved. At least, as long as you’re talking about explicit checklists consulted by humans, and not generalized automated processes that reify dependencies in a way that doesn’t let you proceed without completing the “checklist” items.
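A “reified checklist” of the kind just described can be sketched in a few lines: a process object that simply refuses to proceed until every required item has been completed. The class and step names here are invented for illustration, not taken from any real system:

```python
class ChecklistGatedDeploy:
    """A process that cannot complete until every checklist item is done.

    Unlike a paper checklist consulted by a human, the dependency is
    reified: skipping an item isn't a choice the operator can make.
    """
    REQUIRED_STEPS = ("tests_passed", "code_reviewed", "backup_taken")

    def __init__(self):
        self._done = set()

    def mark_done(self, step: str) -> None:
        if step not in self.REQUIRED_STEPS:
            raise ValueError(f"unknown checklist item: {step}")
        self._done.add(step)

    def deploy(self) -> str:
        # The gate: refuse to proceed while any item is incomplete.
        missing = [s for s in self.REQUIRED_STEPS if s not in self._done]
        if missing:
            raise RuntimeError(f"cannot deploy, incomplete items: {missing}")
        return "deployed"
```

The operator needs no justification for any individual item; the structure enforces the ritual regardless of understanding.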
Going back to your general argument, are you saying that Eastern philosophical traditions are better at getting people to use checklists (or other tools) without understanding them, while Western ones encourage people not to use things they don’t understand explicitly?
Going back to your general argument, are you saying that Eastern philosophical traditions are better at getting people to use checklists (or other tools) without understanding them, while Western ones encourage people not to use things they don’t understand explicitly?
In Confucianism, a wise person is one who follows the proper rituals for every occasion (as the book argues). I think checklists do define rituals. A person who values following rituals is thus more likely to accept a checklist and follow it.
Culturally, there’s a sense that asking a Western doctor to use a checklist amounts to assuming that he’s not smart enough to do the right thing. I don’t think that exists to the same extent in China.
Before germ theory, Western doctors refused to wash their hands because they didn’t see the point of cleanliness as a value. I need to do a bit of research to get data about Chinese medicine, but from what I have seen of Ayurvedic medicine, they do tons of saucha rituals aimed at producing cleanliness, like tongue-scraping.
Why do you think so? To me (as a programmer) heuristics about when to check what feel perfectly knowable and verbalizable. To be sure, they would take a lot of words.
I think you can easily describe to me a System II heuristic that you use to decide when to check more. I don’t think you can easily describe how you feel the emotion of surprise, which exists at the System I level. Transferring triggers of the emotion of surprise from one person to another is hard.
Yes, if you look at software engineering, its state of formal education is quite bad compared to some other engineering professions. I even have a good idea of the historical causes of this.
I would say it’s because the relevant professors see issues of algorithm design as higher status than asking themselves when programmers should recheck their code. It seems no computer science professor has taken the time to set up a study testing whether teaching programmers to type faster increases their programming output. That’s because mathematical knowledge gets seen as more pure and more worthy. It has to do with the kind of knowledge that’s valued.
Mathematical proofs can provide strong justification and are thus higher status than messy experiments about teaching programming that can be confounded by all sorts of factors.
This leads to a misallocation of intellectual resources.
Culturally, there’s a sense that asking a Western doctor to use a checklist amounts to assuming that he’s not smart enough to do the right thing. I don’t think that exists to the same extent in China.
Before germ theory, Western doctors refused to wash their hands because they didn’t see the point of cleanliness as a value.
Checklists are known to be very helpful with certain things, even if the relevant professions (e.g. doctors) don’t always widely recognize this. On the other hand, why should I wash my hands if you can’t give me a reason for cleanliness, neither theoretical (germ theory) nor empirical (it reduces disease incidence)?
Ideally, we should value checklists and rituals as a tool, but also require there to be good reasons for rituals, and trust that those who institute or choose the rituals know what they’re doing. We should also be open to changing rituals, sometimes quickly, as new evidence comes in.
Maybe Eastern traditions achieve a better social balance than Western ones on this matter; I wouldn’t know.
I think you can easily describe to me a System II heuristic that you use to decide when to check more. I don’t think you can easily describe how you feel the emotion of surprise, which exists at the System I level. Transferring triggers of the emotion of surprise from one person to another is hard.
I think everyone agrees on this. Humans can’t fully learn new behaviors just through abstract knowledge without practice.
I would say it’s because the relevant professors see issues of algorithm design as higher status than asking themselves when programmers should recheck their code. It seems no computer science professor has taken the time to set up a study testing whether teaching programmers to type faster increases their programming output. That’s because mathematical knowledge gets seen as more pure and more worthy. It has to do with the kind of knowledge that’s valued.
I would say it’s because most CS professors don’t really care about programming, and certainly not about typing speed. Programming isn’t computer science! CS is a branch of applied math. The professors don’t care about misallocation of intellectual resources across different fields, because they’ve already chosen their own field. You’d see the same problems if electrical engineers all studied physics instead, and picked up all the missing knowledge outside of formal education.
There are dedicated software engineering majors, and some of them are even good (or at least better at teaching programming than CS ones), but numerically they produce far fewer graduates.
On the other hand, why should I wash my hands if you can’t give me a reason for cleanliness, neither theoretical (germ theory) nor empirical (it reduces disease incidence)?
At the time of the hand-washing conflict, there wasn’t much evidence-based medicine.
Today there is some evidence that checklists improve medical outcomes, but they aren’t easily adopted.
I think there’s decent evidence that combining hypnosis and anesthetic drugs is an improvement over just using anesthetic drugs.
I think everyone agrees on this. Humans can’t fully learn new behaviors just through abstract knowledge without practice.
I think the ability to be surprised by the right things is reasonably called knowledge and not only behavior.
There are dedicated software engineering majors, and some of them are even good (or at least better at teaching programming than CS ones), but numerically they produce far fewer graduates.
According to Google, some of their programmers are 10x as productive as the average. Can a dedicated software engineering major reliably teach the knowledge required to reach that level? I don’t think so. I don’t think it even gets to 2x.
Is there any software engineering major that has tested whether it produces better programmers if it also teaches typing? I don’t think so.
At the time of the hand-washing conflict, there wasn’t much evidence-based medicine.
Today there is some evidence that checklists improve medical outcomes, but they aren’t easily adopted.
I think there’s decent evidence that combining hypnosis and anesthetic drugs is an improvement over just using anesthetic drugs.
This is all true, but it’s a rather far jump from here to ‘and a culture permeated by Eastern philosophy handles this better, controlling for the myriad unrelated differences, and accounting for whatever advantages Western philosophy may or may not have.’
I think the ability to be surprised by the right things is reasonably called knowledge and not only behavior.
I agree.
According to Google, some of their programmers are 10x as productive as the average.
Google hires programmers who are already 10x as productive as the average. It doesn’t hire average programmers and train them to be 10x as productive using checklists or anything else. Maybe it hires programmers 9x as productive as the average and then helps them improve, but that’s a lot harder to measure than a whole order of magnitude improvement.
Can a dedicated software engineering major reliably teach the knowledge required to reach that level? I don’t think so. I don’t think it even gets to 2x.
If you’re asking whether there exist two different institutions with software engineering majors, where the graduates of one are 2x as good as those of the other, or 2x better than the industry average, then the answer is clearly yes.
If you’re asking the same, but want to control for incoming freshman quality (i.e. measure the actual improvement due to teaching), then you hit the problem that there are no RCTs and there’s no control group (other than those who don’t go to college at all). There’s also no way to make two test groups of college students not learn anything ‘on the side’ from the Internet or from their friends, or to do so in the same way. So it’s really hard to measure anything on the scale of a whole major.
Lots of people have measured interventions on the scale of a single course. Some of them may help (like typing); in fact I hope some of them do help, otherwise the whole major would only give you credentials. I’m not disputing this, but I also don’t see the relation between there being some useful skills that aren’t explicit knowledge (in this case they’re motor skills everyone has explicit knowledge about) and grand societal or philosophical differences.
I’m a programmer, and the only part of college that was useful in my field was the freshman “intro to coding” courses. Six months in I was able to do the job I was hired for out of college.
College is a racket.