Okay, here’s something that could grow into an article, but it’s just rambling at this point. I was planning this as a prelude to my ever-delayed “Explain yourself!” article, since it eases into some of the related social issues. Please tell me what you would want me to elaborate on given what I have so far.
Title: On Mechanizing Science (Epistemology?)
“Silas, there is no Bayesian ‘revival’ in science. There is one amongst people who wish to reduce science to a mechanical procedure.” – Gene Callahan
“It is not possible … to construct a system of thought that improves on common sense. … The great enemy of the reservationist is the automatist[,] who believes he can reduce or transcend reason. … And the most pernicious [of them] are algorithmists, who believe they have some universal algorithm which is a drop-in replacement for any and all cogitation.” – “Mencius Moldbug”
And I say: What?
Forget about the issue of how many Bayesians are out there – I’m interested in the other claim. There are two ways to read it, and I express those views here (with a bit of exaggeration):
View 1: “Trying to come up with a mechanical procedure for acquiring knowledge is futile, so you are foolish to pursue this approach. The remaining mysterious aspects of nature are so complex you will inevitably require a human to continually intervene to ‘tweak’ the procedure based on human judgment, making it no mechanical procedure at all.”
View 2: “How dare, how dare those people try to mechanize science! I want science to be about what my elite little cadre has collectively decided is real science. We want to exercise our own discretion, and we’re not going to let some Young Turk outsiders upstage us with their theories. They don’t ‘get’ real science. Real science is about humans, yes, humans making wise, reasoned judgments, in a social context, where expertise is recognized and rewarded. A machine necessarily cannot do that, so don’t even try.”
View 1, I find respectable, even as I disagree with it.
View 2, I hold in utter contempt.
I think there is an additional interpretation that you’re not taking into account, and an eminently reasonable one.
First, to clarify the easy question: unless you believe that there is something mysteriously uncomputable going on in the human brain, the question of whether science can be automated in principle is trivial. Obviously, all you’d need to do is to program a sufficiently sophisticated AI, and it will do automated science. That much is clear.
However, the more important question is—what about our present abilities to automate science? By this I mean both the hypothetical methods we could try and the ones that have actually been tried in practice. Here, at the very least, a strong case can be made that the 20th century attempt to transform science into a bureaucratic enterprise that operates according to formal, automated procedures has largely been a failure. It has undoubtedly produced an endless stream of cargo-cult science that satisfies all these formal bureaucratic procedures, but is nevertheless worthless—or worse. At the same time, it’s unclear how much valid science is coming out except for those scientists who have maintained a high degree of purely informal and private enthusiasm for discovering truth (and perhaps also those in highly practical applied fields where the cash worth of innovations provides a stringent reality check).
This is how I read Moldbug: in many important questions, we can only admit honestly that we still have no way to find answers backed by scientific evidence in any meaningful sense of the term, and we have to grapple with less reliable forms of reasoning. Yet, there is the widespread idea that if only the proper formal bureaucratic structures are established, we can get “science” to give us answers about whichever questions we find interesting, and we should guide our lives and policies according to the results of such “science.” It’s not hard to see how this situation can give birth to a diabolical network of perverse incentives, producing endless reams of cargo-cult scientific work published by prestigious outlets and venerated as “science” by the general public and the government.
The really scary prospect is that our system of government might lead us to a complete disaster guided by policy prescriptions coming from this perverted system, which has, arguably, already become an integral part of it.
Okay, thanks, that tells me what I was looking for: clarification of what it is I’m trying to refute, and what substantive reasons I have to disagree.
So “Moldbug” is pointing out that the attempt to make science into an algorithm has produced a lot of stuff that’s worthless but adheres to the algorithm, and we can see this with common sense, however less accurate common sense might be.
The point I would make in response (and elaborate on in the upcoming article) is that this is no excuse not to look inside the black box that we call common sense and understand why it works, and what about it could be improved, while the Moldbug view asks that we not do it. As E. T. Jaynes says in chapter 1 of Probability Theory: The Logic of Science, the question we should ask is: if we were going to make a robot that infers everything we should infer, what constraints would we place on it?
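(In case that reference is opaque: the following is my own compact paraphrase of the constraints Jaynes settles on, and of the rules they force, not his exact wording.)

```latex
% Jaynes's desiderata for the inference robot (paraphrased):
%   (I)   Degrees of plausibility are represented by real numbers.
%   (II)  Qualitative correspondence with common sense.
%   (III) Consistency: every valid route to a conclusion gives the same answer,
%         and equivalent states of knowledge receive equal plausibilities.
% Cox-style arguments then force the product and sum rules, and with them Bayes' theorem:
\begin{align*}
  p(AB \mid C) &= p(A \mid BC)\, p(B \mid C) \\
  p(A \mid C) + p(\bar{A} \mid C) &= 1 \\
  p(H \mid EC) &= \frac{p(H \mid C)\, p(E \mid HC)}{p(E \mid C)}
\end{align*}
```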
This exercise is not just some attempt to make robots “as good as humans”; rather, it reveals why that-which-we-call “common sense” works in the first place, and exposes more general principles of superior inference.
In short, I claim that we can have Level 3 understanding of our own common sense. That, contra Moldbug, we can go beyond just being able to produce its output (Level 1), but also know why we regard certain things as common sense but not others, and be able to explain why it works, for what domains, and why and where it doesn’t work.
This could lead to a good article.
That it should be possible to Algorithmize Science seems clear from the fact that the human brain can do science, and the human brain should be possible to describe algorithmically. If not at a higher level, then at least—in principle—by quantum electrodynamics, which is the (known, and computable in principle) dynamics of the electrons and nuclei that are the building blocks of the brain. (If it were to be done in practice it would have to be done at a higher level, but as a proof of principle that argument should be enough.)
I guess, however, that what is actually meant is whether the scientific method itself could be formalised (algorithmized), so that science could be “mechanized” in a more direct way than building human-level AIs and then letting them learn and do science by the somewhat informal process used by human scientists today. That seems plausible, but it has still to be done and seems rather difficult. Philosophers of science are working on understanding the scientific process better and better, but they still seem to have a long way to go before an actually working algorithmic description is achieved. See also the discussion below on the recent article by Gelman and Shalizi criticizing Bayesianism.
EDIT “done at a lower level” changed to “done at a higher level”
The scientific method is already a vague sort of algorithm, and I can see how it might be possible to mechanize many of the steps. The part that seems AGI-hard to me is the process of generating good hypotheses. Humans are incredibly good at plucking out reasonable hypotheses from the infinite search space that is available; that we are still wrong so very often says more about the difficulty of the problem than about our own abilities.
I’m pretty sure that judging whether one has adequately tested a hypothesis is also going to be very hard to mechanize.
The problem that I hear most often in regard to mechanizing this process has the basic form, “Obviously, you need a human in the loop because of all the cases where you need to be able to recognize that a correlation is spurious, and thus to ignore it, and that comes from having good background knowledge.”
But you have to wonder: the human didn’t learn how to recognize spurious correlations through magic. So however they came up with that capability, it should be some identifiable process.
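As a toy illustration of what one such identifiable process might look like, here is a small sketch of my own, with an invented data-generating setup (it is not taken from TETRAD or any other tool mentioned here): if the correlation between two variables disappears once you condition on a suspected common cause, a program can flag it as likely spurious with no human in the loop.

```python
# Toy sketch: flag a correlation as likely spurious by checking whether it
# survives conditioning on a suspected common cause. Data and names invented.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)               # hidden common cause
x = 2.0 * z + rng.normal(size=n)     # x is driven by z
y = -1.5 * z + rng.normal(size=n)    # y is driven by z, so x and y correlate

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, control):
    """Correlation of a and b after regressing the control variable out of each."""
    design = np.column_stack([np.ones_like(control), control])
    def residual(v):
        coef, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ coef
    return corr(residual(a), residual(b))

print("raw corr(x, y):   %+.3f" % corr(x, y))               # strongly negative
print("corr(x, y | z):   %+.3f" % partial_corr(x, y, z))    # close to zero
```

As I understand it, conditional-independence checks of roughly this kind, run systematically over many combinations of variables, are what constraint-based causal search tools automate; the point here is only that nothing in the check requires a human judgment call.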
Those people should be glad they’ve never heard of TETRAD—their heads might have exploded!
That’s intriguing. Has it turned out to be useful?
It’s apparently been put to use with some success. Clark Glymour—a philosophy professor who helped develop TETRAD—wrote a long review of The Bell Curve that lists applications of an earlier version of TETRAD (see section 6 of the review):
Several other applications have been made of the techniques, for example:
Spirtes et al. (1993) used published data on a small observational sample of Spartina grass from the Cape Fear estuary to correctly predict—contrary both to regression results and expert opinion—the outcome of an unpublished greenhouse experiment on the influence of salinity, pH and aeration on growth.
Druzdzel and Glymour (1994) used data from the US News and World Report survey of American colleges and universities to predict the effect on dropout rates of manipulating average SAT scores of freshman classes. The prediction was confirmed at Carnegie Mellon University.
Waldemark used the techniques to recalibrate a mass spectrometer aboard a Swedish satellite, reducing errors by half.
Shipley (1995, 1997, in review) used the techniques to model a variety of biological problems, and developed adaptations of them for small sample problems.
Akleman et al. (1997) have found that the graphical model search techniques do as well or better than standard time series regression techniques based on statistical loss functions at out of sample predictions for data on exchange rates and corn prices.
Personally I find it a little odd that such a useful tool is still so obscure, but I guess a lot of scientists are loath to change tools and techniques.
Maybe it’s just a matter of people kidding themselves about how hard it is to explain something.
On the other hand, some things (like vision and natural language) are genuinely hard to figure out.
I’m not saying the problem is insoluble. I’m saying it looks very difficult.
One possible way to get started is to do what the ‘Distilling Free-Form Natural Laws from Experimental Data’ project did: feed measurements of time and other variables of interest into a computer program which uses a genetic algorithm to build functions that best represent one variable as a function of itself and the other variables. The Science article is paywalled but available elsewhere. (See also this bunch of presentation slides.)
They also have software for you to do this at home.
The pithy reply would be that science already is mechanized. We just don’t understand the mechanism yet.
Is that directed at, or intended to be any more convincing to, those holding Callahan’s view in the link? I’m not trying to criticize you, I just want to make sure you know the kind of worldview you’re dealing with here. If you’ll remember, this is the same guy who categorically rejects the idea that anything human-related is mechanized. (Recent blog post about the issue … he’s proud to be a “Silas-free” zone now.)
On a slightly related note, I was thinking about what analogous positions would look like, and I thought of this one for comparison: “There is no automatist revival in industry. There is one amongst people who wish to reduce every production process into a mechanical procedure.”
From looking at his blog, I think you should take this as a compliment.
About “Silas-free zones” you blogged:
So why would this Serious Thinker feel the need to reject, on sight, my comments from appearing, and then advertise it?
You don’t think your making a horrible impression on people you argue with may have anything to do with it? ;)
Seriously, that would be my first hypothesis. “You don’t catch flies with vinegar.” Go enough out of your way to antagonize people even as you’re making strong rebuttals to their weak arguments, and you’re giving them an easy way out of listening to you.
The nicer you are, the harder you make it for others to dismiss you as an asshole. I’d count that as a good reason to learn nice. (If you need role models, there are plenty of people here who are consistently nice without being pushovers in arguments—far from it.)
The evidence against that position is that Callahan, for a while, had no problem allowing my comments on his site, but then called me a “douche” and deleted them the moment they started disagreeing with him. Here’s another example.
Also, on this post, I responded with something like, “It’s real, in the sense of being an observable regularity in nature. Okay, what trap did I walk into?” but it was disallowed. Yet I wouldn’t call that comment rude.
It’s not about him banning me because of my tone; he bans anyone who makes the same kinds of arguments, unless they do it badly, in which case he keeps their comments for the easy kill, gets in the last word, and closes the thread. Which is his prerogative, of course, but not something to be equated with “being interested in meaningful exchange of ideas, and only banning those who are rude”.
“There is no automatist revival in industry. There is one amongst people who wish to reduce every production process into a mechanical procedure.”
I’m not sure that claim would be entirely absurd.
In the software engineering business, there’s a subculture whose underlying ideology can be caricatured as “Programming would be so simple if only we could get those pesky programmers out of the loop.” This subculture invests heavily into code generation, model-driven architectures, and so on.
Arguably, too, this goal only seems plausible if you have swallowed quite a few confusions regarding the respective roles of problem-solving, design, construction, and testing. A closer examination reveals that what passes for attempts at “mechanizing” the creation of software punts on most of the serious questions, focusing only on what is easily mechanizable.
But that is nothing other than the continuation of a trend that has existed in the software profession from the beginning: the provision of mechanized aids to a process that remains largely creative (and as such poorly understood). We don’t say that compilers have mechanized the production of software; we say that they have raised the level of abstraction at which a programmer works.
Okay, but that point only concerns production of software, a relatively new “production output”. The statement (“there is no automatist revival in industry …”) would apply just the same to any factory, and ridicules the idea that there can be a mechanical procedure for producing any good. In reality, of course, this seems to be the norm: someone figures out what combination of motions converts the input to the output, refuting the notion that e.g. “There is no mechanical procedure for preparing a bottle of Coca-cola …”
In any case, my dispute with Callahan’s remark is not merely about its pessimism regarding mechanizing this or that (which I called View 1), but rather, the implication that such mechanization would be fundamentally impossible (View 2), and that this impossibility can be discerned from philosophical considerations.
And regarding software, the big difficulty in getting rid of human programmers seems to come from how their role is, ultimately, to find a representation for a function (in a standard language) that converts a specified input into a specified output. Those specifications come from … other humans, who often conceal properties of the desired I/O behavior, or fail to articulate them.
Is that directed at, or intended to be any more convincing to, those holding Callahan’s view in the link? I’m not trying to criticize you,
No, you’re absolutely right. My comment definitely would not be convincing. The best that could be said for it is that it would help to clarify the nature of my rejection of View 2. That is, if I were talking to Callahan, that comment would, at best, just help him to understand which position he was dealing with.
“Silas, there is no Bayesian ‘revival’ in science. There is one amongst people who wish to reduce science to a mechanical procedure.” – Gene Callahan
Am I the only one who finds this extremely unlikely? So far as I know, Bayesian methods have become massively more popular in science over the last 50 years. (Count JSTOR hits for the word ‘Bayesian,’ for example, and watch the numbers shoot up over time!)
Half of those hits are in the social sciences. I suspect that is economists defining the rational agents they study as Bayesian, but that is rather different from the economists being Bayesian themselves! The other half are in math & statistics, which probably reflects Bayesian statisticians becoming more common; you might count that as science (and 10% are in science proper).
Anyhow, it’s clear from the context (I’d have thought from the quote) that he just means that the vast majority of scientists are not interested in defining science precisely.
It might well have been clear from the quote itself, but not to me—I just read the quote as saying Bayesian thinking and Bayesian methods haven’t become more popular in science, which doesn’t mesh with my intuition/experience.
How hard do you think mechanizing science would be? It strikes me as being at least in the same class with natural language.
I’ve been poking at the question of to what extent computers could help people do science, beyond the usual calculation and visualization which is already being done.
I’m not getting very far—a lot of the most interesting stuff seems like getting meaning out of noise.
However, could computers check to make sure that the use of statistics isn’t too awful? Or is finding out whether what’s deduced follows from the raw data too much like doing natural language? What about finding similar patterns in different fields? Possibly promising areas which haven’t been explored?
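On the first question, here is a rough sketch of my own of what a minimal automated check could look like (the report numbers are made up, and it assumes scipy is available): recompute the p-value implied by a reported test statistic and flag write-ups where the stated p-value does not match.

```python
# Rough sketch of an automated "is the statistics use too awful?" check:
# recompute the two-sided p-value implied by a reported t statistic and
# degrees of freedom, and flag reports whose stated p-value doesn't match.
# The reported numbers below are invented for illustration.
from scipy import stats

reports = [
    # (label, t statistic, degrees of freedom, reported p-value)
    ("study A", 2.10, 28, 0.045),
    ("study B", 1.95, 40, 0.010),   # inconsistent: the implied p is about 0.058
]

for label, t, df, p_reported in reports:
    p_implied = 2 * stats.t.sf(abs(t), df)   # two-sided p from t and df
    status = "OK" if abs(p_implied - p_reported) < 0.005 else "MISMATCH"
    print(f"{label}: reported p={p_reported:.3f}, implied p={p_implied:.3f} [{status}]")
```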
Not exactly sure, to be honest, though your estimate sounds correct. What matters is that I deem it possible in a non-trivial sense; and more importantly, that we can currently identify rough boundaries of ideal mechanized science, and can categorize much of existing science as being definitely in or out.
It’s probably best to take a cyborg point of view—consciously followed algorithms (like probabilistic updating) aren’t a replacement for common sense, but they can be integrated into common sense, or used as measuring sticks, to turn common sense into common awesome cybersense.
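For instance, the kind of consciously followed algorithm I have in mind can be as small as an explicit odds-ratio update run alongside your gut feeling rather than instead of it. A toy sketch, with probabilities invented purely for illustration:

```python
# Toy sketch of explicit "probabilistic updating": revise the odds on a
# hypothesis after one piece of evidence. All numbers are invented.

prior = 0.30                # P(hypothesis) before seeing the evidence
p_e_given_h = 0.80          # P(evidence | hypothesis)
p_e_given_not_h = 0.20      # P(evidence | not hypothesis)

prior_odds = prior / (1 - prior)
likelihood_ratio = p_e_given_h / p_e_given_not_h
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"prior {prior:.2f} -> posterior {posterior:.2f}")    # 0.30 -> about 0.63
```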
You probably won’t find much opposition to your opinion here on LW. Duh, of course science can and will be automated! It’s pretty amusing that the thesis of Cosma Shalizi, an outspoken anti-Bayesian, deals with automated extraction of causal architecture from observed behavior of systems. (If you enjoy math, read it all; it’s very eye-opening.)
Really? I read enough of that thesis to add it to the pile of “papers about fully general learning programs with no practical use or insight into general intelligence”.
Though I did get one useful insight from Shalizi’s thesis: that I should judge complexity by the program length needed to produce something functionally equivalent, not something exactly identical, as that metric makes more sense when judging complexity as it pertains to real-world systems and their entropy.
And regarding your other point, I’m sure people agree with holding view 2 in contempt. But what about the more general question of mechanizing epistemology?
Also, would people be interested in a study of what actually does motivate opposition to the attempt to mechanize science? (i.e. one that goes beyond my rants and researches it)
I read Moldbug’s quote as saying: there is currently no system, algorithmic or bureaucratic, that is even remotely close to the power of human intuition, common sense, genius, etc. But there are people who implicitly claim they have such a system, and those people are dangerous liars.
Those quotes do seem to be in conflict, but if he is talking about people that claim they already have the blueprints for such a thing, it would make more sense to read what he is saying as “it is not possible, with our current level of knowledge, to construct a system of thought that improves on common sense”. Is he really pushing back against people that say that it is possible to construct such a system (at some far off point in the future), or is he pushing back against people that say they have (already) found such a system?
The Moldbug article that the quote comes from does not seem to be expressing anything much like either Silas’ view 1 or view 2. Moldbug clarifies in a comment that he is not making an argument against the possibility of AGI:
Think of it in terms of Searle’s Chinese Room gedankenexperiment. If you can build a true AI, you can build the Chinese Room. Since I do not follow Penrose and the neo-vitalists in believing that AI is in principle impossible, I think the Chinese Room can be built, although it would take a lot of people and be very slow.
My argument is that, not only is it the Room rather than the people in it that speaks Chinese, but (in my opinion) the algorithm that the Room executes will not be one that is globally intelligible to humans, in the way that a human can understand, say, how Windows XP works.
In other words, the human brain is not powerful enough to virtualize itself. It can reason, and with sufficient technology it can build algorithmic devices capable of artificial reason, and this implies that it can explain why these devices work. But it cannot upgrade itself to a superhuman level of reason by following the same algorithm itself.
That sounds like a justification for view 1. Remember, view 1 doesn’t provide a justification for why there will need to be continual tweaks to mechanized reasoners to bring them in line with (more-) human reasoning, so it remains agnostic on how exactly one justifies this view.
(Of course, “Moldbug’s” view still doesn’t seem any more defensible, because it equates a machine virtualizing a human, with a machine virtualizing the critical aspects of reasoning, but whatever.)