I think the best way to display the sheer mind-boggling absurdity of the “problem of induction” is to consider that we have two laws: the first law is the law science gives us for the evolution of a system and the second law simply states that the first law holds until time t and then “something else” happens. The first law is a product of the scientific method and the second law conforms to our intuition of what could happen. What the problem of induction is actually saying is that imagination trumps science. That’s ridiculous. It’s apparently very hard for people to acknowledge that what they can conceive of happening holds no weight over the world.
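Just to make that two-law picture concrete, here is a minimal sketch; the dynamics and the switch time are arbitrary placeholders I made up, not anything from a real theory.

```python
# A minimal sketch of the two "laws" above. The dynamics and the switch
# time are arbitrary placeholders, not anything from a real theory.

def law_1(x, t):
    """The law science gives us: the same dynamics at every time step."""
    return 0.5 * x + 1.0

T_SWITCH = 10**6  # some future time t at which 'something else' happens

def law_2(x, t):
    """Identical to law_1 until time T_SWITCH, then 'something else'."""
    if t < T_SWITCH:
        return law_1(x, t)
    return -x  # an arbitrary 'something else'

# Every observation made before T_SWITCH is consistent with both laws,
# so nothing in the data favours law_2; only imagination supplies it.
x1 = x2 = 1.0
for t in range(100):
    x1, x2 = law_1(x1, t), law_2(x2, t)
assert x1 == x2
```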
The absurdity comes in earlier, though. You have to go way back to the very notion that science is mediated by human psychology; without that nobody would think their imagination portends the future. Let’s say you have a robotic arm that snaps Lego pieces together. Is the way Lego pieces can snap together mediated by the control system of the robotic arm? No. You need the robotic arm (or something like it) to do the work but nothing about the robotic arm itself determines whether the work can be done. Science is just a more complex example of the robotic arm. Science requires an entity that can do the experiments and manipulate the equations but that does not mean that the experiments and equations are therefore somehow “mediated” by said entity. Nothing about human psychology is relevant to whether the science can be done.
You need to go taboo crazy, throw out “belief,” “knowledge,” “understanding,” and the whole apparatus of philosophy of science. Think of it in completely physical terms. What science requires is a group of animals that are capable of fine-grained manipulation both of physical objects and of symbol systems. These animals must be able to coordinate their action, through sound or whatever, and have a means of long-term coordination, such as marks on paper. Taboo “meaning,” “correspondence,” etc. Science can be done in this situation. The entire history of science can be carried out by these entities under the right conditions given the right dynamics. There’s no reason those dynamics have to include anything remotely resembling “belief” or “knowledge” in order to get the job done. They do the measurements, make the marks on a piece of paper that have, by convention, been agreed to stand for the measurements, and some other group can then use those measurements to make further measurements, and so forth. They have best practices to minimize the effect of errors entering the system, sure, but none of this has anything to do with “belief.”
The whole story about “belief” and “knowledge” that philosophy provides us is a story of justification against skepticism. But no scientist has reason to believe in the philosophical tale of skepticism. We’re not stuck in our heads. The skeptical picture makes sense if you’re Descartes, if you’re a dualist who believes knowledge comes from a priori reasoning. If you’re a scientist, we’re just physical systems in a physical world, and there’s no great barrier to be penetrated. Physically speaking, we’re limited by the accuracy of our measurements and the scale of the Universe, but we’re not limited by our psychology except by limitations it imposes on our ability to manipulate the world (which aren’t different in kind from the size of our fingers or the amount of weight we can lift). Fortunately our immediate environment has provided the kind of technological feedback loop that’s allowed us to overcome such limitations to a high degree.
Justification is a pseudo-problem because skepticism is a pseudo-problem. Nothing needs to be justified in the philosophical sense of the term. How errors enter the system and compound is an interesting problem but, beyond that, the line from an experiment to your sitting reading a paper 50 years later is an unbroken causal chain, and if you want to talk about “truth” and “justification” then, beyond particular this-worldly errors, there’s nothing to discuss. There’s no general project of justifying our beliefs about the world. This or that experiment can go wrong in this or that way. This or that channel of communication can be noisy. These are all finite problems and there’s no insurmountable issue of recursion involved. There’s no buck to be passed. There might be a general treatment of these issues (in terms of Bayes or whatever) but let’s not confuse such practical concerns with the alleged philosophical problems. We can throw out the whole philosophical apparatus without loss; it doesn’t solve any problems that it didn’t create to begin with.
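To give a flavour of the kind of general treatment I mean, here is a toy sketch that treats measurement error as an ordinary, finite statistical problem; the Gaussian noise model and conjugate update are assumed purely for illustration.

```python
# A toy sketch: measurement error as a finite statistical problem, not a
# philosophical one. Gaussian noise and a conjugate update are assumed
# purely for illustration.
import random

TRUE_VALUE = 3.7   # the quantity being measured (hypothetical)
NOISE_SD = 0.5     # known standard deviation of the instrument

# Start from a broad prior over the true value.
prior_mean, prior_var = 0.0, 100.0

for _ in range(50):
    measurement = random.gauss(TRUE_VALUE, NOISE_SD)
    # Standard Gaussian conjugate update: each noisy reading shrinks the
    # posterior variance; errors enter the record but stay bounded.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / NOISE_SD**2)
    post_mean = post_var * (prior_mean / prior_var + measurement / NOISE_SD**2)
    prior_mean, prior_var = post_mean, post_var

print(f"estimate = {prior_mean:.3f} +/- {prior_var**0.5:.3f}")
```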
Nothing needs to be justified in the philosophical sense of the term.
I think justification is important, especially in matters like AI design, as a uFAI could destroy the world.
In the case of AI design in general, consider the question “Why should we program an AI with a prior biased towards simpler theories?” I don’t think anyone would just walk away from a more detailed answer than “It’s our best guess right now,” if they were certain such an answer existed.
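To give a sense of what a more detailed answer might even look like, here is a rough sketch of a description-length-weighted prior over a toy hypothesis space. Character count is a crude, hypothetical stand-in for real description length, and the hypothesis strings are made up; this isn’t a claim about how any actual AI prior is or should be built.

```python
# A rough sketch of one shape a "more detailed answer" could take: a prior
# that gives shorter hypotheses more mass. Character count stands in, very
# crudely, for description length; the hypothesis strings are illustrative.

hypotheses = [
    "x(t+1) = f(x(t)) for all t",
    "x(t+1) = f(x(t)) until t = T, then g(x(t)) afterwards",
]

def simplicity_weight(h: str) -> float:
    """Unnormalised prior mass ~ 2^(-description length)."""
    return 2.0 ** (-len(h))

# Normalise into a proper prior over this tiny, toy hypothesis space.
total = sum(simplicity_weight(h) for h in hypotheses)
prior = {h: simplicity_weight(h) / total for h in hypotheses}

for h, p in sorted(prior.items(), key=lambda kv: -kv[1]):
    print(f"{p:.6f}  {h}")
# The "holds until time T" hypothesis is strictly longer, so it gets strictly
# less prior mass, which is one candidate formal story for discounting it.
```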
They do the measurements, make the marks on a piece of paper that have, by convention, been agreed to stand for the measurements, and some other group can then use those measurements to make further measurements, and so forth. They have best practices to minimize the effect of errors entering the system, sure, but none of this has anything to do with “belief.”
You seem to have a picture of science that consists of data-gathering. Once you bring in theories, you then have a situation where there are multiple theories, and some groups of scientists are exploring theory A rather than B... and that might as well be called belief.