Eliezer,
I vaguely remember from the last time I visited this site that you are in the inductivist camp. In several articles you seemed to express a deep belief in Bayesian reasoning.
While you are an intelligent guy, I think your abandonment of falsification in favor of induction is one of your primary mistakes. Falsification subsumes induction. Popper wins over Bayes.
Any presumed inductivism has its foundations in trial and error, and not the other way around. Popper’s construction is so much more straightforward than this convoluted edifice you are creating.
Once you understand falsification there is no problem explaining why science isn’t based on “faith”. That’s because once you accept falsification as the basis for science it is clear that one is not using mere induction.
At this point I’m wondering if you are a full-blown inductivist. Do you believe that my beliefs are founded upon induction? Do you believe that because you believe I have no way to avoid the use of induction? I had a long discussion once with an inductivist, and for the life of me I couldn’t get him to understand the difference between being founded upon and using.
I don’t even believe that I am using induction in many of the cases where inductivists claim that I am. I don’t assume the floor will be there when I step out of bed in the morning because of induction, nor do I believe the sun will rise tomorrow because of induction.
I believe those things because I have well tested models. Models about how wood behaves, and models about how objects behave. Often I don’t even believe what is purported to be my belief.
The question, “will the sun rise tomorrow” has a broader meaning than “The sun will rise on August 24, 2008” in this discussion. In fact, I don’t explicitly and specifically hold such beliefs in any sort of long term storage. I don’t have a buffer for whether the sun is going to rise on the 24th, the 25th, and so forth. I don’t have enough memory for that. Nor do I determine the values to place in each of those buffers by an algorithm of induction.
I would only take the question to refer specifically to August 24th given further clarification by the speaker. I take him to mean “how do we know the sun will keep rising?”, not that the questioner has any particular concern about the 24th.
I did run into a guy at a park who asked me if I believed the world would end on December 21, 2012. I had no idea what he was on about till he mentioned something about the Mayan calendar.
So in fact, in this discussion, when we talk about the question “will the sun rise tomorrow”, we aren’t concerned with whether any single new observation will match priors; we are concerned with the principles upon which the sun operates. We are talking models, not observations.
As a child I remember just assuming the sun would rise. I don’t in fact remember any process of induction I went through to justify it. Of course, that doesn’t mean my brain isn’t operating via induction unbeknownst to me. The same could be said of animals. They too operate on the assumption that the sun will rise tomorrow.
They even have specific built-in behaviors geared towards this. It’s pretty clear that where these assumptions are encoded outside the brain, the encoding was done by evolutionary processes, and we know natural selection does not operate via induction.
What about the mental processes of animals? Does the fact that animals mentally operate on the presumption that “the sun will rise tomorrow” mean they must have, somewhere deep inside, an inductive module to deal with the sun rising? I don’t think so. It isn’t even clear that they believe “the sun will rise tomorrow” either specifically or generally.
Even if they do, it is not clear that induction plays a part in such a belief. It may be that natural selection has built up many different possible mental models for operational possibilities, and that observation is only used to classify things as fitting one of these predefined models.
Heck, I can even build new categories of models on the fly this way, this too on the basis of trial and error. A flexible mind finding that the behavior of some object in the real world does not quite fit one of the categories can take guesses at ways to tweak the model to better fit.
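To make the classification idea concrete, here is a toy sketch (all names and numbers hypothetical, not anyone’s actual theory of mind): observation is used only to select the best-fitting model from a fixed, pre-supplied repertoire, rather than to induce a rule from scratch.

```python
# Toy sketch: observation classifies the world into one of several
# predefined candidate models; no inductive rule-learning happens.

def fits(model, observations):
    """Fraction of observations the model predicts correctly."""
    return sum(model(t) == obs for t, obs in observations) / len(observations)

# A fixed repertoire of candidate models, as if supplied in advance
# (by evolution, in the analogy from the text).
candidates = {
    "always_rises": lambda t: True,        # the sun rises every day
    "never_rises":  lambda t: False,
    "alternating":  lambda t: t % 2 == 0,  # rises only on even days
}

# Observations: (day, did_the_sun_rise)
observations = [(t, True) for t in range(10)]

# Classification step: keep whichever predefined model fits best.
best = max(candidates, key=lambda name: fits(candidates[name], observations))
print(best)  # "always_rises"
```

The trial-and-error tweaking described above would amount to perturbing a near-miss candidate and re-scoring it, which fits the Popperian conjecture-and-refutation picture rather than an inductive one.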
So it is not at all clear that anything has been foundationally arrived at via induction.
In fact, if my memory serves, when I first inquired about the sun I was seeking a more sophisticated model. I knew I already had it categorized as the kind of object that behaves the same way it did in the past, but I was concerned that perhaps I was mistaken and that it might belong in some other category. Perhaps as something that doesn’t follow such a simple rule.
Now I’m not even sure I asked the question precisely as “will the sun rise tomorrow” but I do remember my mental transitions. At first I don’t remember even thinking about it. Later I modified my beliefs in various ways and I don’t recall in what order, or why. I came to understand the sun rose repetitively, on a schedule, etc.
I do remember certain specific transitions. Like the time I realized, because of tweaks to other models, that the statement “The sun will rise tomorrow,” taken generally, is not true. That certainly came to mind when I learned the sun was going to burn out in six billion years. My model, in the sense that I believed “the sun will rise tomorrow” meant the next day would come on schedule, was wrong.
In my view, “things that act Bayesian” is just another model. Thus, I never found the argument that Bayes refutes Popper very compelling. Reading many of the articles linked off this one, I see that you seem to be spinning your wheels. Popper covered the issue of justification much more satisfactorily than you have with your article “Where Recursive Justification Hits Bottom” (http://lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/).
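To make the comparison concrete, here is a minimal Bayesian update over two toy sun hypotheses (the numbers are made up, purely illustrative). Notice that the single observation a hypothesis assigns zero likelihood does all the real work: the update eliminates that hypothesis outright, which is just falsification in the limit, and is why I find Popper’s framing the more direct one.

```python
# Toy illustration: Bayesian updating over two hypotheses about the sun.
# A zero-likelihood observation drives a hypothesis's posterior to zero,
# i.e. Bayesian updating reduces to falsification in that limiting case.

priors = {"rises_daily": 0.5, "rises_randomly": 0.5}

def update(priors, likelihood):
    """One step of Bayes' rule: posterior ∝ prior × likelihood."""
    unnorm = {h: priors[h] * likelihood[h] for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Observe "the sun rose today": both hypotheses survive, weights shift.
post = update(priors, {"rises_daily": 1.0, "rises_randomly": 0.5})

# Observe one sunless day: "rises_daily" assigned it probability 0,
# so a single observation falsifies it outright.
post = update(post, {"rises_daily": 0.0, "rises_randomly": 0.5})
print(post)  # rises_daily: 0.0, rises_randomly: 1.0
```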
The proper answer is that justification doesn’t hit rock bottom and that science isn’t about absolute proof. Science is about holding tentative beliefs, open to change given more information, based on models that are open to falsification by whatever means.
Pursuing a foundationalist philosophical belief system is a fool’s errand once you understand that there is no base foundation to knowledge. The entire question of whether knowledge is based on faith vs. empiricism evaporates with this understanding. Proper knowledge is based on neither.
I could go on with this. I have thought these things through to a great extent, but I know you have a comment length restriction here and I’ve probably already violated it. That’s a shame, because it limits the discussion and allows you to continue in your biases.
You are definitely on the wrong track in your discussions of morality as well. You are missing the fundamentality of natural selection in all this, both in constraining our creations and in how morality arises. In my view, the Pebblesorters’ morality is already divorced from survival, and therefore it should be of no concern to them whatsoever if their AI becomes uncontrollable, builds its own civilization, etc. Fish, in fact, do create piles of pebbles despite the Pebblesorters’ beliefs, and you expressed no belief on their part that they must destroy incorrectly piled pebbles created by nature. So why should they have moral cares if their AI wins independence and goes off and does the “wrong” thing?
For them to be concerned about the AI requires broader assumptions than you have made explicit. Assumptions like feeling responsible for chains of events you have set in motion. There are assumptions that are objectively required to even consider something a morality; otherwise we have classified incorrectly. In fact, the Pebblesorters are suffering from an obsessive delusion, not a true morality. Pebblesorting fails to fit even the most simplistic criteria for a morality.
Since I am limited in both the length and number of posts, I don’t feel like splitting this into multiple posts across multiple articles. This is in response to many of your articles: Invisible Frameworks, Mirrors and Paintings, Pebblesorters, Where Recursive Justification Hits Bottom, etc.
I could post it on an older thread, to be buried a hundred comments deep, but that too isn’t a rational choice, as I’d like people to actually see it: to see that this abandonment of falsification for induction is based on faulty reasoning. I’m concerned about this because I have watched science become increasingly corrupted by politics over my lifetime, and one of the main levers used to do this is the argument that real scientists don’t use falsification but induction (while totally misunderstanding what the term means).