It seems like people are presenting a moving target. First I was directed to one essay. In response to my criticism of a statement from that essay, you suggest that a different technique other than the one I quoted could work. Do you think I was right that the section of the essay I quoted doesn’t solve the problem?
The ‘moving target’ effect is caused by the fact that you are talking to several different people; the grandparent is my first comment in this discussion.
The concept mentioned in that essay is Bayes’ Theorem, which tells us how to update our probabilities on new evidence. It does not solve the problem of how to avoid infinitely many hypotheses, for the same reason that Newton’s laws do not explain the price of gold in London: it’s not supposed to. Bayes’ Theorem tells us how to change our probabilities with new evidence, and in the process assumes that those probabilities are real numbers (as opposed to infinitesimals).
Solomonoff Induction tells us how to assign the initial probabilities, which are then fed into Bayes’ Theorem to determine the current probabilities after adjusting based on the evidence. Both are essential; criticising BT for not doing SI’s job is like saying a car’s wheels are useless because they can’t do the engine’s job of providing power.
I don’t see any infinite regress at all: Solomonoff Induction tells us the prior, and Bayesian updating turns the prior into a posterior. They depend on each other to work properly, but I don’t think they depend on anything else (unless you wish to doubt the basics of probability theory).
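To make that division of labour concrete, here is a minimal sketch (the hypotheses, complexities and likelihoods are all invented for illustration, and a computable 2^-complexity weighting stands in for the uncomputable Solomonoff prior):

```python
# Toy division of labour between the two components. All numbers are invented;
# the real Solomonoff prior is uncomputable, so 2^-complexity stands in for it.
hypotheses = {
    # name: (description length in bits, P(observed evidence | hypothesis))
    "all swans are white": (10, 0.2),
    "swan colour varies by region": (25, 0.9),
}

# Solomonoff's job: assign prior probabilities from complexity alone.
priors = {h: 2.0 ** -bits for h, (bits, _) in hypotheses.items()}

# Bayes' job: turn those priors into posteriors using the evidence.
joint = {h: priors[h] * lik for h, (_, lik) in hypotheses.items()}
total = sum(joint.values())
for h, p in joint.items():
    print(f"P({h} | evidence) = {p / total:.6f}")
```

The only point is the order of operations: complexity fixes the priors, and the evidence then moves them.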
The regress was discussed in other comments here. I took you to be saying “everything together, works” and wanting to discuss the philosophy as a whole.
I thought that would be more productive than arguing with you about whether Bayes theorem really “assumes that those probabilities are real numbers” and various other details. That’s certainly not what other people here told me when I brought up infinitesimals. I also thought it would be more productive than going back to the text I quoted and explaining why that quote doesn’t make sense. Whether it is correct or not isn’t very important if a better idea, along the same lines, works.
The regress argument begins like this: What is the justification or probability for Solomonoff Induction and Bayesian updating? Or if they are not justified, and do not have a probability, then why should we accept them in the first place?
When you say they don’t depend on anything else, maybe you are answering the regress issue by saying they are unquestionable foundations. Is that it?
Well, to some extent every system must have unquestionable foundations; even maths must assume its axioms. The principle of induction (the more something has happened in the past, the more likely it is to happen in the future, all else being equal) cannot be justified without the justification being circular, but I doubt you could get through a single day without it. Ultimately every approach must fall back on an infinite regress, as you put it; this doesn’t prevent the system from working.
However, both Bayes’ Theorem and Solomonoff Induction can be justified:
Bayes’ Theorem is an elementary deductive consequence of basic probability theory, particularly the fairly obvious fact (at least it seems that way to me) that P(A&B) = P(A)*P(B|A). If it doesn’t seem obvious to you, then I know of at least two approaches for proving it. One is the Cox theorems, which begin by saying we want to rank statements by their plausibility, and we want certain things to be true of this ranking (it must obey the laws of logic, it must treat hypotheses consistently, etc.), and from these derive probability theory.
Another approach is the Dutch Book arguments, which show that if you are making bets based on your probability estimates of certain things being true, then unless your probability estimates obey the laws of probability (including Bayes’ Theorem) you can be tricked into a set of bets which results in a guaranteed loss.
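Here is a minimal sketch of that construction with invented numbers: an agent whose stated probabilities violate P(A&B) = P(A)*P(B|A) accepts three bets, each fair by his own lights, and loses the same amount whatever happens.

```python
# Dutch Book sketch (invented numbers). The agent's stated probabilities are
# incoherent: p_A * p_B_given_A = 0.25, but he prices A&B at 0.4.
p_A, p_B_given_A, p_AB = 0.5, 0.5, 0.4

def agent_payoff(a: bool, b: bool) -> float:
    total = 0.0
    # 1. Agent buys a unit bet on A&B at his price p_AB.
    total += (1.0 if a and b else 0.0) - p_AB
    # 2. Agent sells a unit bet on B conditional on A (called off if not A).
    if a:
        total += p_B_given_A - (1.0 if b else 0.0)
    # 3. Agent sells a bet on A with stake p_B_given_A at his price for it.
    total += p_A * p_B_given_A - (p_B_given_A if a else 0.0)
    return total

for a in (True, False):
    for b in (True, False):
        print(f"A={a!s:5} B={b!s:5} agent payoff: {agent_payoff(a, b):+.2f}")
```

Every outcome prints -0.15, which is exactly p_AB - p_A * p_B_given_A: the size of the incoherence is the guaranteed loss.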
To justify Solomonoff Induction, we imagine a theoretical predictor which bases its prior on Solomonoff Induction and updates by Bayes’ Theorem. Given any other predictor, we can compare ours to this opponent by comparing the probability estimates each assigns to the actual outcome; Solomonoff Induction will at worst lose by a constant factor determined by the complexity of the opponent.
This is the best that can be demanded of any prior; it is impossible to give perfect predictions in every possible universe, since you can always be beaten by a predictor tailor-made for that universe (which will generally perform very badly in most others).
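A toy version of that dominance claim (a small invented pool of predictors stands in for the uncomputable Solomonoff predictor): the Bayes mixture’s cumulative log-loss exceeds the best predictor’s by at most a constant fixed by that predictor’s prior weight, i.e. its complexity.

```python
import math
import random

# Toy dominance bound: a Bayes mixture over an invented pool of predictors,
# each weighted 2^-complexity, stands in for the Solomonoff predictor.
experts = {"says-0.9": (2, 0.9), "says-0.5": (1, 0.5)}  # name: (bits, P(next bit = 1))
prior = {n: 2.0 ** -bits for n, (bits, _) in experts.items()}
weights = dict(prior)

random.seed(0)
mix_loss = 0.0
expert_loss = {n: 0.0 for n in experts}
for _ in range(200):
    bit = 1 if random.random() < 0.9 else 0   # a world that favours "says-0.9"
    p1 = sum(w * experts[n][1] for n, w in weights.items()) / sum(weights.values())
    mix_loss += -math.log2(p1 if bit else 1.0 - p1)
    for n, (_, p) in experts.items():
        expert_loss[n] += -math.log2(p if bit else 1.0 - p)
        weights[n] *= p if bit else 1.0 - p   # Bayesian updating of the weights

# The mixture can lose to the best expert by at most this constant.
bound = math.log2(sum(prior.values()) / prior["says-0.9"])
print(f"regret = {mix_loss - expert_loss['says-0.9']:.3f} <= {bound:.3f} bits")
```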
(note: I am not an expert, it is possible that I have some details wrong, please correct me if I do)
“Well, to some extent every system must have unquestionable foundations”
No, Popper’s epistemology does not have unquestionable foundations.
You doubt I could get by without induction, but I can and do. Popper’s epistemology has no induction. It also has no regress.
Arguing that there is no choice but these imperfect concepts only works if there really is no choice. But there are alternatives.
I think that things like unquestionable foundations, or an infinite regress, are flaws. I think we should reject flawed things when we have better options. And I think Bayesian Epistemology has these flaws. Am I going wrong somewhere?
“However, both Bayes’ Theorem and Solomonoff Induction can be justified”
Justified by statements which are themselves justified (which leads to regress issues)? Or you mean justified given some unquestionable foundations? In your statements below, I don’t think you specify precisely what you deem to be able to justify things.
“Bayes’ Theorem is an elementary deductive consequence of basic probability theory”
Yes. It is not controversial itself. What I’m not convinced of is the claim that this basic bit of math solves any major epistemological problem.
Regarding Solomonoff induction, I think you are now attempting to justify it by argument. But you haven’t stated what the rules are for what counts as a good argument, or why. Could you specify that? There’s not enough rigor here. And in my understanding Bayesian epistemology aims for rigor, and that is one of the reasons they like math and try to use math in their epistemology. It seems to me you are departing from that worldview and its methods.
Another aspect of the situation is you have focussed on prediction. That is instrumentalist. Epistemologies should be able to deal with all categories of knowledge, not just predictive knowledge. For example they should be able to deal with creating non-empirical, non-predictive moral knowledge. Can Solomonoff induction do that? How?
Hang on, Popper’s philosophy doesn’t depend on any foundations? I’m going to call shenanigans on this. Earlier you gave an example of Popperian inference:
Consider a theory, T, that all swans are white. T is a universal theory.
No confirming evidence can prove T is true. You can see 5 white swans or 500 or 50 million. Still might be false.
But if you see one black swan it is false.
This is an asymmetry between confirmation and falsification when applied to universal theories. It does not hold for all theories.
Consider the negation ~T. At least one swan is not white. This theory cannot be refuted by any amount of observations. But it can be confirmed with only one observation. ~T is a non-universal theory and not the kind science is after.
Unquestioned assumptions include, but are not limited to the following:
The objects under discussion actually exist (Solomonoff Induction does not make this assumption)
“There is no evidence which could prove T” is stated without any proof. What if you got all the swans in one place? What if you found a reason why the existence of a black swan was impossible?
Any observation of a black swan must be correct (Bayes Theorem is explicitly designed to avoid this assumption)
You can generalise from this one example to a point about all theories
“Science is only interested in universal theories”. Really? Are palaeontology and astronomy not sciences? They are both often concerned with specifics.
You must always begin with assumptions, if nothing else you must assume maths (which is pretty much the only thing that Bayes Theorem and Solomonoff Induction do assume).
I think that things like unquestionable foundations, or an infinite regress, are flaws. I think we should reject flawed things when we have better options. And I think Bayesian Epistemology has these flaws. Am I going wrong somewhere?
To be perfectly honest I care more about getting results in the real world than having some mythical perfect philosophy which can be justified to a rock.
What I’m not convinced of is the claim that this basic bit of math solves any major epistemological problem.
Stating that you believe Bayes’ theorem but doubt that it can actually solve epistemic problems is like saying you believe Pythagoras’ theorem but doubt it can actually tell you the side lengths of right-angled triangles; it demonstrates a failure to internalise.
Bayes’ theorem tells you how to adjust beliefs based on evidence; every time you adjust your beliefs you must use it, otherwise your map will not reflect the territory.
Regarding Solomonoff induction, I think you are now attempting to justify it by argument. But you haven’t stated what the rules are for what counts as a good argument, or why. Could you specify that? There’s not enough rigor here. And in my understanding Bayesian epistemology aims for rigor, and that is one of the reasons they like math and try to use math in their epistemology. It seems to me you are departing from that worldview and its methods.
Does Popper not argue for his own philosophy, or does he just state it and hope people will believe him?
You cannot set up rules for arguments which are not themselves backed up by argument. Any argument will be convincing to some possible minds and not to others, and I’m okay with that, because I only have one mind.
Epistemologies should be able to deal with all categories of knowledge, not just predictive knowledge. For example they should be able to deal with creating non-empirical, non-predictive moral knowledge. Can Solomonoff induction do that? How?
Allow me to direct you to my all time favourite philosopher
Popper’s philosophy itself is not a deductive argument which depends on the truth of its premises and which, given their truth, is logically indisputable.
We’re well aware of issues like the fallibility of evidence (you may think you see a black swan, but didn’t). Those do not contradict this logical point about a particular asymmetry.
“You must always begin with assumptions”
No you don’t have to. Popper’s approach begins with conjectures. None of them are assumed, they are simply conjectured.
Here’s an example. You claim this is an assumption:
“You can generalise from this one example to a point about all theories”
In a Popperian approach, that is not assumed. It is conjectured. It is then open to critical debate. Do you see something wrong with it? Do you have an argument against it? Conjectures can be refuted by criticism.
BTW Popper wasn’t “generalizing”. He was making a point about all theories (in particular categories) in the first place and then illustrating it second. “Generalizing” is a vague and problematic concept.
“Does Popper not argue for his own philosophy, or does he just state it and hope people will believe him?
You cannot set up rules for arguments which are not themselves backed up by argument. ”
He argues, but without setting up predefined, static rules for argument. The rules for argument are conjectured, criticized, modified. They are a work in progress.
Regarding the Hume quote, are you saying you’re a positivist or similar?
“Bayes’ theorem tells you how to adjust beliefs based on evidence; every time you adjust your beliefs you must use it, otherwise your map will not reflect the territory.”
Only probabilistic beliefs. I think it is only appropriate to use when you have actual numbers instead of simply having to assign them to everything involved by estimating.
“To be perfectly honest I care more about getting results in the real world than having some mythical perfect philosophy which can be justified to a rock.”
Mistakes have real world consequences. I think Popper’s epistemology works better in the real world. Everyone thinks their epistemology is more practical. How can we decide? By looking at whether they make sense, whether they are refuted by criticism, etc… If you have a practical criticism of Popperian epistemology you’re welcome to state it.
I think Popper’s epistemology works better in the real world. How can we decide? By looking at whether they make sense, whether they are refuted by criticism, etc...
How does this translate into illustrating whether either epistemology has “real world consequences”? Criticism and “sense making” are widespread, varied, and not always valuable.
I think what would be most helpful is if you set up a hypothetical example and then proceeded to show how Popperian epistemology would lead to a success while a Bayesian approach would lead to a “real world consequence.” I think your question, “How can we decide?” was perfect, but I think your answer was incorrect.
Example: we want to know if liberalism or socialism is correct.
Popperian approach: consider what problem the ideas in question are intended to solve and whether they solve it. They should explain how they solve the problem; if they don’t, reject them. Criticize them. If a flaw is discovered, reject them. Conjecture new theories also to solve the problem. Criticize those too. Theories similar to rejected theories may be conjectured; and it’s important to do that if you think you see a way to not have the same flaw as before. Some more specific statements follow:
Liberalism offers us explanations such as: voluntary trade is mutually beneficial to everyone involved, and harms no one, so it should not be restricted. And: freedom is compatible with a society that makes progress because as people have new ideas they can try them out without the law having to be changed first. And: tolerance of people with different ideas is important because everyone with an improvement on existing customs will at first have a different idea which is unpopular.
Socialism offers explanations like, “People should get what they need, and give what they are able to” and “Central planning is more efficient than the chaos of free trade.”
Socialism’s explanations have been refuted by criticisms like Mises’s 1920 paper, which explained that central planners have no rational way to plan (in short: because you need prices to do economic calculation). And “need” has been criticized, e.g. how do you determine what is a need? And the concept of what people are “able to give” is also problematic. Of course the full debate on this is very long.
Many criticisms of liberalism have been offered. Some were correct. Older theories of liberalism were rejected and new versions formulated. If we consider the best modern version, then there are currently no outstanding criticisms of it. It is not refuted, and it has no rivals with the same status. So we should (until this situation changes) accept and use liberalism.
New socialist ideas were also created many times in response to criticism. However, no one has been able to come up with coherent ideas which address all the criticisms and still reach the same conclusions (or anything even close).
Liberalism’s “justification” is merely this: it is the only theory we do not currently have a criticism of. A criticism is an explanation of what we think is a flaw or mistake. It’s a better idea to use a theory we don’t see anything wrong with than one we do. Or in other words: we should act on our best (fallible) knowledge that we have so far. In this way, the Popperian approach doesn’t really justify anything in the normal sense, and does without foundations.
Bayesian approach: Assign them probabilities (how?), try to find relevant evidence to update the probabilities (this depends on more assumptions), ignore that whenever you increase the probability of liberalism (say) you should also be increasing the probability of infinitely many other theories which made the same empirical assertions. Halt when—I don’t know. Make sure the evidence you update with doesn’t have any bias by—I don’t know, it sure can’t be a random sample of all possible evidence.
No doubt my Bayesian approach was unfair. Please correct it and add more specific details (e.g. what prior probability does liberalism have, what is some evidence to let us update that, what is the new probability, etc...)
PS is it just me or is it difficult to navigate long discussions and to find new nested posts? And I wasn’t able to find a way to get email notification of replies.
I’m beginning to see where the problem in this debate is coming from.
Bayesian humans don’t always assign actual probabilities; I almost never do. What we do in practice is vaguely similar to your Popperian approach.
The main difference is that we do thought experiments about Ideal Bayesians, strange beings with the power of logical omniscience (which gets them round the problem of Solomonoff Induction being uncomputable), and we see which types of reasoning might be convincing to them, and use this as a standard for which types are legitimate.
Even this might in practice be questioned: if someone showed me a thought experiment in which an ideal Bayesian systematically arrived at worse beliefs than some competitor I might stop being a Bayesian. I can’t tell you what I would use as a standard in this case, since if I could predict that theory X would turn out to be better than Bayesianism I would already be an X theorist.
Popperian reasoning, on the other hand, appears to use human intuition as its standard. The conjectures he makes ultimately come from his own head, and inevitably they will be things that seem intuitively plausible to him. It is also his own head which does the job of evaluating which criticisms are plausible. He may bootstrap himself up into something that looks more rigorous, but ultimately if his intuitions are wrong he’s unlikely to recover from it. The intuitions may not be unquestioned but they might as well be for all the chance he has of getting away from their flaws.
Cognitive science tells us that our intuitions are often wrong. In extreme cases they contradict logic itself, in ways that we rarely notice. Thus they need to be improved upon, but to improve upon them we need a standard to judge them by, something where we can say “I know this heuristic is a cognitive bias because it tells us Y when the correct answer is in fact X”. A good example of this is conjunction bias: conjunctions are often more plausible than disjunctions to human intuition, but they are always less likely to be correct, and we know this through probability theory.
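The probability-theory point behind that example is a one-line consequence of the product rule quoted earlier, checked here with invented numbers:

```python
# A conjunction can never be more probable than either conjunct, because
# P(A&B) = P(A) * P(B|A) and P(B|A) <= 1. Numbers invented for illustration.
p_A = 0.05           # P(A): some statement on its own
p_B_given_A = 0.95   # even when B is nearly certain given A...
p_AB = p_A * p_B_given_A
assert p_AB <= p_A   # ...the conjunction is still no more probable than A
print(f"P(A) = {p_A}, P(A&B) = {p_AB}")
```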
So here’s how a human Bayesian might look. This approach only reflects the level of Bayesian strength I currently have, and can definitely be improved upon.
We wouldn’t think in terms of Liberalism and Socialism, both of them are package deals containing many different epistemic beliefs and prescriptive advice. Conjunction bias might fool you into thinking that one of them is probably right, but in fact both are astonishingly unlikely.
We hold off on proposing solutions (scientifically proven to lead to better solutions) and instead just discuss the problem. We clearly frame exactly what our values are in this situation, possibly in the form of a precisely delineated utility function and possibly not, so we know what we are trying to achieve.
We attempt to get our facts straight. Each fact is individually analysed, to see whether we have enough evidence to overcome its complexity (see the sketch below). This process continues permanently; every statement is evaluated.
We then suggest policies which seem likely to satisfy our values, and calculate which one is likely to do so best.
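As a sketch of what “enough evidence to overcome its complexity” could mean in log-odds terms (my reading, with invented numbers):

```python
# Invented numbers: a hypothesis whose prior weight is 2^-20 (20 bits of
# complexity) needs about 20 bits of evidence, i.e. a cumulative likelihood
# ratio of 2^20, just to reach even odds.
complexity_bits = 20
prior_odds = 2.0 ** -complexity_bits
evidence_bits = 20.0                      # log2 of the total likelihood ratio
posterior_odds = prior_odds * 2.0 ** evidence_bits
posterior_prob = posterior_odds / (1.0 + posterior_odds)
print(f"posterior probability: {posterior_prob:.2f}")  # 0.50, i.e. even odds
```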
I’m not sure there’s actually a difference between the two approaches; ultimately I only arrived at Bayesianism through my intuitions as well, so there is no difference at the foundations. Bayesianism is just Popperianism done better.
PS there is a little picture of an envelope next to your name and karma score in the right hand corner. It turns red when one of your comments has a reply. Click on it to see the most recent replies to your comments.
Popperian reasoning, on the other hand, appears to use human intuition as its standard.
No. It does not have a fixed standard. Fixed standards are part of the justificationist attitude which is a mistake which leads to problems such as regress. Justification isn’t possible and the idea of seeking it must be dropped.
Instead, the standard should use our current knowledge (the starting point isn’t very important) and then change as people find mistakes in it (no matter what standard we use for now, we should expect it to have many mistakes to improve later).
The conjectures he makes ultimately come from his own head, and inevitably they will be things that seem intuitively plausible to him.
Popperian epistemology has no standard for conjectures. The flexible, tentative standard is for criticism, not conjecture.
The “work”—the sorting of good ideas from bad—is all done by criticism and not by rules for how to create ideas in the first place.
You imply that people are parochial and biased and thus stuck. First, note the problems you bring up here are for all epistemologies to deal with. Having a standard you tell everyone to follow does not solve them. Second, people can explain their methods of criticism and theory evaluation to other people and get feedback. We aren’t alone in this. Third, some ways (e.g. having less bias) as a matter of fact work better than others, so people can get feedback from reality when they are doing it right, plus it makes their life better (incentive). More could be said. Tell me if you think it needs more (why?).
“I know this heuristic is a cognitive bias because it tells us Y when the correct answer is in fact X”
I think by “know” here you are referring to the justified, true belief theory of knowledge. And you are expecting that the authority or certainty of objective knowledge will defeat bias. This is a mistake. Like it or not, we cannot ever have knowledge of that type (e.g. b/c justification attempts lead to regress). What we can have is fallible, conjectural knowledge. This isn’t bad; it works fine; it doesn’t devolve into everyone believing their bias.
Liberalism is not a package by accident. It is a collection of ideas around one theme. They are all related and fit together. They are less good in isolation—e.g. if you take away one idea you’ll find that now one of the other ideas has an unsolved and unaddressed problem. It is sometimes interesting to consider the ideas individually but to a significant extent they all are correct or incorrect as a group.
The way I’m seeing it is that most of the time you (and everyone else) do something roughly similar to what Popper said to do. This isn’t a surprise b/c most people do learn stuff and that is the only method possible of creating any knowledge. But when you start using Bayesian philosophy more directly, by e.g. explicitly assigning and updating probabilities to try to settle non-probabilistic issues (like moral issues), then you start making mistakes. You say you don’t do that very often. OK. But there are other, more subtle ones. One is what Popper called The Myth of the Framework where you suggest that people with different frameworks (initial biases) will both be stuck on thinking that what seems right to them (now) is correct and won’t change. And you suggest the way past this is, basically, authoritative declarations where you put someone’s biases against Truth so he has no choice but to recant. This is a mistake!
You say you don’t do that very often. OK. But there are other, more subtle ones. One is what Popper called The Myth of the Framework where you suggest that people with different frameworks (initial biases) will both be stuck on thinking that what seems right to them (now) is correct and won’t change.
To some extent our thought processes can certainly improve; however, there is no guarantee of this. Let me give an example to illustrate:
Alice is an inductive thinker; in general she believes that if something has happened often in the past it is more likely to happen in the future. She does not treat this as an absolute, it is only probabilistic, and it does not work in certain specific situations (such as pulling beads out of a jar with 5 red and 5 blue beads), but she used induction to discover which situations those were. She is not particularly worried that induction might be wrong; after all, it has almost always worked in the past.
Bob is an anti-inductive thinker; he believes that the more often something happens, the less likely it is to happen in the future. To him, the universe is like a giant bag of beads, and the more something happens the more depleted the universe’s supply of it becomes. He also concedes that anti-induction is merely probabilistic, and there are certain situations (the bag of beads example) where it has already worked a few times, so he doesn’t think it’s very likely to work now. He isn’t particularly worried that he might be wrong; anti-induction has almost never worked for him before, so he must be set up for the winning streak of a lifetime.
Ultimately, neither will ever be convinced of the other’s viewpoint. If Alice conjectures anti-induction then she will immediately have a knock-down criticism, and vice versa for Bob and Induction. One of them has an irreversibly flawed starting point.
Like it or not, you, me, Popper and every other human is an Alice. If you don’t believe me, just ask which of the following criticisms seems more logically appealing to you:
“Socialism has never worked in the past, every socialist state has turned into a nightmarish tyranny, so this country shouldn’t become socialist”
“Liberalism has usually worked in the past, most liberal democracies are wealthy and have the highest standards of living in human history, so this country shouldn’t become liberal”
Liberalism is not a package by accident. It is a collection of ideas around one theme. They are all related and fit together. They are less good in isolation—e.g. if you take away one idea you’ll find that now one of the other ideas has an unsolved and unaddressed problem. It is sometimes interesting to consider the ideas individually but to a significant extent they all are correct or incorrect as a group.
This might be correct, but there is a heavy burden of proof to show it. Liberalism and Socialism are two philosophies out of thousands (maybe millions) of possibilities. This means that you need huge amounts of evidence to distinguish the two of them from the crowd and comparatively little evidence to distinguish one from the other.
Popperian epistemology has no standard for conjectures. The flexible, tentative standard is for criticism, not conjecture.
That is a recipe for disaster. There are too many possible conjectures; we cannot consider them all, so we need some way to prioritise some over others. If you do not specify a way then people will just do so according to personal preference.
As I see it, Popperian reasoning is pretty much the way humans reason naturally, and you only have to look at any modern political debate to see why that’s a problem.
To some extent our thought processes can certainly improve; however, there is no guarantee of this
Yes, there is no guarantee. One doesn’t need a guarantee for something to happen. And one can’t have guarantees about anything, ever. So the request for guarantees is itself a mistake.
Ultimately, neither will ever be convinced of the other’s viewpoint. If Alice conjectures anti-induction then she will immediately have a knock-down criticism, and vice versa for Bob and Induction. One of them has an irreversibly flawed starting point.
The sketches you give of Bob and Alice are not like real people. They are simplified and superficial, and people like that could not function in day to day life. The situation with normal people is different. No everyday person has an irreversibly flawed starting point.
The argument for this is not short and simple, but I can give it. First I’d like to get clear what it means, and why we would be discussing it. Would you agree that if my statement here is correct then Popper is substantially right about epistemology? Would you concede? If not, what would you make of it?
Like it or not, you, me, Popper and every other human is an Alice.
That is a misconception. One of its prominent advocates was Hume. We do not dispute things like this out of ignorance, out of never having heard it before. One of the many problems with it is that people can’t be like Alice because there is no method of induction—it is a myth that one could possibly do induction, because induction doesn’t describe a procedure a person could follow. Induction offers no set of instructions.
That may sound strange to you. You may think it offers a procedure like:
1) gather data
2) generalize/extrapolate (induce) a conclusion from the data
3) the conclusion is probably right, with some exceptions
The problem is step 2, which does not say how to extrapolate a conclusion from a set of data. There are infinitely many conclusions consistent with any finite data set. So the entire procedure rests on having a method of choosing between them. All proposals made for this either don’t work or are vague. The one I would guess you favor is Occam’s Razor—pick the simplest one. This is both vague (what are the precise guidelines for deciding what is simpler?) and wrong (under many interpretations, for example because it might reject all explanatory theories b/c omitting the explanation is simpler).
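To illustrate the step 2 problem (a standard curve-fitting example, not from this discussion): any finite data set leaves infinitely many mutually incompatible generalisations open.

```python
# Infinitely many hypotheses fit the same finite data. Every h_k below passes
# exactly through all three observations, yet their predictions diverge.
data = [(0, 0), (1, 1), (2, 2)]  # three observations on the line y = x

def hypothesis(k):
    """h_k(x) = x + k*x*(x-1)*(x-2): agrees with all the data for every k."""
    return lambda x: x + k * x * (x - 1) * (x - 2)

for k in range(4):               # four of the infinitely many choices of k
    h = hypothesis(k)
    assert all(h(x) == y for x, y in data)
    print(f"k={k}: prediction at x=3 is {h(3)}")  # 3, 9, 15, 21, ...
```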
Another issue is how one thinks about things he has no past experience about. Induction does not answer that. Yet people do it.
which of the following criticisms seems more logically appealing to you
I think they are both terrible arguments and they aren’t how I think about the issue.
This might be correct, but there is a heavy burden of proof to show it.
The “burden of proof” concept is a justificationist mistake. Ideas cannot be proven (which violates fallibility) and they can’t be positively shown to be true. You are judging Popperian ideas by standards which Popper rejected which is a mistake.
That is a recipe for disaster.
But it works in practice. The reason it doesn’t turn into a disaster is that people want to find the truth. They aren’t stopped from making a mess of things by authoritative rules but by their own choices, because they have some understanding of what will and won’t work.
The authority based approach is a mistake in many ways. For example, authorities can themselves be mistaken and could impose disasters on people. And people don’t always listen to authority. We don’t need to try to force people to follow some authoritative theory to make them think properly, they need to understand the issues and do it voluntarily.
Personal preferences aren’t evil, and imposing what you deem the best preference as a replacement is an anti-liberal mistake.
As I see it, Popperian reasoning is pretty much the way humans reason naturally
No. Since Aristotle, justificationism has dominated philosophy and governs the unconscious assumptions people make in debates. They do not think like Popperians or understand Popper’s philosophy (except to the extent that some of their mental processes are capable of creating knowledge, and those have to be in line with the truth of the matter about what does create knowledge).
The argument for this is not short and simple, but I can give it. First I’d like to get clear what it means, and why we would be discussing it. Would you agree that if my statement here is correct then Popper is substantially right about epistemology? Would you concede? If not, what would you make of it?
Since I’m not familiar with the whole of Popper’s position I’m not going to accept it blindly. I’m also not even certain that he’s incompatible with Bayesianism.
Anyway, the fact that no human has a starting point as badly flawed as anti-induction doesn’t make Bayesianism invalid. It may well be that we are just very badly flawed, and can only get out of those flaws by taking the mathematically best approach to truth. This is Bayesianism, which has been proven in more than one way.
The problem is step 2, which does not say how to extrapolate a conclusion from a set of data. There are infinitely many conclusions consistent with any finite data set.
This is exactly why we need induction. It is usually possible to stick any future onto any past and get a consistent history; induction tells us that if we want a probable history we need to make the future and the past resemble each other.
The reason it doesn’t turn into a disaster is that people want to find the truth.
People certainly say that. Most of them even believe it on a conscious level, but in your average discussion there is a huge amount of other stuff going on, from signalling tribal loyalty to rationalising away unpleasant conclusions. You will not wander down the correct path by chance; you must use a map and navigate.
The authority based approach is a mistake in many ways. For example, authorities can themselves be mistaken and could impose disasters on people. And people don’t always listen to authority. We don’t need to try to force people to follow some authoritative theory to make them think properly, they need to understand the issues and do it voluntarily.
Personal preferences aren’t evil, and imposing what you deem the best preference as a replacement is an anti-liberal mistake.
I have no further interest in talking with you if you resort to straw men like this. I am not proposing we set up a dictatorship and kill all non-Bayesians, nor am I advocating censorship of views opposed to the correct Bayesian conclusion.
All I am saying is your mind was not designed to do philosophical reasoning. It was designed to chase antelope across the savannah, lob a spear in them, drag them back home to the tribe, and come up with an eloquent explanation for why you deserve a bigger share of the meat (this last bit got the lion’s share of the processing power).
Your brain is not well suited to abstract reasoning; it is a fortunate coincidence that you are capable of it at all. Hopefully, you are lucky enough to have a starting point which is not irreversibly flawed, and you may be able to self-improve, but this should be in the direction of realising that you run on corrupt hardware, distrusting your own thoughts, and forcing them to follow rigorous rules. Which rules? The ones that have been mathematically proven to be the best seem like a good starting point.
(The above is not intended as a personal attack, it is equally true of everyone)
Anyway, the fact that no human has a starting point as badly flawed as anti-induction doesn’t make Bayesianism invalid.
I did not say it makes Bayesianism invalid. I said it doesn’t make Popperism invalid or require epistemological pessimism. You were making myth of the framework arguments against Popper’s view. My comments on those were not intended to refute Bayesianism itself.
This is exactly why we need induction. It is usually possible to stick any future onto any past and get a consistent history; induction tells us that if we want a probable history we need to make the future and the past resemble each other.
That is a mistake and Popper’s approach is superior.
Part 1: It is a mistake because the future does not resemble the past except in some vacuous senses. Why? Because stuff changes. For example an object in motion moves to a different place in the future. And human societies invent new technologies.
It is always the case that some things resemble the past and some don’t. And the guideline that “the future resembles the past” gives no guidance whatsoever in figuring out which are which.
Popper’s approach is to improve our knowledge piecemeal by criticizing mistakes. The primary criticisms of this approach are that it is incapable of offering guarantees, authority, justification, a way to force people to go against their biases, etc. These criticisms are mistaken: no viable theory offers what they want. Setting aside those objections—that Popper doesn’t meet a standard too high for anything to meet—it works and is how we make progress.
Regarding people wanting to find the truth, indeed they don’t always. Sometimes they don’t learn. Telling them they should be Bayesians won’t change that either. What can change it is sorting out the mess of their psychology enough to figure out some advice they can use. BTW the basic problem you refer to is static memes, the theory of which David Deutsch explains in his new (Popperian) book The Beginning of Infinity.
I have no further interest in talking with you if you resort to straw men like this.
Please calm down. I am trying my best to explain clearly. If I think that some of your ideas have nasty consequences that doesn’t mean I’m trying to insult you. It could be the case that some of your ideas actually do have nasty consequences of which you are unaware, and that by pointing out some of the ways your ideas relate to some ideas you consciously deem bad, you may learn better.
All justificationist epistemologies have connections to authority, and authority has nasty connections to politics. You hold a justificationist epistemology. When it comes down to it, justification generally consists of authority. And no amount of carefully deciding what is the right thing to set up as that authority changes that.
This connect to one of Popper’s political insights which is that most political theories focus on the problem “Who should rule?” (or: what policies should rule?). This question is a mistake which begs for an authoritarian answer. The right question is a fallibilist one: how can we set up political institutions that help us find and fix errors?
Getting back to epistemology, when you ask questions like, “What is the correct criterion for induction to use in step 2 to differentiate between the infinity of theories?” that is a bad question which begs for an authoritarian answer.
All I am saying is your mind was not designed to do philosophical reasoning
My mind is a universal knowledge creator. What design could be better? I agree with you that it wasn’t designed for this in the sense that evolution doesn’t have intentions, but I don’t regard that as relevant.
Evolutionary psychology contains mistakes. I think discussion of universality is a way to skip past most of them (when universality is accepted, they become pretty irrelevant).
Your brain is not well suited to abstract reasoning; it is a fortunate coincidence that you are capable of it at all.
I’d urge you to read The Beginning of Infinity by David Deutsch which refutes this. I can give the arguments but I think reading it would be more efficient and we have enough topics going already.
forcing them to follow rigorous rules.
See! I told you the authoritarian attitude was there!
And there is no mathematical proof of Bayesian epistemology. Bayes’ theorem itself is a bit of math/logic which everyone accepts (including Popper of course). But Bayesian epistemology is an application of it to certain philosophical questions, which leaves the domain of math/logic, and there is no proof that application is correct.
Part 1: It is a mistake because the future does not resemble the past except in some vacuous senses. Why? Because stuff changes. For example an object in motion moves to a different place in the future. And human societies invent new technologies.
The object in motion moves according to the same laws in both the future and the past, in this sense the future resembles the past. You are right that the future does not resemble the past in all ways, but the ways in which it does themselves remain constant over time. Induction doesn’t apply in all cases but we can use induction to determine which cases it applies in and which it doesn’t. If this looks circular that’s because it is, but it works.
Popper’s approach is to improve our knowledge piecemeal by criticizing mistakes. The primary criticisms of this approach are that it is incapable of offering guarantees, authority, justification, a way to force people to go against their biases, etc. These criticisms are mistaken: no viable theory offers what they want. Setting aside those objections—that Popper doesn’t meet a standard too high for anything to meet—it works and is how we make progress.
As far as Bayesianism is concerned this is a straw man. Most Bayesians don’t offer any guarantees in the sense of absolute certainty at all.
All justificationist epistemologies have connections to authority, and authority has nasty connections to politics. You hold a justificationist epistemology. When it comes down to it, justification generally consists of authority. And no amount of carefully deciding what is the right thing to set up as that authority changes that.
No Bayesian has ever proposed setting up some kind of Bayesian dictatorship. As far as I can tell the only governmental proposal based on Bayesianism thus far is Hanson’s futarchy, which could hardly be further from Authoritarianism.
forcing them to follow rigorous rules.
See! I told you the authoritarian attitude was there!
You misunderstand me. What I meant was that as a Bayesian I force my own thoughts to follow certain rules. I don’t force other people to do so. You are arguing from a superficial resemblance. Maths follows rigorous, unbreakable rules, does this mean that all mathematicians are evil fascists?
And there is no mathematical proof of Bayesian epistemology. Bayes’ theorem itself is a bit of math/logic which everyone accepts (including Popper of course). But Bayesian epistemology is an application of it to certain philosophical questions, which leaves the domain of math/logic, and there is no proof that application is correct.
Incorrect. E.T. Jaynes’ book Probability Theory: The Logic of Science gives a proof in the first two chapters.
My mind is a universal knowledge creator. What design could be better? I agree with you that it wasn’t designed for this in the sense that evolution doesn’t have intentions, but I don’t regard that as relevant.
Evolutionary psychology contains mistakes. I think discussion of universality is a way to skip past most of them (when universality is accepted, they become pretty irrelevant).
You obviously haven’t read much of the heuristics and biases program. I can’t describe it all very quickly here but I’ll just give you a taster.
Subjects asked to rank statements about a woman called Jill in order of probability of being true ranked “Jill is a feminist and a bank teller” as more probable than “Jill is a bank teller” despite this being logically impossible.
U.N. diplomats, when asked to guess the probabilities of various international events occurring in the next year, gave a higher probability to “USSR invades Poland causing complete cessation of diplomatic activities between USA and USSR” than they did to “Complete cessation of diplomatic activities between USA and USSR”.
Subjects who are given a handful of evidence and arguments for both sides of some issue, and asked to weigh them up, will inevitably conclude that the weight of the evidence given is in favour of their side. Different subjects will interpret the same evidence to mean precisely opposite things.
Employers can have their decision about whether to hire someone changed by whether they held a warm coffee or a cold coke in the elevator prior to the meeting.
People can have their opinion on an issue like nuclear power changed by a single image of a smiley or frowny face, flashed too briefly for conscious attention.
People’s estimates of the number of countries in Africa can be changed simply by telling them a random number beforehand, even if it is explicitly stated that this number has nothing to do with the question.
Students asked to estimate a day by which they are 99% confident their project will be finished, go past this day more than half the time.
People are more likely to move to a town if the town’s name and their name begin with the same letter.
There’s a lot more, most of which can’t easily be explained in bullet form. Suffice to say these are not irrelevant to thinking, they are disastrous. It takes constant effort to keep them back, because they are so insidious you will not notice when they are influencing you.
And there is no mathematical proof of Bayesian epistemology. Bayes’ theorem itself is a bit of math/logic which everyone accepts (including Popper of course). But Bayesian epistemology is an application of it to certain philosophical questions, which leaves the domain of math/logic, and there is no proof that application is correct.
Incorrect. E.T. Jaynes’ book Probability Theory: The Logic of Science gives a proof in the first two chapters.
You obviously haven’t read much of the heuristics and biases program.
Would you agree that this is a bit condescending and you’re basically assuming in advance that you know more than me?
I actually have read about it and disagree with it on purpose, not out of ignorance.
Does that interest you?
And on the other hand, do you know anything about universality? You made no comment about that. Given that I said the universality issue trumps the details you discuss in your bullet points, and you didn’t dispute that, I’m not quite sure why you are providing these details, other than perhaps a simple assumption that I had no idea what I was talking about and that my position can be ignored without reply because, once my deep ignorance is addressed, I’ll forget all about this Popperian nonsense.
Incorrect. E.T. Jaynes’ book Probability Theory: The Logic of Science gives a proof in the first two chapters.
Ordered but there’s an error in the library system and I’m not sure if it will actually come or not. I don’t suppose the proof is online anywhere (I can access major article databases), or that you could give it or an outline? BTW I wonder why the proof takes 2 chapters. Proofs are normally fairly short things. And, well, even if it was 100 pages of straight math I don’t see why you’d break it into separate chapters.
You misunderstand me. What I meant was that as a Bayesian I force my own thoughts to follow certain rules. I don’t force other people to do so.
No I understood that. And that is authoritarian in regard to your own thoughts. It’s still a bad attitude even if you don’t do it to other people. When you force your thoughts to follow certain rules all the epistemological problems with authority and force will plague you (do you know what those are?).
Regarding Popper, you say you don’t agree with the common criticisms of him. OK. Great. So, what are your criticisms? You didn’t say.
If this looks circular that’s because it is, but it works.
If there was an epistemology that didn’t endorse circular arguments, would you prefer it over yours which does?
Would you agree that this is a bit condescending and you’re basically assuming in advance that you know more than me?
I actually have read about it and disagree with it on purpose, not out of ignorance.
I apologise for this, but I really don’t see how anyone could go through those studies without losing all faith in human intuition.
I don’t suppose the proof is online anywhere (I can access major article databases), or that you could give it or an outline?
The text can be found online. My browser (Chrome) wouldn’t open the files but you may have more luck.
BTW I wonder why the proof takes 2 chapters. Proofs are normally fairly short things. And, well, even if it was 100 pages of straight math I don’t see why you’d break it into separate chapters.
Part of the reason for length is that probability theory has a number of axioms and he has to prove them all. The reason for the two chapter split is that the first chapter is about explaining what he wants to do, why he wants to do it, and laying out his desiderata. It also contains a few digressions in case the reader isn’t familiar with one or more of the prerequisites for understanding it (propositional logic for example). All of the actual maths is in the second chapter.
No I understood that. And that is authoritarian in regard to your own thoughts.
I agree to the explicit meaning of this statement but you are sneaking in connotations. Let us look more closely at what ‘authoritarian’ means.
You probably mean it in the sense of centralised as opposed to decentralized control, and in that sense I will bite the bullet and say that thinking should be authoritarian.
However, the word has a number of negative connotations. Corruption, lack of respect for human rights and massive bureaucracy that stifles innovation to name a few. None of those apply to my thinking process, so even though the term may be technically correct it is somewhat intellectually dishonest to use it; something more value-neutral like ‘centralized control’ might be better.
Regarding Popper, you say you don’t agree with the common criticisms of him. OK. Great. So, what are your criticisms? You didn’t say.
I will confess that I am not familiar with the whole of Popper’s viewpoint. I have never read anything written by him although after this conversation I am planning to.
Therefore I do not know whether or not I broadly agree or disagree with him. I did not come here to attack him; originally I was just responding to a criticism of yours that Bayesianism fails in a certain situation.
To some extent I think the approach with conjectures and criticisms may be correct, at least as a description of how thinking must get off the ground. Can you be a Popperian and conjecture Bayesianism?
The point that I do disagree with is the proposed asymmetry between confirmation and falsification. In my view neither the black swan nor the white swan proves anything with certainty, but both do provide some evidence. It happens in this case that one piece of evidence is very strong while the other is very weak; in fact they are pretty much at opposite extremes of the full spectrum of evidence encountered in the real world. This does not mean there is a difference of type.
If there was an epistemology that didn’t endorse circular arguments, would you prefer it over yours which does?
All else being equal, yes. Other factors, such as real-world results, might take precedence. I also doubt that any philosophy could manage without either circularity or assumptions, explicit or otherwise. As I see it, when you start thinking you need something to begin your inference: logic derives truths from other truths; it cannot manufacture them out of a vacuum. So any philosophy has two choices:
Either, pick a few axioms, call them self evident and derive everything from them. This seems to work fairly well in pure maths, but not anywhere else. I suspect the difference lies in whether the axioms really are self evident or not.
Or, start out with some procedures for thinking. All claims are judged by these, including proposals to change the procedures for thinking. Thus the procedures may self-modify and will hopefully improve. This seems better to me, as long as the starting point passes a certain threshold of accuracy any errors are likely to get removed (the phrase used here is the Lens that Sees its Flaws). It is ultimately circular, since whatever the current procedures are they are justified only by themselves, but I can live with that.
Ideal Bayesians are of the former type, but they can afford to be as they are mathematically perfect beings who never make mistakes. Human Bayesians take the latter approach, which means in principle they might stop being Bayesians if they could see that for some reason it was wrong.
So I guess my answer is that if a position didn’t endorse circular arguments, I would be very worried that it is going down the unquestionable axioms route, even if it does not do so explicitly, so I would probably not prefer it.
Notice how it is only through the benefits of the second approach that I can even consider such a scenario.
I agree to the explicit meaning of this statement but you are sneaking in connotations. Let us look more closely at what ‘authoritarian’ means.
I’m not trying to argue by connotation. It’s hard to avoid connotations and I think the words I’m using are accurate.
You probably mean it in the sense of centralised as opposed to decentralized control, and in that sense I will bite the bullet and say that thinking should be authoritarian.
That’s not what I had in mind, but I do think that centralized control is a mistake.
I take fallibilism seriously: any idea may be wrong, and many are. Mistakes are common.
Consequently, it’s a bad idea to set something up to be in charge of your whole mind. It will have mistakes. And corrections to those mistakes which aren’t in charge will sometimes get disregarded.
However, the word has a number of negative connotations. Corruption, lack of respect for human rights and massive bureaucracy that stifles innovation to name a few. None of those apply to my thinking process, so even though the term may be technically correct it is somewhat intellectually dishonest to use it; something more value-neutral like ‘centralized control’ might be better.
Those 3 things are not what I had in mind. But I think the term is accurate. You yourself used the word “force”. Force is authoritarian. The reason for that is that the forcer is always claiming some kind of authority—I’m right, you’re wrong, and never mind further discussion, just obey.
You may find this statement strange. How can this concept apply to ideas within one mind? Doesn’t it only apply to disagreements between separate people?
But ideas are roughly autonomous portions of a mind (see: http://fallibleideas.com/ideas). They can conflict, they can force each other in the sense of one taking priority over another without the conflict being settled rationally.
Force is a fundamentally epistemological concept. Its political meanings are derivative. It is about non-truth-seeking ways of approaching disputes. It’s about not reaching agreement: one idea wins out anyway (by force).
Settling conflicts between the ideas in your mind by force is authoritarian. It is saying some ideas have authority/preference/priority/whatever, so they get their way. I reject this approach. If you don’t find a rational way to resolve a conflict between ideas, you should say you don’t know the answer, not pick a side because the ideas you deem the central controllers are on that side and have the authority to force other ideas to conform to them.
This is a big topic, and not so easy to explain. But it is important.
Force, in the sense of solving difficulties without argument, is not what I meant when I said I force my thoughts to follow certain rules. I don’t even see how that could work; my individual ideas do not argue with each other, and if they did I would speak to a psychiatrist.
I’m afraid you are going to have to explain in more detail.
They argue notionally. They are roughly autonomous, they have different substance/assertions/content, sometimes their content contradicts, and when you have two or more conflicting ideas you have to deal with that. You (sometimes) approach the conflict by what we might call an internal argument/debate. You think of arguments for all the sides (the substance/content of the conflicting ideas), you try to think of a way to resolve the debate by figuring out the best answer, you criticize what you think may be mistakes in any of the ideas, you reject ideas you decide are mistaken, you assign probabilities to stuff and do math, perhaps, etc...
When things go well, you reach a conclusion you deem to be an improvement. It resolves the issue. Each of the ideas which is improved on notionally acknowledges this new idea is better, rather than still conflicting. For example, if one idea was to get pizza, and one was to get sushi, and both had the supporting idea that you can’t get both because it would cost too much, or take too long, or make you fat, then you could resolve the issue by figuring out how to do it quickly, cheaply and without getting fat (smaller portions). If you came up with a new idea that does all that, none of the previously conflicting ideas would have any criticism of it, no objection to it. The conflict is resolved.
Sometimes we don’t come up with a solution that resolves all the issues cleanly. This can be due to not trying, or because it’s hard, or whatever.
Then what?
Big topic, but what not to do is use force: arbitrarily decide which side wins (often based on some kind of authority or justification), and declare it the winner even though the substance of the other side is not addressed. Don’t force some of your ideas, which have substantive unaddressed points, to defer to the ideas you put in charge (granted authority).
I certainly don't advocate deciding arbitrarily. That would fall into the fallacy of just making sh*t up, which is the exact opposite of everything Bayes stands for. However, I don't have to be arbitrary: most of the ideas that run up against Bayes don't have the same level of support. In general, I've found that a heuristic of "pick the idea that has a mathematical proof backing it up" seems to work fairly well.
There are also sometimes other clues, rationalisations tend to have a slightly different ‘feel’ to them if you introspect closely (in my experience at any rate), and when the ideas going up against Bayes seem to include a disproportionately high number of rationalisations, I start to notice a pattern.
I also disagree about ideas being autonomous. Ideas are entangled with each other in complex webs of mutual support and anti-support.
Did you read my link? Where did the argument about approximately autonomous ideas go wrong?
I did. To see what is wrong with it let me give an analogy. Cars have both engines and tyres. It is possible to replace the tyres without replacing the engine. Thus you will find many cars with very different tyres but identical engines, and many different engines but identical tyres. This does not mean that tyres are autonomous and would work fine without engines.
Well this changes the topic. But OK. How do you decide what has support? What is support and how does it differ from consistency?
Well, mathematical proofs are support, and they are not at all the same as consistency. In general however, if some random idea pops into my head, and I spot that in fact it only occurred to me as a result of conjunction bias, I am not going to say "well, it would be unfair of me to reject this just because it contradicts probability theory, so I must reject both it and probability theory until I can find a superior compromise position". Frankly, that would be stupid.
@autonomous—you know we said “approximately autonomous” right? And that, for various purposes, tires are approximately autonomous, which means things like they can be replaced individually without touching the engine or knowing what type of engine it is. And a tire could be taken off one car and put on another.
No one was saying it’d function in isolation. Just like a person being autonomous doesn’t mean they would do well in isolation (e.g. in deep space). Just because people do need to be in appropriate environments to function doesn’t make “people are approximately autonomous” meaningless or false.
Well, mathematical proofs are support, and they are not at all the same as consistency.
First, you have not answered my question. What is support? The general purpose definition. I want you to specify how it is determined if X supports Y, and also what that means (why should we care? what good is "support"?).
Second, let’s be more precise. If a person writes what he thinks to be a proof, what is supported? What he thinks is the conclusion of what he thinks is a proof, and nothing else? An infinite set of things which have wildly different properties? Something else?
No one was saying it’d function in isolation. Just like a person being autonomous doesn’t mean they would do well in isolation (e.g. in deep space). Just because people do need to be in appropriate environments to function doesn’t make “people are approximately autonomous” meaningless or false.
You argue from ideas being approximately autonomous to the conclusion that words like 'authoritarian' apply to them and that they approximately debate, but this is not true in the car analogy. Is it 'authoritarian' that the brakes, accelerator and steering wheel have total control of the car, while the tyres and engine get no say, or is it just efficient?
I didn’t give a loose argument by analogy. You’re attacking a simplified straw man. I explained stuff at some length and you haven’t engaged here with all of what I said. e.g. your comments on “authoritarian” here do not mention or discuss anything I said about that. You also don’t mention force.
I don’t know the etiquette or format of this website well or how it works. When I have comments on the book, would it make sense to start a new thread or post somewhere/somehow?
Can you be a Popperian and conjecture Bayesianism?
You can conjecture Bayes’ theorem. You can also conjecture all the rest, however some things (such as induction, justificationism, foundationalism) contradict Popper’s epistemology. So at least one of them has a mistake to fix. Fixing that may or may not lead to drastic changes, abandonment of the main ideas, etc
The point that I do disagree with is the proposed asymmetry between confirmation and falsification.
That is a purely logical point Popper used to criticize some mistaken ideas. Are you disputing the logic? If you’re merely disputing the premises, it doesn’t really matter because its purpose is to criticize people who use those premises on their own terms.
In my view neither the black swan nor the white swan proves anything with certainty,
Agreed.
but both do provide some evidence. It happens in this case that one piece of evidence is very strong while the other is very weak; in fact they are pretty much at opposite extremes of the full spectrum of evidence encountered in the real world. This does not mean there is a difference of type.
I think you are claiming that seeing a white swan is positive support for the assertion that all swans are white. (If not, please clarify). If so, this gets into important issues. Popper disputed the idea of positive support. The criticism of the concept begins by considering: what is support? And in particular, what is the difference between “X supports Y” and “X is consistent with Y”?
I also doubt that any philosophy could manage without either circularity or assumptions, explicit or otherwise. As I see it, when you start thinking you need something to begin your inference; logic derives truths from other truths, it cannot manufacture them out of a vacuum.
Questioning this was one of Popper’s insights. The reason most people doubt it is possible is because, since Aristotle, pretty much all epistemology has taken this for granted. These ideas seeped into our culture and became common sense.
What’s weird about the situation is that most people are so attached to them that they are willing to accept circular arguments, arbitrary foundations, or other things like that. Those are OK! But that Popper might have a point is hard to swallow. I find circular arguments rather more doubtful than doing without what Popperians refer to broadly as “justification”. I think it’s amazing that people run into circularity or other similar problems and still don’t want to rethink all their premises. (No offense intended. Everyone has biases, and if we try to overcome them we can become less wrong about some matters, and stating guesses at what might be biases can help with that.)
All the circularity and foundations stem from seeking to justify ideas. To show they are correct. Popper’s epistemology is different: ideas never have any positive support, confirmation, verification, justification, high probability, etc… So how do we act? How do we decide which idea is better than the others? We can differentiate ideas by criticism. When we see a mistake in an idea, we criticize it (criticism = explaining a mistake/flaw). That refutes the idea. We should act on or use non-refuted ideas in preference over refuted ideas.
That’s the very short outline, but does that make any sense?
You can conjecture Bayes’ theorem. You can also conjecture all the rest, however some things (such as induction, justificationism, foundationalism) contradict Popper’s epistemology. So at least one of them has a mistake to fix. Fixing that may or may not lead to drastic changes, abandonment of the main ideas, etc
Fully agreed. In principle, if Popper’s epistemology is of the second, self-modifying type, there would be nothing wrong with drastic changes. One could argue that something like that is exactly how I arrived at my current beliefs, I wasn’t born a Bayesian.
I can also see some ways to make induction and foundationalism easier to swallow.
I don’t know the etiquette or format of this website well or how it works. When I have comments on the book, would it make sense to start a new thread or post somewhere/somehow?
A discussion post sounds about right for this, if enough people like it you might consider moving it to the main site.
I think you are claiming that seeing a white swan is positive support for the assertion that all swans are white. (If not, please clarify).
This is precisely what I am saying.
If so, this gets into important issues. Popper disputed the idea of positive support. The criticism of the concept begins by considering: what is support? And in particular, what is the difference between “X supports Y” and “X is consistent with Y”?
The beauty of Bayes is how it answers these questions. To distinguish between the two statements we express them each in terms of probabilities.
“X is consistent with Y” is not really a Bayesian way of putting things, I can see two ways of interpreting it. One is as P(X&Y) > 0, meaning it is at least theoretically possible that both X and Y are true. The other is that P(X|Y) is reasonably large, i.e. that X is plausible if we assume Y.
"X supports Y" means P(Y|X) > P(Y): X supports Y if and only if Y becomes more plausible when we learn of X. Bayes tells us that this is equivalent to P(X|Y) > P(X), i.e. if Y would suggest that X is more likely than we might otherwise think, then X is support of Y.
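To see why the two characterisations agree, note it is a one-line consequence of Bayes' theorem (using only the quantities already defined above):

$$\frac{P(Y|X)}{P(Y)} = \frac{P(X|Y)}{P(X)},$$

since P(Y|X) = P(Y)P(X|Y)/P(X); the left-hand ratio exceeds 1 exactly when the right-hand one does.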
Suppose we make X the statement “the first swan I see today is white” and Y the statement “all swans are white”. P(X|Y) is very close to 1, P(X|~Y) is less than 1 so P(X|Y) > P(X), so seeing a white swan offers support for the view that all swans are white. Very, very weak support, but support nonetheless.
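To make the arithmetic concrete, here is a minimal sketch; the numbers are invented for illustration, and only the direction of the update matters:

```python
# Toy numbers (made up; only the inequality matters).
p_y = 0.5              # prior P(Y): "all swans are white"
p_x_given_y = 1.0      # P(X|Y): if all swans are white, the first swan seen is white
p_x_given_not_y = 0.9  # P(X|~Y): even if not all swans are white, most may be

# Law of total probability: P(X) = P(X|Y)P(Y) + P(X|~Y)P(~Y)
p_x = p_x_given_y * p_y + p_x_given_not_y * (1 - p_y)

# Bayes' theorem: P(Y|X) = P(X|Y)P(Y) / P(X)
p_y_given_x = p_x_given_y * p_y / p_x

print(p_y, p_y_given_x)  # 0.5 -> ~0.526: very weak support, as claimed
```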
(The above is not meant to be condescending, I apologise if you know all of it already).
To show they are correct. Popper’s epistemology is different: ideas never have any positive support, confirmation, verification, justification, high probability, etc...
This is a very tough bullet to bite.
How do we decide which idea is better than the others? We can differentiate ideas by criticism. When we see a mistake in an idea, we criticize it (criticism = explaining a mistake/flaw). That refutes the idea. We should act on or use non-refuted ideas in preference over refuted ideas.
One thing I don’t like about this is the whole ‘one strike and you’re out’ feel of it. It’s very boolean, the real world isn’t usually so crisp. Even a correct theory will sometimes have some evidence pointing against it, and in policy debates almost every suggestion will have some kind of downside.
There is also the worry that there could be more than one non-refuted idea, which makes it a bit difficult to make decisions. Bayesianism, on the other hand, when combined with expected utility theory, is perfect for making decisions.
1) gather data 2) generalize/extrapolate (induce) a conclusion from the data 3) the conclusion is probably right, with some exceptions
The problem is step 2, which does not say how to extrapolate a conclusion from a set of data.
Step 1 is problematic also, as I explained in some of my comments to Tim Tyler. What should I gather data about? What kind of data? What measurements are important? How accurate? And so on.
Yes I agree. Another issue I mentioned in one of my comments here is that your data isn’t a random sample of all possible data, so what do you do about bias? (I mean bias in the data, not bias in the person.)
Step 3 is also problematic (as it explicitly acknowledges).
I don’t think I have the grasp on these subjects to hang in this, but this is great. -- I hope someone else comments in a more detailed manner.
In Popperian analysis, who ends the discussion of “what’s better?” You seem to have alluded to it being “whatever has no criticisms.” Is that accurate?
try to find relevant evidence to update the probabilities (this depends on more assumptions)
Why would Bayesian epistemology not be able to use the same evidence that Popperians used (e.g. the 1920 paper) and thus not require “assumptions” for new evidence? My rookie statement would be that the Bayesian has access to all the same kinds of evidence and tools that the Popperian approach does, as well as a reliable method for estimating probability outcomes.
Could you also clarify the difference between “conjecture” and “assumption.” Is it just that you’re saying that a conjecture is just a starting point for departure, whereas an assumption is assumed to be true?
An assumption seems both 1) justified if it has supporting evidence to make it highly likely as true to the best of our knowledge and 2) able to be just as “revisable” given counter-evidence as a “conjecture.”
Are you thinking that a Bayesian “assumption” is set in stone or that it could not be updated/modified if new evidence came along?
Lastly, what are “conjectures” based on? Are they random? If not, it would seem that they must be supported by at least some kind of assumptions to even have a reason for being conjectured in the first place. I think of them as “best guesses” and don’t see that as wildly different from the assumptions needed to get off the ground in any other analysis method.
In Popperian analysis, who ends the discussion of “what’s better?” You seem to have alluded to it being “whatever has no criticisms.” Is that accurate?
Yes, "no criticisms" is accurate. There are issues, which I didn't go into, about what to do when the number of theories remaining isn't exactly one.
It’s not a matter of “who”—learning is a cooperative thing and people can use their own individual judgment. In a free society it’s OK if they don’t agree (for now—there’s always hope for later) about almost all topics.
I don’t regard the 1920 paper as evidence. It contains explanations and arguments. By “evidence” I normally mean “empirical evidence”—i.e. observation data. Is that not what you guys mean? There is some relevant evidence for liberalism vs socialism (e.g. the USSR’s empirical failure) but I don’t regard this evidence as crucial, and I don’t think that if you were to rely only on it that would work well (e.g. people could say the USSR did it wrong and if they did something a bit different, which has never been tried, then it would work. And the evidence could not refute that.)
BTW in the Popperian approach, the role of evidence is purely in criticism (and inspiration for ideas, which has no formal rules or anything). This is in contrast to inductive approaches (in general) which attempt to positively support/confirm/whatever theories with the weight of evidence.
If the Bayesian approach uses arguments as a type of evidence, and updates probabilities accordingly, how is that done? How is it decided which arguments win, and how much they win by? One aspect of the criticism approach is theories do not have probabilities but only two statuses: they are refuted or non-refuted. There’s never an issue of judging how strong an argument is (how do you do that?).
If you try to follow along with the Popperian approach too closely (to claim to have all the same tools) one objection will be that I don’t see Bayesian literature acknowledging Popper’s tools as valuable, talking about how to use them, etc… I will suspect that you aren’t in line with the Bayesian tradition. You might be improving it, but good luck convincing e.g. Yudkowsky of that.
The difference between a conjecture and an assumption is just as you say: conjectures aren’t assumed true but are open to criticism and debate.
I think the word “assumption” means not revisable (normally assumptions are made in a particular context, e.g. you assume X for the purposes of a particular debate which means you don’t question it. But you could have a different debate later and question it.). But I didn’t think Bayesianism made any assumptions except for its foundational ones. I don’t mind if you want to use the word a different way.
Regarding justification by supporting evidence, that is a very problematic concept which Popper criticized. The starting place of the criticism is to ask what “support” means. And in particular, what is the difference between support and mere consistency (non-contradiction)?
Conjectures are not based on anything and not supported. They are whatever you care to imagine. It’s good to have reasons for conjectures but there are no rules about what the reasons should be, and conjectures are never rejected because of the reason they were conjectured (nor because of the source of the conjecture), only because of criticisms of their substance. If someone makes too many poor conjectures and annoys people, it’s possible to criticize his methodology in order to help him. Popperian epistemology does not have any built-in guidelines for conjecturing on which it depends; they can be changed and violated as people see fit. I would rather call them “guesses” than “best guesses” because it’s often a good idea for one person to make several conjectures, including ones he suspects are mistaken, in order to learn more about them. It should not be each person puts forward his best theory and they face off, but everyone puts forward all the theories he thinks may be interesting and then everyone cooperates in criticizing all of them.
Edit: BTW I use the words “theory” and “idea” interchangeably. I do not mean by “theory” ideas with a certain amount of status/justification. I think “idea” is the better word but I frequently forget to use it (because Popper and Deutsch say “theory” all the time and I got used to it).
The ‘moving target’ effect is caused by the fact that you are talking to several different people, the grandparent is my first comment in this discussion.
The concept mentioned in that essay is Bayes' Theorem, which tells us how to update our probabilities on new evidence. It does not solve the problem of how to avoid infinitely many hypotheses, for the same reason that Newton's laws do not explain the price of gold in London; it's not supposed to. Bayes theorem tells us how to change our probabilities with new evidence, and in the process assumes that those probabilities are real numbers (as opposed to infinitesimals).
Solomonoff induction tells us how to assign the initial probabilities, which are then fed into Bayes theorem to determine the current probabilities after adjusting based on the evidence. Both are essential, criticising BT for not doing SI’s job is like saying a car’s wheels are useless because they can’t do the engine’s job of providing power.
Does any of this deal with the infinite regress problem?
I’m sorry, what is the infinite regress problem?
I don’t see any infinite regress at all, Solomonoff Induction tells us the prior, Bayesian Updating turns the prior into a posterior. They depend on each other to work properly but I don’t think they depend on anything else (unless you wish to doubt the basics of probability theory).
The regress was discussed in other comments here. I took you to be saying “everything together, works” and wanting to discuss the philosophy as a whole.
I thought that would be more productive than arguing with you about whether Bayes theorem really “assumes that those probabilities are real numbers” and various other details. That’s certainly not what other people here told me when I brought up infinitesimals. I also thought it would be more productive than going back to the text I quoted and explaining why that quote doesn’t make sense. Whether it is correct or not isn’t very important if a better idea, along the same lines, works.
The regress argument begins like this: What is the justification or probability for Solomonoff Induction and Bayesian updating? Or if they are not justified, and do not have a probability, then why should we accept them in the first place?
When you say they don’t depend on anything else, maybe you are answering the regress issue by saying they are unquestionable foundations. Is that it?
Well, to some extent every system must have unquestionable foundations, even maths must assume the axioms. The principle of induction (the more something has happened in the past, the more likely it is to happen in the future, all else being equal) cannot be justified without the justification being circular, but I doubt you could get through a single day without it. Ultimately every approach must fall back on unquestionable foundations or an infinite regress, as you put it; this doesn't prevent that system from working.
However, both Bayes’ Theorem and Solomonoff Induction can be justified:
Bayes' Theorem is an elementary deductive consequence of basic probability theory, in particular the fairly obvious fact (at least it seems that way to me) that P(A&B) = P(A)*P(B|A). If it doesn't seem obvious to you, then I know of at least two approaches for proving it. One is the Cox theorems, which begin by saying we want to rank statements by their plausibility, and we want certain things to be true of this ranking (it must obey the laws of logic, it must treat hypotheses consistently etc), and from these derive probability theory.
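To spell the deduction out (nothing here beyond the product rule just quoted, applied in both orders):

$$P(A \& B) = P(A)\,P(B|A) = P(B)\,P(A|B) \;\Longrightarrow\; P(A|B) = \frac{P(A)\,P(B|A)}{P(B)} \quad (P(B) > 0).$$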
Another approach is the Dutch Book arguments, which show that if you are making bets based on your probability estimates of certain things being true, then unless your probability estimates obey Bayes Theorem you can be tricked into a set of bets which result in a guaranteed loss.
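A minimal sketch of the classic Dutch Book construction; the event and the numbers are invented, and this version targets the additivity axiom of probability rather than the updating rule:

```python
# An agent whose credences violate the probability axioms -- here
# P(rain) + P(no rain) > 1 -- will accept a pair of bets that together
# lose money in every possible outcome.
p_rain, p_no_rain = 0.6, 0.6  # incoherent: the two credences sum to 1.2

# The agent regards a bet paying $1 if E occurs as fairly priced at $P(E),
# so a bookie sells the agent both bets at those prices.
cost = p_rain + p_no_rain  # the agent pays $1.20 in total

for it_rains in (True, False):
    payout = 1.0  # exactly one of the two bets pays $1, whatever happens
    print(f"rains={it_rains}: net = {payout - cost:+.2f}")  # -0.20 both ways
```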
To justify Solomonoff Induction, we imagine a theoretical predictor which bases its prior on Solomonoff Induction and updates by Bayes Theorem. Given any other predictor, we can compare ours to this opponent by comparing the probability estimates they assign to the actual outcome; Solomonoff induction will at worst lose by a constant factor based on the complexity of the opponent.
This is the best that can be demanded of any prior. It is impossible to give perfect predictions in every possible universe, since you can always be beaten by a predictor tailor-made for that universe (which will generally perform very badly in most others).
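For reference, the standard form of that dominance result as I understand it from the algorithmic information theory literature (the notation is my own choice): if M is the Solomonoff prior and mu is any computable measure whose shortest program is K(mu) bits long, then for every finite sequence x,

$$M(x) \;\ge\; 2^{-K(\mu)}\,\mu(x), \qquad \text{equivalently} \qquad -\log_2 M(x) \;\le\; -\log_2 \mu(x) + K(\mu),$$

so the Solomonoff predictor's log-loss exceeds the opponent's by at most a constant number of bits, depending only on the opponent's complexity.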
(note: I am not an expert, it is possible that I have some details wrong, please correct me if I do)
“Well, to some extent every system must have unquestionable foundations”
No, Popper’s epistemology does not have unquestionable foundations.
You doubt I could get by without induction, but I can and do. Popper’s epistemology has no induction. It also has no regress.
Arguing that there is no choice but these imperfect concepts only works if there really is no choice. But there are alternatives.
I think that things like unquestionable foundations, or an infinite regress, are flaws. I think we should reject flawed things when we have better options. And I think Bayesian Epistemology has these flaws. Am I going wrong somewhere?
“However, both Bayes’ Theorem and Solomonoff Induction can be justified”
Justified by statements which are themselves justified (which leads to regress issues)? Or you mean justified given some unquestionable foundations? In your statements below, I don’t think you specify precisely what you deem to be able to justify things.
“Bayes’ Theorem is an elementary deductive consequence of basic probability theory”
Yes. It is not controversial itself. What I’m not convinced of is the claim that this basic bit of math solves any major epistemological problem.
Regarding Solomonoff induction, I think you are now attempting to justify it by argument. But you haven’t stated what are the rules for what counts as a good argument and why. Could you specify that? There’s not enough rigor here. And in my understanding Bayesian epistemology aims for rigor and that is one of the reasons they like math and try to use math in their epistemology. It seems to me you are departing from that worldview and its methods.
Another aspect of the situation is you have focussed on prediction. That is instrumentalist. Epistemologies should be able to deal with all categories of knowledge, not just predictive knowledge. For example they should be able to deal with creating non-empirical, non-predictive moral knowledge. Can Solomonoff induction do that? How?
Hang on, Popper's philosophy doesn't depend on any foundations? I'm going to call shenanigans on this. Earlier you gave an example of Popperian inference:
Unquestioned assumptions include, but are not limited to the following:
The objects under discussion actually exist (Solomonoff Induction does not make this assumption)
"There is no evidence which could prove T" is stated without any proof: what if you got all the swans in one place? What if you found a reason why the existence of a black swan was impossible?
Any observation of a black swan must be correct (Bayes Theorem is explicitly designed to avoid this assumption)
You can generalise from this one example to a point about all theories
“Science is only interested in universal theories”. Really? Are palaeontology and astronomy not sciences? They are both often concerned with specifics.
You must always begin with assumptions, if nothing else you must assume maths (which is pretty much the only thing that Bayes Theorem and Solomonoff Induction do assume).
To be perfectly honest I care more about getting results in the real world than having some mythical perfect philosophy which can be justified to a rock.
Stating that you believe Bayes’ theorem but doubt that it can actually solve epistemic problems is like saying you believe Pythagoras’ theorem but doubt it can actually tell you the side lengths of right angled triangles, it demonstrates a failure to internalise.
Bayes’ theorem tells you how to adjust beliefs based on evidence, every time you adjust your beliefs you must use it, otherwise your map will not reflect the territory.
Does Popper not argue for his own philosophy, or does he just state it and hope people will believe him?
You cannot set up rules for arguments which are not themselves backed up by argument. Any argument will be convincing to some possible minds and not to others, and I’m okay with that, because I only have one mind.
Allow me to direct you to my all time favourite philosopher
That “Popperian inference” is simply logic.
Deductive arguments have premises, as you say.
Popper’s philosophy itself is not a deductive argument which depends on the truth of its premises and which, given their truth, is logically indisputable.
We’re well aware of issues like the fallibility of evidence (you may think you see a black swan, but didn’t). Those do not contradict this logical point about a particular asymmetry.
“You must always begin with assumptions”
No you don’t have to. Popper’s approach begins with conjectures. None of them are assumed, they are simply conjectured.
Here’s an example. You claim this is an assumption:
“You can generalise from this one example to a point about all theories”
In a Popperian approach, that is not assumed. It is conjectured. It is then open to critical debate. Do you see something wrong with it? Do you have an argument against it? Conjectures can be refuted by criticism.
BTW Popper wasn’t “generalizing”. He was making a point about all theories (in particular categories) in the first place and then illustrating it second. “Generalizing” is a vague and problematic concept.
“Does Popper not argue for his own philosophy, or does he just state it and hope people will believe him?
You cannot set up rules for arguments which are not themselves backed up by argument. ”
He argues, but without setting up predefined, static rules for argument. The rules for argument are conjectured, criticized, modified. They are a work in progress.
Regarding the Hume quote, are you saying you’re a positivist or similar?
“Bayes’ theorem tells you how to adjust beliefs based on evidence, every time you adjust your beliefs you must use it, otherwise your map will not reflect the territory.”
Only probabilistic beliefs. I think it is only appropriate to use when you have actual numbers instead of simply having to assign them to everything involved by estimating.
“To be perfectly honest I care more about getting results in the real world than having some mythical perfect philosophy which can be justified to a rock.”
Mistakes have real world consequences. I think Popper’s epistemology works better in the real world. Everyone thinks their epistemology is more practical. How can we decide? By looking at whether they make sense, whether they are refuted by criticism, etc… If you have a practical criticism of Popperian epistemology you’re welcome to state it.
I agree with that.
How does this translate into illustrating whether either epistemology has “real world consequences”? Criticism and “sense making” are widespread, varied, and not always valuable.
I think what would be most helpful is if you set up a hypothetical example and then proceeded to show how Popperian epistemology would lead to a success while a Bayesian approach would lead to a "real world consequence." I think your question, "How can we decide?" was perfect, but I think your answer was incorrect.
Example: we want to know if liberalism or socialism is correct.
Popperian approach: consider what problem the ideas in question are intended to solve and whether they solve it. They should explain how they solve the problem; if they don’t, reject them. Criticize them. If a flaw is discovered, reject them. Conjecture new theories also to solve the problem. Criticize those too. Theories similar to rejected theories may be conjectured; and it’s important to do that if you think you see a way to not have the same flaw as before. Some more specific statements follow:
Liberalism offers us explanations such as: voluntary trade is mutually beneficial to everyone involved, and harms no one, so it should not be restricted. And: freedom is compatible with a society that makes progress because as people have new ideas they can try them out without the law having to be changed first. And: tolerance of people with different ideas is important because everyone with an improvement on existing customs will at first have a different idea which is unpopular.
Socialism offers explanations like, “People should get what they need, and give what they are able to” and “Central planning is more efficient than the chaos of free trade.”
Socialism's explanations have been refuted by criticisms like Mises's 1920 paper which explained that central planners have no rational way to plan (in short: because you need prices to do economic calculation). And "need" has been criticized, e.g. how do you determine what is a need? And the concept of what people are "able to give" is also problematic. Of course the full debate on this is very long.
Many criticisms of liberalism have been offered. Some were correct. Older theories of liberalism were rejected and new versions formulated. If we consider the best modern version, then there are currently no outstanding criticisms of it. It is not refuted, and it has no rivals with the same status. So we should (until this situation changes) accept and use liberalism.
New socialist ideas were also created many times in response to criticism. However, no one has been able to come up with coherent ideas which address all the criticisms and still reach the same conclusions (or anything even close).
Liberalism’s “justification” is merely this: it is the only theory we do not currently have a criticism of. A criticism is an explanation of what we think is a flaw or mistake. It’s a better idea to use a theory we don’t see anything wrong with than one we do. Or in other words: we should act on our best (fallible) knowledge that we have so far. In this way, the Popperian approach doesn’t really justify anything in the normal sense, and does without foundations.
Bayesian approach: Assign them probabilities (how?), try to find relevant evidence to update the probabilities (this depends on more assumptions), ignore that whenever you increase the probability of liberalism (say) you should also be increasing the probability of infinitely many other theories which made the same empirical assertions. Halt when—I don’t know. Make sure the evidence you update with doesn’t have any bias by—I don’t know, it sure can’t be a random sample of all possible evidence.
No doubt my Bayesian approach was unfair. Please correct it and add more specific details (e.g. what prior probability does liberalism have, what is some evidence to let us update that, what is the new probability, etc...)
PS is it just me or is it difficult to navigate long discussions and to find new nested posts? And I wasn’t able to find a way to get email notification of replies.
I’m beginning to see where the problem in this debate is coming from.
Bayesian humans don’t always assign actual probabilities, I almost never do. What we do in practice is vaguely similar to your Popperian approach.
The main difference is that we do thought experiments about Ideal Bayesians, strange beings with the power of logical omniscience (which gets them round the problem of Solomonoff Induction being uncomputable), and we see which types of reasoning might be convincing to them, and use this a standard for which types are legitimate.
Even this might in practice be questioned; if someone showed me a thought experiment in which an ideal Bayesian systematically arrived at worse beliefs than some competitor I might stop being a Bayesian. I can't tell you what I would use as a standard in this case, since if I could predict that theory X would turn out to be better than Bayesianism I would already be an X theorist.
Popperian reasoning, on the other hand, appears to use human intuition as its standard. The conjectures he makes ultimately come from his own head, and inevitably they will be things that seem intuitively plausible to him. It is also his own head which does the job of evaluating which criticisms are plausible. He may bootstrap himself up into something that looks more rigorous, but ultimately if his intuitions are wrong he’s unlikely to recover from it. The intuitions may not be unquestioned but they might as well be for all the chance he has of getting away from their flaws.
Cognitive science tells us that our intuitions are often wrong. In extreme cases they contradict logic itself, in ways that we rarely notice. Thus they need to be improved upon, but to improve upon them we need a standard to judge them by, something where we can say "I know this heuristic is a cognitive bias because it tells us Y when the correct answer is in fact X". A good example of this is conjunction bias: conjunctions often seem more plausible to human intuition than the individual statements they are built from, but they are never more likely to be correct, and we know this through probability theory.
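The one-line probability-theory fact behind conjunction bias, using the product rule quoted earlier in this discussion:

$$P(A \wedge B) = P(A)\,P(B|A) \;\le\; P(A), \qquad \text{since } P(B|A) \le 1.$$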
So here's how a human Bayesian might look; this approach only reflects the level of Bayesian strength I currently have, and can definitely be improved upon.
We wouldn’t think in terms of Liberalism and Socialism, both of them are package deals containing many different epistemic beliefs and prescriptive advice. Conjunction bias might fool you into thinking that one of them is probably right, but in fact both are astonishingly unlikely.
We hold off on proposing solutions (a practice scientifically shown to lead to better solutions) and instead just discuss the problem. We clearly frame exactly what our values are in this situation, possibly in the form of a precisely delineated utility function and possibly not, so we know what we are trying to achieve.
We attempt to get our facts straight. Each fact is individually analysed, to see whether we have enough evidence to overcome its complexity. This process continues permanently, every statement is evaluated.
We then suggest policies which seem likely to satisfy our values, and calculate which one is likely to do so best.
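As a sketch of that last step, here is a toy expected-utility calculation; the policy names, probabilities and utilities are all invented:

```python
# Expected utility of a policy = probability-weighted average of the
# utilities of its possible outcomes; pick the policy that maximises it.
policies = {
    "policy_a": [(0.7, 10.0), (0.3, -5.0)],   # (probability, utility) pairs
    "policy_b": [(0.5, 20.0), (0.5, -15.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for name, outcomes in policies.items():
    print(name, expected_utility(outcomes))  # policy_a: 5.5, policy_b: 2.5

best = max(policies, key=lambda name: expected_utility(policies[name]))
print("choose:", best)
```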
I'm not sure there's actually a difference between the two approaches; ultimately I only arrived at Bayesianism through my intuitions as well, so there is no difference at the foundations. Bayesianism is just Popperianism done better.
PS there is a little picture of an envelope next to your name and karma score in the right hand corner. It turns red when one of your comments has a reply. Click on it to see the most recent replies to your comments.
No. It does not have a fixed standard. Fixed standards are part of the justificationist attitude which is a mistake which leads to problems such as regress. Justification isn’t possible and the idea of seeking it must be dropped.
Instead, the standard should use our current knowledge (the starting point isn’t very important) and then change as people find mistakes in it (no matter what standard we use for now, we should expect it to have many mistakes to improve later).
Popperian epistemology has no standard for conjectures. The flexible, tentative standard is for criticism, not conjecture.
The “work”—the sorting of good ideas from bad—is all done by criticism and not by rules for how to create ideas in the first place.
You imply that people are parochial and biased and thus stuck. First, note the problems you bring up here are for all epistemologies to deal with. Having a standard you tell everyone to follow does not solve them. Second, people can explain their methods of criticism and theory evaluation to other people and get feedback. We aren’t alone in this. Third, some ways (e.g. having less bias) as a matter of fact work better than others, so people can get feedback from reality when they are doing it right, plus it makes their life better (incentive). More could be said. Tell me if you think it needs more (why?).
“I know this heuristic is a cognitive bias because it tells us Y when the correct answer is in fact X”
I think by "know" here you are referring to the justified, true belief theory of knowledge. And you are expecting that the authority or certainty of objective knowledge will defeat bias. This is a mistake. Like it or not, we cannot ever have knowledge of that type (e.g. b/c justification attempts lead to regress). What we can have is fallible, conjectural knowledge. This isn't bad; it works fine; it doesn't devolve into everyone believing their bias.
Liberalism is not a package by accident. It is a collection of ideas around one theme. They are all related and fit together. They are less good in isolation—e.g. if you take away one idea you’ll find that now one of the other ideas has an unsolved and unaddressed problem. It is sometimes interesting to consider the ideas individually but to a significant extent they all are correct or incorrect as a group.
The way I'm seeing it is that most of the time you (and everyone else) do something roughly similar to what Popper said to do. This isn't a surprise b/c most people do learn stuff and that is the only method possible of creating any knowledge. But when you start using Bayesian philosophy more directly, by e.g. explicitly assigning and updating probabilities to try to settle non-probabilistic issues (like moral issues), then you start making mistakes. You say you don't do that very often. OK. But there's other more subtle ones. One is what Popper called The Myth of the Framework, where you suggest that people with different frameworks (initial biases) will both be stuck on thinking that what seems right to them (now) is correct and won't change. And you suggest the way past this is, basically, authoritative declarations where you put someone's biases against Truth so he has no choice but to recant. This is a mistake!
PS wow that inbox page is helpful… :-)
To some extent our thought processes can certainly improve, however there is no guarantee of this, let me give an example to illustrate:
Alice is an inductive thinker; in general she believes that if something has happened often in the past it is more likely to happen in the future. She does not treat this as an absolute, it is only probabilistic, and it does not work in certain specific situations (such as pulling beads out of a jar with 5 red and 5 blue beads), but she used induction to discover which situations those were. She is not particularly worried that induction might be wrong; after all, it has almost always worked in the past.
Bob is an anti-inductive thinker; he believes that the more often something happens, the less likely it is to happen in the future. To him, the universe is like a giant bag of beads, and the more something happens the more depleted the universe's supply of it becomes. He also concedes that anti-induction is merely probabilistic, and there are certain situations (the bag of beads example) where it has already worked a few times so he doesn't think it's very likely to work now. He isn't particularly worried that he might be wrong; anti-induction has almost never worked for him before, so he must be set up for the winning streak of a lifetime.
Ultimately, neither will ever be convinced of the other’s viewpoint. If Alice conjectures anti-induction then she will immediately have a knock-down criticism, and vice versa for Bob and Induction. One of them has an irreversibly flawed starting point.
Like it or not, you, me, Popper and every other human is an Alice. If you don’t believe me, just ask which of the following criticisms seems more logically appealing to you:
“Socialism has never worked in the past, every socialist state has turned into a nightmarish tyranny, so this country shouldn’t become socialist”
“Liberalism has usually worked in the past, most liberal democracies are wealthy and have the highest standards of living in human history, so this country shouldn’t become liberal”
This might be correct, but there is a heavy burden of proof to show it. Liberalism and Socialism are two philosophies out of thousands (maybe millions) of possibilities. This means that you need huge amounts of evidence to distinguish the two of them from the crowd and comparatively little evidence to distinguish one from the other.
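A rough way to quantify that asymmetry (the counts are purely for scale): singling out one hypothesis among N initially comparable candidates takes about log2(N) bits of evidence, while choosing between the last two takes a single bit:

$$\log_2 10^{6} \approx 20 \ \text{bits} \qquad \text{vs.} \qquad \log_2 2 = 1 \ \text{bit}.$$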
That is a recipe for disaster. There are too many possible conjectures, we cannot consider them all, we need some way to prioritise some over others. If you do not specify a way then people will just do so according to personal preference.
As I see it, Popperian reasoning is pretty much the way humans reason naturally, and you only have to look at any modern political debate to see why that’s a problem.
Yes, there is no guarantee. One doesn’t need a guarantee for something to happen. And one can’t have guarantees about anything, ever. So the request for guarantees is itself a mistake.
The sketches you give of Bob and Alice are not like real people. They are simplified and superficial, and people like that could not function in day-to-day life. The situation with normal people is different. No everyday person has an irreversibly flawed starting point.
The argument for this is not short and simple, but I can give it. First I’d like to get clear what it means, and why we would be discussing it. Would you agree that if my statement here is correct then Popper is substantially right about epistemology? Would you concede? If not, what would you make of it?
That is a misconception. One of its prominent advocates was Hume. We do not dispute things like this out of ignorance, out of never hearing it before. One of the many problems with it is that people can’t be like Alice because there is no method of induction—it is a myth that one could possibly do induction because induction doesn’t describe a procedure a person could do. Induction has no set of instructions to follow to offer.
That may sound strange to you. You may think it offers a procedure like:
1) gather data 2) generalize/extrapolate (induce) a conclusion from the data 3) the conclusion is probably right, with some exceptions
The problem is step 2, which does not say how to extrapolate a conclusion from a set of data. There are infinitely many conclusions consistent with any finite data set. So the entire procedure rests on having a method of choosing between them. All proposals made for this either don't work or are vague. The one I would guess you favor is Occam's Razor—pick the simplest one. This is both vague (what are the precise guidelines for deciding what is simpler?) and wrong (under many interpretations; for example, because it might reject all explanatory theories b/c omitting the explanation is simpler).
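To make "infinitely many conclusions" concrete, a minimal sketch (the data points are invented):

```python
# Both rules below agree exactly on the observed data yet disagree about
# every unobserved point; adding any multiple of x(x-1)(x-2) yields
# infinitely many more rules with the same perfect fit.
data = [(0, 0), (1, 1), (2, 2)]

def rule_a(x):  # "the pattern is y = x"
    return x

def rule_b(x):  # identical on the data, wildly different elsewhere
    return x + x * (x - 1) * (x - 2)

assert all(rule_a(x) == y and rule_b(x) == y for x, y in data)
print(rule_a(3), rule_b(3))  # 3 vs 9: the data alone cannot decide
```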
Another issue is how one thinks about things he has no past experience about. Induction does not answer that. Yet people do it.
I think they are both terrible arguments and they aren’t how I think about the issue.
The “burden of proof” concept is a justificationist mistake. Ideas cannot be proven (which violates fallibility) and they can’t be positively shown to be true. You are judging Popperian ideas by standards which Popper rejected which is a mistake.
But it works in practice. The reason it doesn’t turn into a disaster is people want to find the truth. They aren’t stopped from making a mess of things by authoritative rules but by their own choices because they have some understanding of what will and won’t work.
The authority based approach is a mistake in many ways. For example, authorities can themselves be mistaken and could impose disasters on people. And people don’t always listen to authority. We don’t need to try to force people to follow some authoritative theory to make them think properly, they need to understand the issues and do it voluntarily.
Personal preferences aren’t evil, and imposing what you deem the best preference as a replacement is an anti-liberal mistake.
No. Since Aristotle, justificationism has dominated philosophy and governs the unconscious assumptions people make in debates. They do not think like Popperians or understand Popper’s philosophy (except to the extent that some of their mental processes are capable of creating knowledge, and those have to be in line with the truth of the matter about what does create knowledge).
Since I'm not familiar with the whole of Popper's position I'm not going to accept it blindly. I'm also not even certain that he's incompatible with Bayesianism.
Anyway, that fact that no human has a starting point as badly flawed as anti-induction doesn’t make Bayesianism invalid. It may well be that we are just very badly flawed, and can only get out of those flaws by taking the mathematically best approach to truth. This is Bayesianism, it has been proven in more than one way.
This is exactly why we need induction. It is usually possible to stick any future onto any past and get a consistent history; induction tells us that if we want a probable history we need to make the future and the past resemble each other.
People certainly say that. Most of them even believe it on a conscious level, but in your average discussion there is a huge amount of other stuff going on, from signalling tribal loyalty to rationalising away unpleasant conclusions. You will not wander down the correct path by chance, you must use a map and navigate.
I have no further interest in talking with you if you resort to straw men like this. I am not proposing we set up a dictatorship and kill all non-Bayesians, nor am I advocating censorship of views opposed the correct Bayesian conclusion.
All I am saying is your mind was not designed to do philosophical reasoning. It was designed to chase antelope across the savannah, lob a spear in them, drag them back home to the tribe, and come up with an eloquent explanation for why you deserve a bigger share of the meat (this last bit got the lion’s share of the processing power).
Your brain is not well suited to abstract reasoning, it is a fortunate coincidence that you are capable of it at all. Hopefully, you are lucky enough to have a starting point which is not irreversibly flawed, and you may be able to self improve, but this should be in the direction of realising that you run on corrupt hardware, distrusting your own thoughts, and forcing them to follow rigorous rules. Which rules? The ones that have been mathematically proven to be the best seem like a good starting point.
(The above is not intended as a personal attack, it is equally true of everyone)
I did not say it makes Bayesianism invalid. I said it doesn’t make Popperism invalid or require epistemological pessimism. You were making myth of the framework arguments against Popper’s view. My comments on those were not intended to refute Bayesianism itself.
That is a mistake and Popper’s approach is superior.
Part 1: It is a mistake because the future does not resemble the past except in some vacuous senses. Why? Because stuff changes. For example an object in motion moves to a different place in the future. And human societies invent new technologies.
It is always the case that some things resemble the past and some don’t. And the guideline that “the future resembles the past” gives no guidance whatsoever in figuring out which are which.
Popper's approach is to improve our knowledge piecemeal by criticizing mistakes. The primary criticisms of this approach are that it is incapable of offering guarantees, authority, justification, a way to force people to go against their biases, etc. These criticisms are mistaken: no viable theory offers what they want. Setting aside those objections—that Popper doesn't meet standards too high for anything to meet—it works and is how we make progress.
Regarding people wanting to find the truth, indeed they don’t always. Sometimes they don’t learn. Telling them they should be Bayesians won’t change that either. What can change it is sorting out the mess of their psychology enough to figure out some advice they can use. BTW the basic problem you refer to is static memes, the theory of which David Deutsch explains in his new (Popperian) book The Beginning of Infinity.
Please calm down. I am trying my best to explain clearly. If I think that some of your ideas have nasty consequences that doesn’t mean I’m trying to insult you. It could be the case that some of your ideas actually do have nasty consequences of which you are unaware, and that by pointing out some of the ways your ideas relate to some ideas you consciously deem bad, you may learn better.
All justificationist epistemologies have connections to authority, and authority has nasty connections to politics. You hold a justificationist epistemology. When it comes down to it, justification generally consists of authority. And no amount of carefully deciding what is the right thing to set up as that authority changes that.
This connects to one of Popper's political insights, which is that most political theories focus on the problem "Who should rule?" (or: what policies should rule?). This question is a mistake which begs for an authoritarian answer. The right question is a fallibilist one: how can we set up political institutions that help us find and fix errors?
Getting back to epistemology, when you ask questions like, “What is the correct criterion for induction to use in step 2 to differentiate between the infinity of theories?” that is a bad question which begs for an authoritarian answer.
My mind is a universal knowledge creator. What design could be better? I agree with you that it wasn’t designed for this in the sense that evolution doesn’t have intentions, but I don’t regard that as relevant.
Evolutionary psychology contains mistakes. I think discussion of universality is a way to skip past most of them (when universality is accepted, they become pretty irrelevant).
I’d urge you to read The Beginning of Infinity by David Deutsch which refutes this. I can give the arguments but I think reading it would be more efficient and we have enough topics going already.
See! I told you the authoritarian attitude was there!
And there is no mathematical proof of Bayesian epistemology. Bayes’ theorem itself is a bit of math/logic which everyone accepts (including Popper of course). But Bayesian epistemology is an application of it to certain philosophical questions, which leaves the domain of math/logic, and there is no proof that application is correct.
I know. My comments weren’t either.
The object in motion moves according to the same laws in both the future and the past, in this sense the future resembles the past. You are right that the future does not resemble the past in all ways, but the ways in which it does themselves remain constant over time. Induction doesn’t apply in all cases but we can use induction to determine which cases it applies in and which it doesn’t. If this looks circular that’s because it is, but it works.
As far as Bayesianism is concerned this is a straw man. Most Bayesians don’t offer any guarantees in the sense of absolute certainty at all.
No Bayesian has ever proposed setting up some kind of Bayesian dictatorship. As far as I can tell the only governmental proposal based on Bayesianism thus far is Hanson’s futarchy, which could hardly be further from Authoritarianism.
You misunderstand me. What I meant was that as a Bayesian I force my own thoughts to follow certain rules. I don’t force other people to do so. You are arguing from a superficial resemblance. Maths follows rigorous, unbreakable rules, does this mean that all mathematicians are evil fascists?
Incorrect. E.T. Jaynes book Probability Theory: The Logic of Science gives a proof in the first two chapters.
You obviously haven’t read much of the heuristics and biases program. I can’t describe it all very quickly here but I’ll just give you a taster.
Subjects asked to rank statements about a woman called Jill in order of probability of being true ranked “Jill is a feminist and a bank teller” as more probable than “Jill is a bank teller” despite this being logically impossible.
U.N. diplomats, when asked to guess the probabilities of various international events occurring in the next year, gave a higher probability to "USSR invades Poland causing complete cessation of diplomatic activities between USA and USSR" than they did to "Complete cessation of diplomatic activities between USA and USSR".
Subjects who are given a handful of evidence and arguments for both sides of some issue, and asked to weigh them up, will inevitably conclude that the weight of the evidence given is in favour of their side. Different subjects will interpret the same evidence to mean precisely opposite things.
Employers can have their decision about whether to hire someone changed by whether they held a warm coffee or a cold coke in the elevator prior to the meeting.
People can have their opinion on an issue like nuclear power changed by a single image of a smiley or frowny face, flashed too briefly for conscious attention.
People’s estimates of the number of countries in Africa can be changed simply by telling them a random number beforehand, even if it is explicitly stated that this number has nothing to do with the question.
Students asked to estimate a day by which they are 99% confident their project will be finished, go past this day more than half the time.
People are more likely to move to a town if the town's name and their name begin with the same letter.
There’s a lot more, most of which can’t easily be explained in bullet form. Suffice to say these are not irrelevant to thinking, they are disastrous. It takes constant effort to keep them back, because they are so insidious you will not notice when they are influencing you.
Replied here:
http://lesswrong.com/r/discussion/lw/54u/bayesian_epistemology_vs_popper/
Would you agree that this is a bit condescending and you’re basically assuming in advance that you know more than me?
I actually have read about it and disagree with it on purpose, not out of ignorance.
Does that interest you?
And on the other hand, do you know anything about universality? You made no comment about that. Given that I said the universality issue trumps the details you discuss in your bullet points, and you didn't dispute that, I'm not quite sure why you are providing these details, other than perhaps a simple assumption that I had no idea what I was talking about and that my position can be ignored without reply because, once my deep ignorance is addressed, I'll forget all about this Popperian nonsense.
Ordered, but there’s an error in the library system and I’m not sure if it will actually come or not. I don’t suppose the proof is online anywhere (I can access major article databases), or that you could give it or an outline? BTW I wonder why the proof takes two chapters. Proofs are normally fairly short things. And, well, even if it were 100 pages of straight math I don’t see why you’d break it into separate chapters.
No, I understood that. And that is authoritarian in regard to your own thoughts. It’s still a bad attitude even if you don’t do it to other people. When you force your thoughts to follow certain rules, all the epistemological problems with authority and force will plague you (do you know what those are?).
Regarding Popper, you say you don’t agree with the common criticisms of him. OK. Great. So, what are your criticisms? You didn’t say.
If there was an epistemology that didn’t endorse circular arguments, would you prefer it over yours which does?
I apologise for this, but I really don’t see how anyone could go through those studies without losing all faith in human intuition.
The text can be found online. My browser (Chrome) wouldn’t open the files but you may have more luck.
Part of the reason for length is that probability theory has a number of axioms and he has to prove them all. The reason for the two chapter split is that the first chapter is about explaining what he wants to do, why he wants to do it, and laying out his desiderata. It also contains a few digressions in case the reader isn’t familiar with one or more of the prerequisites for understanding it (propositional logic for example). All of the actual maths is in the second chapter.
I agree with the explicit meaning of this statement but you are sneaking in connotations. Let us look more closely at what ‘authoritarian’ means.
You probably mean it in the sense of centralised as opposed to decentralized control, and in that sense I will bite the bullet and say that thinking should be authoritarian.
However, the word has a number of negative connotations: corruption, lack of respect for human rights, and massive bureaucracy that stifles innovation, to name a few. None of those apply to my thinking process, so even though the term may be technically correct it is somewhat intellectually dishonest to use it; something more value-neutral like ‘centralized control’ might be better.
I will confess that I am not familiar with the whole of Popper’s viewpoint. I have never read anything written by him although after this conversation I am planning to.
Therefore I do not know whether I broadly agree or disagree with him. I did not come here to attack him; originally I was just responding to a criticism of yours that Bayesianism fails in a certain situation.
To some extent I think the approach with conjectures and criticisms may be correct, at least as a description of how thinking must get off the ground. Can you be a Popperian and conjecture Bayesianism?
The point that I do disagree with is the proposed asymmetry between confirmation and falsification. In my view neither the black swan nor the white swan proves anything with certainty, but both provide some evidence. It happens in this case that one piece of evidence is very strong while the other is very weak; in fact, they are pretty much at opposite extremes of the full spectrum of evidence encountered in the real world. This does not mean there is a difference of type.
All else being equal, yes. Other factors, such as real-world results, might take precedence. I also doubt that any philosophy could manage without either circularity or assumptions, explicit or otherwise. As I see it, when you start thinking you need something to begin your inference: logic derives truths from other truths, it cannot manufacture them out of a vacuum. So any philosophy has two choices:
Either pick a few axioms, call them self-evident, and derive everything from them. This seems to work fairly well in pure maths, but not anywhere else. I suspect the difference lies in whether the axioms really are self-evident or not.
Or start out with some procedures for thinking. All claims are judged by these, including proposals to change the procedures for thinking. Thus the procedures may self-modify and will hopefully improve. This seems better to me: as long as the starting point passes a certain threshold of accuracy, any errors are likely to get removed (the phrase used here is “the Lens that Sees its Flaws”). It is ultimately circular, since whatever the current procedures are they are justified only by themselves, but I can live with that.
Ideal Bayesians are of the former type, but they can afford to be as they are mathematically perfect beings who never make mistakes. Human Bayesians take the latter approach, which means in principle they might stop being Bayesians if they could see that for some reason it was wrong.
So I guess my answer is that if a position didn’t endorse circular arguments, I would be very worried that it is going down the unquestionable axioms route, even if it does not do so explicitly, so I would probably not prefer it.
Notice how it is only through the benefits of the second approach that I can even consider such a scenario.
I’m not trying to argue by connotation. It’s hard to avoid connotations and I think the words I’m using are accurate.
That’s not what I had in mind, but I do think that centralized control is a mistake.
I take fallibilism seriously: any idea may be wrong, and many are. Mistakes are common.
Consequently, it’s a bad idea to set something up to be in charge of your whole mind. It will have mistakes. And corrections to those mistakes, coming from ideas which aren’t in charge, will sometimes get disregarded.
Those three things are not what I had in mind. But I think the term is accurate. You yourself used the word “force”. Force is authoritarian. The reason is that the forcer is always claiming some kind of authority: I’m right, you’re wrong, and never mind further discussion, just obey.
You may find this statement strange. How can this concept apply to ideas within one mind? Doesn’t it only apply to disagreements between separate people?
But ideas are roughly autonomous portions of a mind (see: http://fallibleideas.com/ideas). They can conflict, they can force each other in the sense of one taking priority over another without the conflict being settled rationally.
Force is a fundamentally epistemological concept. Its political meanings are derivative. It is about non-truth-seeking ways of approaching disputes: one idea wins out anyway (by force) without agreement being reached.
Settling conflicts between the ideas in your mind by force is authoritarian. It is saying some ideas have authority/preference/priority/whatever, so they get their way. I reject this approach. If you don’t find a rational way to resolve a conflict between ideas, you should say you don’t know the answer, rather than picking a side because the ideas you deem the central controllers are on it and have the authority to force other ideas to conform to them.
This is a big topic, and not so easy to explain. But it is important.
Force, in the sense of solving difficulties without argument, is not what I meant when I said I force my thoughts to follow certain rules. I don’t even see how that could work; my individual ideas do not argue with each other, and if they did I would speak to a psychiatrist.
I’m afraid you are going to have to explain in more detail.
They argue notionally. They are roughly autonomous, they have different substance/assertions/content, sometimes their content contradicts, and when you have two or more conflicting ideas you have to deal with that. You (sometimes) approach the conflict by what we might call an internal argument/debate. You think of arguments for all the sides (the substance/content of the conflicting ideas), you try to think of a way to resolve the debate by figuring out the best answer, you criticize what you think may be mistakes in any of the ideas, you reject ideas you decide are mistaken, you assign probabilities to stuff and do math, perhaps, and so on.
When things go well, you reach a conclusion you deem to be an improvement. It resolves the issue. Each of the ideas which is improved on notionally acknowledges this new idea is better, rather than still conflicting. For example, if one idea was to get pizza, and one was to get sushi, and both had the supporting idea that you can’t get both because it would cost too much, or take too long, or make you fat, then you could resolve the issue by figuring out how to do it quickly, cheaply and without getting fat (smaller portions). If you came up with a new idea that does all that, none of the previously conflicting ideas would have any criticism of it, no objection to it. The conflict is resolved.
Sometimes we don’t come up with a solution that resolves all the issues cleanly. This can be due to not trying, or because it’s hard, or whatever.
Then what?
Big topic, but what not to do is use force: arbitrarily decide which side wins (often based on some kind of authority or justification), and declare it the winner even though the substance of the other side is not addressed. Don’t force some of your ideas, which have substantive unaddressed points, to defer to the ideas you put in charge (granted authority).
I certainly don’t advocate deciding arbitrarily. That would fall into the fallacy of just making sh*t up, which is the exact opposite of everything Bayes stands for. However, I don’t have to be arbitrary; most of the ideas that run up against Bayes don’t have the same level of support. In general, I’ve found that a heuristic of “pick the idea that has a mathematical proof backing it up” seems to work fairly well.
There are also sometimes other clues, rationalisations tend to have a slightly different ‘feel’ to them if you introspect closely (in my experience at any rate), and when the ideas going up against Bayes seem to include a disproportionately high number of rationalisations, I start to notice a pattern.
I also disagree about ideas being autonomous. Ideas are entangled with each other in complex webs of mutual support and anti-support.
Did you read my link? Where did the argument about approximately autonomous ideas go wrong?
Well this changes the topic. But OK. How do you decide what has support? What is support and how does it differ from consistency?
I did. To see what is wrong with it let me give an analogy. Cars have both engines and tyres. It is possible to replace the tyres without replacing the engine. Thus you will find many cars with very different tyres but identical engines, and many different engines but identical tyres. This does not mean that tyres are autonomous and would work fine without engines.
Well, mathematical proofs are support, and they are not at all the same as consistency. In general, however, if some random idea pops into my head, and I spot that in fact it only occurred to me as a result of conjunction bias, I am not going to say “well, it would be unfair of me to reject this just because it contradicts probability theory, so I must reject both it and probability theory until I can find a superior compromise position”. Frankly, that would be stupid.
Regarding “autonomous”: you know we said “approximately autonomous”, right? And that, for various purposes, tires are approximately autonomous, which means things like they can be replaced individually without touching the engine or knowing what type of engine it is. And a tire could be taken off one car and put on another.
No one was saying it’d function in isolation. Just like a person being autonomous doesn’t mean they would do well in isolation (e.g. in deep space). Just because people do need to be in appropriate environments to function doesn’t make “people are approximately autonomous” meaningless or false.
First, you have not answered my question. What is support? The general purpose definition. I want you to specify how it is determined whether X supports Y, and also what that means (why should we care? what good is “support”?).
Second, let’s be more precise. If a person writes what he thinks to be a proof, what is supported? What he thinks is the conclusion of what he thinks is a proof, and nothing else? An infinite set of things which have wildly different properties? Something else?
You argue from ideas being approximately autonomous to the claims that words like ‘authoritarian’ apply to them and that they approximately debate each other, but this is not true in the car analogy. Is it ‘authoritarian’ that the brakes, accelerator and steering wheel have total control of the car, while the tyres and engine get no say, or is it just efficient?
I didn’t give a loose argument by analogy. You’re attacking a simplified straw man. I explained stuff at some length and you haven’t engaged here with all of what I said. e.g. your comments on “authoritarian” here do not mention or discuss anything I said about that. You also don’t mention force.
I haven’t got any faith in human intuition. That’s not what I said.
OK fair enough.
Oh the book is here: http://bayes.wustl.edu/etj/prob/book.pdf
That was easy.
I don’t know the etiquette or format of this website well or how it works. When I have comments on the book, would it make sense to start a new thread or post somewhere/somehow?
You can conjecture Bayes’ theorem. You can also conjecture all the rest; however, some things (such as induction, justificationism, foundationalism) contradict Popper’s epistemology. So at least one of them has a mistake to fix. Fixing that may or may not lead to drastic changes, abandonment of the main ideas, etc.
That is a purely logical point Popper used to criticize some mistaken ideas. Are you disputing the logic? If you’re merely disputing the premises, it doesn’t really matter because its purpose is to criticize people who use those premises on their own terms.
Agreed.
I think you are claiming that seeing a white swan is positive support for the assertion that all swans are white. (If not, please clarify). If so, this gets into important issues. Popper disputed the idea of positive support. The criticism of the concept begins by considering: what is support? And in particular, what is the difference between “X supports Y” and “X is consistent with Y”?
Questioning this was one of Popper’s insights. The reason most people doubt doing without it is possible is that, since Aristotle, pretty much all epistemology has taken it for granted. These ideas seeped into our culture and became common sense.
What’s weird about the situation is that most people are so attached to them that they are willing to accept circular arguments, arbitrary foundations, or other things like that. Those are OK! But that Popper might have a point is hard to swallow. I find circular arguments rather more doubtful than doing without what Popperians refer to broadly as “justification”. I think it’s amazing that people run into circularity or other similar problems and still don’t want to rethink all their premises. (No offense intended. Everyone has biases, and if we try to overcome them we can become less wrong about some matters, and stating guesses at what might be biases can help with that.)
All the circularity and foundations stem from seeking to justify ideas. To show they are correct. Popper’s epistemology is different: ideas never have any positive support, confirmation, verification, justification, high probability, etc… So how do we act? How do we decide which idea is better than the others? We can differentiate ideas by criticism. When we see a mistake in an idea, we criticize it (criticism = explaining a mistake/flaw). That refutes the idea. We should act on or use non-refuted ideas in preference over refuted ideas.
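As a toy sketch of that decision rule (my own illustration, with made-up names and data; Popperian epistemology has no formal algorithm like this):

```python
# Toy illustration of "act on non-refuted ideas"; everything here is invented.
ideas = [
    "all swans are white",
    "swan colour varies by region",
]

# Criticisms = explanations of mistakes/flaws found so far (hypothetical data).
known_criticisms = {
    "all swans are white": ["black swans were observed in Australia"],
}

# An idea is refuted if it has an outstanding criticism; otherwise it is non-refuted.
non_refuted = [idea for idea in ideas if not known_criticisms.get(idea)]
print(non_refuted)  # ['swan colour varies by region'] -- what to act on, pending new criticism
```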
That’s the very short outline, but does that make any sense?
Fully agreed. In principle, if Popper’s epistemology is of the second, self-modifying type, there would be nothing wrong with drastic changes. One could argue that something like that is exactly how I arrived at my current beliefs; I wasn’t born a Bayesian.
I can also see some ways to make induction and foundationalism easier to swallow.
A discussion post sounds about right for this; if enough people like it you might consider moving it to the main site.
This is precisely what I am saying.
The beauty of Bayes is how it answers these questions. To distinguish between the two statements we express them each in terms of probabilities.
“X is consistent with Y” is not really a Bayesian way of putting things, I can see two ways of interpreting it. One is as P(X&Y) > 0, meaning it is at least theoretically possible that both X and Y are true. The other is that P(X|Y) is reasonably large, i.e. that X is plausible if we assume Y.
“X supports Y” means P(Y|X) > P(Y): X supports Y if and only if Y becomes more plausible when we learn of X. Bayes’ theorem tells us that this is equivalent to P(X|Y) > P(X), i.e. if Y suggests that X is more likely than we would otherwise think, then X supports Y.
Suppose we make X the statement “the first swan I see today is white” and Y the statement “all swans are white”. P(X|Y) is very close to 1, while P(X|~Y) is smaller. Since P(X) is a weighted average of P(X|Y) and P(X|~Y), it follows that P(X) < P(X|Y), and hence P(Y|X) > P(Y): seeing a white swan offers support for the view that all swans are white. Very, very weak support, but support nonetheless.
(The above is not meant to be condescending, I apologise if you know all of it already).
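To put rough numbers on the swan case (the prior and likelihood values below are mine, chosen purely for illustration):

```python
# Illustrative numbers only; the prior and likelihoods are invented for the example.
p_y = 0.5               # prior P(Y): all swans are white
p_x_given_y = 1.0       # P(X|Y): if all swans are white, the first swan seen is white
p_x_given_not_y = 0.99  # P(X|~Y): even if not all swans are white, most swans seen are

# Total probability: P(X) = P(X|Y)P(Y) + P(X|~Y)P(~Y)
p_x = p_x_given_y * p_y + p_x_given_not_y * (1 - p_y)

# Bayes' theorem: P(Y|X) = P(X|Y)P(Y) / P(X)
p_y_given_x = p_x_given_y * p_y / p_x

print(p_y_given_x)  # ~0.5025 > 0.5 -- very weak support, as claimed
```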
This is a very tough bullet to bite.
One thing I don’t like about this is the whole ‘one strike and you’re out’ feel of it. It’s very boolean; the real world isn’t usually so crisp. Even a correct theory will sometimes have some evidence pointing against it, and in policy debates almost every suggestion will have some kind of downside.
There is also the worry that there could be more than one non-refuted idea, which makes it a bit difficult to make decisions. Bayesianism, on the other hand, when combined with expected utility theory, is perfect for making decisions.
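For example, here is a minimal sketch of that decision step (the probabilities, options and utilities are all invented for illustration):

```python
# Minimal expected-utility sketch; all numbers are invented.
p_rain = 0.3  # posterior belief after Bayesian updating on the evidence

# Utility of each action under each state of the world.
utility = {
    "take umbrella":  {"rain": 5,   "dry": -1},  # mild nuisance if it stays dry
    "leave umbrella": {"rain": -10, "dry": 0},
}

def expected_utility(action):
    u = utility[action]
    return p_rain * u["rain"] + (1 - p_rain) * u["dry"]

best = max(utility, key=expected_utility)
print(best)  # 'take umbrella': EU = 0.8 vs -3.0
```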
When replying it said “comment too long” so I posted my reply here:
http://lesswrong.com/r/discussion/lw/552/reply_to_benelliott_about_popper_issues/
Step 1 is problematic also, as I explained in some of my comments to Tim Tyler. What should I gather data about? What kind of data? What measurements are important? How accurate? And so on.
Yes I agree. Another issue I mentioned in one of my comments here is that your data isn’t a random sample of all possible data, so what do you do about bias? (I mean bias in the data, not bias in the person.)
Step 3 is also problematic (as it explicitly acknowledges).
Finding it difficult also.
Have you found: http://lesswrong.com/message/inbox/
I don’t think I have the grasp on these subjects to hang in this, but this is great. -- I hope someone else comments in a more detailed manner.
In Popperian analysis, who ends the discussion of “what’s better?” You seem to have alluded to it being “whatever has no criticisms.” Is that accurate?
Why would Bayesian epistemology not be able to use the same evidence that Popperians used (e.g. the 1920 paper) and thus not require “assumptions” for new evidence? My rookie statement would be that the Bayesian has access to all the same kinds of evidence and tools that the Popperian approach does, as well as a reliable method for estimating probability outcomes.
Could you also clarify the difference between “conjecture” and “assumption”? Is it just that you’re saying that a conjecture is just a starting point for departure, whereas an assumption is assumed to be true?
An assumption seems both 1) justified if it has supporting evidence to make it highly likely as true to the best of our knowledge and 2) able to be just as “revisable” given counter-evidence as a “conjecture.”
Are you thinking that a Bayesian “assumption” is set in stone or that it could not be updated/modified if new evidence came along?
Lastly, what are “conjectures” based on? Are they random? If not, it would seem that they must be supported by at least some kind of assumptions to even have a reason for being conjectured in the first place. I think of them as “best guesses” and don’t see that as wildly different from the assumptions needed to get off the ground in any other analysis method.
Yes, “no criticisms” is accurate. There are issues of what to do when you have a number of theories remaining which isn’t exactly one which I didn’t go into.
It’s not a matter of “who”—learning is a cooperative thing and people can use their own individual judgment. In a free society it’s OK if they don’t agree (for now—there’s always hope for later) about almost all topics.
I don’t regard the 1920 paper as evidence. It contains explanations and arguments. By “evidence” I normally mean “empirical evidence”—i.e. observation data. Is that not what you guys mean? There is some relevant evidence for liberalism vs socialism (e.g. the USSR’s empirical failure) but I don’t regard this evidence as crucial, and I don’t think that if you were to rely only on it that would work well (e.g. people could say the USSR did it wrong and if they did something a bit different, which has never been tried, then it would work. And the evidence could not refute that.)
BTW in the Popperian approach, the role of evidence is purely in criticism (and inspiration for ideas, which has no formal rules or anything). This is in contrast to inductive approaches (in general) which attempt to positively support/confirm/whatever theories with the weight of evidence.
If the Bayesian approach uses arguments as a type of evidence, and updates probabilities accordingly, how is that done? How is it decided which arguments win, and how much they win by? One aspect of the criticism approach is theories do not have probabilities but only two statuses: they are refuted or non-refuted. There’s never an issue of judging how strong an argument is (how do you do that?).
If you try to follow along with the Popperian approach too closely (to claim to have all the same tools) one objection will be that I don’t see Bayesian literature acknowledging Popper’s tools as valuable, talking about how to use them, etc… I will suspect that you aren’t in line with the Bayesian tradition. You might be improving it, but good luck convincing e.g. Yudkowsky of that.
The difference between a conjecture and an assumption is just as you say: conjectures aren’t assumed true but are open to criticism and debate.
I think the word “assumption” means not revisable (normally assumptions are made in a particular context, e.g. you assume X for the purposes of a particular debate which means you don’t question it. But you could have a different debate later and question it.). But I didn’t think Bayesianism made any assumptions except for its foundational ones. I don’t mind if you want to use the word a different way.
Regarding justification by supporting evidence, that is a very problematic concept which Popper criticized. The starting place of the criticism is to ask what “support” means. And in particular, what is the difference between support and mere consistency (non-contradiction)?
Conjectures are not based on anything and not supported. They are whatever you care to imagine. It’s good to have reasons for conjectures but there are no rules about what the reasons should be, and conjectures are never rejected because of the reason they were conjectured (nor because of the source of the conjecture), only because of criticisms of their substance. If someone makes too many poor conjectures and annoys people, it’s possible to criticize his methodology in order to help him. Popperian epistemology does not have any built-in guidelines for conjecturing on which it depends; they can be changed and violated as people see fit. I would rather call them “guesses” than “best guesses” because it’s often a good idea for one person to make several conjectures, including ones he suspects are mistaken, in order to learn more about them. It should not be each person puts forward his best theory and they face off, but everyone puts forward all the theories he thinks may be interesting and then everyone cooperates in criticizing all of them.
Edit: BTW I use the words “theory” and “idea” interchangeably. I do not mean by “theory” ideas with a certain amount of status/justification. I think “idea” is the better word but I frequently forget to use it (because Popper and Deutsch say “theory” all the time and I got used to it).