Popperians say scientific ideas should be (empirically) falsifiable. Philosophy isn’t empirically falsifiable; it’s addressed by critical arguments.
I let you take substantial control over conversation flow. You took it here – you overestimated your knowledge of Popper and were totally wrong. You do not seem to have learned from this error.
You didn’t answer my question about your interest, and you seem totally lost as to what we disagree about. You’re still, in response to “your questions are based on assuming aspects of your philosophy are true”, making the same assumptions while denying it. You don’t have anything like a sense of what we disagree about, but you’re trying to lead the conversation anyway. Your questions are in service of lines of argument, not finding out what I think – and the lines of argument don’t make sense because you don’t know what to target.
But you don’t want references, and I don’t want to rewrite or copy/paste my blog post which is itself summarizing some information from books that would be better to look at directly.
Do you have a website with information I could skim to find disagreements? Earlier, IIRC, I tried to ask about some of your important beliefs but you didn’t put forward any positions to debate.
Is there any written philosophy material you think is correct, and would be super interested to learn contains mistakes? Or do you just think the ideas in your head are correct but they aren’t written down, and you’d like to learn about mistakes in those? Or do you think your own ideas have some flaws, but are pretty good, so if I pointed out a couple mistakes it might not make much difference to you?
What do you want to get out of this discussion? Coming to agree about some major philosophy issues would be a big effort. Under what sort of circumstances do you expect you would stop discussing? Do you have a discussion methodology which is written down anywhere? I do. http://curi.us/1898-paths-forward-short-summary
I have a philosophy I think is non-refuted. I don’t know of any mistakes and would be happy to find out. It’s also written down in public to expose it to scrutiny.
Your philosophy is advertised as “All problems can be solved by knowing how. I tell you how.”
This looks to me like it crosses the demarcation threshold. Would you insist that there are no possible empirical observations which can invalidate your advice?
Do you have a website with information I could skim to find disagreements? … Is there any written philosophy material you think is correct, and would be super interested to learn contains mistakes?
You asked before. Still nope and nope.
Under what sort of circumstances do you expect you would stop discussing?
When you stop being interesting.
I don’t know of any mistakes and would be happy to find out.
You can bring up observations in a discussion of a piece of advice, but as always the role of the evidence is governed by arguments stating its role. And the primary issue here is argument.
All problems can be solved by knowing how.
This is a theory claim.
I tell you how.
This is a claim that I have substantial problem solving knowledge for sale, but is not intended to indicate I already know full solutions to all problems. It’s sufficiently non-specific that I don’t think it’s a very good target for discussion.
And are you really unfamiliar with this common English word? Do you know what being wrong is? Less wrong? Error? Flaw?
Are you trying to raise some sort of philosophical issue? If so, please state it directly.
You asked before. Still nope and nope.
What about the rest?
Or do you just think the ideas in your head are correct but they aren’t written down, and you’d like to learn about mistakes in those? Or do you think your own ideas have some flaws, but are pretty good, so if I pointed out a couple mistakes it might not make much difference to you?
And are you really unfamiliar with this common English word?
Oh, boy. We are having fundamental philosophical disagreements and you think dictionary definitions of things like “wrong” are adequate?
You say that philosophy is not falsifiable. OK, let’s assume that for the time being. So can we apply the term “wrong” to some philosophies and “right” to others? On which basis? You will say “critical arguments”. What is a critical argument? Within which framework are you going to evaluate them? You want “mistakes” pointed out to you. What kind of things will you accept as a “mistake” and what kind of things will you accept as indicating that it’s valid?
I disagree that definitions are not all that important.
do you just think the ideas in your head are correct
Well, obviously I think they are correct to some degree (remember, for me “truth” is not a binary category).
and you’d like to learn about mistakes in those?
See above: what is a “mistake”, given that we’re deliberately ignoring empirical testing?
Things I’d like to learn are more like new to me frameworks, angles of view, reinterpretations of known facts. To use Scott Alexander’s terminology, I want to notice concept-shaped holes.
Criteria of mistakes are themselves open to discussion. Some typical important ways to point out mistakes are:
1) internal contradictions, logical errors
2) non sequiturs
3) a reason X wouldn’t solve problem Y, even though X is being offered as a solution to Y
4) an idea assumes/uses and also contradicts some context (e.g. background knowledge)
5) pointing out a contradiction with evidence
6) pointing out ambiguity, vagueness
there are many other types of critical arguments. for example, sometimes an argument, X, claims to refute Y, but X, if correct, refutes everything (or everything in a relevant category). it’s a generic argument that could equally well be used on everything, and is being selectively applied to Y. that’s a criticism of X’s capacity to criticize Y.
Ideas solve problems (put another way, they have purposes), with “problem” understood very broadly (including answering questions, explaining an issue, accomplishing a goal). A mistake is something which prevents an idea from solving a problem it’s intended to solve (it fails to work for its purpose).
By correcting mistakes we get better ideas. We fix issues preventing our problems from being solved and our purposes achieved (including the purpose of correctly intellectually understanding philosophy, science, etc). We should prefer non-refuted ideas (no known mistakes) to refuted ideas (known mistakes).
Ways to point out mistakes? Then the question remains: what is a “mistake”? A finger pointing at the moon is not the moon.
Your (4) is the same thing as (1) -- or (5), take your pick. Your (5) is forbidden here—remember, we are deliberately keeping to one side of the demarcation threshold—no empirical evidence or empirical testing allowed. (6) is quite curious—is being vague a “mistake”?
Ideas solve problems
In the real world? Then they are falsifiable and we can bring empirical evidence to bear. You were very anxious to avoid that.
By correcting mistakes we get better ideas
Looks like a non sequitur: generating new (and better) ideas is quite distinct from fixing the errors of old ideas—similar to the difference between writing a new program and debugging an existing one.
We should prefer non-refuted ideas (no known mistakes) to refuted ideas (known mistakes).
I would argue that we should prefer ideas which successfully solve problems to ideas which solve them less successfully (demarcation! science! :-D)
Ways to point out mistakes? Then the question remains: what is a “mistake”? A finger pointing at the moon is not the moon.
I actually wrote a sentence
A mistake is [...]
Do you not read ahead before replying, and don’t go back and edit either?
(6) is quite curious—is being vague a “mistake”?
In general, yes. It technically depends on context (like the problem specification details). Normally, e.g., the context of answering a question is that you want an adequately clear answer, so an inadequately clear answer fails.
In the real world? Then they are falsifiable and we can bring empirical evidence to bear. You were very anxious to avoid that.
Ideas solve intellectual problems, and some of those solutions can be used to solve problems we care about in the real world by acting according to a solution. Some problems (e.g. in math) are more abstract and it’s unclear what to use the solutions for.
I have nothing against the real world. But even when the real world is relevant, you still have to make an argument saying how to use some evidence in the intellectual debate. The intellectual debate is always primary. You can’t just directly look at the world and know the answers, though sometimes the arguments involved with getting from evidence X to rejecting idea Y are sufficiently standard that people don’t write them out.
You are welcome to mention some evidence in a criticism of my philosophy claims if you think you see a way to relevantly do that.
Looks like a non sequitur: generating new (and better) ideas is quite distinct from fixing the errors of old ideas—similar to the difference between writing a new program and debugging an existing one.
You have idea X (plus context) to solve problem P. You find a mistake, M. You come up with a new idea to solve P which doesn’t have M. Whether it’s a slightly adjusted version of X (X2) or a very different idea that solves the same problem is kinda immaterial. Both are acceptable. Methodologically, the standard recommendation is to look for X2 first.
I would argue that we should prefer ideas which successfully solve problems to ideas which solve them less successfully (demarcation! science! :-D)
I consider solving a problem to be binary – X does or doesn’t solve P. And I consider criticisms to be binary – either they are decisive (says why the idea doesn’t work) or not.
Problems without success/failure criteria I consider inadequately specified. Informally we may get away with that, but when trying to be precise and running into difficult issues then we need to specify our problems better.
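For concreteness, a minimal sketch of a problem with an explicit success/failure criterion, so that “does X solve P?” comes out as a plain yes-or-no; the commute problem and the 40-minute cutoff are invented placeholders:

```python
# Toy sketch: a problem with an explicit success criterion, so whether a
# candidate solves it is a binary judgement. The 40-minute cutoff and the
# candidate routes are invented placeholders.

def solves_problem(route_minutes: float) -> bool:
    """Problem P: find a commute route that takes at most 40 minutes."""
    return route_minutes <= 40.0

candidates = {"highway": 35.0, "back roads": 52.0, "train": 40.0}

for name, minutes in candidates.items():
    verdict = "solves P" if solves_problem(minutes) else "fails P"
    print(f"{name}: {minutes:.0f} min -> {verdict}")
```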
That’s a curious definition of a “mistake”. It’s very… instrumental and local. A “mistake” is a function of both an idea and a problem—therefore, it seems, if you didn’t specify a particular problem you can’t talk about ideas being mistaken. And yet your examples—e.g. an internal logical inconsistency—don’t seem to require a problem to demonstrate that an idea is broken.
I have nothing against the real world
Oh, I’m sure it’s relieved to hear that
But even when the real world is relevant, you still have to make an argument saying how to use some evidence in the intellectual debate.
Why is that?
The intellectual debate is always primary.
That’s an interesting claim. An intellectual debate is what’s happening inside your head. You are saying that it’s primary compared to the objective reality outside of your head. Am I understanding you correctly?
I consider solving a problem to be binary – X does or doesn’t solve P.
Only if a problem has a binary outcome. Not all problems do.
And I consider criticisms to be binary – either they are decisive (says why the idea doesn’t work) or not.
A black-and-white vision seems unnecessarily limiting.
Consider standard statistics. Let’s say we’re trying to figure out the influence of X on Y (where both are real values). First, there is no sharp boundary between a solution and a not-solution. You can build a variety of statistical models which will make different trade-offs and produce different results. There is no natural dividing line between a slightly worse model which would be a not-solution and a slightly better model which will be a solution.
Moreover, since these different models are making trade-offs, you can criticise these trade-offs, but generally speaking it’s difficult to say that this one is outright wrong and that one is clearly right. There’s a reason they’re called trade-offs.
Typically at the end you pick a statistical model or an ensemble of models, but the question “is the problem solved, yes or no?” is silly: it is solved to some extent, not fully, but it’s not at the “we have no idea” stage either.
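For concreteness, a small sketch of the situation being described, with synthetic data and two arbitrary model choices (both added purely for illustration): the models make different trade-offs and fit to different degrees, with no sharp solution/not-solution line between them:

```python
# Sketch: two models of Y as a function of X making different trade-offs.
# The data is synthetic and the model choices (degree 1 vs degree 5) are
# arbitrary; the point is that the comparison is a matter of degree.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 60)
y = 2.0 * x + 5.0 + rng.normal(0.0, 4.0, size=x.size)  # noisy, roughly linear

fit_simple = np.polyfit(x, y, deg=1)    # simpler model, higher bias
fit_flexible = np.polyfit(x, y, deg=5)  # more flexible, higher variance

def mse(coeffs: np.ndarray) -> float:
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

print("degree-1 fit MSE:", round(mse(fit_simple), 2))
print("degree-5 fit MSE:", round(mse(fit_flexible), 2))
# The flexible model scores a bit better in-sample but may generalize worse;
# neither result draws a natural line between "solution" and "not-solution".
```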
Problems without success/failure criteria I consider inadequately specified.
Life must be very inconvenient for you.
By the way, what about optimization problems? The goal is to maximize Y by manipulating X. There is no threshold, you want Y to be as large as possible. What’s the criterion for success?
That’s a curious definition of a “mistake”. It’s very… instrumental and local.
This is not local – I specified context matters (whether the context is stated as part of the problem, or specified separately, is merely a matter of terminology.)
You can’t determine whether a particular sentence is a correct or incorrect answer without knowing the context – e.g. what is it supposed to answer? The same statement can be a correct answer to one issue and an incorrect answer to a different issue. If you don’t like this, you can build the problem and the context into the statement itself, and then evaluate it in isolation.
I’m guessing the reason you consider my view on mistakes “instrumental” is because I think one has to look at the purpose of an idea instead of just the raw data. It’s because I add a philosophy layer where you don’t. So your alternative to “instrumental” is to say something like “mistakes are when ideas fail to correspond to empirical reality” – and to ignore non-empirical issues, interpretation issues, and that answers to questions need to correspond to the question which could e.g. be about a hypothetical scenario. To the extent that questions, goals, human problems, etc, are part of reality then, sure, this is all about reality. But I’m guessing we can both agree that’s a difference of perspective.
And yet your examples—e.g. an internal logical inconsistency—don’t seem to require a problem to demonstrate that an idea is broken.
Self-contradictory ideas are broken for many problems. In general, we try to criticize an idea as a solution to a range of problems, not a single one. Those criticisms are more interesting. If your criticism is too narrow, it won’t work on a slight variant of the idea. You normally want to criticize all the variants sharing a particular theme.
Self-contradictory ideas can (as far as we know) only be correct solutions to some specific types of problems, like for use in parody or as a discussion example.
But even when the real world is relevant, you still have to make an argument saying how to use some evidence in the intellectual debate.
Why is that?
Because facts are not self-explanatory. Any set of facts is open to many interpretations. (Not equally correct interpretations or anything like that, merely logically possible interpretations.) So you have to talk about your interpretation, unless the other person can guess it. And you have to talk about how your interpretation of the evidence fits into the debate – e.g. that it contradicts a particular claim – though, again, in simple cases other people may guess that without you saying it.
That’s an interesting claim. An intellectual debate is what’s happening inside your head. You are saying that it’s primary compared to the objective reality outside of your head. Am I understanding you correctly?
You may prefer to think of it as the philosophy issues are always prior to the other issues. E.g. the role of a particular piece of evidence in reaching some conclusion is governed by ideas and methodology about the role of evidence in general, an interpretation of the raw data in this case, some general epistemology about how conclusions are reached and judged, etc.
Oh, I’m sure it’s relieved to hear that
Please stop the sarcasm or tell me how/why it’s productive and non-hostile.
A black-and-white vision seems unnecessarily limiting.
it’s intentional in order to solve epistemology problems which (I claim) have no other (known) solution. And it’s not limiting because things like statistics are used in a secondary role. E.g. you can say “if the following statistical metric gives us 99% or more confidence, i will consider that an adequate solution to my problem”. (approaches like that, which use a cutoff amount to determine binary success or failure, are common in science).
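A minimal sketch of the kind of cutoff rule mentioned here (the 99% figure is from the example above; the metric values are placeholders): the statistic itself is continuous, but the verdict it feeds into is binary:

```python
# Sketch: a continuous statistic governed by a pre-chosen cutoff, so the
# final judgement ("adequate solution or not") is a yes-or-no matter.

CONFIDENCE_CUTOFF = 0.99  # the criticizable, up-front choice of threshold

def adequate_solution(confidence: float) -> bool:
    """Binary verdict: does this result meet the pre-specified cutoff?"""
    return confidence >= CONFIDENCE_CUTOFF

for confidence in (0.970, 0.991, 0.999):  # placeholder results
    print(f"confidence={confidence:.3f} -> accepted={adequate_solution(confidence)}")
```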
First, there is no sharp boundary between a solution and a not-solution.
that depends, as i said, on how the problem is specified.
in the final analysis, when it comes to human action and decision making, for any given issue you decide yes to a particular thing and no to its rivals. if you hedge, then you’re deciding yes about that particular hedge.
There is no natural dividing line between a slightly worse model which would be a not-solution and a slightly better model which will be a solution.
depends on the problem domain. e.g. in school sometimes you need an 87 on the test to pass the class, and an 86 will result in failing. so a slightly better test performance can cross a large dividing line. breakpoints like this come up all over the place, e.g. with faster casting speed in diablo 2 (when you hit 37% faster casting speed the casting animation drops by 1 frame. it doesn’t drop another frame until 55%. so gear sets totaling 40% and 45% FCR are actually equal. (not the actual numbers.)).
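A sketch of the breakpoint idea using the faster-cast-rate example (the thresholds are the discussion’s own admittedly-not-actual numbers): the stat is continuous, but only crossing a threshold changes anything:

```python
# Sketch: breakpoints map a continuous stat onto discrete tiers. The 37% and
# 55% thresholds are the illustrative (explicitly not actual) numbers above.
FCR_BREAKPOINTS = (0, 37, 55)  # percent faster cast rate

def cast_speed_tier(fcr_percent: float) -> int:
    """Return which breakpoint tier a given faster-cast-rate value reaches."""
    tier = 0
    for i, threshold in enumerate(FCR_BREAKPOINTS):
        if fcr_percent >= threshold:
            tier = i
    return tier

print(cast_speed_tier(40), cast_speed_tier(45))  # same tier: effectively equal gear
print(cast_speed_tier(36), cast_speed_tier(37))  # one point crosses a dividing line
```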
Moreover, since these different models are making trade-offs, you can criticise these trade-offs, but generally speaking it’s difficult to say that this one is outright wrong and that one is clearly right. There’s a reason they’re called trade-offs.
it may be difficult, but nevertheless you have to make a decision. the decision should itself be judged in a binary way and be non-refuted – you don’t have a criticism of making that particular decision.
By the way, what about optimization problems? The goal is to maximize Y by manipulating X. There is no threshold, you want Y to be as large as possible. What’s the criterion for success?
then do whatever maximizes it. anything with a lower score would be refuted (a mistake to do) since there’s an option which gets a higher score. since the problem is to do the thing with the best score (implicitly limited to only options you know of after allocating some amount of resources to looking for better options), second best fails to address that problem.
more typically you don’t want to maximize a single factor. i go into this at length in my yes or no philosophy.
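A sketch of the single-factor case just described, with invented options and scores: relative to the problem “do the highest-scoring thing you know of”, every lower-scoring option counts as refuted:

```python
# Sketch: maximizing a single factor over the options currently known.
# The option names and scores are invented placeholders.
known_options = {"plan_a": 12.0, "plan_b": 17.5, "plan_c": 9.3}

best = max(known_options, key=known_options.get)
refuted = [name for name in known_options if name != best]

print("chosen (highest score):", best)
print("refuted (a mistake to do, given this problem):", refuted)
```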
one has to look at the purpose of an idea instead of just the raw data
Oh, I agree. It’s just that you were very insistent about drawing the line between unfalsifiable philosophy and other empirically-falsifiable stuff and here you’re coming back into the real-life problems realm where things are definitely testable and falsifiable. I’m all for it, but there are consequences.
So you have to talk about your interpretation, unless the other person can guess it.
Sure, but that’s not an intellectual debate. If someone asks how to start a fire and I explain how you arrange kindling, get a flint and a steel, etc. there is no debate—I’m just transferring information.
the philosophy issues are always prior to the other issues
Not necessarily. If you put your hand into a fire, you will get a burn—that’s easy to learn (and small kids learn it fast). Which philosophy issues are prior to that learning?
Please stop the sarcasm
No can do. But tell you what, the fewer silly things you say, the less often you will encounter overt sarcasm :-)
in order to solve epistemology problems
Which problems can’t you solve otherwise?
for any given issue you decide yes to a particular thing and no to its rivals
There are a lot of issues with continuous (real number) decisions. Let’s say you’re deciding how much money to put into your retirement fund this year and the reasonable range is between $10K and $20K. You are not going to treat $14,999 and $15,000 as separate solutions, are you?
breakpoints like this come up all over the place
Sure they do, but not always. And your approach requires them.
the decision should itself be judged in a binary way and be non-refuted
I still don’t see the need for these rather severe limitations. You want to deal with reality as if it consists of discrete, well-delineated chunks and, well, it just doesn’t. I understand that you can impose thresholds and breakpoints any time you wish, but they are artifacts and if your method requires them, it’s a drawback.
then do whatever maximizes it
Yes, but you typically have an explore-or-exploit problem. You need to spend resources to look for a better optimum, at each point in time you have some probability of improving your maximum, but there are costs and they grow. At which point do you stop expending resources to look for a better solution?
It’s just that you were very insistent about drawing the line between unfalsifiable philosophy and other empirically-falsifiable stuff
if you have an empirical argument to make, that’s fine. but i don’t think i’m required to provide evidence for my philosophical claims. (btw i criticize the standard burden of proof idea in Yes or No Philosophy. in short, if you can’t criticize an idea then it’s non-refuted and demanding some sort of burden of proof is not a criticism since lack of proof doesn’t prevent an idea from solving a problem.)
in order to solve epistemology problems
Which problems can’t you solve otherwise?
the problem of induction. problems about how to evaluate arguments (how do you score the strength of an argument? and what difference does it really make if one scores higher than another? either something points out why a solution doesn’t work or it doesn’t. unless you specifically try to specify non-binary problems. but that doesn’t really work. you can specify a set of solutions are all equal. ok then either pick any one of them if you’re satisfied, or else solve some other more precise problem that differentiates. you can also specify that higher scoring solutions on some metric are better, but then you just pick the highest scoring one, so you get a single solution or maybe a tie again. and whether you’ve chosen a correct solution given the problem specification, or not, is binary.) and various problems about how you decide what metrics to use (the solution to that being binary arguments about what metrics to use – or in many cases don’t use a metric. metrics are overrated but useful sometimes.)
Yes, but you typically have an explore-or-exploit problem. You need to spend resources to look for a better optimum, at each point in time you have some probability of improving your maximum, but there are costs and they grow. At which point do you stop expending resources to look for a better solution?
Yes so then you guess what to do and criticize your guesses. Or, if you wish, define a metric with positive points for a higher score and negative points for resources spent (after you guess-and-criticize to figure out how to put the positive score and all the different types of resources into the same units) and then guess how to maximize that (e.g. define a metric about resources allocated to getting a higher score on the first metric, spend that much resources, and then use the highest scoring solution).
multi-factor metrics don’t work as well as people think, but are ok sometimes (but you have to make a binary judgement about whether to use a particular metric for a particular situation, or not – so the binary judgement is prior and governs the use of the metric). here’s a good article about issues with them:
https://www.newyorker.com/magazine/2011/02/14/the-order-of-things
scoring systems are overrated but are allowable in binary epistemology given that their use is governed by binary judgements (should I proceed by doing the thing that scores the highest on this metric? make critical arguments about that and make a binary judgement. so the binary judgement is prior but then things like metrics and statistics are allowable as secondary things which are sometimes quite useful.)
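A sketch of the two-layer structure described here, with invented weights and candidates: a binary, criticizable decision about whether to use a particular combined metric at all, and only then a maximization over it:

```python
# Sketch: a multi-factor metric (score minus a resource penalty) whose use is
# governed by a prior binary judgement. Weights and candidates are invented.

USE_THIS_METRIC = True   # the binary, criticizable decision to use this metric
RESOURCE_WEIGHT = 2.0    # guessed conversion of resources into score units

def combined_metric(score: float, resources_spent: float) -> float:
    """Positive points for the score, negative points for resources spent."""
    return score - RESOURCE_WEIGHT * resources_spent

candidates = {
    "cheap_option":   {"score": 10.0, "resources": 1.0},
    "premium_option": {"score": 18.0, "resources": 4.0},
}

if USE_THIS_METRIC:
    best = max(
        candidates,
        key=lambda name: combined_metric(
            candidates[name]["score"], candidates[name]["resources"]
        ),
    )
    print("metric picks:", best)
else:
    print("the metric itself was refuted by criticism; decide another way")
```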
You are not going to treat $14,999 and $15,000 as separate solutions, are you?
depends how precise the problem or context says to be. (or bigger picture, it depends how precise is worth the resources to be – which you should either specify in the problem or consider part of the context.)
if you don’t care about single dollar level of precision (cuz you want to save resources like effort to deal with details), just e.g. specify in the problem that you only care about increments of $500 or that (to save problem solving resources like time) you just want to use the first acceptable solution you come up with that you determine meets some standard of good enough (these are no longer strictly single variable maximization problems).
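A sketch of the two moves just mentioned for the retirement example (the $500 step and the “good enough” test are placeholder choices): coarsen the options to the precision you care about, or take the first candidate that meets a stated standard:

```python
# Sketch: two ways to avoid caring about $14,999 vs $15,000.
# 1) only consider contributions in $500 increments;
# 2) satisfice: take the first candidate meeting a stated "good enough" test.
# The range, step, and standard below are placeholder choices.

LOW, HIGH, STEP = 10_000, 20_000, 500

def good_enough(contribution: int) -> bool:
    """Placeholder standard: at least $14,000 goes into the fund this year."""
    return contribution >= 14_000

coarse_options = range(LOW, HIGH + 1, STEP)  # $500 increments only
first_acceptable = next(c for c in coarse_options if good_enough(c))

print("options considered:", len(coarse_options))
print("first acceptable contribution:", first_acceptable)
```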
breakpoints like this come up all over the place
Sure they do, but not always. And your approach requires them.
they aren’t required, you can specify the problem however you want (subject to criticism) so it makes clear what is a solution or not (or a set of tied solutions you’re indifferent btwn which you can then tiebreak arbitrarily if you have no criticism of doing it arbitrarily).
if the problem specifies that some solutions are better than others (not my preferred way to specify problems – i think it’s epistemologically misleading), then when you act you should pick one of the solutions in the highest tier you have a solution in, and reject the others. whether this method (pick a highest tier solution) is correct, and whether you’ve used it in this case, are both binary issues open to criticism.
At which point do you stop expending resources to look for a better solution?
when you guess it’s best to stop and your guess is non-refuted and the guess to continue looking is refuted. (you may, if you want to, define some stopping metric and make a subject-to-criticism binary yes-or-no judgement about whether to use that stopping metric.)
the philosophy issues are always prior to the other issues
Not necessarily. If you put your hand into a fire, you will get a burn—that’s easy to learn (and small kids learn it fast). Which philosophy issues are prior to that learning?
i think small kids do guesses and criticism, and use methods of learning (what I would call philosophical methods), even if they can’t state those methods in English. i also think ppl who have never studied philosophy use philosophy methods, which they picked up from their culture here and there, even if they don’t consciously understand themselves or know the names of the things they’re doing. and to the extent ppl learn, i think it’s guesses and criticism in some form, since that’s the only known method of learning (at a low level, it’s evolution – the only known solution to the problem of where the appearance of design comes from – saying it comes from “intelligence” is like attributing it to God or an intelligent designer – it doesn’t tell you how god/intelligence does it. my answer to that is, at a low level, evolution. layers of abstraction are built on top of that so it looks more varied at a higher level.).
i don’t think i’m required to provide evidence for my philosophical claims
It depends on what you want to do with them. If all you want to do is keep them on a shelf and once in a while take them out, dust them, and admire them, then no, you don’t. On the other hand, if you want to persuade someone to change their mind, evidence might be useful. And if you want other people to take action based on your claims’, ahem, implications, evidence might even be necessary.
the problem of induction. problems about how to evaluate arguments
It seems that the root of these problems is your insistence that truth is a binary category. If you are forced to operate with single-bit values and have to convert every continuous function into a step one, well, sure you will have problems.
The thread seems to be losing shape, so let’s do a bit of a summary. As far as I can see, the core differences between us are:
You think truth (and arguments) are binary, I think both have continuous values;
You think intellectual debates are primary and empirical testing is secondary, I think the reverse;
do you think there’s a clear, decisive mistake in something i’m saying?
I would probably classify it as suboptimal. It’s not a “clear, decisive mistake” to see only black and white—but it limits you.
can you specify how you think induction works?
In the usual way: additional data points increase the probability of the hypothesis being correct, however their influence tends to rapidly decline to zero and they can’t lift the probability over the asymptote (which is usually less than 1). Induction doesn’t prove anything, but then in my system nothing proves anything.
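One way to cash out this picture numerically, using an ordinary Bayesian update as the stand-in mechanism (the priors and likelihoods are invented): each additional confirming observation adds less probability, and an empirically equivalent rival keeps the probability pinned below 1:

```python
# Sketch: repeated confirming observations under a simple Bayesian update.
# H and R both predict each observation with probability 0.9 (R is an
# empirically equivalent rival); Q predicts it with only 0.3. All numbers are
# invented. P(H) rises with diminishing increments toward an asymptote below
# 1, because the equally-predictive rival R is never ruled out.
probabilities = {"H": 0.5, "R": 0.2, "Q": 0.3}
likelihood = {"H": 0.9, "R": 0.9, "Q": 0.3}

for step in range(1, 11):
    total = sum(probabilities[h] * likelihood[h] for h in probabilities)
    probabilities = {h: probabilities[h] * likelihood[h] / total for h in probabilities}
    if step in (1, 2, 5, 10):
        print(f"after {step:2d} observations: P(H) = {probabilities['H']:.3f}")
# P(H) climbs quickly at first, then flattens near 0.5 / (0.5 + 0.2), about 0.714.
```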
What you said in the previous message is messy and doesn’t seem to be terribly impactful. Talking about how you can define a loss function or how you can convert scores to a yes/no metric is secondary and tertiary to the core disagreements we have.
the probability of which hypotheses being correct, how much?
For a given problem I would have a set of hypotheses under consideration. A new data point might kill some of them (in the Popperian fashion) or might spawn new ones. Those which survive—all of them—gain some probability. How much, it depends. No simple universal rule.
how do you differentiate hypotheses which do not contradict any of the data?
For which purpose and in which context? I might not need to differentiate them.
Occam’s razor is a common heuristic, though, of course, it is NOT a guide to whether a particular theory is correct or not.
Do all the non-contradicted-by-evidence ideas gain equal probability (so they are always tied and i don’t see the point of the “probabilities”), or differential probability?
EDIT: I’m guessing your answer is you start them with different amounts of probability. after that they gain different amounts accordingly (e.g. the one at 90% gains less from the same evidence than the one at 10%). but the ordering (by amount of probability) always stays the same as how it started, apart from when something is dropped to 0% by contradicting evidence. is that it? or do you have a way (which is part of induction, not critical argument?) to say “evidence X neither contradicts ideas Y nor Z, but fits Y better than Z”?
Different hypotheses (= models) can gain different amounts of probability. They can start with different amounts of probability, too, of course.
to say “evidence X neither contradicts ideas Y nor Z, but fits Y better than Z”?
Of course. That’s basically how all statistics work.
Say, if I have two hypotheses that the true value of X is either 5 or 10, but I can only get noisy estimates, a measurement of 8.7 will add more probability to the “10” hypothesis than to the “5” hypothesis.
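For concreteness, a worked version of this example (the Gaussian noise model with sigma = 2 and the equal priors are added assumptions): the 8.7 measurement shifts probability toward the “10” hypothesis without refuting either one:

```python
# Sketch: Bayesian update for "true value is 5" vs "true value is 10" given one
# noisy measurement of 8.7, assuming Gaussian noise (sigma = 2) and equal
# priors. The noise model and priors are illustrative assumptions.
import math

def gaussian_likelihood(observed: float, mean: float, sigma: float) -> float:
    return math.exp(-((observed - mean) ** 2) / (2 * sigma ** 2)) / (
        sigma * math.sqrt(2 * math.pi)
    )

measurement, sigma = 8.7, 2.0
prior = {5.0: 0.5, 10.0: 0.5}

unnormalized = {h: p * gaussian_likelihood(measurement, h, sigma) for h, p in prior.items()}
total = sum(unnormalized.values())
posterior = {h: v / total for h, v in unnormalized.items()}

for h, p in sorted(posterior.items()):
    print(f"P(true value = {h:g} | measured 8.7) = {p:.3f}")
# The measurement favours the "10" hypothesis (roughly 0.82 vs 0.18 here),
# without outright refuting the "5" hypothesis.
```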
They get identical probabilities—if their prior probabilities were equal.
If (as is the general practice around these parts) you give a markedly bigger prior probability to simpler hypotheses, then you will strongly prefer the simpler idea. (Here “simpler” means something like “when turned into a completely explicit computer program, has shorter source code”. Of course your choice of language matters a bit, but unless you make wilfully perverse choices this will seldom be what decides which idea is simpler.)
In so far as the world turns out to be made of simply-behaving things with complex emergent behaviours, a preference for simplicity will favour ideas expressed in terms of those simply-behaving things (or perhaps other things essentially equivalent to them) and therefore more-explanatory ideas. (It is at least partly the fact that the world seems so far to be made of simply-behaving things with complex emergent behaviours that makes explanations so valuable.)
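A toy sketch of the prior being described (the hypothetical “description lengths” are made up, and a real program-length prior is not computable in practice): weight each empirically equivalent hypothesis by 2 to the power of minus its description length, then normalize:

```python
# Toy sketch: a simplicity prior over empirically equivalent hypotheses,
# weighting each by 2 ** (-description_length). The hypotheses and their
# "description lengths" are invented stand-ins for program lengths.
hypothesis_lengths = {
    "the ordinary picture of the world":             1_000,
    "same predictions, via one extra hidden gadget": 1_500,
    "same predictions, via a very contrived setup":  2_000,
}

# Lengths are rescaled by 100 only so the toy numbers stay printable.
weights = {h: 2.0 ** -(length / 100) for h, length in hypothesis_lengths.items()}
total = sum(weights.values())
priors = {h: w / total for h, w in weights.items()}

for h, p in sorted(priors.items(), key=lambda item: -item[1]):
    print(f"{p:.6f}  {h}")
# The shortest description dominates; the contrived rivals get tiny priors.
```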
I do, but more or less only to the extent that they will make potentially different predictions. If two models are in principle incapable of making different predictions, I don’t see why I should care.
so e.g. you don’t care if trees exist or not? you think people should stop thinking in terms of trees and stick to empirical predictions only, dropping any kind of non-empirical modeling like the concept of a tree?
Isn’t it convenient that I don’t have to care about these infinitely many theories?
why not?
Since there is an infinity of them, I bet you can’t marshal critical arguments against ALL of them :-P
you can criticize categories, e.g. all ideas with feature X.
I think you’re getting confused between actual trees and the abstract concept of a tree.
i don’t think so. you can’t observe entities. you have to interpret what entities there are (or not – as you advocated by saying only prediction matters)
you can criticize categories, e.g. all ideas with feature X
How can you know that every single theory in that infinity has feature X? or belongs to the same category?
you can’t observe entities
My nervous system makes perfectly good entities out of my sensory stream. Moreover, a rat’s nervous system also makes perfectly good entities out of its sensory stream regardless of the fact that the rat has never heard of epistemology and is not very philosophically literate.
or not
Or not? Prediction matters, but entities are an awfully convenient way to make predictions.
I let you take substantial control over conversation flow. You took it here – you overestimated your knowledge of Popper and were totally wrong. You do not seem to have learned from this error.
You didn’t answer my question about your interest, and you seem totally lost as to what we disagree about. You’re still, in response to “your questions are based on assuming aspects of your philosophy are true”, making the same assumptions while denying it. You don’t have anything like a sense of what we disagree about, but you’re trying to lead the conversation anyway. Your questions are in service of lines of argument, not finding out what I think – and the lines of argument don’t make sense because you don’t know what to target.
What exactly did I say that was totally wrong? Quote, please.
These assumptions take half a sentence. There are exactly three of them:
Which one do you think is unjustified?
Supply me with targets, then :-D
Quoting:
I regard this as indicating you misunderstand CR.
Then later:
In science, yes, testing is a favored idea, though even in science most ideas are rejected without being tested:
http://curi.us/1504-the-most-important-improvement-to-popperian-philosophy-of-science
But you don’t want references, and I don’t want to rewrite or copy/paste my blog post which is itself summarizing some information from books that would be better to look at directly.
I have a lot of targets on my websites, like http://fallibleideas.com and https://reasonandmorality.com, but you’ve said you don’t want to look at them.
Do you have a website with information I could skim to find disagreements? Earlier, IIRC, I tried to ask about some of your important beliefs but you didn’t put forward any positions to debate.
Is there any written philosophy material you think is correct, and would be super interested to learn contains mistakes? Or do you just think the ideas in your head are correct but they aren’t written down, and you’d like to learn about mistakes in those? Or do you think your own ideas have some flaws, but are pretty good, so if I pointed out a couple mistakes it might not make much difference to you?
What do you want to get out of this discussion? Coming to agree about some major philosophy issues would be a big effort. Under what sort of circumstances do you expect you would stop discussing? Do you have a discussion methodology which is written down anywhere? I do. http://curi.us/1898-paths-forward-short-summary
I have a philosophy I think is non-refuted. I don’t know of any mistakes and would be happy to find out. It’s also written down in public to expose it to scrutiny.
Your philosophy is advertised as “All problems can be solved by knowing how. I tell you how.”
This looks to me like it crosses the demarcation threshold. Would you insist that there are no possible empirical observations which can invalidate your advice?
You asked before. Still nope and nope.
When you stop being interesting.
Define “mistake”.
You can bring up observations in a discussion of a piece of advice, but as always the role of the evidence is governed by arguments stating its role. And the primary issue here is argument.
This is a theory claim.
This is a claim that I have substantial problem solving knowledge for sale, but is not intended to indicate I already know full solutions to all problems. It’s sufficiently non-specific that I don’t think it’s a very good target for discussion.
Why are you interested now?
http://fallibleideas.com/definitions
And are you really unfamiliar with this common English word? Do you know what being wrong is? Less wrong? Error? Flaw?
Are you trying to raise some sort of philosophical issue? If so, please state it directly.
What about the rest?
I’m interested in smart weird people :-P
Oh, boy. We are having fundamental philosophical disagreements and you think dictionary definitions of things like “wrong” are adequate?
You say that philosophy is not falsifiable. OK, let’s assume that for the time being. So can we apply the term “wrong” to some philosophies and “right” to others? On which basis? You will say “critical arguments”. What is a critical argument? Within which framework are you going to evaluate them? You want “mistakes” pointed out to you. What kind of things will you accept as a “mistake” and what kind of things will you accept as indicating that it’s valid?
I disagree that definitions are not all that important.
Well, obviously I think they are correct to some degree (remember, for me “truth” is not a binary category).
See above: what is a “mistake”, given that we’re deliberately ignoring empirical testing?
Things I’d like to learn are more like new to me frameworks, angles of view, reinterpretations of known facts. To use Scott Alexander’s terminology, I want to notice concept-shaped holes.
Criteria of mistakes are themselves open to discussion. Some typical important ways to point out mistakes are:
1) internal contradictions, logical errors
2) non sequiturs
3) a reason X wouldn’t solve problem Y, even though X is being offered as a solution to Y
4) an idea assumes/uses and also contradicts some context (e.g. background knowledge)
5) pointing out a contradiction with evidence
6) pointing out ambiguity, vagueness
there are many other types of critical arguments. for example, sometimes an argument, X, claims to refute Y, but X, if correct, refutes everything (or everything in a relevant category). it’s a generic argument that could equally well be used on everything, and is being selectively applied to Y. that’s a criticism of X’s capacity to criticize Y.
Ideas solve problems (put another way, they have purposes), with “problem” understood very broadly (including answering questions, explaining an issue, accomplishing a goal). A mistake is something which prevents an idea from solving a problem it’s intended to solve (it fails to work for its purpose).
By correcting mistakes we get better ideas. We fix issues preventing our problems from being solved and our purposes achieved (including the purpose of correctly intellectually understanding philosophy, science, etc). We should prefer non-refuted ideas (no known mistakes) to refuted ideas (known mistakes).
Ways to point out mistakes? Then the question remains: what is a “mistake”? A finger pointing at the moon is not the moon.
Your (4) is the same thing as (1) -- or (5), take your pick. Your (5) is forbidden here—remember, we are deliberately keeping to one side of the demarcation threshold—no empirical evidence or empirical testing allowed. (6) is quite curious—is being vague a “mistake”?
In the real world? Then they are falsifiable and we can bring empirical evidence to bear. You were very anxious to avoid that.
Looks like a non sequitur: generating new (and better) ideas is quite distinct from fixing the errors of old ideas—similar to the difference between writing a new program and debugging an existing one.
I would argue that we should prefer ideas which successfully solve problems to ideas which solve them less successfully (demarcation! science! :-D)
I actually wrote a sentence
Do you not read ahead before replying, and don’t go back and edit either?
In general, yes. It technically depends on context (like the problem specification details). Normally, e.g., the context of answering a question is that you want an adequately clear answer, so an inadequately clear answer fails.
Ideas solve intellectual problems, and some of those solutions can be used to solve problems we care about in the real world by acting according to a solution. Some problems (e.g. in math) are more abstract and it’s unclear what to use the solutions for.
I have nothing against the real world. But even when the real world is relevant, you still have to make an argument saying how to use some evidence in the intellectual debate. The intellectual debate is always primary. You can’t just directly look at the world and know the answers, though sometimes the arguments involved with getting from evidence X to rejecting idea Y are sufficiently standard that people don’t write them out.
You are welcome to mention some evidence in a criticism of my philosophy claims if you think you see a way to relevantly do that.
You have idea X (plus context) to solve problem P. You find a mistake, M. You come up with a new idea to solve P which doesn’t have M. Whether it’s a slightly adjusted version of X (X2) or a very different idea that solves the same problem is kinda immaterial. Both are acceptable. Methodologically, the standard recommendation is to look for X2 first.
I consider solving a problem to be binary – X does or doesn’t solve P. And I consider criticisms to be binary – either they are decisive (says why the idea doesn’t work) or not.
Problems without success/failure criteria I consider inadequately specified. Informally we may get away with that, but when trying to be precise and running into difficult issues then we need to specify our problems better.
That’s a curious definition of a “mistake”. It’s very… instrumental and local. A “mistake” is a function of both an idea and a problem—therefore, it seems, if you didn’t specify a particular problem you can’t talk about ideas being mistaken. And yet your examples—e.g. an internal logical inconsistency—don’t seem to require a problem to demonstrate that an idea is broken.
Oh, I’m sure it’s relieved to hear that
Why is that?
That’s an interesting claim. An intellectual debate is what’s happening inside your head. You are saying that it’s primary compared to the objective reality outside of your head. Am I understanding you correctly?
Only if a problem has a binary outcome. Not all problems do.
A black-and-white vision seems unnecessarily limiting.
Consider standard statistics. Let’s say we’re trying to figure out the influence of X on Y (where both are real values). First, there is no sharp boundary between a solution and a not-solution. You can build a variety of statistical models which will make different trade-offs and produce different results. There is no natural dividing line between a slightly worse model which would be a not-solution and a slightly better model which will be a solution.
Moreover, since these different models are making trade-offs, you can criticise these trade-offs, but generally speaking it’s difficult to say that this one is outright wrong and that one is clearly right. There’s a reason they’re called trade-offs.
Typically at the end you pick a statistical model or an ensemble of models, but the question “is the problem solved, yes or no?” is silly: it is solved to some extent, not fully, but it’s not at the “we have no idea” stage either.
Life must be very inconvenient for you.
By the way, what about optimization problems? The goal is to maximize Y by manipulating X. There is no threshold, you want Y to be as large as possible. What’s the criterion for success?
This is not local – I specified context matters (whether the context is stated as part of the problem, or specified separately, is merely a matter of terminology.)
You can’t determine whether a particular sentence is a correct or incorrect answer without knowing the context – e.g. what is it supposed to answer? The same statement can be a correct answer to one issue and an incorrect answer to a different issue. If you don’t like this, you can build the problem and the context into the statement itself, and then evaluate it in isolation.
I’m guessing the reason you consider my view on mistakes “instrumental” is because I think one has to look at the purpose of an idea instead of just the raw data. It’s because I add a philosophy layer where you don’t. So your alternative to “instrumental” is to say something like “mistakes are when ideas fail to correspond to empirical reality” – and to ignore non-empirical issues, interpretation issues, and that answers to questions need to correspond to the question which could e.g. be about a hypothetical scenario. To the extent that questions, goals, human problems, etc, are part of reality then, sure, this is all about reality. But I’m guessing we can both agree that’s a difference of perspective.
Self-contradictory ideas are broken for many problems. In general, we try to criticize an idea as a solution to a range of problems, not a single one. Those criticisms are more interesting. If your criticism is too narrow, it won’t work on a slight variant of the idea. You normally want to criticize all the variants sharing a particular theme.
Self-contradictory ideas can (as far as we know) only be correct solutions to some specific types of problems, like for use in parody or as a discussion example.
Because facts are not self-explanatory. Any set of facts is open to many interpretations. (Not equally correct interpretations or anything like that, merely logically possible interpretations.) So you have to talk about your interpretation, unless the other person can guess it. And you have to talk about how your interpretation of the evidence fits into the debate – e.g. that it contradicts a particular claim – though, again, in simple cases other people may guess that without you saying it.
You may prefer to think of it as the philosophy issues are always prior to the other issues. E.g. the role of a particular piece of evidence in reaching some conclusion is governed by ideas and methodology about the role of evidence in general, an interpretation of the raw data in this case, some general epistemology about how conclusions are reached and judged, etc.
Please stop the sarcasm or tell me how/why it’s productive and non-hostile.
it’s intentional in order to solve epistemology problems which (I claim) have no other (known) solution. And it’s not limiting because things like statistics are used in a secondary role. E.g. you can say “if the following statistical metric gives us 99% or more confidence, i will consider that an adequate solution to my problem”. (approaches like that, which use a cutoff amount to determine binary success or failure, are common in science).
that depends, as i said, on how the problem is specified.
in the final analysis, when it comes to human action and decision making, for any given issue you decide yes to a particular thing and no to its rivals. if you hedge, then you’re deciding yes about that particular hedge.
depends on the problem domain. e.g. in school sometimes you need an 87 on the test to pass the class, and an 86 will result in failing. so a slightly better test performance can cross a large dividing line. breakpoints like this come up all over the place, e.g. with faster casting speed in diablo 2 (when you hit 37% faster casting speed the casting animation drops by 1 frame. it doesn’t drop another frame until 55%. so gear sets totaling 40% and 45% FCR are actually equal. (not the actual numbers.)).
it may be difficult, but nevertheless you have to make a decision. the decision should itself be judged in a binary way and be non-refuted – you don’t have a criticism of making that particular decision.
i’ve addressed this stuff at great length. https://yesornophilosophy.com/argument
then do whatever maximizes it. anything with a lower score would be refuted (a mistake to do) since there’s an option which gets a higher score. since the problem is to do the thing with the best score (implicitly limited to only options you know of after allocating some amount of resources to looking for better options), second best fails to address that problem.
more typically you don’t want to maximize a single factor. i go into this at length in my yes or no philosophy.
Oh, I agree. It’s just that you were very insistent about drawing the line between unfalsifiable philosophy and other empirically-falsifiable stuff and here you’re coming back into the real-life problems realm where things are definitely testable and falsifiable. I’m all for it, but there are consequences.
Sure, but that’s not an intellectual debate. If someone asks how to start a fire and I explain how you arrange kindling, get a flint and a steel, etc. there is no debate—I’m just transferring information.
Not necessarily. If you put your hand into a fire, you will get a burn—that’s easy to learn (and small kids learn it fast). Which philosophy issues are prior to that learning?
No can do. But tell you what, the fewer silly things you say, the less often you will encounter overt sarcasm :-)
Which problems can’t you solve otherwise?
There are a lot of issues with continuous (real number) decisions. Let’s say you’re deciding how much money to put into your retirement fund this year and the reasonable range is between $10K and $20K. You are not going to treat $14,999 and $15,000 as separate solutions, are you?
Sure they do, but not always. And your approach requires them.
I still don’t see the need for these rather severe limitations. You want to deal with reality as if it consists of discrete, well-delineated chunks and, well, it just doesn’t. I understand that you can impose thresholds and breakpoints any time you wish, but they are artifacts and if your method requires them, it’s a drawback.
Yes, but you typically have an explore-or-exploit problem. You need to spend resources to look for a better optimum, at each point in time you have some probability of improving your maximum, but there are costs and they grow. At which point do you stop expending resources to look for a better solution?
if you have an empirical argument to make, that’s fine. but i don’t think i’m required to provide evidence for my philosophical claims. (btw i criticize the standard burden of proof idea in Yes or No Philosophy. in short, if you can’t criticize an idea then it’s non-refuted and demanding some sort of burden of proof is not a criticism since lack of proof doesn’t prevent an idea from solving a problem.)
the problem of induction. problems about how to evaluate arguments (how do you score the strength of an argument? and what difference does it really make if one scores higher than another? either something points out why a solution doesn’t work or it doesn’t. unless you specifically try to specify non-binary problems. but that doesn’t really work. you can specify a set of solutions are all equal. ok then either pick any one of them if you’re satisfied, or else solve some other more precise problem that differentiates. you can also specify that higher scoring solutions on some metric are better, but then you just pick the highest scoring one, so you get a single solution or maybe a tie again. and whether you’ve chosen a correct solution given the problem specification, or not, is binary.) and various problems about how you decide what metrics to use (the solution to that being binary arguments about what metrics to use – or in many cases don’t use a metric. metrics are overrated but useful sometimes.)
Yes so then you guess what to do and criticize your guesses. Or, if you wish, define a metric with positive points for a higher score and negative points for resources spent (after you guess-and-criticize to figure out how to put the positive score and all the different types of resources into the same units) and then guess how to maximize that (e.g. define a metric about resources allocated to getting a higher score on the first metric, spend that much resources, and then use the highest scoring solution).
multi-factor metrics don’t work as well as people think, but are ok sometimes (but you have to make a binary judgement about whether to use a particular metric for a particular situation, or not – so the binary judgement is prior and governs the use of the metric). here’s a good article about issues with them:
https://www.newyorker.com/magazine/2011/02/14/the-order-of-things
scoring systems are overrated but are allowable in binary epistemology given that their use is governed by binary judgements (should I proceed by doing the thing that scores the highest on this metric? make critical arguments about that and make a binary judgement. so the binary judgement is prior but then things like metrics and statistics are allowable as secondary things which are sometimes quite useful.)
depends how precise the problem or context says to be. (or bigger picture, it depends how precise is worth the resources to be – which you should either specify in the problem or consider part of the context.)
if you don’t care about single dollar level of precision (cuz you want to save resources like effort to deal with details), just e.g. specify in the problem that you only care about increments of $500 or that (to save problem solving resources like time) you just want to use the first acceptable solution you come up with that you determine meets some standard of good enough (these are no longer strictly single variable maximization problems).
they aren’t required, you can specify the problem however you want (subject to criticism) so it makes clear what is a solution or not (or a set of tied solutions you’re indifferent btwn which you can then tiebreak arbitrarily if you have no criticism of doing it arbitrarily).
if the problem specifies that some solutions are better than others (not my preferred way to specify problems – i think it’s epistemologically misleading), then when you act you should pick one of the solutions in the highest tier you have a solution in, and reject the others. whether this method (pick a highest tier solution) is correct, and whether you’ve used it in this case, are both binary issues open to criticism.
when you guess it’s best to stop and your guess is non-refuted and the guess to continue looking is refuted. (you may, if you want to, define some stopping metric and make a subject-to-criticism binary yes-or-no judgement about whether to use that stopping metric.)
i think small kids do guesses and criticism, and use methods of learning (what I would call philosophical methods), even if they can’t state those methods in English. i also think ppl who have never studied philosophy use philosophy methods, which they picked up from their culture here and there, even if they don’t consciously understand themselves or know the names of the things they’re doing. and to the extent ppl learn, i think it’s guesses and criticism in some form, since that’s the only known method of learning (at a low level, it’s evolution – the only known solution to the problem of where the appearance of design comes from – saying it comes from “intelligence” is like attributing it to God or an intelligent designer – it doesn’t tell you how god/intelligence does it. my answer to that is, at a low level, evolution. layers of abstraction are built on top of that so it looks more varied at a higher level.).
It depends on what you want to do with them. If all you want to do is keep them on a shelf and once in a while take them out, dust them, and admire them, then no, you don’t. On the other hand, if you want to persuade someone to change their mind, evidence might be useful. And if you want other people to take action based on your claims’, ahem, implications, evidence might even be necessary.
It seems that the root of these problems is your insistence that truth is a binary category. If you are forced to operate with single-bit values and have to convert every continuous function into a step one, well, sure you will have problems.
The thread seems to be losing shape, so let’s do a bit of a summary. As far as I can see, the core differences between us are:
You think truth (and arguments) are binary, I think both have continuous values;
You think intellectual debates are primary and empirical testing is secondary, I think the reverse;
Looks reasonable to you?
the two things you listed are ok with me. i’d add induction vs guesses-and-criticism/evolution to the list of disagreements.
do you think there’s a clear, decisive mistake in something i’m saying?
can you specify how you think induction works? as a fully defined, step-by-step process i can do today?
though what i’d prefer most is replies to the things i said in my previous message.
I would probably classify it as suboptimal. It’s not a “clear, decisive mistake” to see only black and white—but it limits you.
In the usual way: additional data points increase the probability of the hypothesis being correct, however their influence tends to rapidly decline to zero and they can’t lift the probability over the asymptote (which is usually less than 1). Induction doesn’t prove anything, but then in my system nothing proves anything.
What you said in the previous message is messy and doesn’t seem to be terribly impactful. Talking about how you can define a loss function or how you can convert scores to a yes/no metric is secondary and tertiary to the core disagreements we have.
the probability of which hypotheses being correct, how much? how do you differentiate between hypotheses which do not contradict any of the data?
For a given problem I would have a set of hypotheses under consideration. A new data point might kill some of them (in the Popperian fashion) or might spawn new ones. Those which survive—all of them—gain some probability. How much, it depends. No simple universal rule.
For which purpose and in which context? I might not need to differentiate them.
Occam’s razor is a common heuristic, though, of course, it is NOT a guide to whether a particular theory is correct or not.
Do all the non-contradicted-by-evidence ideas gain equal probability (so they are always tied and i don’t see the point of the “probabilities”), or differential probability?
EDIT: I’m guessing your answer is you start them with different amounts of probability. after that they gain different amounts accordingly (e.g. the one at 90% gains less from the same evidence than the one at 10%). but the ordering (by amount of probability) always stays the same as how it started, apart from when something is dropped to 0% by contradicting evidence. is that it? or do you have a way (which is part of induction, not critical argument?) to say “evidence X neither contradicts ideas Y nor Z, but fits Y better than Z”?
Different hypotheses (= models) can gain different amounts of probability. They can start with different amounts of probability, too, of course.
Of course. That’s basically how all statistics work.
Say, if I have two hypotheses that the true value of X is either 5 or 10, but I can only get noisy estimates, a measurement of 8.7 will add more probability to the “10” hypothesis than to the “5” hypothesis.
what do you do about ideas which make identical predictions?
They get identical probabilities—if their prior probabilities were equal.
If (as is the general practice around these parts) you give a markedly bigger prior probability to simpler hypotheses, then you will strongly prefer the simpler idea. (Here “simpler” means something like “when turned into a completely explicit computer program, has shorter source code”. Of course your choice of language matters a bit, but unless you make wilfully perverse choices this will seldom be what decides which idea is simpler.)
In so far as the world turns out to be made of simply-behaving things with complex emergent behaviours, a preference for simplicity will favour ideas expressed in terms of those simply-behaving things (or perhaps other things essentially equivalent to them) and therefore more-explanatory ideas. (It is at least partly the fact that the world seems so far to be made of simply-behaving things with complex emergent behaviours that makes explanations so valuable.)
I don’t need to distinguish between them, then.
so you don’t deal with explanations, period?
I do, but more or less only to the extent that they will make potentially different predictions. If two models are in principle incapable of making different predictions, I don’t see why I should care.
so e.g. you don’t care if trees exist or not? you think people should stop thinking in terms of trees and stick to empirical predictions only, dropping any kind of non-empirical modeling like the concept of a tree?
I don’t understand what this means.
The concept of a tree seems pretty empirical to me.
there are infinitely many theories which say trees don’t exist but make identical predictions to the standard view involving trees existing.
trees are not an observation, they are a conceptual interpretation. observations are things like the frequencies of photons at times and locations.
Isn’t it convenient that I don’t have to care about these infinitely many theories?
Since there is an infinity of them, I bet you can’t marshal critical arguments against ALL of them :-P
I think you’re getting confused between actual trees and the abstract concept of a tree.
I don’t think so. Human brains do not process sensory input in terms of “frequencies of photons at times and locations”.
why not?
you can criticize categories, e.g. all ideas with feature X.
i don’t think so. you can’t observe entities. you have to interpret what entities there are (or not – as you advocated by saying only prediction matters)
Why not what?
How can you know that every single theory in that infinity has feature X? or belongs to the same category?
My nervous system makes perfectly good entities out of my sensory stream. Moreover, a rat’s nervous system also makes perfectly good entities out of its sensory stream regardless of the fact that the rat has never heard of epistemology and is not very philosophically literate.
Or not? Prediction matters, but entities are an awfully convenient way to make predictions.