I don’t know what “directly” means, but there certainly is a causal pathway, and we can certainly evaluate whether our brains are tracking reality. Just make a prediction, then go outside and look with your eyes to see if it comes true.
Suppose that I do a rain-making dance in my backyard, and predict that as a consequence of this, it will rain tomorrow. Turns out that it really does rain the next day. Now I argue that I have magical rain-making powers.
Somebody else objects, “of course you don’t, it just happened to rain by coincidence! You need to repeat that experiment!”
So I repeat the rain-making dance on ten separate occasions, and on seven out of ten times, it does happen to rain anyway.
The skeptic says, “ha, your rain-making dance didn’t work after all!” I respond, “ah, but it did work on seven out of ten times; medicine can’t be shown to reliably work every time either, but my magic dance does work statistically significantly often.”
The skeptic answers, “you can’t establish statistical significance without something to compare to! This happens to be rainy season, so it would rain on seven out of ten days anyway!”
I respond, “ah, but notice how it is the custom for people in my tribe to do the rain-making dance every day during rainy season, and to not do it during dry season; it is our dance that causes the rainy season.”
The skeptic facepalms. “Your people have developed a tradition to dance during rainy season, but it’s the rain that has caused your dance, not the other way around!”
… and then we go on debating forever.
My point here is that just looking at raw observations is insufficient to judge any nontrivial model. We are always evaluating our observations in light of an existing model; it is the observation + model that says whether something is true, not the observation itself. I dance and it rains, and my model says that dancing causes rain: my predicted observation came true, so I consider my model validated. The skeptic’s model says that dancing does not cause rain but that it rains all the time during the rainy season anyway, so he considers his own model just as confirmed by the observation.
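To make the “observation + model” point concrete, here is a minimal sketch (the 0.9 and 0.7 probabilities are purely illustrative assumptions I am adding, not anything established above): the very same seven-out-of-ten observation is assigned a higher likelihood by the skeptic’s base-rate model than by the rain-dance model, so the raw count settles nothing until both models are specified.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k rainy days out of n, with per-day probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 10, 7            # ten dances, seven rainy days
p_dance = 0.9           # illustrative "the dance works" model
p_season = 0.7          # illustrative rainy-season base rate, no dance effect

like_dance = binom_pmf(k, n, p_dance)    # ≈ 0.057
like_season = binom_pmf(k, n, p_season)  # ≈ 0.267

print(f"P(7/10 rainy | dance works)       = {like_dance:.3f}")
print(f"P(7/10 rainy | just rainy season) = {like_season:.3f}")
print(f"likelihood ratio (season : dance) ≈ {like_season / like_dance:.1f}")
```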
You can, of course, use observations to evaluate models. But to do that, you need to use a meta-model. When I say that we don’t have direct access to the truth, this is what I mean; you, me, and the schizophrenic all tend to think that we are drawing the right conclusions from our observations, but at least one of us is actually running seriously flawed models and meta-models, and may never know it, being trapped in evaluating all of their models through seriously flawed meta-models.
As clone of saturn notes, the deepest meta-model of them all is the one that is running below the level of conscious decisions; the set of low-level processes which decides what actions we take and what thoughts we think. This is a reinforcement learning system which responds to rewards: if particular thoughts or assumptions (such as the assumption of a rain-making dance actually producing rain, or the suggestion that statistical significance is an important factor to consider when evaluating predictions) have led to actions which brought the organism rewards (internally or externally generated), then those kinds of thoughts and assumptions will be reinforced.
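As a toy sketch of what I mean (purely illustrative; the two “assumptions” and the reward numbers are things I am making up, and this is not a claim about actual neural mechanisms): whichever assumption happens to get rewarded more often gains influence, regardless of whether it is true.

```python
import random

# Two competing "assumptions" with weights; acting on an assumption that
# gets rewarded bumps up its weight, so it gets acted on more often.
weights = {"dance causes rain": 1.0, "check the base rate first": 1.0}
learning_rate = 0.5

def environment_reward(assumption):
    # Hypothetical reward signal: during rainy season, acting on the dance
    # assumption gets "rewarded" 70% of the time by sheer base rate, while
    # careful checking pays off less often and less immediately.
    if assumption == "dance causes rain":
        return 1.0 if random.random() < 0.7 else 0.0
    return 0.3

for _ in range(200):
    # Choose which assumption to act on, proportional to its current weight.
    assumption = random.choices(list(weights), weights=list(weights.values()))[0]
    weights[assumption] += learning_rate * environment_reward(assumption)

print(weights)  # the "dance" assumption tends to end up with the larger weight
```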
In other words, we end up having the kinds of beliefs that seem useful, as evaluated by whether they succeed in giving us rewards. Epistemic and instrumental rationality were the same all along. (I previously discussed this in more detail in my posts What are concepts for and World-models as tools.)
I have a hard time believing that this sort of clever reasoning will lead to anything other than making your beliefs less accurate and merely increasing the number of non-truth-based beliefs above 20%.
Well:
Do you think about distances in Metric or Imperial units? Both are equally true, so probably in whichever units you happen to be more fluent in.
Do you use Newtonian mechanics or full relativity for calculating the motion of some object? Relativity is more true, but sometimes the simpler model is good enough and easier to calculate, so it may be better for the situation.
Do you consider your romantic partner a wonderful person who you love dearly and want to be happy, or someone who does various things that benefit you, in exchange for you doing various things that benefit them? Both are true, but the former framing is probably one that will make for a happier and more meaningful relationship.
You talk about “clever reasoning” that “makes your beliefs less accurate”, but as these examples should hopefully demonstrate, at any given time there are an infinite number of more-or-less true ways of looking at some situation—and when we need to choose between several ways of framing the situation which are equally true, we always end up choosing one or the other based on its usefulness. If we didn’t, it would be impossible to function, since there’d be no criteria for choosing between them. (And sometimes we go with the approximation that’s less strictly true, if it’s good enough for the situation; that is, if it’s more useful to go with it.) That’s the 20%.
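On the Newtonian-vs-relativity example specifically, here is a small numeric sketch (the particular speeds are just assumptions for illustration) of how “less strictly true but good enough” cashes out: at everyday speeds the Newtonian answer is off by a fraction far below anything measurable, while at relativistic speeds the discrepancy becomes large.

```python
from math import sqrt

C = 299_792_458.0  # speed of light, m/s

def newtonian_ke(m, v):
    """Kinetic energy in the Newtonian approximation."""
    return 0.5 * m * v**2

def relativistic_ke(m, v):
    """Special-relativistic kinetic energy, (gamma - 1) * m * c^2.

    gamma - 1 is rewritten as x / (sqrt(1 - x) * (1 + sqrt(1 - x))) with
    x = (v/c)^2, to avoid catastrophic cancellation at everyday speeds.
    """
    x = (v / C) ** 2
    gamma_minus_1 = x / (sqrt(1 - x) * (1 + sqrt(1 - x)))
    return gamma_minus_1 * m * C**2

for label, v in [("car at 30 m/s", 30.0), ("probe at half light speed", 0.5 * C)]:
    n, r = newtonian_ke(1.0, v), relativistic_ke(1.0, v)
    print(f"{label}: Newtonian is off by a fraction of about {abs(r - n) / r:.1e}")
    # car: ~1e-14 (utterly negligible); half light speed: ~2e-01 (about 19%)
```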
This stuff about rain dancing seems like just the most banal epistemological trivialities, which have already been dealt with thoroughly in the Sequences. The reasons why such “tests” of rain dancing don’t work are well known and don’t need to be recapitulated here.
But to do that, you need to use a meta-model. When I say that we don’t have direct access to the truth, this is what I mean;
This has nothing to do with causal pathways, magic or otherwise, direct or otherwise. Magic would not turn a rock into a philosopher even if it should exist.
Yes, carrying out experiments to determine reality relies on Occam’s razor. It relies on Occam’s razor being true. It does not in any way rely on me possessing some magical universally compelling argument for Occam’s razor. Because Occam’s razor is in fact true in our universe, experiment does in fact work, and thus the causal pathway for evaluating our models does in fact exist: experiment and observation (and Bayesian statistics).
I’m going to stress this point because I noticed others in this thread make this seemingly elementary map-territory confusion before (though I didn’t comment on it there). In fact it seems to me now that conflating these things is maybe actually the entire source of this debate: “Occam’s razor is true” is an entirely different thing from “I have access to universally compelling arguments for Occam’s razor”, as different as a raven and the abstract concept of corporate debt. The former is true and useful and relevant to epistemology. The latter is false, impossible and useless.
Because the former is true, when I say “in fact, there is a causal pathway to evaluate our models: looking at reality and doing experiments”, what I say is, in fact, true. The process in fact works. It can even be carried out by a suitably programmed robot with no awareness of what Occam’s razor or “truth” even is. No appeals or arguments about whether universally compelling arguments for Occam’s razor exist can change that fact.
(Why am I so lucky as to be a mind whose thinking relies on Occam’s razor in a world where Occam’s razor is true? Well, animals evolved via natural selection in an Occamian world, and those whose minds were more fit for that world survived...)
This is a reinforcement learning system which responds to rewards: if particular thoughts or assumptions (...) have led to actions which brought the organism rewards (internally or externally generated), then those kinds of thoughts and assumptions will be reinforced.
This seems like a gross oversimplification to me. The mind is a complex dynamical system made of locally reinforcement-learning components, which doesn’t do any one thing all the time.
In other words, we end up having the kinds of beliefs that seem useful, as evaluated by whether they succeed in giving us rewards. Epistemic and instrumental rationality were the same all along.
And this seems simply wrong. You might as well say “epistemic rationality and chemical action-potentials were the same all along”. Or “jumbo jets and sheets of aluminium were the same all along”. A jumbo jet might even be made out of sheets of aluminium, but a randomly chosen pile of the latter sure isn’t going to fly.
As for your examples, I don’t have anything to add to Said’s observations.
reasons why such “tests” of rain dancing don’t work are well known and don’t need to be recapitulated here.
Obviously. Which is why I said that the point was not any of the specific arguments in that debate—they were totally arbitrary and could just as well have been two statisticians debating the validity of different statistical approaches—but the fact that any two people can disagree about anything in the first place, as they have different models of how to interpret their observations.
“Occam’s razor is true” is an entirely different thing from “I have access to universally compelling arguments for Occam’s razor”, as different as a raven and the abstract concept of corporate debt.
This is very close to the distinction that I have been trying to point at; thank you for stating it more clearly than I managed to. The way that I’d phrase it is that there’s a difference between considering a claim to be true, and considering its justification universally compelling.
It sounds like you have been interpreting me to say something like “Occam’s Razor is false because its justification is not universally compelling”. That is not what I have been trying to say. Rather, my claim has been “we can consider Occam’s Razor true despite its justification not being universally compelling, but because there are no universally compelling justifications, we should keep trying out different justifications and seeing whether there are any that would seem to work even better”.
If you say “but that’s totally in line with ‘Where Recursive Justification Hits Bottom’ and the standard LW canon...” then yes, it is. That’s my point. Especially since ‘Recursive Justification’ also says that we should just decide to believe in Occam’s Razor, since it doesn’t seem particularly useful to do otherwise, and because practically speaking, we don’t have any better alternative:
Should I trust Occam’s Razor? Well, how well does (any particular version of) Occam’s Razor seem to work in practice? What kind of probability-theoretic justifications can I find for it? When I look at the universe, does it seem like the kind of universe in which Occam’s Razor would work well? [...]
The chain of examination continues—but it continues, unavoidably, using my current brain and my current grasp on reasoning techniques. What else could I possibly use? [...]
At present, I start going around in a loop at the point where I explain, “I predict the future as though it will resemble the past on the simplest and most stable level of organization I can identify, because previously, this rule has usually worked to generate good results; and using the simple assumption of a simple universe, I can see why it generates good results; and I can even see how my brain might have evolved to be able to observe the universe with some degree of accuracy, if my observations are correct.” [...]
But for now… what’s the alternative to saying, “I’m going to believe that the future will be like the past on the most stable level of organization I can identify, because that’s previously worked better for me than any other algorithm I’ve tried”? [...]
At this point I feel obliged to drag up the point that rationalists are not out to win arguments with ideal philosophers of perfect emptiness; we are simply out to win. [...]
The point is not to be reflectively consistent. The point is to win.
As for my story about how the brain works: yes, it is obviously a vast simplification. That does not make it false, especially given that “the brain learns to use what has worked before and what it thinks is likely to make it win in the future” is exactly what Eliezer is advocating in the above post.
But what Eliezer also advocates in that post is not to elevate any rule—Occam’s Razor included—into an unquestioned axiom, but to keep questioning even that, if you can:
Being religious doesn’t make you less than human. Your brain still has the abilities of a human brain. The dangerous part is that being religious might stop you from applying those native abilities to your religion—stop you from reflecting fully on yourself. People don’t heal their errors by resetting themselves to an ideal philosopher of pure emptiness and reconsidering all their sensory experiences from scratch. They heal themselves by becoming more willing to question their current beliefs, using more of the power of their current mind.
This is why it’s important to distinguish between reflecting on your mind using your mind (it’s not like you can use anything else) and having an unquestionable assumption that you can’t reflect on. [...]
The important thing is to hold nothing back in your criticisms of how to criticize; nor should you regard the unavoidability of loopy justifications as a warrant of immunity from questioning.
I would say that there exist two kinds of metarationality: weak and strong. Weak metarationality is simply compatible with standard LW rationality, because of things like framing effects and self-fulfilling beliefs, as I have been arguing in other comments. But because the standard canon has given the impression that truth should be the only criterion for beliefs, and has missed the fact that there are plenty of beliefs that one can choose without violating Occam’s Razor, this seems “metarational” and weird. Arguably this shouldn’t be called meta/postrationality in the first place, because it’s just standard rationality.
The way to phrase strong metarationality might be this: classic LW rationality is what you get when you take a specific set of axioms as your starting point, and build on top of that. Metarationality is what you get when you acknowledge that this does indeed seem like the right thing to do most of the time, but that we should also be willing (as Eliezer advocates above) to question that, and try out different starting axioms as well, to see whether there would be any that would be even better.
In my experience, strong metarationality isn’t useful in the sense of pointing to any basic axioms that would be better than LW’s standard ones—if it does point to such, I haven’t found any, and the standard assumptions continue to be the most useful ones. But what does make it somewhat useful is that when you practice questioning everything, and e.g. distinguishing between “Occam’s Razor is true” and “I have assumed Occam’s Razor to be true because that seems useful”, then that helps in catching assumptions which don’t fall directly out of the standard axioms, and which you’ve just assumed to be true without good justification.
E.g. “my preferred system of government is the best one” is a belief that should logically be assigned much lower confidence than “Occam’s Razor is true”; but the brain only has limited precision in assigning credence values to claims. So most people have beliefs which are more like the government one than the Occam’s Razor one, despite those beliefs being assigned a level of credence similar to that of the Occam’s Razor one. By questioning and testing even beliefs which are like Occam’s Razor, one can end up questioning and revising beliefs which actually should be questioned, which one might never have questioned otherwise. This is valuable even if the Occam’s Razor-like beliefs survive that questioning unscathed—but the exercise does not work unless one actually does make a serious attempt to question them.
The way that I’d phrase it is that there’s a difference between considering a claim to be true, and considering its justification universally compelling.
Both of these are different from the claim actually being true. The fact that Occam’s razor is true is what causes the physical process of (occamian) observation and experiment to yield correct results. So you see, you’ve already managed to rephrase what I’ve been saying into something different by conflating map and territory.
Indeed, something being true is further distinct from us considering it true. But given that the whole point of metarationality is fully incorporating the consequences of realizing the map/territory distinction and the fact that we never observe the territory directly (we only observe our brain’s internal representation of the external environment, rather than the external environment directly), a rephrasing that emphasizes the way that we only ever experience the map seemed appropriate.
As for my story about how the brain works: yes, it is obviously a vast simplification. That does not make it false, especially given that “the brain learns to use what has worked before and what it thinks is likely to make it win in the future” is exactly what Eliezer is advocating in the above post.
Even if true, this is different from “epistemic rationality is just instrumental rationality”; as different as adaptation executors are from fitness maximisers.
Separately, it’s interesting that you quote this part:
The important thing is to hold nothing back in your criticisms of how to criticize; nor should you regard the unavoidability of loopy justifications as a warrant of immunity from questioning.
Because it seems to me that this is exactly what advocates of “postrationality” here are not doing, when they take the absence of universally compelling arguments as license to dismiss rationality and truth-based arguments against their positions.¹
Eliezer also says this:
Always apply full force, whether it loops or not—do the best you can possibly do, whether it loops or not—and play, ultimately, to win.
It seems to me that applying full force in criticism of postrationality amounts to something like the below:
“Indeed, compellingness-of-story, willingness-to-life, mythic mode, and many other non-evidence-based criteria are alternative criteria which could be used to select beliefs. However we have huge amounts of evidence (catalogued in the Sequences, and in the heuristics and biases literature) that these criteria are not strongly correlated to truth, and therefore will lead you to holding wrong beliefs, and furthermore that holding wrong beliefs is instrumentally harmful, and, and [the rest of the sequences, Ethical Injunctions, etc]...”
“Meanwhile, we also have vast tracts of evidence that science works, that results derived with valid statistical methods replicate far more often than any others, and that getting beliefs to approach truth requires accumulating evidence by observation. I would put the probability that rational methods are the best criteria I have for selecting beliefs at 1−ϵ. Hence, it seems decisively not worth it to adopt some almost certainly harmful ‘postrational’ anti-epistemology just because of that ϵ probability. In any case, per Ethical Injunctions, even if my probabilities were otherwise, it would be far more likely that I’ve made a mistake in reasoning than that adopting non-rational beliefs by such methods would be a good idea.”
Indeed, much of the Sequences could be seen as Eliezer considering alternative ways of selecting beliefs or “viewing the world”, analyzing these alternative ways, and showing that they are contrary to and inferior to rationality. Once this has been demonstrated, we call them “biases”. We don’t cling to them on the basis that “we can’t know the criterion of truth”.
Advocates of postrationality seem to be hoping that the fact that P(Occam’s razor) < 1 makes these arguments go away. It doesn’t work like that. P(Occam’s razor) = 1−ϵ at most makes ϵ of these arguments go away. And we have a lot of evidence for Occam’s razor.
¹ As gworley seems to do here and here, expecting me to provide a universally compelling argument in response.
Advocates of postrationality seem to be hoping that the fact that P(Occam’s razor) < 1 makes these arguments go away. It doesn’t work like that.
This (among other paragraphs) is an enormous strawman of everything that I have been saying. Combined with the fact that the general tone of this whole discussion so far has felt adversarial rather than collaborative, I don’t think that I am motivated to continue any further.
It doesn’t seem to be a strawman of what e.g. gworley and TAG have been saying, judging by the repeated demands for me to supply some universally compelling “criterion of truth” before any of the standard criticisms can be applied. Maybe you actually disagree with them on this point?
It doesn’t seem like applying full force in criticism is a priority for the ‘postrationality’ envisioned by the OP, either, or else they would not have given examples (compellingness-of-story, willingness-to-life) so trivial to show as bad ideas using standard arguments.
I agree with Kaj on this point, however I also don’t think you’re intentionally trying to respond to a strawman version of what we’re presenting; what we’re arguing for hinges on what seems to be a subtle point for most people (it doesn’t feel subtle to me, but I am empathetic to technical philosophical positions being subtle to other people), so it’s easy to conflate our position with, say, postmodernist-style epistemic relativism: although it’s drastically different from that, it’s different for technical reasons that may not be apparent from reading the broad strokes of what we’re saying.
I suspect what’s going on in this discussion is something like the following: me, Kaj, TAG, and others are coming from a position that is relatively small in idea space, but there are other ideas that sort-of pattern match to it if you don’t look too closely at the details, and these are getting confused for the point we’re trying to make; people then respond to these other ideas rather than the one we’re holding. Although we’re trying our best to cut idea space such that you see the part we’re talking about, the process is inexact, because although I’ve pointed to it with the technical language of philosophy, that technical language is easily mistaken for non-technical language, since it reuses common words (physics sometimes has the same problem: you pick a word because it’s a useful metaphor but give it a technical meaning, and then people misunderstand because they think too much in terms of the metaphor and not in terms of the precise model being referred to by the word) and requires a certain amount of fluency with philosophy in general. For example, in all the comments on this post, I think so far only jessicata has asked for clarification in a way that is clearly framed in terms of technical philosophy.
This is not necessarily to demand that you engage with technical philosophy if you don’t want to, but it is, I suspect, why we continue to have trouble communicating (or if there are other reasons, this is a major one). I don’t know a way to explain these points that isn’t in that language and isn’t also easily confused for other ideas I wouldn’t endorse, though, so there may not be much way forward in presenting metarationality to you in a way that I would agree that you understand it and that allows you to express a rejection I would consider valid (if indeed such a reason for rejection exists; if I knew one I wouldn’t hold these views!). The only other ways we have of talking about these things tend to rely much more on appeals to intuitions that you don’t seem to share, and transmitting those intuitions is a separate project from what I want to do, although Kaj’s and others’ responses do a much better job than mine of attempting that transmission.
Although we’re trying our best to cut idea space such that you see the part we’re talking about, the process is inexact, because although I’ve pointed to it with the technical language of philosophy, that technical language is easily mistaken for non-technical language, since it reuses common words
I am sympathetic to this sort of explanation. Could you, then, note specifically which of your terms are supposed to be interpreted as technical language, and link to some definitions / explanations of them? (Can such be found on the SEP, for instance?)
I did not ask for a universally compelling argument: you brought that in.
Trying to solve problems by referring to the Sequences has a way of leading to derailment: people match the topic at hand to whichever of Yudkowsky’s writings is least irrelevant, even if it is not relevant enough to be on the same topic.
Hmm, I think there is some kind of category error happening if you think I’m asking for universally compelling arguments, because I agree they don’t and can’t exist, as a straightforward corollary of epistemic circularity. You might feel that I am asking for them, though, because I think that assuming you know the criterion of truth, or can learn it, is equivalent to saying you could find a universally compelling argument, which is exactly the positivist stance. If you disagree, then I suspect whatever disagreement we have has become extremely esoteric, since I don’t see a natural space into which you could claim the criterion of truth is knowable and that there are no universally compelling arguments.
The non-existence of universally compelling arguments has nothing to do with whether “the criterion of truth is knowable”, or “epistemic circularity”, or any other abstruse epistemic issues, or any other non-abstruse epistemic issues.
There cannot be a universally compelling argument because for any given argument, there can exist a mind which is not persuaded by it.
If it were the case that “the criterion of truth is knowable” (whatever that means), and you had what you considered to be a universally compelling argument, I could still build a mind which remains—stubbornly, irrationally (?), impenetrably—unconvinced by that argument. And that would make that argument not universally compelling after all.
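To make the constructive point explicit, a toy sketch (purely illustrative, and not a model of any interesting mind):

```python
def stubborn_mind(argument: str) -> bool:
    """A "mind" that receives any argument whatsoever and is never persuaded.

    Its mere constructibility is the counterexample: whatever argument A you
    propose, this mind hears A and remains unconvinced, so A is not
    universally compelling.
    """
    _ = argument   # the argument is received...
    return False   # ...and rejected, unconditionally

# Even your best candidate for a universally compelling argument fails here.
assert stubborn_mind("a fully rigorous derivation of Occam's razor") is False
assert stubborn_mind("2 + 2 = 4") is False
```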
The non-existence of universally compelling arguments has nothing to do with whether “the criterion of truth is knowable”, or “epistemic circularity”, or any other abstruse epistemic issues, or any other non-abstruse epistemic issues.
There cannot be a universally compelling argument because for any given argument, there can exist a mind which is not persuaded by it.
This feels to me similar to saying “don’t worry about all that physics telling us we can’t travel faster than light, we have engineering reasons to think we can’t do it” as if this were a dismissal of the former when it’s in fact an expression of it. Further, Eliezer doesn’t really prove his point in that post if you want a detailed philosophical explanation of the point. Instead, as is often the case, Eliezer is smart and manages to come to a conclusion consistent with the philosophical details despite making arguments at a level where it’s not totally clear he can support the claims he’s making (which is fine because he wasn’t writing to do that, but it does make his words on the subject less relevant here because they’re talking to a different level of abstraction).
Thus, it seems that you’re just agreeing with me even if you’re talking at a different level of abstraction, but I take it from your tone you meant to disagree, so maybe you meant to press some other point that’s not clear to me from what you wrote?
The reason I cited is not an “engineering reason”; it is fundamental. It seems absurd to say that it’s “an expression of” something like “epistemic circularity”. A more apt analogy would be to computability theory. If we make some assertion in computer science, and in support of that assertion, prove that we can, or cannot, construct some particular sort of computer program, is that an “engineering reason”? Applying such a term seems tendentious, at best.
Further, Eliezer doesn’t really prove his point in that post if you want a detailed philosophical explanation of the point.
If you disagree with Eliezer’s arguments in that post, I would be interested in reading what you have to say (as would others, I am sure).
Thus, it seems that you’re just agreeing with me even if you’re talking at a different level of abstraction, but I take it from your tone you meant to disagree, so maybe you meant to press some other point that’s not clear to me from what you wrote?
You said:
I don’t see a natural space into which you could claim the criterion of truth is knowable and that there are no universally compelling arguments
The phrasing is odd (“natural space”? “into”?), but unless there is some very odd meaning hiding behind that phrasing, what you seem to be saying is that if “the criterion of truth is knowable” then there must exist universally compelling arguments. (Because ¬(P ∧ ¬Q) ⇒ (P → Q).)
And I am saying: this is wrong and confused. If “the criterion of truth is knowable”, that has exactly zero to do with whether there exist universally compelling arguments. Criterion of truth or no criterion of truth, I can always build a mind which fails to be convinced by any given argument you propose. Therefore, any argument you propose will fail to be universally compelling.
This is what Eliezer was saying. It is very simple. If you disagree with this reasoning, do please explain why! (And in that case it would be best, I think, if you posted your disagreement as a comment to Eliezer’s post. I will, of course, gladly read it.)
And I am saying: this is wrong and confused. If “the criterion of truth is knowable”, that has exactly zero to do with whether there exist universally compelling arguments. Criterion of truth or no criterion of truth, I can always build a mind which fails to be convinced by any given argument you propose. Therefore, any argument you propose will fail to be universally compelling.
So I don’t disagree with Eliezer’s post at all; I’m saying he doesn’t give a complete argument for the position. It seems to me the only point of disagreement is that you think knowability of the criterion of truth does not imply the existence of universally compelling arguments, so let me spell that out. That is to say, let me spell out why it is that you can build a mind that fails to be convinced by any given argument, because Eliezer only intimates this and doesn’t fully explain it.
Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/algorithm to assess if any given statement is true. Let P be a statement. Then there exists some argument, A, contingent on C such that A implies P or ~P. Thus for all P we can know if P or ~P. This would make A universally compelling, i.e. A is a mind-independent argument for the truth value of all statements that would convince even rocks.
Since it seems we’re all in agreement C does not exist, I think any disagreement we have lingering is about something other than the point I originally laid out.
Also, for what it’s worth, since you bring up computability theory: knowing the criterion of truth would also imply being able to solve the halting problem, since you could always answer the question “does this program halt?”.
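A minimal sketch of that reduction (the names `is_true` and `halts` are hypothetical, introduced only for illustration; no such oracle can actually be built):

```python
def is_true(statement: str) -> bool:
    """Hypothetical criterion-of-truth oracle: correctly labels any statement.

    It cannot actually be implemented; it just stands in for the assumed C.
    """
    raise NotImplementedError("no such procedure exists")

def halts(program_source: str, program_input: str) -> bool:
    """Would decide the halting problem, given the hypothetical oracle above,
    which is one way to see that no such oracle can exist."""
    return is_true(f"the program {program_source!r} halts on input {program_input!r}")
```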
(Also, I love the irony that I may fail to convince you because no argument is universally compelling!)
Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/algorithm to assess if any given statement is true. Let P be a statement. Then there exists some argument, A, contingent on C such that A implies P or ~P. Thus for all P we can know if P or ~P. This would make A universally compelling
But of course it wouldn’t. What? This seems completely unrelated to compellingness (universal or otherwise). I have but to build a mind that does not implement the procedure in question, or doesn’t implement it for some specific argument(s), or does implement it but then someone reverses it (cf. Eliezer’s “little grey man”), etc.
a mind-independent argument for
There is no such thing as a “mind-independent argument for” anything. That, too, was Eliezer’s point.
For example, suppose C exists. However, it is then an open question whether I believe that C exists. How might I come to believe this? Perhaps I might be presented with an argument for C’s existence. I might find this argument compelling, or not. This is dependent on my mind—i.e., both on my mind existing, and on various specific properties of my mind (such as implementing modus ponens).
And who is doing this attempted convincing? Well, perhaps you are. You believe (in this hypothetical scenario) that C exists. And how did you come to believe this? Whatever the chain of causality was that led to this state of affairs, it could only be very much dependent on various properties of your mind.
Again, a “mind-independent argument” for anything is a nonsensical concept. Who is arguing, and with whom? Who is trying to convince whom? Without minds, the very concept of there being arguments, and those arguments being compelling or not compelling, is meaningless.
This is to say, why is it that you can build a mind that fails to be convinced by any given argument, because Eliezer only intimates this and doesn’t fully explain it.
But he does. He explains it very clearly and explicitly! Building a mind that behaves in some specific way in some specific circumstance(s) is all that’s required. Simply build a mind that, when presented with argument A, finds that argument unconvincing. (Again, see the “little grey man” section.) That is all.
Yes, exactly, you get it. I’m not sure what confusion remains or you think remains. The only point seems here:
But of course it wouldn’t. What? This seems completely unrelated to compellingness (universal or otherwise). I have but to build a mind that does not implement the procedure in question, or doesn’t implement it for some specific argument(s), or does implement it but then someone reverses it (cf. Eliezer’s “little grey man”), etc.
The counterfactual I’m proposing with C is exactly one that would allow not just any mind, but literally anything at all to comprehend A. The existence of C would create a universe wholly unlike our own, which is why I think we’re all in agreement that the existence of such a thing is extremely unlikely even though we can’t formally prove that it doesn’t exist.
It seems that you don’t get it. Said just demonstrated that even if C exists it wouldn’t imply a universally compelling argument.
In other words, this:
Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/algorithm to assess if any given statement is true. Let P be a statement. Then there exists some argument, A, contingent on C such that A implies P or ~P. Thus for all P we can know if P or ~P.
This would make A universally compelling, i.e. A is a mind-independent argument for the truth value of all statements that would convince even rocks.
appears to be a total non sequitur. How does the existence of an algorithm enable you to convince a rock of anything? At a minimum, an algorithm needs to be implemented on a computer… Your statement, and therefore your conclusion that C doesn’t exist, doesn’t follow at all.
(Note: In this comment, I am not claiming that C (as you’ve defined it) exists, or agreeing that it needs to exist for any of my criticisms to hold.)
It seems that you don’t get it. Said just demonstrated that even if C exists it wouldn’t imply a universally compelling argument.
So what? Neither the existence nor the non-existence of a Criterion of Truth that is persuasive to our minds is implied by the (non-)existence of universally compelling arguments. The issue of universally compelling arguments is a red herring.
See my other comment, but assuming to know something about how to compute C would just already be part of C by definition. It’s very hard to talk about the criterion of truth without accidentally saying something that implies it’s not true because it’s an unknowable thing we can’t grasp onto. C is basically a statement that, if included in a valid argument about the truth of P, causes the argument to tell us either P or ~P. That’s definitionally what it means to be able to know the criterion of truth.
That you want to deny C is great, because I think (as I’m finding with Said) that we already agree, and any disagreement is the consequence of misunderstanding, probably because it comes too close to sounding to you like a position that I would also reject, and the rest of the fundamental disagreement is one of sentiment, perspective, having worked out the details, and emphasis.
C is basically a statement that, if included in a valid argument about the truth of P, causes the argument to tell us either P or ~P. That’s definitionally what it means to be able to know the criterion of truth.
That’s not how algorithms work and seems… incoherent.
That you want to deny C is great,
I did not say that either.
because I think (as I’m finding with Said) that we already agree, and any disagreement is the consequence of misunderstanding, probably because it comes too close to sounding to you like a position that I would also reject, and the rest of the fundamental disagreement is one of sentiment, perspective, having worked out the details, and emphasis.
No, I don’t think we do agree. It seems to me you’re deeply confused about all of this stuff.
Here’s an exercise: Say that we replace “C” by a specific concrete algorithm. For instance the elementary long multiplication algorithm used by primary school children to multiply numbers.
Does anything whatsoever about your argument change with this substitution? Have we proved that we can explain multiplication to a rock? Or perhaps we’ve proved that this algorithm doesn’t exist, and neither do schools?
Another exercise: suppose, as a counterfactual, that Laplace’s demon exists, and furthermore likes answering questions. Now we can take a specific algorithm C: “ask the demon your question, and await the answer, which will be received within the minute”. By construction this algorithm always returns the correct answer. Now, your task is to give the algorithm, given only these premises, that I can follow to convince a rock that Euclid’s theorem is true.
Given that I still think after all this trying that you are confused and that I never wanted to put this much work into the comments on this post, I give up trying to explain further as we are making no progress. I unfortunately just don’t have the energy to devote to this right now to see it through. Sorry.
The counterfactual I’m proposing with C is exactly one that would allow not just any mind, but literally anything at all to comprehend A. The existence of C would create a universe wholly unlike our own, which is why I think we’re all in agreement that the existence of such a thing is extremely unlikely even though we can’t formally prove that it doesn’t exist.
Ok, this is… far weirder than anything I thought you had in mind when you talked about the “knowability of the criterion of truth”. As far as I can tell, this scenario is… incoherent. Certainly it’s extremely bizarre. I guess you agree with that part, at least.
But… what is it that you think the non-reality of this scenario implies? How do you get from “our universe is not, in fact, at all like this bizarre possibly-incoherent hypothetical scenario” to… anything about rationality, in our universe?
Well, if you don’t have C, then you have to build up truth some other way: you don’t have the ability to ground yourself directly in it, because truth exists in the map rather than the territory. So then you are left to ground yourself in what you do find in the territory, and I’d describe the thing you find there as telos or will rather than truth, because it doesn’t really look like truth. Truth is a thing we have to create for ourselves rather than extract. The rest follows from that.
Sorry, I mean to say “A is a mind-independent argument for the truth value of P and there exists by our construction such an A for all P that would convince even rocks”.
How would you convince rocks?! What in the world does that have to do with there existing or not existing some observable procedure that shows whether something is true?
First, you defined C, a.k.a. the “criterion of truth”, like this:
Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/algorithm to assess if any given statement is true.
Ok, that’s only mildly impossible, let’s see where this leads us…
But then, you say:
The counterfactual I’m proposing with C is exactly one that would allow not just any mind, but literally anything at all to comprehend A. The existence of C would create a universe wholly unlike our own, which is why I think we’re all in agreement that the existence of such a thing is extremely unlikely even though we can’t formally prove that it doesn’t exist.
Why should the thing you defined in the first quote, lead to anything even remotely resembling the second quote? There is no reason, as far as I can tell; the latter quote just adds extremely impossible magic, out of nowhere and for no reason.
There is no reason, as far as I can tell; the latter quote just adds extremely impossible magic, out of nowhere and for no reason.
I’m saying the thing in the first quote, saying C exists, is the extremely impossible magic. I guess I don’t know how to convey this part of the argument any more clearly, as it seems to me to follow directly and objections I can think of to it hinge on assuming things you would know contingent on what you think about C and thus are not admissible here.
Maybe it would help if I gave an example? Let’s say C exists. Okay, great, now we can tell if things are true independent of any mind, since C is a real fact of the world, not a belief (it’s part of the territory). Now I can establish as a matter of fact (or rather we have no way to express this correctly, but the fact can be established independent of any subject) whether or not the sky is blue, independent of any observer, because there is an argument contingent on C which tells us whether the statement “the sky is blue” is true or false. Now this statement is true or false in the territory, and not necessarily in any map. We’d say this is a realist position rather than an anti-realist one. This would have to mean, then, that this fact would be true for anything we might treat as a subject of which we could ask “does X know the fact of the matter about whether or not the sky is blue”. Thus we could ask if a rock knows whether or not the sky is blue, and it would be a meaningful question about a matter of fact, and not a category error like it is when we deny the knowability of C, because then we have taken an anti-realist position. This is what I’m trying to say about saying there are universally compelling arguments if we assume C: the truth of matters then shifts from existing in the map to existing in the territory, and so now there can be universally compelling arguments for things that are true; even if the subject is too dumb to understand them, they will still be true for them regardless.
I’m not sure that helps but that’s the best I can think up right now.
I’m also a bit confused about your definition of C.
Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/algorithm to assess if any given statement is true.
Suppose there exists a special magic eight ball that shows the word “true” or “false” when you shake it after making any statement, and that it always gives the correct answer.
Would you agree that use of this special magic eight ball represents a “procedure/algorithm to assess if any given statement is true”, and so anyone who knows how to use the magic eight ball knows the criterion of truth?
If so, I don’t see how you get from there to saying that a rock must be convinced, or really that anyone must therefore be convinced of anything.
Just because there exists a procedure for assessing truth (absolutely correctly), doesn’t therefore mean that everyone uses that procedure, right?
Suppose that Alice has never seen nor heard of the magic eight ball, and does not know it exists. Just the fact that it exists doesn’t imply anything about her state of mind, does it?
Was there supposed to be some part of the definition of C that my magic eight ball story doesn’t capture, which implies that it represents a universally compelling argument?
Just being able to give the correct answer to any yes/no question does not seem like it’s enough to be universally compelling.
EDIT: If the hypothetical was not A) “there exists… a procedure to (correctly) assess if any given statement is true”, but rather B) “every mind has access to and in fact uses a procedure that correctly assesses if any given statement is true”, then I would agree that the hypothetical implies universally compelling arguments.
Do you mean to be supposing B rather than A when you talk about the hypothetical criterion of truth?
Do you think about distances in Metric or Imperial units? Both are equally true, so probably in whichever units you happen to be more fluent in.
Do you use Newtonian mechanics or full relativity for calculating the motion of some object? Relativity is more true, but sometimes the simpler model is good enough and easier to calculate, so it may be better for the situation.
These seem like silly examples to me.
I think about distances in Imperial units, but it seems very weird, inaccurate, and borderline absurd to describe me as believing the Imperial system to be “true”, or “more true”, or believing the metric system to be “not true” or “false” or “less true”. None of those make any sense as descriptions of what I believe. Frankly, I don’t understand how you can suggest otherwise.
Similarly, it is a true fact that Newtonian mechanics allows me to calculate the motion of objects, in certain circumstances (i.e., intermediate-scale situations / phenomena), to a great degree of accuracy, but that relativity will give a more accurate result, at the cost of much greater difficulty in calculation. This is a fact which I believe to be true. Describing Relativity as being “more true” is odd.
Do you consider your romantic partner a wonderful person who you love dearly and want to be happy, or someone who does various things that benefit you, in exchange for you doing various things that benefit them? Both are true, but the former framing is probably one that will make for a happier and more meaningful relationship.
If both are true (as, indeed, they are, in many relationships), then this, too, seems like an odd example. Why choose? These are not in conflict. Why can’t someone be a wonderful person whom you love dearly and want to be happy, and who does various things that benefit you, in exchange for you doing various things that benefit them? I struggle to see any conflict or contradiction.
Meaning no disrespect, Kaj, but I spy a motte-and-bailey approach in these sorts of examples. The motte, of course, is “Newtonian mechanics” and so on. The bailey is “mythic mode”. To call the latter “indefensible” is an understatement.
If both are true (as, indeed, they are, in many relationships), then this, too, seems like an odd example. Why choose? These are not in conflict. Why can’t someone be a wonderful person whom you love dearly and want to be happy, and who does various things that benefit you, in exchange for you doing various things that benefit them? I struggle to see any conflict or contradiction.
That’s my point. That none of this stuff about choosing beliefs is in conflict with standard LW rationality, and that there are plenty of situations where you can just look at the world in one way or the other, and both are equally true, and you just focus on one based on whichever is the most useful for the situation. If you say that “these are not in conflict”, then yes! That is what I have been trying to say! It’s not true that this is a “poisonous philosophy”, because this is mostly just a totally ordinary thing that everyone does every day and which is totally unproblematic!
Someone might then respond, “well if it’s so ordinary, what’s this whole thing about post/metarationality being totally different from ordinary rationality, then?” Honestly, beats me. I don’t think it really is particularly different, and giving it a special label that implies that it’s anything else than just a straightforward application of ordinary rationality is just confusing matters and doing everyone a disservice. But that’s the label we seem to have ended up with.
Meaning no disrespect, Kaj, but I spy a motte-and-bailey approach in these sorts of examples. The motte, of course, is “Newtonian mechanics” and so on. The bailey is “mythic mode”. To call the latter “indefensible” is an understatement.
This is difficult to answer, because just as there are many things going under the label “rational”—some of which are decidedly less rational than others—there are also many ways in which you could think of mythic mode, even if you only limited yourself to different ways of interpreting Val’s post on the topic. Without getting deeper into that topic, I’ll just say that there are ways of interpreting mythic mode which I think are perfectly in line with the kinds of examples I’ve been giving in the comments of this post, and also ways of interpreting it which are not and which are just crazy.
none of this stuff about choosing beliefs is in conflict with standard LW rationality
What do you mean, “choosing beliefs”? The bit of my comment that you quoted said nothing about choosing beliefs. The situation I describe doesn’t seem to require “choosing beliefs”. You just believe what is, to the best of your ability to discern, true. That’s all. What “choosing” is there?
Someone might then respond, “well if it’s so ordinary, what’s this whole thing about post/metarationality being totally different from ordinary rationality, then?” Honestly, beats me. I don’t think it really is particularly different, and giving it a special label that implies that it’s anything else than just a straightforward application of ordinary rationality is just confusing matters and doing everyone a disservice. But that’s the label we seem to have ended up with.
Maybe what you’re talking about is different from what everyone else who is into “postrationality”, or what have you, is talking about?
Without getting deeper into that topic, I’ll just say that there are ways of interpreting mythic mode which I think are perfectly in line with the kinds of examples I’ve been giving in the comments of this post
But… I think that your examples are examples of the wrong way to think about things… “crazy” is probably an overstatement for your comments (as opposed to those of some other people), but “wrong” does not seem to be…
“Someone might then respond, “well if it’s so ordinary, what’s this whole thing about post/metarationality being totally different from ordinary rationality, then?” Honestly, beats me. I don’t think it really is particularly different, and giving it a special label that implies that it’s anything else than just a straightforward application of ordinary rationality is just confusing matters and doing everyone a disservice. But that’s the label we seem to have ended up with.”
Maybe what you’re talking about is different from what everyone else who is into “postrationality”, or what have you, is talking about?
(sorry; I can’t seem to nest blockquotes in the comments; that’s the best I could do)
For myself I find this point is poorly understood by most self-identified rationalists, and I think most people reading the sequences come out of them as positivists because Eliezer didn’t hammer the point home hard enough and positivism is the default within the wider community of rationality-aligned folks (e.g. STEM folks). I wish all this disagreement were just a simple matter of politics over who gets to use what names, but it’s not, because there’s a real disagreement over epistemology. Given that “rationality” was always a term that was bound to get conflated with the rationality of high modernism, it’s perhaps not surprising that those of us who got fed up with the positivists ended up giving ourselves a new name.
This is made all the more complicated because Eliezer does specifically call out positivism as a failure mode, so it makes pinning people down on this all the more tricky because they can just say “look, Eliezer said rationality is not this”. As the responses to this post make clear, though, the positivist streak is alive and well in the LW community given what I read as a strong reaction against the calling out of positivism or for that matter privileging any particular leap of faith (although positivists don’t necessarily think of themselves as doing that because they disagree with the premise that we can’t know the criterion of truth). So this all leads me to the position that we have need of a distinction for now because of our disagreement on this fundamental issue that has many effects on what is and is not considered to be useful to our shared pursuits.
For myself I find this point is poorly understood by most self-identified rationalists, and I think most people reading the sequences come out of them as positivists because Eliezer didn’t hammer the point home hard enough and positivism is the default within the wider community of rationality-aligned folks (e.g. STEM folks).
Maybe so, but I can’t help noticing that whenever I try to think of concrete examples about what postrationality implies in practice, I always end up with examples that you could just as well justify using the standard rationalist epistemology. E.g. all my examples in this comment section. So while I certainly agree that the postrationalist epistemology is different from the standard rationalist one, I’m having difficulties thinking of any specific actions or predictions that you would really need the postrationalist epistemology to justify. Something like the criterion of truth is a subtle point which a lot of people don’t seem to get, yes, but it also feels like one where it doesn’t make any practical difference whether you get it or not. And theoretical points which people can disagree a lot about despite not making any practical difference are almost the prototypical example of tribal labels. John Tooby:
The more biased away from neutral truth, the better the communication functions to affirm coalitional identity, generating polarization in excess of actual policy disagreements. Communications of practical and functional truths are generally useless as differential signals, because any honest person might say them regardless of coalitional loyalty. In contrast, unusual, exaggerated beliefs—such as supernatural beliefs (e.g., god is three persons but also one person), alarmism, conspiracies, or hyperbolic comparisons—are unlikely to be said except as expressive of identity, because there is no external reality to motivate nonmembers to speak absurdities.
(sorry; I can’t seem to nest blockquotes in the comments; that’s the best I could do)
Not related to your points, but re: blockquotes and nesting, try the GreaterWrong editor; you can select some text and click the blockquote button, then select text (including the blockquoted) and click blockquote again, etc., and it’ll nest it properly for you.
Well:
Do you think about distances in Metric or Imperial units? Both are equally true, so probably whichever units you happen to be more fluent in.
Do you use Newtonian mechanics or full relativity for calculating the motion of some object? Relativity is more true, but sometimes the simpler model is good enough and easier to calculate, so it may be better for the situation. (A small numerical sketch of just how good “good enough” is at everyday speeds follows after these examples.)
Do you consider your romantic partner a wonderful person who you love dearly and want to be happy, or someone who does various things that benefit you, in exchange for you doing various things that benefit them? Both are true, but the former framing is probably one that will make for a happier and more meaningful relationship.
You talk about “clever reasoning” that “makes your beliefs less accurate”, but as these examples should hopefully demonstrate, at any given time there are an infinite number of more-or-less true ways of looking at some situation—and when we need to choose between several ways of framing the situation which are equally true, we always end up choosing one or the other based on its usefulness. If we didn’t, it would be impossible to function, since there’d be no criterion for choosing between them. (And sometimes we go with the approximation that’s less strictly true, if it’s good enough for the situation; that is, if it’s more useful to go with it.) That’s the 20%.
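To make the Newtonian-mechanics example concrete, here is a minimal sketch (my own illustration, not part of the original comment; the example speeds are arbitrary) that computes the relativistic Lorentz factor at a few speeds. At everyday speeds it is indistinguishable from 1, which is exactly the sense in which the “less true” Newtonian model is usually the more useful one to reach for:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v: float) -> float:
    """Relativistic correction factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Gamma is the factor by which relativistic predictions (time dilation,
# kinetic energy, etc.) depart from the Newtonian ones.
for label, v in [
    ("walking (1.5 m/s)", 1.5),
    ("airliner (250 m/s)", 250.0),
    ("Earth's orbital speed (~30 km/s)", 3.0e4),
    ("half the speed of light", 0.5 * C),
]:
    print(f"{label}: gamma = {lorentz_factor(v):.12f}")
```

For the first three cases gamma differs from 1 only in the ninth decimal place or later; only the last one shows a correction worth caring about.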
This stuff about rain dancing seems like just the most banal epistemological trivialities, which have already been dealt with thoroughly in the Sequences. The reasons why such “tests” of rain dancing don’t work are well known and don’t need to be recapitulated here.
This has nothing to do with causal pathways, magic or otherwise, direct or otherwise. Magic would not turn a rock into a philosopher even if it should exist.
Yes, carrying out experiments to determine reality relies on Occam’s razor. It relies on Occam’s razor being true. It does not in any way rely on me possessing some magical universally compelling argument for Occam’s razor. Because Occam’s razor is in fact true in our universe, experiment does in fact work, and thus the causal pathway for evaluating our models does in fact exist: experiment and observation (and Bayesian statistics).
I’m going to stress this point because I noticed others in this thread making this seemingly elementary map-territory confusion earlier (though I didn’t comment on it there). In fact it seems to me now that conflating these things may actually be the entire source of this debate: “Occam’s razor is true” is an entirely different thing from “I have access to universally compelling arguments for Occam’s razor”, as different as a raven and the abstract concept of corporate debt. The former is true and useful and relevant to epistemology. The latter is false, impossible and useless.
Because the former is true, when I say “in fact, there is a causal pathway to evaluate our models: looking at reality and doing experiments”, what I say is, in fact, true. The process in fact works. It can even be carried out by a suitably programmed robot with no awareness of what Occam’s razor or “truth” even is. No appeals or arguments about whether universally compelling arguments for Occam’s razor exist can change that fact.
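As a minimal sketch of what such a robot might look like (my own illustration; the 0.9 and 0.7 rain probabilities and the 50/50 prior are made-up numbers chosen purely for the example), here is a mechanical Bayesian comparison of the “dance causes rain” model against the “it’s just the rainy season” model, given the seven-rainy-days-out-of-ten observation, with no representation of “truth” anywhere in the program:

```python
def posterior_h1(prior_h1: float, p_rain_h1: float, p_rain_h2: float,
                 rainy_days: int, dry_days: int) -> float:
    """Posterior probability of hypothesis H1 after the observed days."""
    like_h1 = p_rain_h1 ** rainy_days * (1 - p_rain_h1) ** dry_days
    like_h2 = p_rain_h2 ** rainy_days * (1 - p_rain_h2) ** dry_days
    joint_h1 = prior_h1 * like_h1
    joint_h2 = (1 - prior_h1) * like_h2
    return joint_h1 / (joint_h1 + joint_h2)

# H1: "the dance makes rain very likely"  (assumed P(rain | dance) = 0.9)
# H2: "it's just the rainy season"        (assumed P(rain)         = 0.7)
# Observation: it rained on 7 of the 10 days the dance was performed.
p = posterior_h1(prior_h1=0.5, p_rain_h1=0.9, p_rain_h2=0.7,
                 rainy_days=7, dry_days=3)
print(f"P(dance causes rain | data) = {p:.3f}")  # comes out well below 0.5
```

The update runs the same way whatever the inputs are; nothing in it needs to “know” what Occam’s razor or truth is.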
(Why am I so lucky as to be a mind whose thinking relies on Occam’s razor in a world where Occam’s razor is true? Well, animals evolved via natural selection in an Occamian world, and those whose minds were more fit for that world survived...)
But honestly, I’m just regurgitating Where Recursive Justification Hits Bottom at this point.
This seems like a gross oversimplification to me. The mind is a complex dynamical system made of locally reinforcement-learning components, which doesn’t do any one thing all the time.
And this seems simply wrong. You might as well say “epistemic rationality and chemical action-potentials were the same all along”. Or “jumbo jets and sheets of aluminium were the same all along”. A jumbo jet might even be made out of sheets of aluminium, but a randomly chosen pile of the latter sure isn’t going to fly.
As for your examples, I don’t have anything to add to Said’s observations.
Obviously. Which is why I said that the point was not any of the specific arguments in that debate—they were totally arbitrary and could just as well have been two statisticians debating the validity of different statistical approaches—but the fact that any two people can disagree about anything in the first place, as they have different models of how to interpret their observations.
This is very close to the distinction that I have been trying to point at; thank you for stating it more clearly than I managed to. The way that I’d phrase it is that there’s a difference between considering a claim to be true, and considering its justification universally compelling.
It sounds like you have been interpreting me to say something like “Occam’s Razor is false because its justification is not universally compelling”. That is not what I have been trying to say. Rather, my claim has been “we can consider Occam’s Razor true despite its justification not being universally compelling, but because there are no universally compelling justifications, we should keep trying out different justifications and seeing whether there are any that would seem to work even better”.
If you say “but that’s totally in line with ‘Where Recursive Justification Hits Bottom’ and the standard LW canon...” then yes, it is. That’s my point. Especially since ‘Recursive Justification’ also says that we should just decide to believe in Occam’s Razor, since it doesn’t seem particularly useful to do otherwise, and because practically speaking, we don’t have any better alternative:
As for my story about how the brain works: yes, it is obviously a vast simplification. That does not make it false, especially given that “the brain learns to use what has worked before and what it thinks is likely to make it win in the future” is exactly what Eliezer is advocating in the above post.
But what Eliezer also advocates in that post, is not elevating any rule—Occam’s Razor included—into an unquestioned axiom, but to keep questioning even that, if you can:
I would say that there exist two kinds of metarationality: weak and strong. Weak metarationality is simply compatible with standard LW rationality, because of things like framing effects and self-fulfilling beliefs, as I have been arguing in other comments. But because the standard canon has given the impression that truth should be the only criterion for beliefs, and has missed the fact that there are plenty of beliefs that one can choose without violating Occam’s Razor, this seems “metarational” and weird. Arguably this shouldn’t be called meta/postrationality in the first place, because it’s just standard rationality.
One way to phrase strong metarationality might be: classic LW rationality is what you get when you take a specific set of axioms as your starting point, and build on top of that. Metarationality is what you get when you acknowledge that this does indeed seem like the right thing to do most of the time, but that we should also be willing (as Eliezer advocates above) to question that, and try out different starting axioms as well, to see whether there are any that would be even better.
In my experience, the usefulness of strong metarationality isn’t that it points to basic axioms better than LW’s standard ones—if such axioms exist, I haven’t found them, and the standard assumptions continue to be the most useful ones. But what does make it somewhat useful is that when you practice questioning everything, and e.g. distinguishing between “Occam’s Razor is true” and “I have assumed Occam’s Razor to be true because that seems useful”, then that helps in catching assumptions which don’t fall directly out of the standard axioms, and which you’ve just assumed to be true without good justification.
E.g. “my preferred system of government is the best one” is a belief that should logically be assigned much lower confidence than “Occam’s Razor is true”; but the brain only has limited precision in assigning credence values to claims. So most people have beliefs which are more like the government one than the Occam’s Razor one, despite being assigned a similar level of credence to the Occam’s Razor one. By questioning and testing even beliefs which are like Occam’s Razor, one can end up questioning and revising beliefs which actually should be questioned, and which one might never have questioned otherwise. This is valuable even if the Occam’s Razor-like beliefs survive that questioning unscathed—but the exercise does not work unless one actually does make a serious attempt to question them.
I’ll have more to say later but:
Both of these are different from the claim actually being true. The fact that Occam’s razor is true is what causes the physical process of (occamian) observation and experiment to yield correct results. So you see, you’ve already managed to rephrase what I’ve been saying into something different by conflating map and territory.
Indeed, something being true is further distinct from us considering it true. But given that the whole point of metarationality is fully incorporating the consequences of realizing the map/territory distinction and the fact that we never observe the territory directly (we only observe our brain’s internal representation of the external environment, rather than the external environment itself), a rephrasing that emphasizes the way that we only ever experience the map seemed appropriate.
Even if true, this is different from “epistemic rationality is just instrumental rationality”; as different as adaptation executors are from fitness maximisers.
Separately, it’s interesting that you quote this part:
Because it seems to me that this is exactly what advocates of “postrationality” here are not doing, when they take the absence of universally compelling arguments as license to dismiss rationality and truth-based arguments against their positions.¹
Eliezer also says this:
It seems to me that applying full force in criticism of postrationality amounts to something like the below:
“Indeed, compellingness-of-story, willingness-to-life, mythic mode, and many other non-evidence-based criteria are alternative criteria which could be used to select beliefs. However we have huge amounts of evidence (catalogued in the Sequences, and in the heuristics and biases literature) that these criteria are not strongly correlated to truth, and therefore will lead you to holding wrong beliefs, and furthermore that holding wrong beliefs is instrumentally harmful, and, and [the rest of the sequences, Ethical Injunctions, etc]...”
“Meanwhile, we also have vast tracts of evidence that science works, that results derived with valid statistical methods replicate far more often than any others, and that bringing beliefs closer to truth requires accumulating evidence by observation. I would put the probability that rational methods are the best criteria I have for selecting beliefs at 1−ϵ. Hence, it seems decisively not worth it to adopt some almost certainly harmful ‘postrational’ anti-epistemology just because of that ϵ probability. In any case, per Ethical Injunctions, even if my probabilities were otherwise, it would be far more likely that I’ve made a mistake in reasoning than that adopting non-rational beliefs by such methods would be a good idea.”
Indeed, much of the Sequences could be seen as Eliezer considering alternative ways of selecting beliefs or “viewing the world”, analyzing these alternative ways, and showing that they are contrary to and inferior to rationality. Once this has been demonstrated, we call them “biases”. We don’t cling to them on the basis that “we can’t know the criterion of truth”.
Advocates of postrationality seem to be hoping that the fact that P(Occam’s razor) < 1 makes these arguments go away. It doesn’t work like that. P(Occam’s razor) = 1−ϵ at most makes ϵ of these arguments go away. And we have a lot of evidence for Occam’s razor.
¹ As gworley seems to do here and here, seemingly expecting me to provide a universally compelling argument in response.
This (among other paragraphs) is an enormous strawman of everything that I have been saying. Combined with the fact that the general tone of this whole discussion so far has felt adversarial rather than collaborative, I don’t think that I am motivated to continue any further.
It doesn’t seem to be a strawman of what eg. gworley and TAG have been saying, judging by the repeated demands for me to supply some universally compelling “criterion of truth” before any of the standard criticisms can be applied. Maybe you actually disagree with them on this point?
It doesn’t seem like applying full force in criticism is a priority for the ‘postrationality’ envisioned by the OP, either, or else they would not have given examples (compellingness-of-story, willingness-to-life) so trivial to show as bad ideas using standard arguments.
I agree with Kaj on this point; however, I also don’t think you’re intentionally trying to respond to a strawman version of what we’re presenting. What we’re arguing for hinges on what seems to be a subtle point for most people (it doesn’t feel subtle to me, but I am empathetic to technical philosophical positions being subtle to other people), so it’s easy to conflate our position with, say, postmodernist-style epistemic relativism; although our position is drastically different from that, it differs for technical reasons that may not be apparent from reading the broad strokes of what we’re saying.
I suspect what’s going on in this discussion is something like the following: me, Kaj, TAG, and others are coming from a position that is relatively small in idea space, but there are other ideas that sort-of pattern-match to it if you don’t look too closely at the details, and those are getting confused for the point we’re trying to make, so people respond to those other ideas rather than the one we’re holding. Although we’re trying our best to cut idea space such that you see the part we’re talking about, the process is inexact: I’ve pointed to it with the technical language of philosophy, but the technical language of philosophy is easily mistaken for non-technical language, since it reuses common words (physics sometimes has the same problem: you pick a word because it’s a useful metaphor but give it a technical meaning, and then people misunderstand because they think too much in terms of the metaphor and not in terms of the precise model being referred to by the word), and it requires a certain amount of fluency with philosophy in general. For example, in all the comments on this post, I think so far only jessicata has asked for clarification in a way that is clearly framed in terms of technical philosophy.
This is not necessarily to demand that you engage with technical philosophy if you don’t want to, but I suspect it is why we continue to have trouble communicating (or, if there are other reasons, it is a major one). I don’t know a way to explain these points that isn’t in that language and isn’t also easily confused for other ideas I wouldn’t endorse, though, so there may not be much of a way forward in presenting metarationality to you in a way that I would agree you understand, and that allows you to express a rejection I would consider valid (if indeed such a reason for rejection exists; if I knew one I wouldn’t hold these views!). The only other ways we have of talking about these things tend to rely much more on appeals to intuitions that you don’t seem to share, and transmitting those intuitions is a separate project from what I want to do, although Kaj’s and others’ responses do a much better job than mine of attempting that transmission.
I am sympathetic to this sort of explanation. Could you, then, note specifically which of your terms are supposed to be interpreted as technical language, and link to some definitions / explanations of them? (Can such be found on the SEP, for instance?)
Nope, this is explicitly what I wanted to avoid doing, although I note I’ve already been sucked in way deeper into this than I ever meant to be.
But… why would you want to avoid this? (Surely it’s not difficult to post a link?)
I did not ask for a universally compelling argument: you brought that in.
Trying to solve problems by referring to the Sequences has a way of leading to derailment: people match the topic at hand to whichever of Yudkowsky’s writings is least irrelevant, even if it is not relevant enough to be on the same topic.
Hmm, I think there is some kind of category error happening if you think I’m asking for universally compelling arguments, because I agree they don’t and can’t exist, as a straightforward corollary of epistemic circularity. You might feel that I am asking for them, though, because I think that assuming you know the criterion of truth, or are able to learn it, would be equivalent to saying you could find a universally compelling argument, because this is exactly the positivist stance. If you disagree, then I suspect whatever disagreement we have has become extremely esoteric, since I don’t see a natural space in which you could claim both that the criterion of truth is knowable and that there are no universally compelling arguments.
I’m not nshepperd, but:
The non-existence of universally compelling arguments has nothing to do with whether “the criterion of truth is knowable”, or “epistemic circularity”, or any other abstruse epistemic issues, or any other non-abstruse epistemic issues.
There cannot be a universally compelling argument because for any given argument, there can exist a mind which is not persuaded by it.
If it were the case that “the criterion of truth is knowable” (whatever that means), and you had what you considered to be a universally compelling argument, I could still build a mind which remains—stubbornly, irrationally (?), impenetrably—unconvinced by that argument. And that would make that argument not universally compelling after all.
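As a minimal sketch of this point (my own illustration, not anything anyone here has literally proposed): a “mind”, for present purposes, is just some procedure mapping arguments to accept/reject, and nothing stops us from writing one that rejects everything, including any candidate “universally compelling” argument:

```python
def open_minded(argument: str) -> bool:
    """A toy mind that accepts any argument containing the word 'therefore'."""
    return "therefore" in argument

def little_grey_man(argument: str) -> bool:
    """A toy mind that rejects every argument, however good it is."""
    return False

candidate_argument = ("Simpler hypotheses have fewer ways to be wrong; "
                      "therefore, prefer simpler hypotheses.")

for mind in (open_minded, little_grey_man):
    print(mind.__name__, "is convinced:", mind(candidate_argument))
```

Whatever argument we substitute for candidate_argument, the second mind stays unconvinced, which is all that the non-existence of universally compelling arguments amounts to.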
There is nothing esoteric about any of this; Eliezer explained it all very clearly in the Sequences.
This feels to me similar to saying “don’t worry about all that physics telling us we can’t travel faster than light, we have engineering reasons to think we can’t do it” as if this were a dismissal of the former when it’s in fact an expression of it. Further, Eliezer doesn’t really prove his point in that post if you want a detailed philosophical explanation of the point. Instead, as is often the case, Eliezer is smart and manages to come to a conclusion consistent with the philosophical details despite making arguments at a level where it’s not totally clear he can support the claims he’s making (which is fine because he wasn’t writing to do that, but it does make his words on the subject less relevant here because they’re talking to a different level of abstraction).
Thus, it seems that you’re just agreeing with me even if you’re talking at a different level of abstraction, but I take it from your tone you meant to disagree, so maybe you meant to press some other point that’s not clear to me from what you wrote?
The reason I cited is not an “engineering reason”; it is fundamental. It seems absurd to say that it’s “an expression of” something like “epistemic circularity”. A more apt analogy would be to computability theory. If we make some assertion in computer science, and in support of that assertion, prove that we can, or cannot, construct some particular sort of computer program, is that an “engineering reason”? Applying such a term seems tendentious, at best.
If you disagree with Eliezer’s arguments in that post, I would be interested in reading what you have to say (as would others, I am sure).
You said:
The phrasing is odd (“natural space”? “into”?), but unless there is some very odd meaning hiding behind that phrasing, what you seem to be saying is that if “the criterion of truth is knowable” then there must exist universally compelling arguments. (Because ¬(P ∧ ¬Q) ⇒ (P → Q).)
And I am saying: this is wrong and confused. If “the criterion of truth is knowable”, that has exactly zero to do with whether there exist universally compelling arguments. Criterion of truth or no criterion of truth, I can always build a mind which fails to be convinced by any given argument you propose. Therefore, any argument you propose will fail to be universally compelling.
This is what Eliezer was saying. It is very simple. If you disagree with this reasoning, do please explain why! (And in that case it would be best, I think, if you posted your disagreement as a comment to Eliezer’s post. I will, of course, gladly read it.)
So I don’t disagree with Eliezer’s post at all; I’m saying he doesn’t give a complete argument for the position. It seems to me the only point of disagreement is that you think knowability of the criterion of truth does not imply the existence of universally compelling arguments, so let me spell that out. That is, the question at issue is why it is that you can build a mind that fails to be convinced by any given argument; Eliezer only intimates this and doesn’t fully explain it.
Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/algorithm to assess if any given statement is true. Let P be a statement. Then there exists some argument, A, contingent on C such that A implies P or ~P. Thus for all P we can know if P or ~P. This would make A universally compelling, i.e. A is a mind-independent argument for the truth value of all statements that would convince even rocks.
Since it seems we’re all in agreement C does not exist, I think any disagreement we have lingering is about something other than the point I originally laid out.
Also, for what it’s worth since you bring up computability theory, knowing the criterion of truth would also imply being able to solve the halting problem since you could always answer the question “does this program halt?”.
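To spell out that reduction, here is a minimal sketch (my own illustration; the oracle is purely hypothetical and is passed in as a parameter rather than implemented): if a procedure C could correctly assess the truth of any statement, deciding halting would be a one-liner, so by Turing’s theorem no such computable procedure can exist:

```python
from typing import Callable

def would_halt(program_source: str, program_input: str,
               criterion_of_truth: Callable[[str], bool]) -> bool:
    """Decides the halting problem, given a (hypothetical) truth oracle C."""
    statement = (f"The program {program_source!r}, when run on input "
                 f"{program_input!r}, eventually halts.")
    # If C really assessed the truth of arbitrary statements, this one call
    # would settle an undecidable question.
    return criterion_of_truth(statement)
```

The undecidability of halting then tells us that criterion_of_truth, whatever else it might be, cannot be an ordinary runnable algorithm.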
(Also, I love the irony that I may fail to convince you because no argument is universally compelling!)
But of course it wouldn’t. What? This seems completely unrelated to compellingness (universal or otherwise). I have but to build a mind that does not implement the procedure in question, or doesn’t implement it for some specific argument(s), or does implement it but then someone reverses it (cf. Eliezer’s “little grey man”), etc.
There is no such thing as a “mind-independent argument for” anything. That, too, was Eliezer’s point.
For example, suppose C exists. However, it is then an open question whether I believe that C exists. How might I come to believe this? Perhaps I might be presented with an argument for C’s existence. I might find this argument compelling, or not. This is dependent on my mind—i.e., both on my mind existing, and on various specific properties of my mind (such as implementing modus ponens).
And who is doing this attempted convincing? Well, perhaps you are. You believe (in this hypothetical scenario) that C exists. And how did you come to believe this? Whatever the chain of causality was that led to this state of affairs, it could only be very much dependent on various properties of your mind.
Again, a “mind-independent argument” for anything is a nonsensical concept. Who is arguing, and with whom? Who is trying to convince whom? Without minds, the very concept of there being arguments, and those arguments being compelling or not compelling, is meaningless.
But he does. He explains it very clearly and explicitly! Building a mind that behaves in some specific way in some specific circumstance(s) is all that’s required. Simply build a mind that, when presented with argument A, finds that argument unconvincing. (Again, see the “little grey man” section.) That is all.
Yes, exactly, you get it. I’m not sure what confusion remains, or what confusion you think remains. The only remaining point seems to be here:
The counterfactual I’m proposing with C is exactly one that would allow not just any mind, but literally anything at all to comprehend A. The existence of C would create a universe wholly unlike our own, which is why I think we’re all in agreement that the existence of such a thing is extremely unlikely even though we can’t formally prove that it doesn’t exist.
It seems that you don’t get it. Said just demonstrated that even if C exists it wouldn’t imply a universally compelling argument.
In other words, this:
appears to be a total non sequitur. How does the existence of an algorithm enable you to convince a rock of anything? At a minimum, an algorithm needs to be implemented on a computer… Your statement, and therefore your conclusion that C doesn’t exist, doesn’t follow at all.
(Note: In this comment, I am not claiming that C (as you’ve defined it) exists, or agreeing that it needs to exist for any of my criticisms to hold.)
So what? Neither the existence nor the non-existence of a Criterion of Truth that is persuasive to our minds is implied by the (non-)existence of universally compelling arguments. The issue of universally compelling arguments is a red herring.
See my other comment, but knowing something about how to compute C would already just be part of C, by definition. It’s very hard to talk about the criterion of truth without accidentally saying something that implies it’s not true, because it’s an unknowable thing we can’t grasp onto. C is basically a statement that, if included in a valid argument about the truth of P, causes the argument to tell us either P or ~P. That’s definitionally what it means to be able to know the criterion of truth.
That you want to deny C is great, because I think (as I’m finding with Said) that we already agree, and any disagreement is the consequence of misunderstanding, probably because my position comes too close to sounding to you like a position that I would also reject; the rest of the fundamental disagreement is one of sentiment, perspective, having worked out the details, and emphasis.
That’s not how algorithms work and seems… incoherent.
I did not say that either.
No, I don’t think we do agree. It seems to me you’re deeply confused about all of this stuff.
Here’s an exercise: Say that we replace “C” by a specific concrete algorithm. For instance the elementary long multiplication algorithm used by primary school children to multiply numbers.
Does anything whatsoever about your argument change with this substitution? Have we proved that we can explain multiplication to a rock? Or perhaps we’ve proved that this algorithm doesn’t exist, and neither do schools?
Another exercise: suppose, as a counterfactual, that Laplace’s demon exists, and furthermore likes answering questions. Now we can take a specific algorithm C: “ask the demon your question, and await the answer, which will be received within the minute”. By construction this algorithm always returns the correct answer. Now, your task is to give the algorithm, given only these premises, that I can follow to convince a rock that Euclid’s theorem is true.
Given that I still think after all this trying that you are confused and that I never wanted to put this much work into the comments on this post, I give up trying to explain further as we are making no progress. I unfortunately just don’t have the energy to devote to this right now to see it through. Sorry.
Ok, this is… far weirder than anything I thought you had in mind when you talked about the “knowability of the criterion of truth”. As far as I can tell, this scenario is… incoherent. Certainly it’s extremely bizarre. I guess you agree with that part, at least.
But… what is it that you think the non-reality of this scenario implies? How do you get from “our universe is not, in fact, at all like this bizarre possibly-incoherent hypothetical scenario” to… anything about rationality, in our universe?
Well, if you don’t have C, then you have to build up truth some other way: you don’t have the ability to ground yourself directly in it, because truth exists in the map rather than the territory. So then you are left to ground yourself in what you do find in the territory, and I’d describe the thing you find there as telos or will rather than truth, because it doesn’t really look like truth. Truth is a thing we have to create for ourselves rather than extract. The rest follows from that.
Sorry, I mean to say “A is a mind-independent argument for the truth value of P and there exists by our construction such an A for all P that would convince even rocks”.
How would you convince rocks?! What in the world does that have to do with there existing or not existing some observable procedure that shows whether something is true?
How would you tell if you had convinced a rock of something? Why is it important whether or not you can convince a rock of something?
Eliezer uses “convincing a rock” as a self-evidently absurd reductio, but it sounds like you don’t actually see it that way?
Yep, I agree, which is why I point it out as something absurd that would be true if the counterfactual existence of C were instead factual.
But you’ve done a sleight of hand!
First, you defined C, a.k.a. the “criterion of truth”, like this:
Ok, that’s only mildly impossible, let’s see where this leads us…
But then, you say:
Why should the thing you defined in the first quote, lead to anything even remotely resembling the second quote? There is no reason, as far as I can tell; the latter quote just adds extremely impossible magic, out of nowhere and for no reason.
I’m saying that the thing in the first quote, saying C exists, is the extremely impossible magic. I guess I don’t know how to convey this part of the argument any more clearly, as it seems to me to follow directly, and the objections to it that I can think of hinge on assuming things you could only know contingent on what you think about C, and thus are not admissible here.
Maybe it would help if I gave an example? Let’s say C exists. Okay, great, now we can tell if things are true independent of any mind, since C is a real fact of the world, not a belief (it’s part of the territory). Now I can establish as a matter of fact (or rather, we have no way to express this correctly, but the fact can be established independent of any subject) whether or not the sky is blue, independent of any observer, because there is an argument contingent on C which tells us whether the statement “the sky is blue” is true or false. This statement is then true or false in the territory, and not necessarily in any map. We’d say this is a realist position rather than an anti-realist one. This would have to mean that this fact would be true for anything we might treat as a subject of which we could ask “does X know the fact of the matter about whether or not the sky is blue”. Thus we could ask if a rock knows whether or not the sky is blue, and it would be a meaningful question about a matter of fact, and not a category error like it is when we deny the knowability of C, because then we have taken an anti-realist position. This is what I’m trying to say about there being universally compelling arguments if we assume C: the truth of matters then shifts from existing in the map to existing in the territory, and so now there can be universally compelling arguments for things that are true; even if the subject is too dumb to understand them, they will still be true for it regardless.
I’m not sure that helps but that’s the best I can think up right now.
I’m also a bit confused about your definition of C.
Suppose there exists a special magic eight ball that shows the word “true” or “false” when you shake it after making any statement, and that it always gives the correct answer.
Would you agree that use of this special magic eight ball represents a “procedure/algorithm to assess if any given statement is true”, and so anyone who knows how to use the magic eight ball knows the criterion of truth?
If so, I don’t see how you get from there to saying that a rock must be convinced, or really that anyone must therefore be convinced of anything.
Just because there exists a procedure for assessing truth (absolutely correctly), doesn’t therefore mean that everyone uses that procedure, right?
Suppose that Alice has never seen nor heard of the magic eight ball, and does not know it exists. Just the fact that it exists doesn’t imply anything about her state of mind, does it?
Was there supposed to be some part of the definition of C that my magic eight ball story doesn’t capture, which implies that it represents a universally compelling argument?
Just being able to give the correct answer to any yes/no question does not seem like it’s enough to be universally compelling.
EDIT: If the hypothetical was not A) “there exists… a procedure to (correctly) assess if any given statement is true”, but rather B) “every mind has access to and in fact uses a procedure that correctly assesses if any given statement is true”, then I would agree that the hypothetical implies universally compelling arguments.
Do you mean to be supposing B rather than A when you talk about the hypothetical criterion of truth?
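A minimal sketch of the distinction being drawn here (my own illustration; the set of “true facts” and Alice’s rule are, of course, toy stand-ins): the eight ball can exist as a perfectly reliable procedure, and a mind that never consults it is left exactly as unconvinced as before, which is why scenario A implies nothing about universal compellingness:

```python
TRUE_FACTS = {"the sky is blue"}  # stand-in for how the world actually is

def magic_eight_ball(statement: str) -> bool:
    """Hypothetical infallible oracle: correctly reports whether a statement is true."""
    return statement in TRUE_FACTS

def alice_believes(statement: str) -> bool:
    """Alice has never heard of the eight ball; she just goes by her own rule."""
    return False  # say she believes nothing she hasn't checked herself

statement = "the sky is blue"
print("Oracle says:", magic_eight_ball(statement))      # True
print("Alice believes it:", alice_believes(statement))  # False
```

Only scenario B, in which every mind actually routes its beliefs through the oracle, would get you something like universally compelling arguments.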
These seem like silly examples to me.
I think about distances in Imperial units, but it seems very weird, inaccurate, and borderline absurd to describe me as believing the Imperial system to be “true”, or “more true”, or believing the metric system to be “not true” or “false” or “less true”. None of those make any sense as descriptions of what I believe. Frankly, I don’t understand how you can suggest otherwise.
Similarly, it is a true fact that Newtonian mechanics allows me to calculate the motion of objects, in certain circumstances (i.e., intermediate-scale situations / phenomena), to a great degree of accuracy, but that relativity will give a more accurate result, at the cost of much greater difficulty in calculation. This is a fact which I believe to be true. Describing Relativity as being “more true” is odd.
If both are true (as, indeed, they are, in many relationships), then this, too, seems like an odd example. Why choose? These are not in conflict. Why can’t someone be a wonderful person whom you love dearly and want to be happy, and who does various things that benefit you, in exchange for you doing various things that benefit them? I struggle to see any conflict or contradiction.
Meaning no disrespect, Kaj, but I spy a motte-and-bailey approach in these sorts of examples. The motte, of course, is “Newtonian mechanics” and so on. The bailey is “mythic mode”. To call the latter “indefensible” is an understatement.
That’s my point: that none of this stuff about choosing beliefs is in conflict with standard LW rationality, and that there are plenty of situations where you can just look at the world in one way or another, both being equally true, and you just focus on one based on whichever is the most useful for the situation. If you say that “these are not in conflict”, then yes! That is what I have been trying to say! It’s not true that this is a “poisonous philosophy”, because this is mostly just a totally ordinary thing that everyone does every day and which is totally unproblematic!
Someone might then respond, “well if it’s so ordinary, what’s this whole thing about post/metarationality being totally different from ordinary rationality, then?” Honestly, beats me. I don’t think it really is particularly different, and giving it a special label that implies that it’s anything else than just a straightforward application of ordinary rationality is just confusing matters and doing everyone a disservice. But that’s the label we seem to have ended up with.
This is difficult to answer, because just as there are many things going under the label “rational”—some of which are decidedly less rational than others—there are also many ways in which you could think of mythic mode, even if you only limited yourself to different ways of interpreting Val’s post on the topic. Without getting deeper into that topic, I’ll just say that there are ways of interpreting mythic mode which I think are perfectly in line with the kinds of examples I’ve been giving in the comments of this post, and also ways of interpreting it which are not and which are just crazy.
What do you mean, “choosing beliefs”? The bit of my comment that you quoted said nothing about choosing beliefs. The situation I describe doesn’t seem to require “choosing beliefs”. You just believe what is, to the best of your ability to discern, true. That’s all. What “choosing” is there?
Maybe what you’re talking about is different from what everyone else who is into “postrationality”, or what have you, is talking about?
But… I think that your examples are examples of the wrong way to think about things… “crazy” is probably an overstatement for your comments (as opposed to those of some other people), but “wrong” does not seem to be…