All done.
StephenR
Believing that a theory is true that says “true” is not a thing theories can be is obviously silly.
Oh okay. This is a two-part misunderstanding.
I’m not saying that theories can’t be true, I’m just not talking about this truth thing in my meta-model. I’m perfectly a-okay with models of truth popping up wherever they might be handy, but I want to taboo the intuitive notion and refuse to explicate it. Instead I’ll rely on other concepts to do much of the work we give to truth, and see what happens. And if there’s work that they can’t do, I want to evaluate whether it’s important to include in the meta-model or not.
I’m also not saying that my theory is true. At least, not when I’m talking from within the theory. Perhaps I’ll find certain facets of the correspondence theory useful for explaining things or convincing others, in which case I might claim it’s true. My epistemology is just as much a model as anything else, of course; I’m developing it with certain goals in mind.
I was talking about physical systems, not physical laws. Computers, living cells, atoms, the fluid dynamics of the air… “Applied successfully in many cases”, where “many” is “billions of times every second”.
The math we use to model computation is a model and a tool just as much as computers are tools; there’s nothing weird (at least from my point of view) about models being used to construct other tools. Living cells can be modeled successfully with math, you’re right; but that again is just a model. And atoms are definitely theoretical constructs used to model experiences, the persuasive images of balls or clouds they conjure notwithstanding. Something similar can be said about fluid dynamics.
I don’t mean any of this to belittle models, of course, or make them seem whimsical. Models are worth taking seriously, even if I don’t think they should be taken literally.
Then ZFC is not one of those core ones, just one of the peripheral ones. I’m talking about ones like set theory as a whole, or arithmetic, or Turing machines.
The best example of the three is definitely arithmetic; the other two aren’t convincing. Math was done without set theory for ages, and besides, we have other foundations available for modern math that can be formulated entirely without talking about sets. Turing machines can be replaced with logical systems like the lambda calculus, or with other machine models like register machines.
Arithmetic is more compelling, because it’s very sticky. It’s hard not to take it literally, and it’s hard to imagine things without it. This is because some of the ideas it constitutes are in the core cluster of our categories. But could you imagine that some agent might a) have goals that never require arithmetical concepts, and b) use non-arithmetical models toward some of the same goals for which we use arithmetic? I can imagine … visualise, actually … both, although I would have a very hard time translating my visual into text without going very meta first, or else writing a ridiculously long post.
Many of them render considering them not true pointless, in the sense that all my reasoning and senses are invalid if they don’t hold, so I might as well give up and save computing time by conditioning on them being true.
I call these sorts of models sticky, in the sense that they are pervasive in our perception and categorisation. Sticky categories are the sort of thing that we have a hard time not taking literally. I haven’t gone into any of this yet, of course, but I like it when comments anticipate ideas and continue trains of thought.
Maybe a short run-long run model would be good to illustrate this stickiness. In the short run, perception is fixed; this also fixes certain categories, and the “degree of stickiness” that different categories have. For example, chair is remarkably hard to get rid of, whereas “corpuscle” isn’t quite as sticky. In the long run, when perception is free, no category needs to be sticky. At least, not unless we come up with a more restrictive model of possible perceptions. I don’t think that such a restrictive model would be appropriate in a background epistemology. That’s something that agents will develop for themselves based on their needs and perceptual experience.
Many of them are directly implemented in physical systems
Different mathematical models of human perceptual experience might be perfectly suitable for the same purpose. Physics should be the clearest example, since we have undergone many different changes of mathematical models, and are currently experiencing a plurality of theories with different mathematics in cosmology. The differences between classical mechanics and quantum mechanics should in particular show this nicely: different formalisms, but very good models of a large class of experiences.
you can’t have one of them without ALL the others.
I think you slightly underestimate the versatility of mathematicians in making their systems work despite malfunctions. For instance, even if ZFC were proved inconsistent (as Edward Nelson hopes to do), we would not have to abandon it as a foundation. Set theorists would just do some hocus pocus involving ordinals, and voila! all would be well. And there are several alternative formulations of arithmetic, analysis, topology, etc. which are all adequate for most purposes.
You might try to imagine a universe without math.
In the case of some math, this is easy to do. In other cases it is not. This is because we don’t experience the free-floating perceptual long run, not because certain models are necessary for all possible agents and perceptual content.
That was a wonderful comment. I hope you don’t mind if I focus on the last part in particular. If you’d rather I addressed more I can accommodate that, although most of that will be signalling agreement.
To assert P is equivalent to asserting “P is true” (the deflationary theory in reverse). That is still true if P is of the form “so and so works”. Pragmatism is not orthogonal to, or transcendent of, truth. Pragmatists need to be concerned about what truly works.
I’ll note a few things in reply to this:
I’m fine with some conceptual overlap between my proposed epistemology and other epistemologies and vague memes.
You might want to analyse statements “P” as meaning/being equivalent to “P is true,” but I am not going to include any explication of “true” in my epistemology for that analysis to anchor itself to.
Continuing the above, part of what I am doing is tabooing “truth,” to see if we can formulate an epistemology-like framework without it.
What “truly works” is more of a feeling or a proclivity than a proposition, until of course an agent develops a model of what works and why.
What is right is contextual truth.
I agree with you here absolutely, modulo vocabulary. I would rather say that no single framework is universally appropriate (problem of induction) and that developing different tools for different contexts is shrewd. But what I just said is more of a model inspired by my epistemology than part of the epistemology itself.
Applying it to what problem? (If you mean the physics posts you linked to, I need more time to digest them fully)
No, not that comment, I mean the initial post. The problem is handling mathematical systems in an epistemology. A lot of epistemologies have a hard time with that because of ontological issues.
Nobody actually conceptualises science as being about deriving from thinking “pink is my favorite color and it isn’t” → “causality doesn’t work”.
No, but many people hold the view that you can talk about valid statements as constraining ontological possibilities. That includes the Eliezer of 2012: if you read the Highly Advanced Epistemology posts on math, he does reason about particular logical laws constraining how the physics of time and space work in our universe. And the view is very old, going back to before Aristotle, and running through Leibniz to the present.
If you call for a core change in epistemology it sounds like you want more than that. To me it’s not clear what that more happens to be.
I’m going to have to do some strategic review on what exactly I’m not being clear about and what I need to say to make it clear.
In case you don’t know, the local LW definition of rationality is: “Behaving in a way that’s likely to make you win.”
Yes, I share that definition, but that’s only the LW definition of instrumental rationality; epistemic rationality on the other hand is making your map more accurately reflect the territory. Part of what I want is to scrap that and judge epistemic matters instrumentally, like I said in the conclusion and addendum.
Still, it’s clear I haven’t said quite enough. You mentioned examples, and that’s kind of what this post was intended to be: an example of applying the sort of reasoning I want to a problem, and contrasting it with epistemic rationality reasoning.
Part of the problem with generating a whole bunch of specific examples is that it wouldn’t help illustrate the change much. I’m not saying that science as it’s practised in general needs to be radically changed. Mostly things would continue as normal, with a few exceptions (like theoretical physics, but I’m going to have to let that particular example stew for a while before I voice it outside of private discussions).
The main target of my change is the way we conceptualise science. Lots of epistemological work focuses on idealised caricatures that are too prescriptive and poorly reflect how we managed to achieve what we did in science. And I think that having a better philosophy of science will make thinking about some problems in existential risk, particularly FAI, easier.
I think there’s a bit of a misunderstanding going on here, though, because I am perfectly okay with people using classical logic if they like. Classical logic is a great way to model circuits, for example, and it provides some nice reasoning heuristics. There’s nothing in my position that commits us to abandoning it entirely in favour of intuitionistic logic.
Intuitionistic logic is applicable to at least three real-world problems: formulating foundations for math, verifying programmes, and computerised theorem-proving. The last two in particular will have applications in everything from climate modeling to population genetics to quantum field theory.
As it happens, mathematician Andrej Bauer wrote a much better defence of doing physics with intuitionistic logic than I could have: http://math.andrej.com/2008/08/13/intuitionistic-mathematics-for-physics/
I’ve added an addendum. If reading that doesn’t help, let me know and I’ll summarise it for you in another way.
Intuitionistic logic can be interpreted as the logic of finite verification.
Truth in intuitionistic logic is just provability. If you assert A, it means you have a proof of A. If you assert ¬A, then you have a proof that A implies a contradiction. If you assert A ⇒ B, then you can produce a proof of B from A. If you assert A ∨ B, then you have a proof of at least one of A or B. Note that the law of excluded middle fails here: we don’t assert A ∨ ¬A unless we have either a proof of A or a proof that A implies a contradiction.
In all cases, the assertion of a formula must correspond to a proof, proofs being (of course) finite. Using this idea of finite verification is a nice way to develop topology for computer science and formal epistemology (see Topology via Logic by Steven Vickers). Computer science is concerned with verification as proofs and programmes (and the Curry-Howard isomorphism comes in handy there), and formal epistemology is concerned with verification as observations and scientific modeling.
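The proofs-as-programmes reading can be sketched concretely. In the hypothetical Python below (standing in for a proof language; the function names are mine, not any standard API), a proposition is a type of evidence and a proof is a finite value of that type: an implication is a function, a conjunction a pair, a disjunction a tagged value.

```python
# Hypothetical sketch of the BHK/Curry-Howard reading: a proof of A -> B
# is a function from proofs of A to proofs of B, a proof of A-and-B is a
# pair, and a proof of A-or-B carries a tag saying which disjunct holds.

def modus_ponens(proof_a_implies_b, proof_a):
    """Given a proof of A -> B and a proof of A, produce a proof of B."""
    return proof_a_implies_b(proof_a)

def and_intro(proof_a, proof_b):
    """Given proofs of A and of B, produce a proof of A and B."""
    return (proof_a, proof_b)

def or_intro_left(proof_a):
    """Given a proof of A, produce a tagged proof of A or B."""
    return ("left", proof_a)

# Excluded middle has no such recipe: there is no uniform programme that,
# for an arbitrary A, returns either a proof of A or a proof of not-A.
```

On this reading every asserted formula comes packaged with a finite, inspectable witness, which is exactly the “finite verification” idea.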
That isn’t exactly a specific example, but a class of examples. Research on this is currently very active.
I’ve added an addendum that I hope will make things clearer.
Astray with the Truth: Logic and Math
I’m going to drop discussion about the universe in particular for now. Explaining why I think that the map-territory epistemology runs into problems there would require a lot of exposition on points I haven’t made yet, so it’s better suited for a post than a comment.
I’ve realised that there’s a lot more inferential distance than I thought between some of the things I said in this post and the content of other posts on LW. I’m thinking of strategies to bridge that now.
That doesn’t mean that it’s helpful to just tell the positivists to pretend that map(universe) and universe are the same and the issue is solved.
Hm, if you’re attributing that to me then I think I haven’t been nearly clear enough.
Earlier I said that I had ontological considerations but didn’t go into them in my post explicitly. I’ll outline them for you now (although I’ll be talking about them in a post in the near future, over the next couple days if I kick myself into gear properly).
In the end I’m not going to be picky about what different models claim to be real so long as they work, but in the epistemology I use to consider all of those models I’m only going to make reference to agents and their perceptual interfaces. If we consider maps and models as tools that we use to achieve goals, then we’re using them to navigate/manipulate some aspect of our experience.
We understand by trial and error that we don’t have direct control over our experiences. Often we model this lack of control by saying that there’s a real state of affairs that we don’t have perfect access to. Like I said, I think this model has limitations in areas we consider more abstract, like math, so I don’t want this included in my epistemology. Reality is a tool I can use to simplify my thinking in some situations, not something I want getting in the way in every epistemological problem I encounter.
Likewise, in your autism example, we have a model of possible failure modes that empirical research can have. This is an extremely useful tool, and a good application of the map-territory distinction, but that example still doesn’t compel me to use either of those tools in my epistemology. The more tools I commit myself to, the less stable my epistemology is. (Keeping reservationism in the back of your mind would be helpful here.)
Not among people who really follow the “the map is not the territory”. There are many maps of the city of Berlin. I will use a different map when I want to navigate Berlin via the public transport system than when I want to drive via bike.
At the same time, if my goal is remaining sane, it’s useful to not forget that neither of those maps is the territory of the city of Berlin. In the case of the city of Berlin few people will make the mistake of confusing the two. In other domains people do get into issues because things get complicated and they forget that their maps aren’t the territory.
I’m not sure whether your position is: “I don’t like positivism, let’s do something different” or “I don’t like positivism, let’s do X”.
I don’t think you need a “real Berlin” for that usage of maps to make sense: instead of saying that a transit map models some aspect of the real Berlin, we can say that the transit map is functional for navigating Berlin.
I’d prefer this phrasing because having the concept of a real Berlin can lead to confusions when we apply the idea by analogy to other things, like theories of arithmetic, the universe, or “the self.” That’s why I want it removed from our base epistemology. Of course I’ll be very happy to use the map and territory epistemology as a heuristic if I find it easier to think with in certain situations, but because of its shortcomings elsewhere I will not claim that it is the correct epistemology.
Hopefully that brief explanation helps answer what I am trying to do to some extent. In any case I’m thankful for both the discussion (which I’d be happy to continue, of course) and the reading suggestion.
Do you think that some of that mysticism is a fruitful path that gets wrongly rejected?
No, but that’s because I’ve seen it in action and noted that I don’t have much use for it, and not because I’ve constructed an epistemology that proscribes it altogether.
I don’t see the point of barring paths as inherently epistemically irrational. I would rather let anyone judge for themselves which tools would be appropriate or inappropriate, and model the success or failures in whichever way helps them choose tools more effectively later.
For example there’s a commonly held belief that we shouldn’t believe two mutually contradictory models since they can’t both describe reality and at least one of them will lead us astray. In other words it isn’t epistemically rational to believe both. I want to scrap judgements like that from the underpinnings of our epistemology, because that really does close fruitful paths. During revolutions in physics, after one theory gains a slight advantage the competitors all die out. I would like to see more of a plurality, so that we can have multiple tools in our arsenal with different potential uses. Rather than deciding that I can believe only one, I’ll say that I can use any to the extent that they work, and I will hold beliefs about where and how I can apply them.
If I look at the public transportation map of Berlin then the distances between places aren’t very accurate. The map isn’t designed for that purpose. That doesn’t make it a bad map and I can still mentally distinguish the territory of Berlin from the map.
You’re right, of course, motivations vary. Transit maps are not trying to model distances, just the order of stops on various lines. But motivations in some areas, like logic and physics, are much more heavily influenced by the positivists than transit maps. I think we should be paying more attention to the specific uses we have in mind when constructing any model, including logics and theories of physics, whereas model-reality epistemologies make us think only of mirroring reality once we get to things considered more fundamental.
Of course some people are doing what I’m suggesting in “fundamental” areas. Constructivists in the foundations of math are constructing their foundations explicitly so that all math can be computable and subject to automated proof checking and theorem proving. Usually they don’t fret about whether a constructive foundation will give us the real, true picture of math. Like I’ve said, I think we should adopt that mentality everywhere.
What do you mean with “coherent” concept inside pragmatism? In what sense does a pragmatist worry about whether or not something is coherent?
“Coherent” is a stand-in for some worries I have: Does having our epistemology underpinned by a model-reality relationship skew our motivations for creating models? Does it close certain fruitful paths by making us believe they are epistemically nonsensical or questionable? Does it have significant limitations in where it can be fruitfully applied and how? I think the answer to each is yes, which motivates me to get rid of the model-reality relationship from my core epistemology. Although of course I consider it perfectly legitimate to use that relationship as a heuristic in the context of a pragmatic background epistemology.
Why? What’s wrong with the word ontology? I think you get into problems if you want to do ontology but refuse to think of yourself as doing ontology.
It’s not that I refuse. I just don’t put much stock in the distinction between epistemology and ontology. I think they’re entangled, and that pretending they aren’t leads to confusion (see the p-zombie debate, for example).
I didn’t really bring out the ontological elements of what I was doing in this post, and I recognised that afterward. I’ll fix that oversight later.
In what aspect is your idea of pragmatism supposed to differ from general semantics with the slogan “The map is not the territory”?
I’m not requiring that “territory” be a coherent concept at all. Suppositions about territories are models that my epistemology evaluates rather than assumptions built into the epistemology.
That’s a claim about ontology not a claim about epistemology. When it comes to modern source I consider Barry Smith worth reading. He’s doing practical ontology for bioinformatical problems.
If you like, you can think of this as an ontological critique of most epistemologies. I wouldn’t like to phrase it that way, though.
A sentence’s meaning is more like the odds ratio multipliers it provides for your priors than like a truth predication.
And what do you mean by this? That the old truth model is less correct than the probabilistic model, or that the probabilistic model performs better in applications? Or maybe you’re prone to say that the latter is more correct, but what you mean by that is that there’s more use for it. That’s the tension I am trying to bring out, those two different interpretations of epistemic claims. And my claim is that the second gets us farther than the first. For instance it permits us to use combinations of tools that most epistemologies would frown upon, like contradictory theories.
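For concreteness, the quoted “odds ratio multiplier” reading of a sentence can be sketched in a few lines. This is just Bayes’ rule in odds form; the function names are illustrative, not from any particular library.

```python
# Minimal sketch of treating evidence as an odds ratio multiplier:
# posterior odds on a hypothesis H are the prior odds on H times the
# likelihood ratio P(E|H) / P(E|not-H) supplied by the evidence E.

def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds on H = prior odds on H * P(E|H)/P(E|not-H)."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    """Convert odds o (meaning o:1) into a probability o/(1+o)."""
    return odds / (1.0 + odds)

# Example: prior odds of 1:4 on H, and evidence three times likelier
# under H than under not-H, give posterior odds of 3:4.
posterior = update_odds(0.25, 3.0)  # 0.75, i.e. odds of 3:4
```

On the quoted view, a sentence’s meaning just is the multiplier it supplies to these odds; whether that model is “more correct” or merely more useful is the tension at issue.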
There’s a shift in perspective that has to happen in the course of this discussion, from evaluating the intuitive correctness and reality-correspondence (even probabilistically) of theories as sets of claims about the world to evaluating the potential uses and practical strength of theories as tools to accomplish our goals. I’m supporting my approach to epistemology in the second, more pragmatic way rather than the first, which is more epistemic.
I think you greatly exaggerate your originality here.
I thought it might come across that way, but didn’t want to invest a bunch of time listing my intellectual debts (the post is long enough already). For the record, I’m aware that my ideas aren’t entirely original, and I suspect that when I think they are I would be able to find similar ideas in others’ writing independently.
the fact that it has been around for quite a while without seeming to have radically triumphed over all rivals does provide some reason for doubt about the extent of its world-beating potential.
I think that part of the problem here is that pragmatists didn’t spend nearly as much energy on the details of applying their ideas as, say, Carnap and Popper did. They also tended to keep their discussion of pragmatism to philosophical circles, rather than engaging with scientific circles about their research. There’s a lot of inertia to fight in order to shift scientific paradigms and the pragmatists didn’t engage in the social and political organisation necessary to do so.
I think I’ve provided a fair summary of some of the benefits of wearing a pragmatic thinking cap. And I’ll be outlining those and others in more detail later.
I’m fine with agents being better at achieving their goals than I am, whether or not computational models of the brain succeed. We can model this phenomenon in several ways: algorithms, intelligence, resource availability, conditioning pressures, so on.
But “most correct” isn’t something I feel comfortable applying as a blanket term across all models. If we’re going to talk about the correctness (or maybe “accuracy,” “efficiency,” “utility,” or whatever) of a model, I think we should use goals as a modulus. So we’d be talking about optimal models relative to this or that goal, and a most correct model would be a model that performs best relative to all goals. There isn’t currently such a model, and even if we thought we had one it would only be best in the goals we applied it to. Under those circumstances there wouldn’t be much reason to think that it would perform well under drastically different demands (i.e. that’s something we should be very uncertain about).
I entered “Other: eclectic mixture” on the survey. On my Facebook profile, I elaborate this as “classical liberalism, Rawlsian liberalism, reactionary, left-libertarianism, conservatism, and techno-futurism.” Ideologies are for picking apart, not buying wholesale. I gather a variety of them together and cut away the rotten parts like moldy cheese. What’s left is something much more workable than the originals.