(The ‘Mainstream Status’ comment is intended to provide a quick overview of what the status of the post’s ideas is within contemporary academia, at least so far as the poster knows. Anyone claiming that a particular paper precedents the post should try to describe the exact relevant idea as presented in the paper, ideally with a quote or excerpt, especially if the paper is locked behind a paywall. Do not represent a large complicated idea as standard if only a part of it is accepted, and do not represent it as precedented if only a part of it is described. With those caveats, all relevant papers and citations are very much solicited! Hopefully comment-collections like these can serve as a standard link between LW presentations and academic ones.)
The correspondence theory of truth is the first position listed in the Stanford Encyclopedia of Philosophy, which is my usual criterion for saying that something is a solved problem in philosophy. Clear-cut simple visual illustration inspired by the Sally-Anne experimental paradigm is not something I have previously seen associated with it, so the explanation in this post is—I hope—an improvement over what’s standard.
Alfred Tarski is a famous mathematician whose theory of truth is widely known.
The notion of possible worlds is very standard and popular in philosophy; some philosophers even ascribe much more realism to them than I would (since I regard them as imaginary constructs, not thingies that can potentially explain real events as opposed to epistemic puzzles).
I haven’t particularly run across any philosophy explicitly making the connection from the correspondence theory of truth to “There are causal processes producing map-territory correspondences” to “You have to look at things in order to draw accurate maps of them, and this is a general rule with no exception for special interest groups who want more forgiving treatment for their assertions”. I would not be surprised to find out it existed, especially on the second clause.
Added: The term “post-utopian” was intended to be a made-up word that had no existing standardized meaning in literature, though it’s simple enough that somebody has probably used it somewhere. It operates as a stand-in for more complicated postmodern literary terms that sound significant but mean nothing. If you think there are none of those, Alan Sokal would like to have a word with you. (Beating up on postmodernism is also pretty mainstream among Traditional Rationalists.)
This is a great post. I think the presentation of the ideas is clearer and more engaging than the sequences, and the cartoons are really nice. Wild applause for the artist.
I have a few things to say about the status of these ideas in mainstream philosophy, since I’m somewhat familiar with the mainstream literature (although admittedly it’s not the area of my expertise). I’ll split up my individual points into separate comments.
Alfred Tarski is a famous mathematician whose theory of truth is widely known.
Summary of my point: Tarski’s biconditionals are not supposed to be a definition of truth. They are supposed to be a test of the adequacy of a proposed definition of truth. Proponents of many different theories claim that their theory passes this test of adequacy, so to identify Tarski’s criterion with the correspondence theory is incorrect, or at the very least, a highly controversial claim that requires defense. What follows is a detailed account of why the biconditionals can’t be an adequate definition of truth, and of what Tarski’s actual theory of truth is.
Describing Tarski’s biconditionals as a definition of truth or a theory of truth is misleading. The relevant paper is The Semantic Conception of Truth. Let’s call sentences of the form ‘p’ is true iff p T-sentences. Tarski’s claim in the paper is that the T-sentences constitute a criterion of adequacy for any proposed theory of truth. Specifically, a theory of truth is only adequate if all the T-sentences follow from it. This basically amounts to the claim that any adequate theory of truth must get the extension of the truth-predicate right—it must assign the truth-predicate to all and only those sentences that are in fact true.
I admit that the conjunction of all the T-sentences does in fact satisfy this criterion of adequacy. All the individual T-sentences do follow from this conjunction (assuming we’ve solved the subtle problem of dealing with infinitely long sentences). So if we are measuring by this criterion alone, I guess this conjunction would qualify as an adequate theory of truth. But there are other plausible criteria according to which it is inadequate. First, it’s a frickin’ infinite conjunction. We usually prefer our definitions to be shorter. More significantly, we usually demand more than mere extensional adequacy from our definitions. We also demand intensional adequacy.
If you ask someone for a definition of “Emperor of Rome” and she responds “X is an Emperor of Rome iff X is one of these...” and then proceeds to list every actual Emperor of Rome, I suspect you would find this definition inadequate. There are possible worlds in which Julius Caesar was an Emperor of Rome, even though he wasn’t in the actual world. If your friend is right, then those worlds are ruled out by definition. Surely that’s not satisfactory. The definition is extensionally adequate but not intensionally adequate. The T-sentence criterion only tests for extensional adequacy of a definition. It is satisfied by any theory that assigns the correct truth predicates in our world, whether or not that theory limns the account of truth in a way that is adequate for other possible worlds. Remember, the biconditionals here are material, not subjunctive. The T-sentences don’t tell us that an adequate theory would assign “Snow is green” as true if snow were green. But surely we want an adequate theory to do just that. If you regard the T-sentences themselves as the definition of truth, all that the definition gives us is a scheme for determining which truth ascriptions are true and false in our world. It tells us nothing about how to make these determinations in other possible worlds.
To make the problem more explicit, suppose I speak a language in which the sentence “Snow is white” means that grass is green. It will still be true that, for my language, “Snow is white” is true iff snow is white. Yet we don’t want to say this biconditional captures what it means for “Snow is white” to be true in my language. After all, in a possible world where snow remained white but grass was red, the sentence would be false.
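To put the extensional/intensional contrast in looser programming terms (my own analogy, not anything from Tarski or the SEP): two predicates can agree on every actual case and still be different definitions, because they diverge on counterfactual ones. The emperor names below are just the familiar historical ones.

```python
# The friend's exhaustive list of actual Emperors of Rome (first five, for brevity).
ACTUAL_EMPERORS = {"Augustus", "Tiberius", "Caligula", "Claudius", "Nero"}

def is_emperor_extensional(name):
    # Extensionally adequate in the actual world: just consult the list.
    return name in ACTUAL_EMPERORS

def is_emperor_intensional(name, world):
    # Intensionally adequate: consults the facts of whichever world we ask about.
    return name in world["emperors"]

actual_world = {"emperors": ACTUAL_EMPERORS}
# A counterfactual world in which Julius Caesar did take the throne.
counterfactual = {"emperors": ACTUAL_EMPERORS | {"Julius Caesar"}}

# Both definitions agree about the actual world...
print(is_emperor_extensional("Nero"))                            # True
# ...but the list-based definition gives the wrong verdict about
# the counterfactual world, because the list is frozen by definition.
print(is_emperor_extensional("Julius Caesar"))                   # False
print(is_emperor_intensional("Julius Caesar", counterfactual))   # True
```

The list-based predicate passes any test that only checks actual cases, which is exactly the sense in which the T-sentence criterion tests extensional adequacy alone.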
Tarski was a smart guy, and I’m pretty sure he realized all this (or at least some of it). He constantly refers to the T-sentences as material criteria of adequacy for a definition of truth. He says (speaking about the T-sentences), “… we shall call a definition of truth ‘adequate’ if all these equivalences follow from it” (although this seems to ignore the fact that there are other important criteria of adequacy). When discussing a particular objection to his view late in the paper, he says, “The author of this objection mistakenly regards scheme (T)… as a definition of truth.” Unfortunately, he also says stuff that might lead one to think he does regard the conjunction of all T-sentences as a definition: “We can only say that every equivalence of the form (T)… may be considered a partial definition of truth, which explains wherein the truth of this one individual sentence consists. The general definition has to be, in a certain sense, a logical conjunction of all these partial definitions.”
I read the “in a certain sense” there as a subtle concession that we will need more than just a conjunction of the T-sentences for an adequate definition of truth. As support for my reading, I appeal to the fact that Tarski explicitly offers a definition of truth in his paper (in section 11), one that is more than just a conjunction of T-sentences. He defines truth in terms of satisfaction, which in turn is defined recursively using rules like: The objects a and b satisfy the sentential function “P(x, y) or Q(x, y)” iff they satisfy at least one of the functions “P(x, y)” or “Q(x, y)”. His definition of truth is basically that a sentence is true iff it is satisfied by all objects and false otherwise. This works because a sentence, unlike a general sentential function, has no free variables to which objects can be bound.
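For concreteness, here is a toy sketch of how a recursive satisfaction clause like the one above yields a truth definition over a finite domain. This is my own construction, not Tarski’s formalism: the domain, the predicates “P” and “Q”, and the tuple encoding of formulas are all arbitrary illustrative choices.

```python
from itertools import product

# A tiny interpreted language: formulas are nested tuples over a finite domain.
DOMAIN = [0, 1, 2]
PREDICATES = {
    "P": lambda x, y: x < y,   # illustrative interpretation of P
    "Q": lambda x, y: x == y,  # illustrative interpretation of Q
}

def satisfies(assignment, formula):
    """Recursively decide whether a variable assignment satisfies a formula."""
    op = formula[0]
    if op == "or":
        return satisfies(assignment, formula[1]) or satisfies(assignment, formula[2])
    if op == "not":
        return not satisfies(assignment, formula[1])
    if op == "forall":  # ("forall", var, body)
        return all(satisfies({**assignment, formula[1]: d}, formula[2])
                   for d in DOMAIN)
    # Atomic case, e.g. ("P", "x", "y"): resolve variables, apply the predicate.
    return PREDICATES[op](*[assignment[v] for v in formula[1:]])

def is_true(sentence, variables=("x", "y")):
    """Tarski-style clause: a sentence (no free variables) is true iff it is
    satisfied by every assignment of objects to variables."""
    return all(satisfies(dict(zip(variables, vals)), sentence)
               for vals in product(DOMAIN, repeat=len(variables)))

# "For all x: x < x or x == x" is true in this interpretation.
s = ("forall", "x", ("or", ("P", "x", "x"), ("Q", "x", "x")))
print(is_true(s))  # True
```

The recursion bottoms out at the atomic predicates, which is exactly the point made below: the compound clauses are well-defined only once satisfaction for the simple sentential functions is given from outside.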
This definition is clearly distinct from the logical conjunction of all T-sentences. Tarski claims it entails all the T-sentences, and therefore satisfies his criterion of adequacy. Now, I think Tarski’s actual definition of truth isn’t all that helpful. He defines truth in terms of satisfaction, but satisfaction is hardly a more perspicuous concept. True, he provides a recursive procedure for determining satisfaction, but this only tells us when compound sentential functions are satisfied once we know when simple ones are satisfied. His account doesn’t explain what it means for a simple sentential function to be satisfied by an object. This is just left as a primitive in the theory. So, yeah, Tarski’s actual theory of truth kind of sucks.
His criterion of adequacy, though, has been very influential. But it is not a theory of truth, and that is not the way it is treated by philosophers. It is used as a test of adequacy, and proponents of most theories of truth (not just the correspondence theory) claim that their theory satisfies this test. So to identify Tarski’s definition/criterion/whatever with the correspondence theory misrepresents the state of play. There are, incidentally, a group of philosophers who do take the T-sentences to be a full definition of truth, or at least to be all that we can say about truth. But these are not correspondence theorists. They are deflationists.
I’ve slightly edited the OP to say that Tarski “described” rather than “defined” truth—I wish I could include more to reflect this valid point (indeed Tarski’s theorems on truth are a lot more complicated and so are surrounding issues, no language can contain its own truth-predicate, etc.), but I think it might be a distraction from the main text. Thank you for this comment though!
The latest Rationally Speaking post looks relevant: Ian Pollock describes aspects of Eliezer’s view as “minimalism” with a link to that same SEP article. He also mentions Simon Blackburn’s book, where Blackburn describes minimalists or quietists as making the same point Eliezer makes about collapsing “X is true” to “X” and a similar point about the usefulness of the term “truth” as a generalisation (though it seems that minimalists would say that this is only a linguistic convenience, whereas Eliezer seems to have a slightly different concept of it in that he wants to talk in general about how we get accurate beliefs).
There are, incidentally, a group of philosophers who do take the T-sentences to be a full definition of truth, or at least to be all that we can say about truth. But these are not correspondence theorists. They are deflationists.
My gut instinct is deflationist, but I don’t see this view as being opposed to “correspondence”. The alleged conflict is dubious at best. Stanford Encyclopedia of Philosophy writes:
the correspondence intuition so understood would endorse:
(8) The proposition that snow is white is true because snow is white
Now, the problem with (8) is that, when we combine it with the deflationary theory – or at least with a necessary version of that theory – we can derive something that is plainly false. Someone who holds a necessary version of deflationism would clearly be committed to the necessary truth of:
(9) The proposition that snow is white is true iff snow is white.
And, since (9) is a necessary truth, it is very plausible to suppose that (8) and (9) together entail:
(10) Snow is white because snow is white.
Unfortunately, however, (10) is false. The reason is that the relation reported by ‘because’ in (8) and (10) is a causal or explanatory relation, and such relations must obtain between distinct relata.
Emphasis added: the italicized premise is false. Explanation is a cognitive feat, and the same fact (even if the identity is a necessary one) can be cognized in different ways. (Such explanations occur frequently enough in mathematics, I think.) The SEP author anticipates my objection and writes:
If ‘because’ creates an opaque context, then it would be illegitimate to suppose that (8) and (9) entail (10). This too is a possibility; however, it is not clear that ‘because’ creates opaque context of the right kind. In general we can distinguish two kinds of opaque context: intensional contexts, which allow the substitution of necessarily co-referring expressions but not contingently co-referring expressions; and hyper-intensional contexts, which do not even allow the substitution of necessarily co-referring expressions. If the inference from (8) and (9) to (10) is to be successfully blocked, it is necessary that ‘because’ creates a hyper-intensional context. However, it is open to a friend of the correspondence objection to argue that, while ‘because’ creates an intensional context, it does not create a hyper-intensional context.
It is open to them to argue that “because” does not create a hyper-intensional context, but it is much more plausible that it does. So until a good argument comes along, mark me down as a correspondence deflationist.
It’s in vogue to defend correspondence because 1) it sounds like common sense and 2) it signals rejection of largely discredited instrumentalism. But surely a correspondence theorist should have a theory of the nature of the correspondence. How does a proposition or a verbal string correspond to a state of reality? By virtue of what is it a correct description? We can state a metalinguistic relationship about “Snow is white,” but how does this locution hook onto the actual world?
Correspondence theorists think this is a task for a philosophical theory of reference. (Such as in an account where “torekp” refers to you by virtue of the “christening event” of your creating the account and causal connections therefrom.) Deflationists are apt to say it is ultimately a technical problem in the psychology of language.
Interesting. I am inclined to replicate my compatibility claim at this level too; i.e., the technical solution in the psychology of language will be a philosophical theory of reference (as much as one needs) as well. I’d be interested in references to any of the deflationist discussions of reference you have in mind.
I haven’t particularly run across any philosophy explicitly making the connection from the correspondence theory of truth to “There are causal processes producing map-territory correspondences” to “You have to look at things in order to draw accurate maps of them, and this is a general rule with no exception for special interest groups who want more forgiving treatment for their assertions”. I would not be surprised to find out it existed, especially on the second clause.
Depends on what you mean by “explicitly”. Many correspondence theorists believe that an adequate understanding of “correspondence” requires an understanding of reference—how parts of our language are associated with parts of the world. I think this sort of idea stems from trying to fill out Tarski’s (actual) definition of truth, which I discussed in another comment. The hope is that a good theory of reference will fill out Tarski’s obscure notion of satisfaction, and thereby give some substance to his definition of truth in terms of satisfaction.
Anyway, there was a period when a lot of philosophers believed, following Saul Kripke and Hilary Putnam, that we can understand reference in terms of causal relations between objects in the world and our brains (it appears to me that this view is falling out of vogue now, though). What makes it the case that our use of the term “electron” refers to electrons? That there are the appropriate sorts of causal relations, both social—the causal chain from physicists who originated the use of the word to contemporary uses of it—and evidential—the causal connections with the world that govern the ways in which contemporary physicists come to assert new claims involving the word “electron”. The causal theory of reference is used as the basis for a (purportedly) non-mysterious account of satisfaction, which in turn is used as the basis for a theory of truth.
So the idea is that the meanings of the elements in our map are determined by causal processes, and these meanings link the satisfaction conditions of sentential functions to states of affairs in the world. I’m not sure this is exactly the sort of thing you’re saying, but it seems close. For an explicit statement of this kind of view, see Hartry Field’s Tarski’s Theory of Truth. Most of the paper is a (fairly devastating, in my opinion) critique of Tarski’s account of truth, but towards the end of section IV he brings up the causal theory.
ETA: More broadly, reliabilism in epistemology has a lot in common with your view. Reliabilism is a refinement of early causal theories of knowledge. The idea is that our beliefs are warranted in so far as they are produced by reliable mechanisms. Most reliabilists I’m aware of are naturalists, and read “reliable mechanism” as “mechanism which establishes appropriate causal connections between belief states and world states”. Our senses are presumed to be reliable (and therefore sources of warrant) just because the sorts of causal chains you describe in your post are regularly instantiated. Reliabilism is, however, compatible with anti-naturalism. Alvin Plantinga, for instance, believes that the sensus divinitatis should be regarded as a reliable cognitive faculty, one that atheists lack (or ignore).
One example of a naturalist reliabilism (paired with a naturalist theory of mental representation) is Fred Dretske’s Knowledge and the Flow of Information. A summary of the book’s arguments is available here (DOC file). Dretske tries to understand perception, knowledge, the truth and falsity of belief, mental content, etc. using the framework of Shannon’s communication theory. The basis of his analysis is that information transfer from a sender system to a receiver system must be understood in terms of relations of law-like dependence of the receiver system’s state on the sender system’s state. He then analyzes various epistemological problems in terms of information transfer from systems in the external world to our perceptual faculties, and information transfer from our perceptual faculties to our cognitive centers. He’s written a whole book about this, so there’s a lot of detail, and some of the specific details are suspect. In broad strokes, though, Dretske’s book expresses pretty much the same point of view you describe in this post.
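As a rough numerical gloss on that framework (my own minimal sketch, not Dretske’s actual analysis; his official account in fact demands stronger, law-like dependence than plain mutual information), Shannon’s measure quantifies how much a receiver system’s state depends on a sender system’s state across a noisy channel. The binary channel and its 10% error rate below are arbitrary illustrative choices.

```python
from math import log2

# A binary symmetric channel: the receiver copies the sender's bit,
# but flips it with probability `error`.
error = 0.10
p_source = {0: 0.5, 1: 0.5}  # unbiased source

# Joint distribution of (sender_state, receiver_state).
joint = {}
for s, ps in p_source.items():
    for r in (0, 1):
        joint[(s, r)] = ps * (error if r != s else 1 - error)

# Marginal distribution of the receiver's state.
p_recv = {r: sum(joint[(s, r)] for s in (0, 1)) for r in (0, 1)}

# Mutual information I(S; R) = sum over (s, r) of p(s,r) * log2(p(s,r) / (p(s) p(r))).
mi = sum(p * log2(p / (p_source[s] * p_recv[r]))
         for (s, r), p in joint.items() if p > 0)
print(round(mi, 3))  # 0.531 bits per use of the channel
```

A perfectly law-like channel (error = 0) would carry a full bit per use; as the dependence of receiver on sender degrades, so does the information transferred.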
Off the top of my head, I also can’t think of a philosopher who has made an explicit connection from the correspondence theory of truth to “there are causal processes producing map-territory correspondences” to “you have to look at things to draw accurate maps of them...”
But if this connection has been made explicitly, I would expect it to be made by someone who accepts both the correspondence theory and “naturalized epistemology”, often summed up in a quote from Quine:
The stimulation of his sensory receptors is all the evidence anybody has had to go on, ultimately, in arriving at his picture of the world. Why not just see how this construction really proceeds? Why not settle for psychology? …Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science.
(Originally, Quine’s naturalized epistemology accounted only for this descriptive part of epistemology, and neglected the normative part, e.g. truth conditions. In the 80s Quine started saying that the normative part entered into naturalized epistemology through “the technology of truth-seeking,” but he was pretty vague about this.)
Edit: Another relevant discussion of embodiment and theories of truth can be found in chapter 7 of Philosophy in the Flesh.
It’s not that clear to me in what sense mainstream academia is a unified thing which holds positions, even regarding questions such as “what fields are legitimate”. Saying that something is known in mainstream academia seems suspiciously like saying that “something is encoded in the matter in my shoelace, given the right decryption schema”. OTOH, it’s highly meaningful to say that something is discoverable by someone with competent “google-fu”.
It’s not that clear to me in what sense mainstream academia is a unified thing which holds positions, even regarding questions such as “what fields are legitimate”.
Strongly seconded.
Hell, some “mainstream” scientists are working on big-money research projects that attempt to prove that there’s a worldwide conspiracy trying to convince people that global warming exists so as to make money off of it. Either they’re all sell-outs, which seems very unlikely, or at least some of them actually disagree with some other mainstream scientists, who see the “Is there real global warming?” question as obviously resolved long ago.
Off the top of my head, I also can’t think of a philosopher who has made an explicit connection from the correspondence theory of truth to “there are causal processes producing map-territory correspondences” to “you have to look at things to draw accurate maps of them...”
OK, I defended the tweet that got this response from Eliezer as the sort of rhetorical flourish that gets people to actually click on the link. However, it looks like I also underestimated how original the sequences are—I had really expected this sort of thing to mirror work in mainstream philosophy.
Although I wouldn’t think of this particular thing as being an invention on his part—I’m not sure I’ve read that particular chain of thought before, but all the elements of the chain are things I’ve known for years.
However I think it illustrates the strength of Eliezer’s writing well. It’s a perfectly legitimate sequence of thought steps that leads in a clear and obvious way to the right answer. It’s not new thought, but a particularly clear way of expressing something that many people have thought and written about in a less organised way.
To clarify—there are times when Eliezer is inventive—for example his work on CEV—but this isn’t one of those places. I know I’m partly arguing about the meaning of “inventive”, but I don’t think we’re doing him a favor here by claiming this is an example of his inventiveness when there are much better candidates.
I also can’t think of a philosopher who has made an explicit connection from the correspondence theory of truth to “there are causal processes producing map-territory correspondences” to “you have to look at things to draw accurate maps of them...”
Karl Popper did so explicitly, thoroughly and convincingly in The Logic of Scientific Discovery.
Pretty influential, and definitely a part of “Mainstream Academia.”
Here’s an interesting, if lengthy, footnote to Chapter 84, “Remarks Concerning the Use of the Concepts ‘True’ and ‘Corroborated’”.
(1) Not long after this was written, I had the good fortune to meet Alfred Tarski who explained to me the fundamental ideas of his theory of truth. It is a great pity that this theory—one of the two great discoveries in the field of logic made since Principia Mathematica—is still often misunderstood and misrepresented. It cannot be too strongly emphasized that Tarski’s idea of truth (for whose definition with respect to formalized languages Tarski gave a method) is the same idea which Aristotle had in mind and indeed most people (except pragmatists): the idea that truth is correspondence with the facts (or with reality). But what can we possibly mean if we say of a statement that it corresponds with the facts (or with reality)? Once we realize that this correspondence cannot be one of structural similarity, the task of elucidating this correspondence seems hopeless; and as a consequence, we may become suspicious of the concept of truth, and prefer not to use it. Tarski solved (with respect to formalized languages) this apparently hopeless problem by making use of a semantic metalanguage, reducing the idea of correspondence to that of ‘satisfaction’ or ‘fulfilment’.

As a result of Tarski’s teaching, I no longer hesitate to speak of ‘truth’ and ‘falsity’. (...)
A (short) footnote of my own: Popper’s writings have assumed the status of mere “background knowledge”, which is a truly great achievement for any philosopher of science. However, The Logic of Scientific Discovery is a glorious book which deserves to be even more widely read. Part I of the book spans no more than 30 pages. It’s nothing short of beautiful. PDF here.
Could you please quote the part of Popper’s book that makes the explicit connection from the correspondence theory of truth to “there are causal processes producing map-territory correspondences” to “you have to look at things to draw accurate maps of them...”?
Could you please quote the part of Popper’s book that makes the explicit connection from the correspondence theory of truth to “there are causal processes producing map-territory correspondences” to “you have to look at things to draw accurate maps of them...”?
Right, this is the obvious next question. I started looking for the appropriate “sound bites” yesterday, but encountered a bit of difficulty in doing so, as I shall explain. Popper’s embrace of (Tarskian) correspondence theory should be at least somewhat clear from the footnote I quoted above.
It seems clear to me, from my recollection of the book, that “you have to look at things to draw accurate maps of them” is one of the chief aims and central claims of the book: a claim defended by a lengthy but quite convincing and unusually successful argument, whose premises are presented only one at a time, quite meticulously, over at least several chapters. So I’m not exactly sure how to go about quoting only the “relevant parts”.
My claim that his argument was convincing and successful is based on the historical observation that Popperian falsificationism (the hypothetico-deductive framework) won out over the then quite prevalent logical positivist / verificationist view, to such an extent that it quickly became the default mode of science, a position it has held, mostly uncontested, ever since, and therefore is barely worthy of mention today. Except, that is, when one encounters problems that are metaphysical (according to Popper): Susskind’s string landscape of perhaps 10^500 vacua, the small (but significant) observed value of the cosmological constant, the (seemingly fine-tuned) value of the fine structure constant, and other observations that may require anthropic, i.e. metaphysical, explanations, since these problems are seemingly not decidable inside of standard, i.e. Popperian, science.
I feel faced with a claim similar to “I don’t believe any mathematician has convincingly proven Fermat’s last theorem.” To which I reply: Andrew Wiles (1995)
The obvious next question is: “Can you please quote the part where he proves the theorem?” This is unfortunately somewhat involved, as the entire 109-page paper tries and succeeds at doing so, about as concisely as Wiles himself managed. Unfortunately, in the Popper case, I cannot simply provide the relevant Wikipedia article and leave it at that.
I suppose that having made the claim, it is only my duty to back it up, or else concede defeat. If you’re still interested, I shall give it a thorough look, but will need a bit of time to do so. Hopefully, you’ll have my reply before Monday.
A (very) quick attempt, perhaps this will suffice? (Let me know if not. )
I begin with the tersest possible defense of my claim that Popper argued that “you actually have to look at things to draw accurate maps of them...”, even though this particular example is particularly trivial:
Page 19:
(Thus the statement, ‘It will rain or not rain here tomorrow’ will not be regarded as empirical, simply because it cannot be refuted; whereas the statement, ‘It will rain here tomorrow’ will be regarded as empirical.)
To paraphrase: You actually have to look out the window to discover whether it is raining or not.
Continuing, page 16:
The task of formulating an acceptable definition of the idea of an ‘empirical science’ is not without its difficulties. Some of these arise from the fact that there must be many theoretical systems with a logical structure very similar to the one which at any particular time is the accepted system of empirical science. This situation is sometimes described by saying that there is a great number—presumably an infinite number—of ‘logically possible worlds’. Yet the system called ‘empirical science’ is intended to represent only one world: the ‘real world’ or the ‘world of our experience’.*1
Various objections might be raised against the criterion of demarcation here proposed. In the first place, it may well seem somewhat wrong-headed to suggest that science, which is supposed to give us positive information, should be characterized as satisfying a negative requirement such as refutability. However, I shall show, in sections 31 to 46, that this objection has little weight, since the amount of positive information about the world which is conveyed by a scientific statement is the greater the more likely it is to clash, because of its logical character, with possible singular statements. (Not for nothing do we call the laws of nature ‘laws’: the more they prohibit the more they say.)
My proposal is based upon an asymmetry between verifiability and falsifiability; an asymmetry which results from the logical form of universal statements.4 For these are never derivable from singular statements, but can be contradicted by singular statements. Consequently it is possible by means of purely deductive inferences (with the help of the modus tollens of classical logic) to argue from the truth of singular statements to the falsity of universal statements. Such an argument to the falsity of universal statements is the only strictly deductive kind of inference that proceeds, as it were, in the ‘inductive direction’; that is, from singular to universal statements.
4 This asymmetry is now more fully discussed in section *22 of my Postscript.
According to my proposal, what characterizes the empirical method is its manner of exposing to falsification, in every conceivable way, the system to be tested. Its aim is not to save the lives of untenable systems but, on the contrary, to select the one which is by comparison the fittest, by exposing them all to the fiercest struggle for survival.
[a number of indicative, but not decisive quotes omitted]
I had hoped to find some decisive sound bite in part one, which is a brief discussion of the epistemological problems facing any theory of scientific method, and an outline of Popper’s framework, but it looks like I shall have to go deeper. Will look into this over the weekend.
I also found another, though much more recent, candidate: David Deutsch in The Beginning of Infinity, Chapter 1, “The Reach of Explanations”. Though I’m beginning to suspect that although they both point out that “you have to look at things to draw accurate maps of them...”, and describe “causal processes producing map-territory correspondences” (for example, between some state of affairs and the output of some scientific instrument), both Deutsch and Popper seem to have omitted what one may call the “neuroscience of epistemology”. (Where the photon reflects off your shoelace, gets absorbed by your retina, leading to information about the configuration of the world becoming entangled with some corresponding state of your brain, and so on.) This is admittedly quite a crucial step, which Yudkowsky’s explanation does cover, and which I cannot recall having seen elsewhere.
You might also be interested in checking out what Mohandas Gandhi had to say about “the meaning of truth”, just in case you were wondering what things are like in the rest of the world outside the halls of philosophy departments.
Here’s a quote from Perry Anderson’s recent (highly critical) essay on Gandhi:
There can be no doubt that he was, so far as he himself went, sincere enough in his commitment to non-violence. But as a political leader, his conception of himself as a vessel of divine intention allowed him to escape the trammels of human logic or coherence. Truth was not an objective value – correspondence to reality, or even (in a weaker version) common agreement – but simply what he subjectively felt at any given time. ‘It has been my experience,’ he wrote, ‘that I am always true from my point of view.’ His autobiography was subtitled The Story of My Experiments with Truth, as if truth were material for alteration in a laboratory, or the plaything of a séance. In his ‘readiness to obey the call of Truth, my God, from moment to moment’, he was freed from any requirement of consistency. ‘My aim is not to be consistent with my previous statements,’ he declared, but ‘with truth as it may present itself to me at a given moment’: ‘since I am called “Great Soul” I might as well endorse Emerson’s saying that “foolish consistency is the hobgoblin of little minds.”’ The result was a licence to say whatever he wanted, regardless of what he had said before, whenever he saw fit.
I haven’t particularly run across any philosophy explicitly making the connection from the correspondence theory of truth to “There are causal processes producing map-territory correspondences” to “You have to look at things in order to draw accurate maps of them, and this is a general rule with no exception for special interest groups who want more forgiving treatment for their assertions”. I would not be surprised to find out it existed, especially on the second clause.
Process reliabilism maybe? Defines the “justified” part in “justified true belief” as the belief being formed by a reliable truth-producing process.
From the Stanford Encyclopedia of Philosophy article:
Alvin Goldman offered his first formulation of a reliable process theory of knowing — as a refinement of the causal theory of knowing — in a short paper on innate knowledge (Goldman, 1975).
Whatever a “causal theory of knowing” is. But it sounds like the kind of thing you’re talking about.
This is a great post. I think the presentation of the ideas is clearer and more engaging than the sequences, and the cartoons are really nice. Wild applause for the artist.
I have a few things to say about the status of these ideas in mainstream philosophy, since I’m somewhat familiar with the mainstream literature (although admittedly it’s not the area of my expertise). I’ll split up my individual points into separate comments.
Summary of my point: Tarski’s biconditionals are not supposed to be a definition of truth. They are supposed to be a test of the adequacy of a proposed definition of truth. Proponents of many different theories claim that their theory passes this test of adequacy, so to identify Tarski’s criterion with the correspondence theory is incorrect, or at the very least, a highly controversial claim that requires defense. What follows is a detailed account of why the biconditionals can’t be an adequate definition of truth, and of what Tarski’s actual theory of truth is.
Describing Tarski’s biconditionals as a definition of truth or a theory of truth is misleading. The relevant paper is The Semantic Conception of Truth. Let’s call sentences of the form ‘p’ is true iff p T-sentences. Tarski’s claim in the paper is that the T-sentences constitute a criterion of adequacy for any proposed theory of truth. Specifically, a theory of truth is only adequate if all the T-sentences follow from it. This basically amounts to the claim that any adequate theory of truth must get the extension of the truth-predicate right—it must assign the truth-predicate to all and only those sentences that are in fact true.
I admit that the conjunction of all the T-sentences does in fact satisfy this criterion of adequacy. All the individual T-sentences do follow from this conjunction (assuming we’ve solved the subtle problem of dealing with infinitely long sentences). So if we are measuring by this criterion alone, I guess this conjunction would qualify as an adequate theory of truth. But there are other plausible criteria according to which it is inadequate. First, it’s a frickin’ infinite conjunction. We usually prefer our definitions to be shorter. More significantly, we usually demand more than mere extensional adequacy from our definitions. We also demand intensional adequacy.
If you ask someone for a definition of “Emperor of Rome” and she responds “X is an Emperor of Rome iff X is one of these...” and then proceeds to list every actual Emperor of Rome, I suspect you would find this definition inadequate. There are possible worlds in which Julius Caesar was an Emperor of Rome, even though he wasn’t in the actual world. If your friend is right, then those worlds are ruled out by definition. Surely that’s not satisfactory. The definition is extensionally adequate but not intensionally adequate. The T-sentence criterion only tests for extensional adequacy of a definition. It is satisfied by any theory that assigns the correct truth predicates in our world, whether or not that theory limns the account of truth in a way that is adequate for other possible worlds. Remember, the biconditionals here are material, not subjunctive. The T-sentences don’t tell us that an adequate theory would assign “Snow is green” as true if snow were green. But surely we want an adequate theory to do just that. If you regard the T-sentences themselves as the definition of truth, all that the definition gives us is a scheme for determining which truth ascriptions are true and false in our world. It tells us nothing about how to make these determinations in other possible worlds.
To make the problem more explicit, suppose I speak a language in which the sentence “Snow is white” means that grass is green. It will still be true that, for my language, “Snow is white” is true iff snow is white. Yet we don’t want to say this biconditional captures what it means for “Snow is white” to be true in my language. After all, in a possible world where snow remained white but grass was red, the sentence would be false.
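The point about material versus subjunctive biconditionals can be made mechanical. In this toy model (my own construction, not Tarski's), an "extensional" definition that merely freezes the list of sentences true in the actual world keeps returning the actual-world verdicts when we evaluate it in a counterfactual world, while a definition that goes through meanings tracks the change:

```python
# Two "worlds", each a set of facts. In the deviant language of the example,
# the sentence "Snow is white" MEANS that grass is green.
worlds = {
    "actual":         {"snow_is_white": True, "grass_is_green": True},
    "counterfactual": {"snow_is_white": True, "grass_is_green": False},
}

meaning = {"Snow is white": "grass_is_green"}  # the deviant language

# Extensional "definition": a frozen list of sentences true in the actual world.
true_in_actual = {s for s, fact in meaning.items() if worlds["actual"][fact]}

def true_extensional(sentence, world):
    return sentence in true_in_actual  # ignores the world entirely

# Intensional definition: evaluate the sentence's meaning in the given world.
def true_intensional(sentence, world):
    return worlds[world][meaning[sentence]]

# Both agree in the actual world...
print(true_extensional("Snow is white", "actual"))        # True
print(true_intensional("Snow is white", "actual"))        # True
# ...but only the intensional one notices the counterfactual change.
print(true_extensional("Snow is white", "counterfactual"))  # True (wrong)
print(true_intensional("Snow is white", "counterfactual"))  # False
```

The frozen list is extensionally adequate in the actual world yet gives the wrong verdict in the world where grass is red: exactly the failure of intensional adequacy described above.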
Tarski was a smart guy, and I’m pretty sure he realized all this (or at least some of it). He constantly refers to the T-sentences as material criteria of adequacy for a definition of truth. He says (speaking about the T-sentences), “… we shall call a definition of truth ‘adequate’ if all these equivalences follow from it.” (although this seems to ignore the fact that there are other important criteria of adequacy) When discussing a particular objection to his view late in the paper, he says, “The author of this objection mistakenly regards scheme (T)… as a definition of truth.” Unfortunately, he also says stuff that might lead one to think he does think of the conjunction of all T-sentences as a definition: “We can only say that every equivalence of the form (T)… may be considered a partial definition of truth, which explains wherein the truth of this one individual sentence consists. The general definition has to be, in a certain sense, a logical conjunction of all these partial definitions.”
I read the “in a certain sense” there as a subtle concession that we will need more than just a conjunction of the T-sentences for an adequate definition of truth. As support for my reading, I appeal to the fact that Tarski explicitly offers a definition of truth in his paper (in section 11), one that is more than just a conjunction of T-sentences. He defines truth in terms of satisfaction, which in turn is defined recursively using rules like: The objects a and b satisfy the sentential function “P(x, y) or Q(x, y)” iff they satisfy at least one of the functions “P(x, y)” or “Q(x, y)”. His definition of truth is basically that a sentence is true iff it is satisfied by all objects and false otherwise. This works because a sentence, unlike a general sentential function, has no free variables to which objects can be bound.
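The recursive shape of that definition is easy to exhibit in code. Below is a minimal sketch of my own (not Tarski's formal apparatus): satisfaction of compound sentential functions reduces to satisfaction of their parts, while atomic satisfaction is supplied as a bare lookup table — mirroring the point that Tarski leaves the atomic case primitive:

```python
DOMAIN = ["a", "b"]

# Hypothetical atomic facts: which objects satisfy which simple predicates.
# This table is the unexplained primitive of the theory.
ATOMIC = {("P", "a"): True, ("P", "b"): False,
          ("Q", "a"): False, ("Q", "b"): True}

def satisfies(assignment, formula):
    """Recursively decide whether a variable assignment satisfies a formula,
    given as nested tuples like ("or", ("P", "x"), ("Q", "x"))."""
    op = formula[0]
    if op == "or":
        return satisfies(assignment, formula[1]) or satisfies(assignment, formula[2])
    if op == "not":
        return not satisfies(assignment, formula[1])
    if op == "forall":  # ("forall", var, body)
        _, var, body = formula
        return all(satisfies({**assignment, var: obj}, body) for obj in DOMAIN)
    # Atomic case, e.g. ("P", "x"): look up the object bound to the variable.
    pred, var = formula
    return ATOMIC[(pred, assignment[var])]

def is_true(sentence):
    """Tarski's move: a sentence (no free variables) is true iff it is
    satisfied by every assignment of objects to variables."""
    return all(satisfies({"x": obj}, sentence) for obj in DOMAIN)

# "For all x, P(x) or Q(x)" holds in this toy world; "For all x, P(x)" does not.
print(is_true(("forall", "x", ("or", ("P", "x"), ("Q", "x")))))  # True
print(is_true(("forall", "x", ("P", "x"))))                      # False
```

Note that the recursion bottoms out in the `ATOMIC` table: the sketch, like Tarski's definition, says nothing about what makes a simple predication satisfied by an object.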
This definition is clearly distinct from the logical conjunction of all T-sentences. Tarski claims it entails all the T-sentences, and therefore satisfies his criterion of adequacy. Now, I think Tarski’s actual definition of truth isn’t all that helpful. He defines truth in terms of satisfaction, but satisfaction is hardly a more perspicuous concept. True, he provides a recursive procedure for determining satisfaction, but this only tells us when compound sentential functions are satisfied once we know when simple ones are satisfied. His account doesn’t explain what it means for a simple sentential function to be satisfied by an object. This is just left as a primitive in the theory. So, yeah, Tarski’s actual theory of truth kind of sucks.
His criterion of adequacy, though, has been very influential. But it is not a theory of truth, and that is not the way it is treated by philosophers. It is used as a test of adequacy, and proponents of most theories of truth (not just the correspondence theory) claim that their theory satisfies this test. So to identify Tarski’s definition/criterion/whatever with the correspondence theory misrepresents the state of play. There are, incidentally, a group of philosophers who do take the T-sentences to be a full definition of truth, or at least to be all that we can say about truth. But these are not correspondence theorists. They are deflationists.
I’ve slightly edited the OP to say that Tarski “described” rather than “defined” truth—I wish I could include more to reflect this valid point (indeed Tarski’s theorems on truth are a lot more complicated and so are surrounding issues, no language can contain its own truth-predicate, etc.), but I think it might be a distraction from the main text. Thank you for this comment though!
The latest Rationally Speaking post looks relevant: Ian Pollock describes aspects of Eliezer’s view as “minimalism” with a link to that same SEP article. He also mentions Simon Blackburn’s book, where Blackburn describes minimalists or quietists as making the same point Eliezer makes about collapsing “X is true” to “X”, and a similar point about the usefulness of the term “truth” as a generalisation (though it seems that minimalists would say that this is only a linguistic convenience, whereas Eliezer has a slightly different concept of it in that he wants to talk in general about how we get accurate beliefs).
Thanks for this whole comment. In particular,
My gut instinct is deflationist, but I don’t see this view as being opposed to “correspondence”. The alleged conflict is dubious at best. Stanford Encyclopedia of Philosophy writes:
Emphasis added: the italicized premise is false. Explanation is a cognitive feat, and the same fact (even if the identity is a necessary one) can be cognized in different ways. (Such explanations occur frequently enough in mathematics, I think.) The SEP author anticipates my objection and writes:
It is open to them to argue that “because” does not create a hyper-intensional context, but it is much more plausible that it does. So until a good argument comes along, mark me down as a correspondence deflationist.
It’s in vogue to defend correspondence because 1) it sounds like common sense and 2) it signals rejection of largely discredited instrumentalism. But surely a correspondence theorist should have a theory of the nature of the correspondence. How does a proposition or a verbal string correspond to a state of reality? By virtue of what is it a correct description? We can state a metalinguistic relationship about “Snow is white,” but how does this locution hook onto the actual world?
Correspondence theorists think this is a task for a philosophical theory of reference. (Such as in an account where “torekp” refers to you by virtue of the “christening event” of your creating the account and causal connections therefrom.) Deflationists are apt to say it is ultimately a technical problem in the psychology of language.
Interesting. I am inclined to replicate my compatibility claim at this level too; i.e., the technical solution in the psychology of language will be a philosophical theory of reference (as much as one needs) as well. I’d be interested in references to any of the deflationist discussions of reference you have in mind.
Depends on what you mean by “explicitly”. Many correspondence theorists believe that an adequate understanding of “correspondence” requires an understanding of reference—how parts of our language are associated with parts of the world. I think this sort of idea stems from trying to fill out Tarski’s (actual) definition of truth, which I discussed in another comment. The hope is that a good theory of reference will fill out Tarski’s obscure notion of satisfaction, and thereby give some substance to his definition of truth in terms of satisfaction.
Anyway, there was a period when a lot of philosophers believed, following Saul Kripke and Hilary Putnam, that we can understand reference in terms of causal relations between objects in the world and our brains (it appears to me that this view is falling out of vogue now, though). What makes it the case that our use of the term “electron” refers to electrons? That there are the appropriate sorts of causal relations, both social—the causal chain from physicists who originated the use of the word to contemporary uses of it—and evidential—the causal connections with the world that govern the ways in which contemporary physicists come to assert new claims involving the word “electron”. The causal theory of reference is used as the basis for a (purportedly) non-mysterious account of satisfaction, which in turn is used as the basis for a theory of truth.
So the idea is that the meanings of the elements in our map are determined by causal processes, and these meanings link the satisfaction conditions of sentential functions to states of affairs in the world. I’m not sure this is exactly the sort of thing you’re saying, but it seems close. For an explicit statement of this kind of view, see Hartry Field’s Tarski’s Theory of Truth. Most of the paper is a (fairly devastating, in my opinion) critique of Tarski’s account of truth, but towards the end of section IV he brings up the causal theory.
ETA: More broadly, reliabilism in epistemology has a lot in common with your view. Reliabilism is a refinement of early causal theories of knowledge. The idea is that our beliefs are warranted in so far as they are produced by reliable mechanisms. Most reliabilists I’m aware of are naturalists, and read “reliable mechanism” as “mechanism which establishes appropriate causal connections between belief states and world states”. Our senses are presumed to be reliable (and therefore sources of warrant) just because the sorts of causal chains you describe in your post are regularly instantiated. Reliabilism is, however, compatible with anti-naturalism. Alvin Plantinga, for instance, believes that the sensus divinitatis should be regarded as a reliable cognitive faculty, one that atheists lack (or ignore).
One example of a naturalist reliabilism (paired with a naturalist theory of mental representation) is Fred Dretske’s Knowledge and the Flow of Information. A summary of the book’s arguments is available here (DOC file). Dretske tries to understand perception, knowledge, the truth and falsity of belief, mental content, etc. using the framework of Shannon’s communication theory. The basis of his analysis is that information transfer from a sender system to a receiver system must be understood in terms of relations of law-like dependence of the receiver system’s state on the sender system’s state. He then analyzes various epistemological problems in terms of information transfer from systems in the external world to our perceptual faculties, and information transfer from our perceptual faculties to our cognitive centers. He’s written a whole book about this, so there’s a lot of detail, and some of the specific details are suspect. In broad strokes, though, Dretske’s book expresses pretty much the same point of view you describe in this post.
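Dretske's core notion — that a receiver carries information about a sender only insofar as its states depend law-likely on the sender's states — is naturally quantified as Shannon mutual information. A toy illustration of my own (not from Dretske's book): a reliable perceptual channel carries a full bit about the source, a broken one carries none.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from (sender_state, receiver_state) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A perfectly reliable channel: receiver state is a fixed function of the
# sender state, so one full bit about the source flows through.
reliable = [("white", "sees_white"), ("green", "sees_green")] * 50

# A broken channel: receiver state is statistically independent of the sender.
broken = [("white", "sees_white"), ("white", "sees_green"),
          ("green", "sees_white"), ("green", "sees_green")] * 25

print(mutual_information(reliable))  # 1.0 bit
print(mutual_information(broken))    # 0.0 bits
```

On this way of cashing things out, the senses are sources of warrant exactly to the extent that the world-to-percept channel looks like the first case rather than the second.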
Speaking as the author of Eliezer’s Sequences and Mainstream Academia...
Off the top of my head, I also can’t think of a philosopher who has made an explicit connection from the correspondence theory of truth to “there are causal processes producing map-territory correspondences” to “you have to look at things to draw accurate maps of them...”
But if this connection has been made explicitly, I would expect it to be made by someone who accepts both the correspondence theory and “naturalized epistemology”, often summed up in a quote from Quine:
(Originally, Quine’s naturalized epistemology accounted only for this descriptive part of epistemology, and neglected the normative part, e.g. truth conditions. In the 80s Quine started saying that the normative part entered into naturalized epistemology through “the technology of truth-seeking,” but he was pretty vague about this.)
Edit: Another relevant discussion of embodiment and theories of truth can be found in chapter 7 of Philosophy in the Flesh.
It’s not that clear to me in what sense mainstream academia is a unified thing which holds positions, even regarding questions such as “what fields are legitimate”. Saying that something is known in mainstream academia seems suspiciously like saying that something is encoded in the matter in my shoelace, given the right decryption schema. OTOH, it’s highly meaningful to say that something is discoverable by someone with competent “google-fu”.
Strongly seconded.
Hell, some “mainstream” scientists are working on big-money research projects that attempt to prove that there’s a worldwide conspiracy attempting to convince people that global warming exists so as to make money off of it. Either they’re all sell-outs, which seems very unlikely, or at least some of them actually disagree with some other mainstream scientists, who see the “Is there real global warming?” question as obviously resolved long ago.
Agree with all this.
OK, I defended the tweet that got this response from Eliezer as the sort of rhetorical flourish that gets people to actually click on the link. However, it looks like I also underestimated how original the sequences are—I had really expected this sort of thing to mirror work in mainstream philosophy.
Although I wouldn’t think of this particular thing as being an invention on his part—I’m not sure I’ve read that particular chain of thought before, but all the elements of the chain are things I’ve known for years.
However I think it illustrates the strength of Eliezer’s writing well. It’s a perfectly legitimate sequence of thought steps that leads in a clear and obvious way to the right answer. It’s not new thought, but a particularly clear way of expressing something that many people have thought and written about in a less organised way.
To clarify—there are times when Eliezer is inventive—for example his work on CEV—but this isn’t one of those places. I know I’m partly arguing about the meaning of “inventive”, but I don’t think we’re doing him a favor here by claiming this is an example of his inventiveness when there are much better candidates.
Karl Popper did so explicitly, thoroughly and convincingly in The Logic of Scientific Discovery. Pretty influential, and definitely a part of “Mainstream Academia.”
Here’s an interesting, if lengthy, footnote to Chapter 84 - Remarks Concerning the use of the concepts ‘True’ and ‘Corroborated’.
A (short) footnote of my own: Popper’s writings have assumed the status of mere “background knowledge”, which is a truly great achievement for any philosopher of science. However, The Logic of Scientific Discovery is a glorious book which deserves to be even more widely read. Part I of the book spans no more than 30 pages. It’s nothing short of beautiful. PDF here.
Could you please quote the part of Popper’s book that makes the explicit connection from the correspondence theory of truth to “there are causal processes producing map-territory correspondences” to “you have to look at things to draw accurate maps of them...”?
Right, this is the obvious next question. I started looking for the appropriate “sound bites” yesterday, but encountered a bit of difficulty in doing so, as I shall explain. Popper’s embrace of (Tarskian) correspondence theory should be at least somewhat clear from the footnote I quoted above.
It seems clear to me, from my reading of the book, that “you have to look at things to draw accurate maps of them” is one of the chief aims and central claims of the book; a claim which is defended by a lengthy but quite convincing and unusually successful argument, the premises of which are presented only one at a time, and quite meticulously, over at least several chapters, so I’m not exactly sure how to go about quoting only the “relevant parts”.
My claim that his argument was convincing and successful is based on the historical observation that Popperian falsificationism (the hypothetico-deductive framework) won out over the then quite prevalent logical positivist / verificationist view, to such an extent that it quickly became the default mode of science, a position it has held, mostly uncontested, ever since, and therefore is barely worthy of mention today. Except when it is, that is: when one encounters problems that are metaphysical (according to Popper), such as Susskind’s String Landscape of perhaps 10^500 vacua, the small (but significant) observed value of the cosmological constant, the (seemingly fine-tuned) value of the fine structure constant, and other observations that may require anthropic, i.e. metaphysical, explanations, since these problems are seemingly not decidable inside of standard, i.e. Popperian, science.
I feel faced with a claim similar to “I don’t believe any mathematician has convincingly proven Fermat’s last theorem.” To which I reply: Andrew Wiles (1995). The obvious next question is: “Can you please quote the part where he proves the theorem?” This is unfortunately somewhat involved, as the entire 109-page paper does so about as concisely as Wiles himself managed. Unfortunately, in the Popper case, I cannot simply provide the relevant Wikipedia article and leave it at that.
I suppose that having made the claim, it is only my duty to back it up, or else concede defeat. If you’re still interested, I shall give it a thorough look, but will need a bit of time to do so. Hopefully, you’ll have my reply before Monday.
A (very) quick attempt, perhaps this will suffice? (Let me know if not.)
I begin with the tersest possible defense of my claim that Popper argued that “you actually have to look at things to draw accurate maps of them...”, even though this particular example is particularly trivial:
Page 19:
To paraphrase: You actually have to look out the window to discover whether it is raining or not.
Continuing, page 16:
(Oops, comment too long.)
(Continued)
Page 20:
[a number of indicative, but not decisive quotes omitted]
I had hoped to find some decisive sound bite in part one, which is a brief discussion of the epistemological problems facing any theory of scientific method, and an outline of Popper’s framework, but it looks like I shall have to go deeper. Will look into this over the weekend.
I also found another, though much more recent candidate, David Deutsch in The Beginning of Infinity, Chapter 1 on “The Reach of Explanations”. Though I’m beginning to suspect that although they both point out that “you have to look at things to draw accurate maps of them...”, and describe “causal processes producing map-territory correspondences” (for example, between some state of affairs and the output of some scientific instrument), both Deutsch and Popper seem to have omitted what one may call the “neuroscience of epistemology” (where the photon reflects off your shoelace, gets absorbed by your retina, leading to information about the configuration of the world becoming entangled with some corresponding state of your brain, and so on). This is admittedly quite a crucial step, which Yudkowsky’s explanation does cover, and which I cannot recall having seen elsewhere.
Trying to include mainstream academia other than philosophy, and going off your blog post “The Second Law of Thermodynamics, and Engines of Cognition”, it seems the idea of the general rule that you have to look at and interact with things to form accurate beliefs about them was largely due to Leo Szilard in his 1939 paper “On the Decrease in Entropy in a Thermodynamic System by the Intervention of Intelligent Beings” which analyzed Maxwell’s demon thought experiment and introduced the Szilard engine and the entropy cost of gaining information. You gave a more Bayesian analysis than Szilard in that post, possibly going off Jaynes’ work in statistical mechanics, like his 1957 papers “Information Theory and Statistical Mechanics” parts one and two, which are the earliest mention of Liouville’s theorem I can find in that context. Does Pearl have anything to throw in the mix, like a fancy rule about concluding a past causal interaction when you see corresponding maps and cities?
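The thermodynamic bookkeeping Szilard introduced reduces to a one-line calculation: acquiring one bit of information about the molecule's position lets the engine extract at most kT ln 2 of work, and erasing that bit costs at least the same amount (the figures below use the standard modern constants, not Szilard's original notation):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0           # room temperature, K

# Szilard's engine: one bit of information about which half of the box the
# molecule occupies permits extraction of at most k_B * T * ln(2) of work;
# symmetrically, erasing that bit dissipates at least the same amount
# (Landauer's later formulation of the same bookkeeping).
work_per_bit = k_B * T * math.log(2)

print(f"{work_per_bit:.3e} J per bit at {T:.0f} K")  # ~2.87e-21 J
```

The net result is that the demon's measurements never yield a free lunch: the entropy cost of gaining (or resetting) the information at least cancels the work extracted.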
DevilWorm and pragmatist point to the “reliabilism” school of philosophy (http://en.wikipedia.org/wiki/Reliabilism & http://plato.stanford.edu/entries/reliabilism). Clicking on either link reveals arguments concerned mainly with that old dispute over whether the word “knowledge” should be used to refer to “justified true belief”. Going on the wording I’m not even sure whether they’re considering how photons from the Sun are involved in correlating your visual cortex to your shoelaces. But it does increase the probability of a precedent—does anyone have something more specific? (A lot of the terminology I’ve seen so far is tremendously vague, and open to many interpretations...)
Incidentally, there might be an even higher probability of finding some explicit precedent in a good modern AI book somewhere?
It might be too obvious to be worth mentioning. If you're actually building (narrow) AI devices like self-driving cars, then of course your car has to have a way of sensing the things around it if it's going to build a map of its surroundings.
This fact should be turned into an SMBC cartoon.
That’s what I was thinking. Maybe in something like Knowledge Representation and Reasoning.
AI books tend to assume that one pretty explicitly. For those of a more philosophical bent, some might say something like “The world pushes back”, but it’s not like anyone doing engineering is in the business of questioning whether the external world exists.
Epistemology and the Psychology of Human Judgment (badger's summary) seems relevant, as one of the things they do is attack reliabilism for its uselessness. I don't recall any direct precedents, but it's been a while since I read it.
Bishop & Trout call their approach “strategic reliabilism.” A short summary is here. It’s far more Yudkowskian than normal reliabilism. LWers may also enjoy their paper The Pathologies of Standard Analytic Epistemology.
That was a pretty cool paper. I don’t think I’ve ever seen SPRs in a philosophy paper before.
For the curious, I interviewed Michael Bishop a couple years ago.
Process reliabilism maybe? Defines the “justified” part in “justified true belief” as the belief being formed by a reliable truth-producing process.
From the Stanford Encyclopedia of Philosophy article:
Whatever a “causal theory of knowing” is. But it sounds like the kind of thing you’re talking about.