A criticism—somewhat harsh but hopefully constructive.
As you know, many people besides Tarski have written on the subjects of truth and meaning. It seems, however, that you don’t accord them much importance (no references, no consideration of alternative points of view, apparent lack of awareness of the significance of the question of what the bearer of truth (sentence, proposition, ‘neurally embodied belief’) properly is, and so on). I put it to you that this is a manifestation of irrationality: you have a known means at your disposal to learn reliably about a subject which is plainly important to you, but you apparently reject it in favour of the more personally satisfying but much less reliable alternative of blogging your own ideas; you willingly choose an inferior path to belief formation. If you want a good understanding of such things as truth, reference and mathematical proof, I submit that the rational starting point is to read at least a survey of what experts in these fields have written, and to develop your own thoughts, at least initially, in the context they provide.
Give me an example of a specific thing relevant to constructing an AI which I should have referenced, plus the role it plays in a (self-modifying) AI. Keep in mind that I only care about constructing self-modifying AIs and not about “what is the bearer of truth”.
I’ve read works-not-referenced on “meaning”; they just don’t seem relevant to anything I care about. Though obviously there’s quite a lot of standard work on mathematical proof that I do care about (some small amount of which I’ve referenced).
1) I don’t see that this really engages the criticism. I take it you are rejecting the premise that the subjects of truth and reference are important to you. On this, two thoughts:
a) This doesn’t affect the point about the reliability of blogging versus research. It may affect the significance of the irrationality, but the point remains. You may hold that the value to you of the creative process of explicating your own thoughts is sufficiently high that it trumps the value of coming to optimally informed beliefs—that the cost-benefit analysis favours blogging. I am sceptical of this, but would be interested to hear the case.
b) It seems just false that you don’t care about these subjects. You’ve written repeatedly on them, and seem to be aiming for an internally coherent epistemology and semantics.
2) My claim was that your lack of references is evidence that you don’t accord importance to experts on truth and meaning, not that there are specific things you should be referencing. That said, if your claim is ultimately just the observation that truth is useful as a device for so-called semantic ascent, you might mention Quine (see the relevant section of Word and Object or the discussion in Pursuit of Truth) or the opening pages of Paul Horwich’s book Truth, to give just two examples. (A sketch of what semantic ascent amounts to follows point 3 below.)
3) My own view is that AI should have nothing to do with truth, meaning, belief or rationality—that AI theory should be elaborated entirely in terms of pattern matching and generation, and that philosophy (and likewise decision theory) should be close to irrelevant to it. You seem to think you need to do some philosophy (else why these posts?), but not too much (you don’t have to decide whether the sorts of things properly called ‘true’ are sentences, abstract propositions or neural states, or all or none of the above). Where the line lies and why is not clear to me.
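To make the semantic-ascent remark in 2) concrete, here is a minimal sketch (a gloss only, not a substitute for the cited texts):

```latex
% Tarski's disquotational schema: one instance per sentence, e.g.
\text{``Snow is white'' is true} \;\leftrightarrow\; \text{snow is white}
% Semantic ascent: the truth predicate lets one generalise over
% infinitely many sentences at once, where object-level
% quantification is unavailable, e.g.
\forall s \,\bigl(\, s \text{ is an instance of } \ulcorner \phi \lor \neg\phi \urcorner \;\rightarrow\; \mathrm{True}(s) \,\bigr)
```

The second formula replaces asserting “grass is green or grass is not green”, “2 is prime or 2 is not prime”, and so on, one sentence at a time.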
I’m saying, “Show me something in particular that I should’ve looked at, and explain why it matters; I do not respond to non-specific claims that I should’ve paid more homage to whatever.”
As far as I can see, your point is something like:
“Your reasoning implies I should read some specific thing; there is no such thing; therefore your reasoning is mistaken.” (or, “unless you can produce such a thing...”)
Is this right? In any case, I don’t see that the conditional is correct. I can only give examples of works which would help. Here are three more. Your second part seeks (as I understand it) a theory of meaning which would imply that your ‘Elaine is a post-utopian’ is meaningless, but that ‘The photon continues to exist...’ is both meaningful and true. I get the impression you think that an adequate answer could be articulated in a few paragraphs. To get a sense of some of the challenges you might face (i.e., of what the project of contriving a theory of meaning entails), consider looking at Stephen Schiffer’s excellent Remnants of Meaning and The Things We Mean, or Scott Soames’s What is Meaning?
As far as I can see, your point is something like:
“Your reasoning implies I should read some specific thing; there is no such thing; therefore your reasoning is mistaken.” (or, “unless you can produce such a thing...”)
I think it’s more like
“Your reasoning implies I should have read some specific idea, but so far you haven’t given me any such idea and why it should matter, only general references to books and authors without pointing to any specific idea in them”
Part of the talking-past-each-other may come from the fact that by “thing”, Eliezer seems to mean “specific concept”, and you seem to mean “book”.
There also seems to be some disagreement as to what warrants references—for Eliezer it seems to be “I got idea X from Y”, for you it’s closer to “Y also has idea X”.
“If there is such a thing and you know it, you should be able to describe it at least partially to a highly informed listener who is already familiar with the field. Your failure to describe this thing causes me to think that you might be trying to look impressive by listing a lot of books which, for all I know at this point, you haven’t even read.”
Your comment carries the assumption that studying the work of experts makes you better at understanding epistemology, and I’m not sure why you think that. Much of philosophy has a poor understanding of epistemology, to my mind. Can you explain why you think reading the work of experts is important for having worthwhile thoughts on epistemology?
This seems to me a reasonable question (at least partly—see below). To be clear, I said that reading the work of experts is more likely to produce a good understanding than merely writing up one’s own thoughts. My answer:
For any given field, reading the thoughts of experts (i.e., smart people who have devoted substantial time and effort to thinking and collaborating in the field) is more likely to result in a good understanding of the field’s issues than furrowing one’s brow and typing away in relative isolation. I take this to be common sense, but please say if you need some substantiation. The conclusion about philosophy follows by universal instantiation.
“Ah”, I hear you say, “but philosophy does not fit this pattern, because the people who do it aren’t smart. They’re all at best of mediocre intelligence.” (Is there another explanation of the poor understanding you refer to?) From what I’ve seen on LW, this position tends to be inferred from a bad experience or two with philosophy profs, or perhaps held on the grounds that no smart person would elect to study such a diseased subject.
Two rejoinders:
i) Suppose it were true that only second-rate thinkers do philosophy. It would still be the case that, with a large number of people discussing the issues over many years, there’d be a good chance that something worth knowing (if there’s anything to know) would emerge. It wouldn’t be obvious that the rational course is to ignore it, if one is interested in the issues.
ii) It’s obviously false (hence the ‘partly’ above). Just try reading the work of Timothy Williamson or David Lewis or Crispin Wright or W.V.O. Quine or Hilary Putnam or Donald Davidson or George Boolos or any of a huge number of other writers, and then making a rational case that the leading thinkers of philosophy are second-rate intellects. I think this is sufficiently obvious that the failure to see it suggests not merely oversight but bias.
Philosophical progress may tend to take the form of increasingly nuanced understandings of its problems’ parameters rather than clear resolutions of them, and so, to some, may not seem worth doing. I don’t know whether I’d argue with someone who thinks this, but I would suggest that if one thinks it, one shouldn’t be claiming it in the course of expounding a philosophical theory.
“Ah”, I hear you say, “but philosophy does not fit this pattern, because the people who do it aren’t smart. They’re all at best of mediocre intelligence.” (Is there another explanation of the poor understanding you refer to?)
There’s no fool like an intelligent fool. You have to be really smart to be as stupid as a philosopher.
Even in antiquity it was remarked that “no statement is too absurd for some philosophers to make” (Cicero).
If ever one needed a demonstration that intelligence is not usefully thought of as a one-dimensional attribute, this is it.
Philosophical progress may tend to take the form of increasingly nuanced understandings of its problems’ parameters
When I hear the word “nuanced”, I reach for my sledgehammer.
Quoting this.
Reading the work of experts also puts you in a position to communicate complex ideas in a way others can understand.
I think philosophers include some smart people, who have produced some excellent work (some of which might still help us today). I also think philosophy is not a natural class: you would never lump the members of this category together without specific social factors pushing them together. Studying “philosophy” therefore seems unlikely to produce good results unless you already know what to look for.
I have little confidence in your recommendations, because your sole concrete example to date of a philosophical question seems ludicrous. What would change if a neurally embodied belief rather than a sentence (or vice versa) were the “bearer of meaning”? And as a separate question, why should we care?
The issue is whether a sentence’s meaning just is its truth conditions, or whether the sentence expresses some kind of independent thought or proposition, and it is this abstract object that has truth conditions. These are two quite different approaches to doing semantics.
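A toy sketch of the contrast, in case it helps (the names World, Proposition, and the clauses below are my illustrative inventions, not any particular philosopher’s formal apparatus):

```python
from typing import Callable, Dict

World = Dict[str, bool]            # a toy "possible world": which atomic facts hold
TruthConditions = Callable[[World], bool]

# Approach 1: a sentence's meaning just IS its truth conditions.
def meaning_as_truth_conditions(sentence: str) -> TruthConditions:
    # Toy clause: an atomic sentence is true in a world iff the world says so.
    return lambda world: world.get(sentence, False)

# Approach 2: a sentence expresses an intermediate abstract object
# (a "proposition"), and it is that object which has truth conditions.
class Proposition:
    def __init__(self, atom: str):
        self.atom = atom

    def truth_conditions(self) -> TruthConditions:
        return lambda world: world.get(self.atom, False)

def meaning_as_proposition(sentence: str) -> Proposition:
    return Proposition(sentence)

w: World = {"snow is white": True}
print(meaning_as_truth_conditions("snow is white")(w))                # True
print(meaning_as_proposition("snow is white").truth_conditions()(w))  # True
```

In this toy the two routes agree on every verdict; the dispute is over whether the intermediate object earns its keep, e.g. for belief reports, or for sentences of different languages that “say the same thing”.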
Why should you care? Personally, I don’t see that this problem has anything to do with the problem of figuring out how a brain acquires the patterns of connections needed to create the movements and sounds it does, given the stimuli it receives. To me it’s an interesting but independent problem, and the idea of ‘neurally embodied beliefs’ is worthless. Some people (with whom I disagree but whom I nevertheless respect) think the problems are related, in which case there’s an extra reason to care, and what exactly a neurally embodied belief is will vary. If you don’t care, that’s your business.
This has done very little to convince me that I should care (and I probably care more about academic Philosophy than most here).
Thanks for pointing this out. I tend to conflate the two, and it’s worth keeping the distinction in mind.
You mention that an AI might need a cross-domain notion of truth, or might realise that truth applies across domains. Michael Lynch’s functionalist theory of truth, mentioned elsewhere on this page, is such a theory.