I don’t need a “theory” to explain how a “hypothetical” learning algorithm can build a generative model that can represent this kind of information in its latent variables, and draw appropriate inferences.
Sure, but we would still need a separate explanation if we want to understand how representation/reference works in a model (or in the brain) itself. If we are interested in that, of course. It could be interesting from the standpoint of philosophy of mind, philosophy of language, linguistics, cognitive psychology, and of course machine learning interpretability.
If, when you run these algorithms, you wind up with all kinds of edge cases where it’s unclear what is “about” what (and you do), then that’s a sign that you should not be treating “aboutness” as a bedrock principle in the first place.
I don’t think we have run into any edge cases of representation so far where something partially represents or is partially represented, the way chess is partially sport-like. Representation/reference/aboutness doesn’t seem to be a very vague concept. Apparently the difficulty of finding an adequate definition isn’t due to vagueness.
That being said, it’s clearly not necessary for your theory to cover this topic if you don’t find it very interesting and/or you have other objectives.
it’s clearly not necessary for your theory to cover this topic if you don’t find it very interesting and/or you have other objectives
I’m trying to make a stronger statement than that. I think the kind of theory you’re looking for doesn’t exist. :)
I don’t think we have run into any edge cases of representation so far where something partially represents or is partially represented
In this comment you talked about a scenario where a perfect map of London was created by “an (extremely unlikely) accident”. I think that’s an “edge case of representation”, right? Just as a virus is an edge case of alive-ness. And I claim that arguing about whether viruses are “really” alive or not is a waste of time, and likewise that arguing about whether accidental maps “really” represent London is a waste of time too.
Again, a central case of representation would be:
(A) The map has lots and lots of features that straightforwardly correspond to features of London
(B) The reason for (A) is that the map was made by an optimization process that systematically (though imperfectly) tends to lead to (A), e.g. the map was created by a human map-maker who was trying to accurately map London
(C) The map is in fact useful for navigating London
I claim that if you run a predictive learning algorithm, that’s a valid (B), and we can call the trained generative model a “map” (there’s a toy sketch of that at the end of this comment). If we do that…
You can pick out places where all three of (A-C) are very clearly applicable, and when you find one of those places, you can say that there’s some map-territory correspondence / representation there. Those places are clear-cut central examples, just as a cat is a clear-cut central example of alive-ness.
There are other bits of the map that clearly don’t correspond to anything in the territory, like a drug-induced hallucinated ghost that the person believes to be really standing in front of them. That’s a clear-cut non-example of map-territory correspondence, just as a rock is a clear-cut non-example of alive-ness.
And then there are edge cases, like the Gettier problem, or vague impressions, or partially-correct ideas, or hearsay, or vibes, or Scott/Yvain, or latent variables that are somehow helpful for prediction but in a very hard-to-interpret and indirect way, or habits, etc. Those are edge-cases of map-territory correspondence. And arguing about what if anything they’re “really” representing is a waste of time, just as arguing about whether a virus is “really” alive is a waste of time, right? In those cases, I claim that it’s useful to invoke the term “map-territory correspondence” or “representation” as part of a longer description of what’s happening here, but not as a bedrock ground truth that we’re trying to suss out.
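(To make that concrete, here’s a deliberately tiny, made-up illustration of the (A)-(C) picture, using a one-parameter predictive model rather than a full generative model; the setup and numbers are mine, purely for illustration.)

```python
import random

random.seed(0)

# The "territory": data generated by y = 3*x plus a little noise.
TRUE_SLOPE = 3.0
xs = [i / 10 for i in range(100)]
data = [(x, TRUE_SLOPE * x + random.gauss(0, 0.1)) for x in xs]

# The "map": a one-parameter predictive model, trained to minimize
# squared prediction error by stochastic gradient descent.
slope = 0.0
lr = 0.01
for _ in range(200):
    for x, y in data:
        error = slope * x - y
        slope -= lr * error * x  # gradient step on (slope*x - y)**2 / 2

print(round(slope, 2))  # close to 3.0
```

The learned parameter ends up corresponding to a real feature of the territory (that’s (A)), because the training process systematically pushed it there (that’s (B)), and the result is in fact useful for predicting y from x (that’s (C)).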
In this comment you talked about a scenario where a perfect map of London was created by “an (extremely unlikely) accident”. I think that’s an “edge case of representation”, right?
I think it clearly wasn’t a case of representation, in the same way a random string clearly doesn’t represent anything, and neither does a cloud that happens to look like something. Those are not edge cases; edge cases are arguably examples where something satisfies a vague predicate partially, like chess being “sporty” to some degree. (“Life” is arguably also vague, because it isn’t clear whether it requires a metabolism, which isn’t present in viruses.)
Again, a central case of representation would be:
(A) The map has lots and lots of features that straightforwardly correspond to features of London
(B) The reason for (A) is that the map was made by an optimization process that systematically (though imperfectly) tends to lead to (A), e.g. the map was created by a human map-maker who was trying to accurately map London
(C) The map is in fact useful for navigating London
I think maps are not especially central cases of representation.
(A) is clearly not at all required for clear cases of representation, e.g. in the case of language (abstract symbols), or when a fist is used to represent a gas station, or a stone stands in for a missing chess piece, etc. That’s also, again, the difference between the sock and “the contents of the suitcase”: the latter concept doesn’t resemble the thing it represents (a sock, say), even though it may refer to one.
Regarding (C), representation also doesn’t have to be helpful to be representation. A secret message written in code is in fact optimized to be maximally unhelpful to most people, perhaps even to everyone in the case of an encrypted diary. And the local who explains the way to the next gas station with his hands clearly means something, regardless of whether anyone is able to understand what he means. Yes, maps specifically are mostly optimized to be helpful, but that doesn’t mean representation has to be helpful. It’s similar to portraits: they tend to be optimized for representing someone, for resembling someone, and for being aesthetically pleasing, but that doesn’t mean representation itself requires resemblance or aesthetics.
I agree that something like (B) might be a defining feature of representation, but not quite as stated. Some form of optimization seems to be necessary (randomly created things don’t represent anything), but optimization for (A), i.e. resemblance, is necessary only for things like maps or pictures, not for other forms of representation, as I argued above.
And then there are edge cases, like the Gettier problem, or vague impressions, or partially-correct ideas, or hearsay, or vibes, or Scott/Yvain, or latent variables that are somehow helpful for prediction but in a very hard-to-interpret and indirect way, or habits, etc. Those are edge-cases of map-territory correspondence. And arguing about what if anything they’re “really” representing is a waste of time, just as arguing about whether a virus is “really” alive is a waste of time.
The Gettier problem is a good illustration of what I think is the misunderstanding here: Gettier cases are not supposed to be edge cases of knowledge, just as the randomly created string “Eliezer Yudkowsky” is not an edge case of representation. Edge cases have to do with vagueness, but the problem with defining knowledge is not that the concept may be vague. Gettier cases are examples where justified true belief is intuitively present but knowledge is intuitively absent, not partially present as in edge cases. That is, we would apply the first three terms (justification, truth, belief) but not the last (knowledge). Vagueness itself is often not a problem for definition: a bachelor is clearly an unmarried man, even though what counts as marriage, and what separates men from boys, is a matter of degree, so bachelor status must be a matter of degree too.
Now you may still think that finding something like necessary and sufficient conditions for representation is “a waste of time” even if vagueness is not the issue. But wouldn’t that, for example, also apply to your attempted explication of “intention”? Or of “awareness”? In my opinion, all those concepts are interesting and benefit from analysis.
Back to the Scott / Yvain thing, suppose that I open Google Maps and find that there are two listings for the same restaurant location, one labeled KFC, the other labeled Kentucky Fried Chicken. What do we learn from that?
(X) The common-sense answer is: there’s some function in the Google Maps codebase such that, when someone submits a restaurant entry, it checks if it’s a duplicate before officially adding it. Let’s say the function is check_if_dupe(). And this code evidently had a false negative when someone submitted KFC. Hey, let’s dive into the implementation of check_if_dupe() to understand why there was a false negative! …
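(To make the (X)-style answer vivid, here’s a purely made-up sketch of what a naive check_if_dupe() might look like, and how it could produce exactly that kind of false negative. The function name comes from my hypothetical above; I obviously have no access to Google’s real code.)

```python
# Purely hypothetical sketch -- not Google's actual code.
def normalize(name: str) -> str:
    """Lowercase a listing name and drop punctuation/whitespace for comparison."""
    return "".join(c for c in name.lower() if c.isalnum())

def check_if_dupe(new_name: str, existing_names: list[str]) -> bool:
    """Return True if new_name looks like a duplicate of an existing listing."""
    new_norm = normalize(new_name)
    return any(normalize(name) == new_norm for name in existing_names)

# A name-matching check this naive lets the duplicate through:
print(check_if_dupe("Kentucky  Fried Chicken", ["Kentucky Fried Chicken"]))  # True: caught
print(check_if_dupe("KFC", ["Kentucky Fried Chicken"]))                      # False: slips in
```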
(Y) The philosophizing answer is: This observation implies something deep and profound about the nature of Representation! :)
Maybe I’m being too harsh on the philosophizing answer (here and in earlier comments); I’ll do the best I can to steelman it now.
(Y’) We almost always use language in a casual way, but sometimes it’s useful to have some term with a precise universal necessary-and-sufficient definition that can be unambiguously applied to any conceivable situation. (Certainly it would be hard to do math without those kinds of definitions!) And if we want to give the word “Representation” this kind of precise universal definition, then that definition had better be able to deal with this KFC situation.
Anyway, (Y’) is fine, but we need to be clear that we’re not learning anything about the world by doing that activity. At best, it’s a means to an end. Answering (Y) or (Y’) will not give any insight into the straightforward (X) question of why check_if_dupe() returned a false negative. (Nor vice-versa.) I think this is how we’re talking past each other. I suspect that you’re too focused on (Y) or (Y’) because you expect them to answer all our questions including (X), when it’s obvious (in this KFC case) that they won’t, right? I’m putting words in your mouth; feel free to disagree.
(…But that was my interpretation of a few comments ago where I was trying to chat about the implementation of check_if_dupe() in the human cortex as a path to answering your “failure of identification” question, and your response seemed to fundamentally reject that whole way of thinking about things.)
Nowhere in this series do I purport to offer any precise universal necessary-and-sufficient definition that can be applied to any conceivable situation, for any word at all, not “Awareness”, not “Intention”, not anything. It’s just not an activity I’m generally enthusiastic about. You can be the world-leading expert on socks without having a precise universal necessary-and-sufficient definition of what constitutes a sock. Likewise, if there’s a map of London made by an astronomically-unlikely coincidence, and you say it’s not a Representation of London, and I say yes it is a Representation of London, then we’re not disagreeing about anything substantive. We have the same understanding of the situation but a different definition of the word Representation. Does that matter? Sure, a little bit. Maybe your definition of Representation is better, all things considered—maybe it’s more useful for explaining certain things, maybe it better aligns with preexisting intuitions, whatever. But still, it’s just terminology, not substance. We can always just split the term Representation into “Representation_per_cubefox” and “Representation_per_Steve” and then suddenly we’re not disagreeing about anything at all. Or better yet, we can agree to use the word Representation in the obvious central cases where everybody agrees that this term is applicable, and to use multi-word phrases and sentences and analogies to provide nuance in cases where that’s helpful. In other words: the normal way that people communicate all the time! :)
I don’t understand the relevance of the Google Maps example and the emphasis you place on the “check_if_dupe()” function for understanding what “representation” is.
I think my previous explanation already covers this: we can always know the identity of internal objects (to stick with a software analogy: whether two variables refer to the same memory location), but not always the identity of external objects, since there is no full access to those objects, which is why the function can make mistakes. However, this is not an analysis of representation.
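(To spell the software analogy out with a made-up toy example, names mine: the identity of internal objects can be checked directly, while the identity of external objects has to be inferred from incomplete descriptions, which is where mistakes like the false negative come from.)

```python
# Internal identity: directly checkable, no room for error.
a = {"name": "KFC"}
b = a                        # b refers to the very same object in memory
print(a is b)                # True, known with certainty

# "External" identity: only inferred from partial descriptions, so it can fail.
listing_1 = {"name": "KFC"}
listing_2 = {"name": "Kentucky Fried Chicken"}
print(listing_1 is listing_2)                   # False: two distinct internal objects
print(listing_1["name"] == listing_2["name"])   # False: the comparison misses that both
                                                # descriptions pick out one real restaurant
```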
Nowhere in this series do I purport to offer any precise universal necessary-and-sufficient definition that can be applied to any conceivable situation, for any word at all, not “Awareness”, not “Intention”, not anything.
And you don’t even aim for a good definition? For what do you aim then?
You can be the world-leading expert on socks without having a precise universal necessary-and-sufficient definition of what constitutes a sock.
I think if I’m doing a priori armchair reasoning about socks, the way you and I do armchair reasoning here, I’m pretty much constrained to conceptual analysis, which is the activity of finding necessary and sufficient conditions for a concept.
Likewise, if there’s a map of London made by an astronomically-unlikely coincidence, and you say it’s not a Representation of London, and I say yes it is a Representation of London, then we’re not disagreeing about anything substantive.
Yes. But are you even disagreeing with me on this map example here? If so, do you also disagree about the cases of clouds, fists, random strings etc.? In philosophy the issue is rarely disagreement over what a term means (because then we could simply infer that the term is ambiguous, and the case is closed); rather, the problem is finding a definition for a term where everyone already agrees on what it means, like knowledge, causation, explanation, evidence, probability, truth, etc.
And you don’t even aim for a good definition? For what do you aim then? … I think if I’m doing a priori armchair reasoning about socks, the way you and I do armchair reasoning here, I’m pretty much constrained to conceptual analysis, which is the activity of finding necessary and sufficient conditions for a concept.
The goal of this series is to explain how certain observable facts about the physical universe arise from more basic principles of physics, neuroscience, algorithms, etc. See §1.6.
I’m not sure what you mean by “armchair reasoning”. When Einstein invented the theory of General Relativity, was he doing “armchair reasoning”? Well, yes in the sense that he was reasoning, and for all I know he was literally sitting in an armchair while doing it. :) But what he was doing was not “constrained to conceptual analysis”, right?
As a more specific example, one thing that happens a lot in this series is: I describe some algorithm, and then I talk about what happens when you run that algorithm. Those things that the algorithm winds up doing are often not immediately obvious just from looking at the algorithm pseudocode by itself. But they make sense once you spend some time thinking it through. This is the kind of activity that people frequently do in algorithms classes, and it overlaps with math, and I don’t think of it as being related to philosophy or “conceptual analysis” or “a priori armchair reasoning”.
In this case, the algorithm in question happens to be implemented by neurons and synapses in the human brain (I claim). And thus by understanding the algorithm and what it does when you run it, we wind up with new insights into human behavior and beliefs.
Does that help?
are you even disagreeing with me on this map example here
Yes I am disagreeing. If there’s a perfect map of London made by an astronomically-unlikely coincidence, and someone asks whether it’s a “representation” of London, then your answer is “definitely no” and my answer is “Maybe? I dunno. I don’t understand what you’re asking. Can you please taboo the word ‘representation’ and ask it again?” :-P
Einstein (and scientists in general) tried to explain empirical observations. The point of conceptual analysis, in contrast, is to analyze general concepts, to answer “What is X?” questions, where X is a general term. I thought your post fell more in the conceptual-analysis direction than in the empirical one, since it seems focused on concepts rather than observations.
One way to distinguish the two is by what they consider counterexamples. In science, a counterexample is an observation which contradicts a prediction of the proposed explanation. In conceptual analysis, a counterexample is a thought experiment (like a Gettier case or the string example above) to which the proposed definition (the definiens) intuitively applies but the defined term (the definiendum) doesn’t, or the other way round.
The algorithm analysis method arguably doesn’t really fit here, since it requires access to the algorithm, which isn’t available in the case of the brain. (Unless I misunderstood the method and it actually treats algorithms as black boxes, looking only at input/output examples. But then it wouldn’t be so different from conceptual analysis, where a thought experiment is the input and an intuitive application of a term is the output.)
are you even disagreeing with me on this map example here
Yes I am disagreeing. If there’s a perfect map of London made by an astronomically-unlikely coincidence, and someone asks whether it’s a “representation” of London, then your answer is “definitely no” and my answer is “Maybe? I dunno. I don’t understand what you’re asking.
But I assume you do agree that random strings don’t refer to anyone, that clouds don’t represent anyone they accidentally resemble, that a fist by itself doesn’t represent anything etc. An accidentally created map seems to be the same kind of case, just vastly less likely. So treating them differently doesn’t seem very coherent.
Can you please taboo the word ‘representation’ and ask it again?” :-P
Well… that’s hardly possible when analysing the concept of representation, since that concept just is the meaning of the word “represents”. Of course nobody is forcing you to do it when you find it pointless, which is okay.
Of course nobody is forcing you to do it when you find it pointless, which is okay.
Yup! :) :)
The algorithm analysis method arguably doesn’t really fit here, since it requires access to the algorithm, which isn’t available in the case of the brain.
Oh I have lots and lots of opinions about what algorithms are running in the brain. See my many dozens of blog posts about neuroscience. Post 1 has some of the core pieces: I think there’s a predictive (a.k.a. self-supervised) learning algorithm, that the trained model (a.k.a. generative model space) for that learning algorithm winds up stored in the cortex, and that the generative model space is continually queried in real time by a process that amounts to probabilistic inference. Those are the most basic things, but there’s a ton of other bits and pieces that I introduce throughout the series as needed, things like how “valence” fits into that algorithm, how “valence” is updated by supervised learning and temporal difference learning, how interoception fits into that algorithm, how certain innate brainstem reactions fit into that algorithm, how various types of attention fit into that algorithm … on and on.
Of course, you don’t have to agree! There is never a neuroscience consensus. Some of my opinions about brain algorithms are close to neuroscience consensus, others much less so. But if I make some claim about brain algorithms that seems false, you’re welcome to question it, and I can explain why I believe it. :)
…Or separately, if you’re suggesting that the only way to learn about what an algorithm will do when you run it, is to actually run it on an actual computer, then I strongly disagree. It’s perfectly possible to just write down pseudocode, think for a bit, and conclude non-obvious things about what that pseudocode would do if you were to run it. Smart people can reach consensus on those kinds of questions, without ever running the code. It’s basically math—not so different from the fact that mathematicians are perfectly capable of reaching consensus about math claims without relying on the computer-verified formal proofs as ground truth. Right?
As an example, “the locker problem” is basically describing an algorithm, and asking what happens when you run that algorithm. That question is readily solvable without running any code on a computer, and indeed it would be perfectly reasonable to find that problem on a math test where you don’t even have computer access. Does that help? Or sorry if I’m misunderstanding your point.
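(For anyone who hasn’t seen it, the standard 100-locker version goes: all lockers start closed, and person k toggles every k-th locker. Here is that algorithm written out as runnable code, though the whole point is that you don’t need to run it; you can reason your way to the answer from the description alone, since locker n gets toggled once per divisor of n.)

```python
# The standard 100-locker version: all lockers start closed,
# and person k toggles every k-th locker.
n = 100
is_open = [False] * (n + 1)  # index 0 unused

for person in range(1, n + 1):
    for locker in range(person, n + 1, person):
        is_open[locker] = not is_open[locker]

print([i for i in range(1, n + 1) if is_open[i]])
# [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] -- the perfect squares, since only they
# have an odd number of divisors and hence an odd number of toggles.
```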