I’d prefer this phrasing, because having the concept of a real Berlin can lead to confusion when we apply the idea by analogy to other things, like theories of arithmetic or the universe.
I do think that there’s a real universe in the same sense that there’s a real Berlin. map(berlin) is not the same object as berlin, just as map(universe) is not the same object as universe. Positivists want a state of affairs where there’s no difference between map(universe) and universe. That goal doesn’t seem within reach and might even be theoretically impossible. That doesn’t mean that it’s helpful to just tell the positivists to pretend that map(universe) and universe are the same and the issue is solved.
In bioinformatics, different models of a phenomenon have different sensitivity and specificity for the real phenomenon. Depending on what you want to do, you might use a model with high sensitivity or a model with high specificity. Neither model is more true, and neither is the same as the real phenomenon. But to have a discussion about which model is more useful for describing a certain phenomenon, it’s useful to have a notion of the phenomenon itself.
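The trade-off described above can be made concrete with a small sketch. The two “models” and the ground-truth labels below are entirely made up for illustration; the point is only that the same data can yield one classifier that rarely misses the phenomenon and another that rarely raises a false alarm, without either being “more true.”

```python
# Sensitivity and specificity for two hypothetical models of the same
# phenomenon, evaluated against the same (hypothetical) ground truth.

def sensitivity_specificity(predictions, truth):
    """Return (sensitivity, specificity) for binary 0/1 predictions."""
    tp = sum(p and t for p, t in zip(predictions, truth))          # true positives
    tn = sum((not p) and (not t) for p, t in zip(predictions, truth))  # true negatives
    fn = sum((not p) and t for p, t in zip(predictions, truth))    # missed positives
    fp = sum(p and (not t) for p, t in zip(predictions, truth))    # false alarms
    return tp / (tp + fn), tn / (tn + fp)

truth   = [1, 1, 1, 1, 0, 0, 0, 0]
# Model A flags liberally: it catches every positive but raises false alarms.
model_a = [1, 1, 1, 1, 1, 1, 0, 0]
# Model B flags conservatively: no false alarms, but it misses positives.
model_b = [1, 1, 0, 0, 0, 0, 0, 0]

print(sensitivity_specificity(model_a, truth))  # (1.0, 0.5) – high sensitivity
print(sensitivity_specificity(model_b, truth))  # (0.5, 1.0) – high specificity
```

Which of the two you prefer depends on the cost of a miss versus the cost of a false alarm, not on which one is “closer to reality.”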
In bioinformatics, someone who wants to simulate 100 neurons is going to use a different model of neurons than someone who wants to simulate 10,000,000 neurons. At the same time, it’s important to understand that the models are not the reality.
The Blue Brain Project claims to simulate a brain. If you want to know how much computational power is needed for “human uploading”, you can’t just take the amount of computational power that the Blue Brain Project uses for a single neuron and scale it up. Forgetting that they are investigating a model of a neuron, not a real neuron, will lead you badly astray.
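The extrapolation trap can be shown with back-of-envelope arithmetic. Every per-neuron cost below is a hypothetical placeholder, not a real Blue Brain figure; only the neuron count is a rough literature estimate. The point is that the “answer” tracks the chosen model, not the brain.

```python
# Why per-neuron compute cost is model-dependent: two hypothetical fidelities.
# All FLOP/s figures are invented placeholders for illustration.

human_brain_neurons = 8.6e10  # rough literature estimate, ~86 billion neurons

flops_detailed_model = 1e9  # assumed: detailed multi-compartment biophysics
flops_point_model    = 1e5  # assumed: simple point-neuron model

naive_estimate = human_brain_neurons * flops_detailed_model
cheap_estimate = human_brain_neurons * flops_point_model

# The two extrapolations differ by four orders of magnitude, which says
# more about the model you picked than about the brain itself.
print(f"{naive_estimate:.1e} vs {cheap_estimate:.1e} FLOP/s")
```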
If we talk about whether or not there’s more autism than there was 30 years ago, it’s very useful to be mentally aware of what you mean by the term autism. It could be that more people are diagnosed because the diagnostic criteria changed. It could be that more people are diagnosed because there’s more awareness of autism in the general public, and therefore fewer cases of autism stay undiagnosed.
Of course autism doesn’t exist in the same ontological sense that a carbon atom exists. Positivism therefore doesn’t really know what to do with it. You find positivists saying silly things like: things that exist in the same sense that autism exists aren’t “real”. The positivist doesn’t want to talk about the ontology that you need in order to speak meaningfully about how autism exists.
Because few people actually deal with practical ontology, we have the DSM-V, which defines mental illnesses in a really awful way. The committee that drew up the DSM-V didn’t optimize their definitions for sensitivity and specificity, so that two doctors would reach the same diagnosis.
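The reliability the committee didn’t optimize for, two doctors reaching the same diagnosis, is commonly quantified as chance-corrected inter-rater agreement, e.g. Cohen’s kappa. Here is a minimal sketch; the ratings are invented for illustration.

```python
# Cohen's kappa: agreement between two raters, corrected for chance.
# The diagnoses below are made up for illustration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed proportion of cases where the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

doctor_1 = ["autism", "autism", "other", "other", "autism", "other"]
doctor_2 = ["autism", "other",  "other", "other", "autism", "other"]

print(round(cohens_kappa(doctor_1, doctor_2), 2))  # 0.67
```

A kappa of 1 means perfect agreement, 0 means no better than chance; vague diagnostic criteria push kappa down even when raw agreement looks decent.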
I’m going to drop discussion about the universe in particular for now. Explaining why I think that the map-territory epistemology runs into problems there would require a lot of exposition on points I haven’t made yet, so it’s better suited for a post than a comment.
I’ve realised that there’s a lot more inferential distance than I thought between some of the things I said in this post and the content of other posts on LW. I’m thinking of strategies to bridge that now.
That doesn’t mean that it’s helpful to just tell the positivists to pretend that map(universe) and universe are the same and the issue is solved.
Hm, if you’re attributing that to me then I think I haven’t been nearly clear enough.
Earlier I said that I had ontological considerations but didn’t go into them in my post explicitly. I’ll outline them for you now (although I’ll be talking about them in a post in the near future, over the next couple days if I kick myself into gear properly).
In the end I’m not going to be picky about what different models claim to be real so long as they work, but in the epistemology I use to consider all of those models I’m only going to make reference to agents and their perceptual interfaces. If we consider maps and models as tools that we use to achieve goals, then we’re using them to navigate/manipulate some aspect of our experience.
We understand by trial and error that we don’t have direct control over our experiences. Often we model this lack of control by saying that there’s a real state of affairs that we don’t have perfect access to. Like I said, I think this model has limitations in areas we consider more abstract, like math, so I don’t want this included in my epistemology. Reality is a tool I can use to simplify my thinking in some situations, not something I want getting in the way in every epistemological problem I encounter.
Likewise, in your autism example, we have a model of possible failure modes that empirical research can have. This is an extremely useful tool, and a good application of the map-territory distinction, but that example still doesn’t compel me to use either of those tools in my epistemology. The more tools I commit myself to, the less stable my epistemology is. (Keeping reservationism in the back of your mind would be helpful here.)