This is essentially the debate between scientific realists and anti-realists in philosophy of science. Realists hold that unobservable entities postulated by scientific theories are still “real”; anti-realists hold that these entities are not real. One of the big problems for anti-realists, as you pointed out with your first example, is that “what is observable” changes over time (e.g. we can now “see” atoms in ways that would have startled physicists in the 1860s). However, the anti-realists do have one interesting argument in their favor: many theories that were empirically successful for a long period of time turned out to postulate unobservable entities that didn’t actually exist. For example: the ether, which figured in theories that were useful as prediction tools but didn’t truly reflect reality. (This argument comes from Bas van Fraassen, a leading anti-realist.)
Hopefully this historical context is helpful. The point I am trying to make is this: your question is one of those “great unsolved problems in philosophy.”
The usual “great unsolved question of philosophy” is “Are atoms real?”. I’m not trying to ask that question. I’m instead asking what disguised empirical inquiry scientists were engaged in when, in the course of ordinary scientific research (and not metaphysical debates), they tried to figure out whether atoms were real.
Contemporary philosophers call this conceptual analysis, and it’s exactly how they talk about scientific realism and anti-realism. Your answer to the question, that X is real if it can be included as part of a coherent whole with the rest of science, is vaguely Quinean.
People have solved good chunks of “Why do all dogs resemble one another?”, which is a problem that Plato cared a lot about. (Mendelian genetics, Darwinian evolution, and our understanding of how the brain clusters perceptions are all parts of the answer here.)
People have also solved good chunks of: “Is there a God?”, “Is there likely to be an afterlife?”, and “In what sense do we have free will?”, among other questions.
If a problem is solved in philosophy, but nobody reads it …
Good point. But I think it is the case that almost everyone who has need of (i.e. uses) information from physics, biology, and neuroscience relies on the standard, though esoteric, information produced by scientists.
But people who need (i.e. make decisions based on) ideas from philosophy regarding metaphysics generally do not make use of what you and I might call the “state of the art” in this field.
Sure, unfortunately acting on the false beliefs that there is a God and you have a soul doesn’t leave the loud and fiery explosions that acting on false beliefs about physics does.
People have solved good chunks of “Why do all dogs resemble one another?”, which is a problem that Plato cared a lot about. (Mendelian genetics, Darwinian evolution, and our understanding of how the brain clusters perceptions are all parts of the answer here.)
Why not just the last of those? All dogs resemble one another because if they didn’t have a critical resemblance, we wouldn’t use the same label for them. Even today, we often have common-use terms for organisms, where the labels (taken literally) violate post-Darwinian understanding, and that’s because of what the layperson considers a relevant similarity.
In other cases (e.g. “is a whale a fish?”), a deeper awareness of the relevant similarities did cause us to change up our label.
All dogs resemble one another because if they didn’t have a critical resemblance, we wouldn’t use the same label for them.
That would only be a sufficient answer to the question “Why do we have a category called ‘dogs’ such that all of its members resemble one another?”. Genetics, evolution, etc. are indeed necessary to answer the question about the referent rather than the quotation.
Only because he picked a specific category where the (apparently-significant) physical resemblance did in fact coincide with a genetic resemblance. But because he picked a class of animals (“dogs”) due to other criteria, the answer to that question begins and ends with his classification algorithm and what his mind counts as “doglike”.
It’s quite common (as I made clear) for people to give the same name to genetically distant organisms or organs. The reason for physical similarity in that case is quite different from the reason in the case of the genetically similar organisms.
To base your answer to Plato on dogs’ genetic similarity, you would also have to “explain” sharks and dolphins as being the same species—the “species” of fish.
Here, too, one can search out scientific explanations for how the similarities arose—this time having to do partly with how form is passed along within a species (genetics), and partly with convergent evolutionary pressures that lead sharks and dolphins to both have a streamlined shape, flippers, etc.
Yes, I get that. But, again, Plato didn’t create a category isomorphic to modern knowledge of genetic lines. He created a category based on what Greeks at the time deemed “doglike”. And the answer to that question is purely one of “why do you consider a boundary that includes only those things you call ‘dogs’ worthy of its own label?” Only later, as humans gained more knowledge, could they ask more complex questions about organisms that require knowledge of genetics, selection pressures, and convergent evolution. But the Greeks were not yet at that point.
Also, explanations having to do with how humans deem something doglike are scientific.
Edit: To make the point clearer, consider answering Plato by saying “dogs are similar because genes determine what an animal looks like, animals reproduce by passing genes, and all dogs have similar genes”. Such an answer would be wrong (uninformative) because it uses the premise “animals you give the same label to are similar because their genes are proportionally similar”. This model is wrong, as it requires (per my above comment) you to also tell Plato that “shark-fish and dolphin-fish are similar because genes determine what an animal looks like, animals reproduce by passing genes, and all fish have similar genes.”
It’s not just a matter of labels. We can imagine a world in which every creature was a unique random mishmash of features without regard to any other creature. Empirically, we do not live in such a world; in our world, living organisms come in definite clusters with regularities to their properties. Evolution provides an explanation of why biology does objectively possess this feature.
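To make that empirical claim concrete, here is a minimal, purely illustrative sketch (the features and prototypes are made up) of the difference between the two worlds: in a clustered world, some features predict others; in a mishmash world, they don’t.

```python
# A toy sketch, not a model of real biology: "creatures" are lists of made-up
# binary features. World A is the "random mishmash" world; World B is one
# where creatures are noisy copies of a few shared prototypes.
import random

N_FEATURES = 6

def mishmash_creature():
    # World A: every feature is set independently at random.
    return [random.random() < 0.5 for _ in range(N_FEATURES)]

PROTOTYPES = [
    [True, True, True, False, False, False],   # a made-up "dog-like" cluster
    [False, False, False, True, True, True],   # a made-up "fish-like" cluster
]

def clustered_creature():
    # World B: pick a prototype, then copy it with a little noise.
    proto = random.choice(PROTOTYPES)
    return [f if random.random() < 0.95 else (not f) for f in proto]

def agreement(creatures, i=0, j=1):
    # How often features i and j agree (both present or both absent).
    return sum(c[i] == c[j] for c in creatures) / len(creatures)

world_a = [mishmash_creature() for _ in range(10000)]
world_b = [clustered_creature() for _ in range(10000)]
print("mishmash world:  features 0 and 1 agree about", round(agreement(world_a), 2), "of the time")  # ~0.5
print("clustered world: features 0 and 1 agree about", round(agreement(world_b), 2), "of the time")  # ~0.9
```

The point is only that “organisms come in clusters” is an objective, checkable regularity of our world, over and above any labeling choices.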
I understand that. That still doesn’t mean Plato was in a position to be asking a question that requires understanding of evolutionary theory to answer. His question is not much different from him asking, had he lived in the world you posited, why all aerofauns are similar, where “aerofaun” is a label they innocuously came up with for “any creature that flies”.
In that case, as in the actual one, there are huge differences among the aerofauns, more so than there are among dogs or among flying creatures in this world. But, even if that world’s true explanation were “aliens regularly send their randomized automaton toys to earth”, that still wouldn’t mean you need aliens to answer the aerofaun question, because your question is already dissolved by understanding your own categorization system.
Edit: To further clarify the point: In your hypothetical world, the correct (informative, expectation-constraining) answer to a Plato asking “Why are all aerofauns similar?” would be:
“They’re not similar in any objective sense. They simply have one particular similarity that you deem salient—the fact of their flying—and this is obscured by your having been accustomed to using the same label, ‘aerofaun’ for all of them. And the reason for a word’s existence in the first place is because it calls out a human-relevant cluster. Because it matters to humans whether an animal flies or not, we have a word for it. But once you know whether an animal flies, there is no additional fact of the matter as to why the fliers are similar—that similarity is an artifact of the filtering applied before an animal is called an aerofaun.”
Similarly, you should answer Plato: “Dogs aren’t similar in any objective sense. They simply have a few similarities that you deem salient—how they’re adaptable to humans, work in packs, walk on four legs, like meat, bark, etc. -- and this is obscured by your having been accustomed to using the same label, ‘dog’, for all of them. And the reason for a word’s existence in the first place is because it calls out a human-relevant cluster. Because it matters to humans whether an animal has all the traits {friendly to us, works in packs, can’t stand for long, wants meat, and can emit a loud call}, we have a word for it. But once you know an animal has those traits, there is no additional fact of the matter as to why dogs are similar—that similarity is an artifact of the filtering applied before an animal is called a dog. Maybe one day we’ll find that some of the things we were calling dogs differ in a critical way—maybe they can’t interbreed with most dogs? -- and we’ll have to change our labeling system.”
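To make the filtering point concrete, here is a minimal, purely illustrative sketch (the traits and the “dog” criterion are made up): once the label is defined as a filter over traits, everything that passes the filter shares those traits by construction, and the explanation lives in the filter rather than in the creatures.

```python
# A toy sketch of labels-as-filters; the traits and criteria are invented.
import random

TRAITS = ["friendly", "pack_hunter", "four_legged", "eats_meat", "barks", "flies", "has_scales"]
DOG_CRITERIA = ["friendly", "pack_hunter", "four_legged", "eats_meat", "barks"]

def random_creature():
    # Each made-up trait is present or absent at random.
    return {t: random.random() < 0.5 for t in TRAITS}

def is_dog(creature):
    # The labeling step: something is called a "dog" iff it has every criterion trait.
    return all(creature[t] for t in DOG_CRITERIA)

creatures = [random_creature() for _ in range(10000)]
dogs = [c for c in creatures if is_dog(c)]

# Every labeled "dog" shares all the criterion traits, but only because the
# filter selected for them; asking "why are all dogs similar?" is answered by
# reading is_dog, not by discovering a further fact about the creatures.
assert all(all(d[t] for t in DOG_CRITERIA) for d in dogs)
print(len(dogs), "of", len(creatures), "creatures got the label 'dog';",
      "all share the criterion traits by construction.")
```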
I agree with Silas. Talk of genetics and evolution here makes it look like Plato was actually concerned about dogs but that’s just an example of the problem. Plato was talking about the general question of which the following are also examples (tokens actually!), “Why do all triangles resemble each other?” “Why do all storms resemble each other?” “Why do all performances of Oedipus resemble each other?” and so on. And he’s not looking for a causal explanation, he’s trying to understand what our categories are doing and what it means to refer to different things by the same name.
Understanding how the human brain clusters perceptions helps us understand the question, but it doesn’t really answer it; it just transforms it into a question about the reality of such categories. And this problem is far from solved.
In any case, if we’re counting philosophical problems which were transformed at some point into scientific problems then we might as well include the entirety of the sciences save Geometry, Music and Rhetoric as “solved philosophical problems”. I don’t say this to condemn philosophy either, on the contrary it was often philosophers who developed the methodology to answer these questions.
Plato was talking about the general question of which the following are also examples (tokens actually!), “Why do all triangles resemble each other?” “Why do all storms resemble each other?” “Why do all performances of Oedipus resemble each other?” and so on. And he’s not looking for a causal explanation, he’s trying to understand what our categories are doing and what it means to refer to different things by the same name.
Well, I think that’s giving Plato too much credit—my claim is that, at the time, they weren’t even aware of how their categorizations were influencing their judgments. But your comparison to the triangle question is very apt. According to what I read in Dennett’s Darwin’s Dangerous Idea, the Western ontology from the Greeks through to the 19th century was that all animals represent a special, ideal, “platonic” form.
To claim, as Darwin did, that animals changed forms over time sounded to them like it would sound to us if someone argued, “Okay, you know all those integers we use? Well, they weren’t always that way. They kinda changed over time. That 3 and 4 we have? See, they actually used to be a 3.5. Then over time it split into 3.2 and 3.8, eventually reaching the 3 and 4 we have today.”
In short, the Greeks didn’t recognize the hidden inferences that words were making and thought they were finding objective categories when really they were creating human-useful categories. EY goes into detail about this in the article AnnaSalamon referenced, Words as Hidden Inferences.
Yet the brain goes on about its work of categorization, whether or not we consciously approve. “All humans are mortal, Socrates is a human, therefore Socrates is mortal”—thus spake the ancient Greek philosophers. Well, if mortality is part of your logical definition of “human”, you can’t logically classify Socrates as human until you observe him to be mortal. But—this is the problem—Aristotle knew perfectly well that Socrates was a human. Aristotle’s brain placed Socrates in the “human” category as efficiently as your own brain categorizes tigers, apples, and everything else in its environment: Swiftly, silently, and without conscious approval.
So what I think you’re saying is that Plato had so much map-territory confusion that what he had to say about forms isn’t even a meaningful question. Is that right?
I might agree. It’s hard to figure out how ancient philosophers were actually thinking about problems given that we only approach their work through modernized translations and with our own concepts and categories at hand.
I’m not sure I see Plato inferring from words, though. Maybe you can point out that step explicitly?
Part of the problem is that “Words as Hidden Inferences” doesn’t make that much sense to me as it stands, particularly as it relates to Greek philosophy. Eliezer’s example is at the very least poorly chosen. Aristotle didn’t even necessarily believe that humans are mortal; he seems agnostic on that question. The quote “All humans are mortal, Socrates is a human, therefore Socrates is mortal” isn’t an argument for anyone’s mortality. It’s an example of a logical syllogism. “All humans are mortal” and “Socrates is a human” are just premises designed to illustrate the form. They might as well be made in set notation.
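For what it’s worth, one way to write the bare form in set notation (with $H$ the set of humans, $M$ the set of mortal things, and $s$ Socrates) is:

$$H \subseteq M,\quad s \in H \;\vdash\; s \in M$$

Nothing here turns on whether the premises are actually true; only the validity of the form is on display.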
Aristotle believed bodies inevitably die, if I recall. That may be a wrong judgment, but it was an inference based mostly on observation (or at least based on general theories which were based on observation but unfortunately not much experimentation). He thought that the part of the soul that thinks might be able to live on after the body but that at least some of the soul was dependent upon the body (note that Aristotle’s soul isn’t at all like the Platonic/Christian conception we’re familiar with and could charitably but plausibly be updated into something people here would be comfortable identifying as a person sans body).
So what I think you’re saying is that Plato had so much map-territory confusion that what he had to say about forms isn’t even a meaningful question. Is that right?
No, I agree there was a meaningful question there: “why do the things we (historically) labeled as ‘dogs’ seem so similar to us?” And you can meaningfully answer that question, in a way that improves your map of the world, by looking at how things got into the dog category in the first place, and why that category (regardless of name) even exists.
While I admit I don’t have special expertise on Greek philosophy in this area, I do know that they had not gathered enough evidence at that point to even be asking questions that require knowledge of evolution to answer, and that they were hung up on idealism (as opposed to nominalism), which forces you to think in terms of ideal forms rather than models that identify relevant clusters.
So perhaps EY’s characterization of the situation misled me, but the essential features are still there to support my claim that Plato went astray by not recognizing the source of the classification-as-dog.
I see. I guess we were disagreeing with Anna for somewhat different reasons. Your point is that when Plato was considering the question “why do the things we call dogs resemble each other”, the concept the English word dog references was just a folk concept that was applied to some things that looked the same; the causal-historical story for how those things came to look the way they do is irrelevant to the fact they’re called the same thing just because our brains classify them the same way.
I think that’s right. My point was that Plato didn’t really care about dogs so much. What he cared about was this phenomenon of resemblance. The question wasn’t so much how discrete individuals (Lassie and Snoopy) came to exist in such a way that they resemble each other. Rather, the question is “We call both Lassie and Snoopy ‘dogs’ and yet they are different individuals. What then is the relation between ‘dog’ and Lassie/Snoopy, and what are we doing when we call both Lassie and Snoopy dogs?” But that might be more the entire tradition of Western philosophy talking rather than Plato himself.
Plato’s answer, though, is that there are abstract objects, “forms”, which are imperfectly instantiated in Lassie and Snoopy. Both approximate ideal ‘dogness’. For Plato it was these forms that were ‘most real’, so to speak, because they were eternal and perfect. Plato, and especially some of his later followers, got really mystical about all this and it got imported into Christianity. But we can excise the mysticism/silly talk about perfection and get a live philosophical question (the most notable Platonist of the 20th century is Bertrand Russell). A modern version of the question might be “what is the ontological status of abstract objects?” At best evolution and genetics are only tangentially involved with that question, and only for a subset of abstract objects (things like species), and as a whole the question is generally considered unsolved. As it stands nominalism and Platonism have about equal representation among philosophers as a whole, though Platonism has a slight advantage among those who do work in metaphysics.
People have also solved good chunks of: “Is there a God?”, “Is there likely to be an afterlife?”, and “In what sense do we have free will?”, among other questions.
Er… I think a small number of people have made some progress, and I guess you could call that progress ‘good chunks’, but I get the feeling that the vast majority of rationalists are very confused about the first two questions (or would be if they noticed their confusion). Atheists and theists are both right and wrong in their own way, but neither have a solid understanding of the important underlying considerations. If you asked me if souls are real or if God is real, I’d say yes to both, but the explanation thereof would be excruciatingly difficult, and I’d be tempted to label the question ‘not even wrong’, akin to ‘If a tree falls in the forest...’. (And I’m not talking about trivially true ensemble universe stuff, either—I think there’s more to it than just being smugly meta-contrarian.) Your point stands that there are a lot of solved philosophy problems, I’m just disputing your first two examples. Free will is a good example, though.
Atheists and theists are both right and wrong in their own way, but neither have a solid understanding of the important underlying considerations. If you asked me if souls are real or if God is real, I’d say yes to both, but the explanation thereof would be excruciatingly difficult, and I’d be tempted to label the question ‘not even wrong’, akin to ’If a tree falls in the forest...
Not to make things ‘excruciating’ for you but you can’t really leave that hanging.
Gah. I’ll stick my neck out a bit. Short barely-defensible version: sometimes your low-level-language/ontology should be bits, sometimes it should be gods. Souls are a pretty good model of how memetic cognitive algorithms make up about half of human experience and don’t reside in any one body. (You could remove all of the memes from someone’s body and put them in someone else’s body, and that’d be damn close to reincarnation. There are obvious objections here but I’m just going to plow ahead.) For instance, Wikipedia: “In philosophy of mind, dualism is a set of views about the relationship between mind and matter, which begins with the claim that mental phenomena are, in some respects, non-physical.”
‘Non-physical’ is the key concept. I like to model cognitive algorithms in terms of e.g. memetics and computer science and phenomenology, not in terms of atoms. So when the nasty monists come along and say ‘everything about this soul business can be explained in terms of atoms’, I say, well sure, the languages are Turing-equivalent, but who cares? There’s barely a difference in anticipated experiences, it’s just arguing about which ontology better carves reality at its joints. Personally, I’m just fine with using the ontology of souls and gods and magic. Yeah, half of it ‘reduces’ to the placebo effect and memetics and what not, but why choose that ontology? Use ontological pragmatism.
(I guess there’s an argument that you can have a speed prior over speed prior languages and should use low-level languages when all else is equal, but I find ‘algorithmic ontology’ to be simpler and easier to reason about than ‘atomic/physical ontology’ anyway, so once again I think I disagree with the monists.)
With regards to God in particular: God exists in a lot of people’s heads. He’s a massively parallel distributed cognitive algorithm that millions of people use and model. That’s more of an existence than your average person, by far. What atheists mean when they claim He doesn’t exist is something else that no theists actually care about. He’s revealed Himself to them. Once you’ve personally experienced the God cognitive algorithm, are you going to listen to some snobby scientist who comes along and tells you that God doesn’t exist? But you directly experienced Him! And so did half the people at your church! Silly ignorant scientists.
In that sense, and it is an important sense, God is very real. More than that, all memes (memetic algorithms) are real. Now, it might be bad ontological pragmatism if this leads you to go ahead and start believing you’ll go to the Christian heaven after you die. And there are all sorts of just-plain-wrong things that theists believe. But I don’t think that they’re that much more wrong than your average atheist. Both are pretty damn wrong. But it doesn’t really matter, because most beliefs are clothes. It’s when people start taking things seriously that you run into trouble.
And I realize this comes across as just being pointlessly meta-contrarian, but it’s important to reason about these things correctly when you’re doing Friendliness philosophy.
With regards to God in particular: God exists in a lot of people’s heads. He’s a massively parallel distributed cognitive algorithm that millions of people use and model. . . . In that sense, and it is an important sense, God is very real. More than that, all memes (memetic algorithms) are real.
But that’s not the sense that theists mean when they say “God is real”, and it’s definitely not the sense that atheists mean when they say “God isn’t real”. When someone says “God isn’t real”, it’s not like they’re saying that God is not a meme that exists in anybody’s mind — a person needs to have their own mental copy of the God algorithm, and the understanding that millions of people share it, in order to even bother being an atheist. It’s pretty clear that they mean that the God algorithm isn’t a model of any actual agent that created the universe or acts on it independently of the humans modeling him.
So I’d disagree with “In that sense, and it is an important sense, God is very real.” Clearly in that sense God is real, but it seems like a profoundly unimportant sense to me, particularly because I don’t think anyone actually uses “real” that way. It seems like a type error; a god is an extremely different sort of thing than the idea of a god.
But that’s not the sense that theists mean when they say “God is real”, and it’s definitely not the sense that atheists mean when they say “God isn’t real”.
Indeed. God is the omniscient, omnipresent, infinitely powerful and utterly non-existent creator of the universe! Cognitive algorithms are cognitive algorithms. Sometimes they make people say the word ‘God’.
Clearly in that sense God is real, but it seems like a profoundly unimportant sense to me, particularly because I don’t think anyone actually uses “real” that way. It seems like a type error; a god is an extremely different sort of thing than the idea of a god
You’re right.
I suppose I’m just ignoring the unimportant senses because I’m talking to rationalists about what ‘God’ could be thought of as, and, well, the other more common ways of thinking about it don’t convey much information. I was mostly trying to convey an ontology of cognitive algorithms, but got sidetracked into talking about this God business via a request from the audience. I honestly don’t care much about how typical theists or atheists use the words, because, well, I don’t care what they think. ;) I think I managed to get my points across despite defecting in the words game. Still, my apologies.
Also something very much like the actual God exists in a Tegmark multiverse, but that’s also pretty unimportant, decision theoretically speaking. He’s just another counterfactual terrorist.
Really? It sounds kinda like a self-defeating object. My guess is that there is an unending infinite hierarchy. But I don’t trust my intuitions about the large scale structure of the multiverse much.
Sure. And in that sense, Santa Claus is also real, and it’s entirely correct to say that “God is no more real than Santa Claus.” Or have I misunderstood you?
And yet, I suspect few theists would agree with that statement.
I wouldn’t say that’s entirely correct. God is significantly more real than Santa Claus. He’s inspired all kinds of art and science and devotion and what not, to a much greater extent than Santa Claus. Plus, people don’t really talk to Santa Claus, whereas they often talk to God, and sometimes He answers. God is a much more complex algorithm.
Theists wouldn’t agree with your statement, but I wouldn’t either. And there are lots of statements that are true that theists would disagree with, just like there are lots of statements that are true that anyone would disagree with, because people suck at epistemology. But that’s kind of tangential to the main thrust of my argument.
I’m a little startled by you interpreting “more real” as a quantitative comparison, when I meant it as a qualitative one, so I have to back up a bit and ask you to unpack that.
Presumably you aren’t arguing that inspiring art, science, devotion and whatnot is what it means to be real, or it would follow that most of the atoms in the universe are non-real and are in non-real configurations, which is a decidedly odd use of that word.
You say later that God is “much more complex,” and I can’t really see what that has to do with anything… I mean, a tree is much more complex than a wooden pole, but I wouldn’t say that has anything to do with the reality of a tree or of a wooden pole.
Basically, I can’t quite figure out what you mean by “real,” and you seem to be using it in ways that are inconsistent with the way most people I know (including quite a few theists) would use it.
For my own part, what I would conclude from your argument is that God, independent of reality or non-reality, is more important than Santa Claus. Which I would agree with. If God is a reality, it’s a more important reality than Santa Claus. If God is a myth, it’s a more important myth than Santa Claus. Etc.
Incidentally, many people write letters to Santa Claus, and sometimes things happen that they experience as a reply from Santa Claus. If that is different from what you are referring to as an “answer” here, then I’ve continued to misunderstand you.
So, let me back up and try again. I’m currently imagining a purple dinosaur named Ansel with a built-in helicopter coming out of its skull and a refrigerator in its belly. Are you suggesting that Ansel is real, since it exists in my mind, and that it would become increasingly real if other people sat around imagining it too?
Yes. And if I imagined Ansel except green and not purple, then that adds a little bit to the realness of Ansel, unless we want to call the new green dinosaur Spinoz instead and have it be its own distinct cognitive algorithm.
Presumably you aren’t arguing that inspiring art, science, devotion and whatnot is what it means to be real, or it would follow that most of the atoms in the universe are non-real and are in non-real configurations, which is a decidedly odd use of that word.
Nah, I reason about it in terms of measure. You have one cognitive algorithm that’s being run on one mind. You have another cognitive algorithm that’s running redundantly on a hundred minds. I’d say the latter has about a hundred times as much measure as the former. I don’t know how else to reason about relative existence. (Realness?) I’m porting this sort of thinking over from reasoning about the universe being spatially infinite and there being an infinite number of TheOtherDaves all typing slightly different things. Some of those TheOtherDaves ‘exist’ more than others, especially if they’re doing very probable things.
If existence isn’t measured by number of copies, then what could it be measured by? The alternative I see is something like decision theoretic significance, which is why I was talking about what you called ‘importance’. But I’m wary of getting into cutting edge decision theory stuff that I don’t understand very well. Instead, can you tell me what you think ‘realness’ is, and whether or not you think God is real, and why or why not? We’re starting to argue over definitions, which is a common failure mode, but it’s cool as long as we realize we’re arguing over definitions.
I think that everything exists, by the way: there’s an ensemble universe, like Tegmark’s level 4 multiverse, and so we can only quibble about how existent something is, not whether or not it exists. I might be having trouble trying to translate commonsense definitions into and out of my ontology. My apologies.
You say later that God is “much more complex,” and I can’t really see what that has to do with anything… I mean, a tree is much more complex than a wooden pole, but I wouldn’t say that has anything to do with the reality of a tree or of a wooden pole.
I mean that people tend to use a lot more neurons to model God than to model Santa Claus, and thus, by the redundant-copies argument hinted at above, this means that God exists more. Relatedly...
Incidentally, many people write letters to Santa Claus, and sometimes things happen that they experience as a reply from Santa Claus. If that is different from what you are referring to as an “answer” here, then I’ve continued to misunderstand you.
You’re right, I forgot about this. Parents have to use lots of neurons to model Santa Claus when crafting the letters. Kids don’t tend to use as many neurons when writing letters to Santa, I think. But add up all of these neuron-computations and it’s still vastly less than the neuron-computations used by the many people having religious experiences and praying every day. (I’m using number-of-neurons-used as a proxy for strength/number of computations.)
Also, ‘people’ aren’t ontologically fundamental: they’re made of algorithms too, just like God. So I don’t see how you can say ‘God doesn’t exist’ without implying that Will Newsome doesn’t exist; Will Newsome is just a collection of human universal algorithms (facial recognition, object permanence) and culture-specific memetic contents (humanism, rationality, Buddhism). The body is just a computing substrate, and it’s not something I identify with all that much. And if I’m just a collection of algorithms running on some general computing hardware, well, the same is true of God. It’s just that he’s more parallel and I’m more serial. And I’m way smarter.
(Not that there is any such thing as ‘I’. ‘I’ am made of a kludge of algorithms, and we don’t always agree.)
I don’t feel qualified to answer. If we’re talking about “exists” in a mathy sense, then any of those that can be represented mathematically exists. I’m not sure if there are universes where 5=4, and other logically impossible things. I’ve heard arguments to this effect but I don’t remember what they are. Surely you can have things that appear logically impossible, due to hiding some contradiction in the middle of a titanic proof, but as for things that are actually logically impossible, I don’t know. ‘God’ is vague because ‘omnipresent’ and the like don’t really make sense; similar problems with proper factor of 101.
The last one about Roswell seems obviously true, it’s just not true in most universes we find ourselves in. But I mean, it’s a true statement in a trivial way. ‘We live in a spatially infinite universe and so there exists a copy of you that is the same in every way except with 20 foot long hair’ is also trivially true. But if you only care about worlds in which your hair is 20 feet long, then all of a sudden its truth is not trivial; it’s vitally important.
I was trying to tease out whether your “God is real” is intended in the same sense as “the monster group exists” - neither exists in physical reality, but both exist in at least some minds; in a kind of mental reality. My other questions were intended to ferret out whether your idea of a mental or algorithmic “real” includes only well-defined and consistent ideas, or whether vague, incorrect, and impossible ideas also qualify as “real” in your sense.
Sorry I didn’t make that clear in the original comment—I didn’t mean to seem confrontational. I’m just trying to get a better understanding of your interesting suggestion. This appears to be one situation where more politeness might have helped. :)
As for where I am coming from, I’m one of those philosophical anti-realists mentioned earlier in this thread (and a big fan of van Fraassen). I am far from convinced that electrons are real. So I’m interested in the details when someone says, in effect, that God is just as real as electrons.
What’s the usefulness of “I think that everything exists, by the way: there’s an ensemble universe”? How does it constrain your expectations?
I don’t see how having specific beliefs either way about stuff outside the observable universe is useful.
Now, if you can show that whether the universe beyond the observable part is infinite, or finite but much larger than the Hubble volume, constrains expectations about the contents of the observable universe, then it might be useful.
Off the top of my head: If the world is very big then there are more agents to trade with or be simulated by. Also I’m not sure what counts as the observable universe—we can’t see beyond the Hubble volume with our telescopes, but we can probabilistically model what different parts of the universe or different universes look like nonetheless. We also do not know what is ultimately observable. We currently lack the ability to observe mental phenomena but I still have specific beliefs about roughly what we’ll observe when we really understand consciousness.
It is useful to be curious about mysteries to which you believe there to be no answer; beliefs like that often turn out to be wrong.
And, yes, we were talking about definitions: I wanted to make sure I understood what you were actually saying before I tried to respond to it.
Instead, can you tell me what you think ‘realness’ is, and whether or not you think God is real, and why or why not?
I think we label something a “real X” to assert that it implements a deep structure that characterizes X, rather than merely having a superficial appearance of X.
For doing that to be meaningful we have to be prepared to cash it out in terms of the deep structure we’re asserting; if we can’t do that then we don’t mean anything by the phrase “real X.”
When someone says “Y is real,” I try to interpret that to mean “Y is [a real X]” for some plausible X. If someone says “The elephant I’m seeing is real,” I probably understand them to refer to (I1) a real elephant, which implies that it has mass and occupies volume and reflects light and radiates heat and so forth.
If they mean I1, and it turns out that what they are seeing doesn’t have those properties, then they are wrong. If they meant, instead, that it is (I2) a real activation of their retina, then questions of mass and volume are irrelevant… but if it turns out their retina isn’t being activated, then they’re wrong. If they mean, instead, that it’s (I3) a real activation of their visual cortex, then questions of retinal activation are irrelevant… but if it turns out that their visual cortex isn’t being activated, then they’re wrong.
Regardless of whether they’re right or wrong, these are all different claims, even though the same words are being used to express them. If they mean I3 and I understand I2, communication has failed.
If I’ve understood you: if I say “God is real,” you understand that to mean (J1) my neurons are being activated. And J1 is certainly true. But if I meant to express something else (J2) which implies the entity responsible for the creation of the universe once split the Red Sea in order to allow my ancestors to escape from the Egyptian army, then communication has failed.
Sure, we can get along just fine regardless, as long as we stay pretty vague. I can say “God is real” and you can reply “Yup, he sure is!” and we get lots of social bonding value out of it, but communication has nevertheless failed… unless, of course, our only goal was social bonding in the first place, in which case everything is fine.
So, back to whether I think God is real… I think the thing you’re asking about is real, yes. That is, there exist neurons that get activated when people talk about God, and those activation patterns are kinda-sorta isomorphic to one another.
As for why I think that… I don’t know how to begin answering that question in fewer than a thousand words. I don’t think it’s in the least bit controversial.
But I don’t think that’s what anyone else I’ve ever met would mean by the same question.
By the way, User:ata made this illuminating comment which I agree with; see my reply (where I admit to defecting when it comes to using words correctly).
(nods) Cool. This is essentially why I have been talking all along about the use of words, rather than talking about what kinds of things exist; it has seemed to me that our primary point of discontinuity was about the former rather than the latter.
By “real” I’m assuming you mean something like “a phenomenon that needs to be accounted for in order to make accurate predictions”. Specifically, predictions about what people will do. If so, absolutely.
Of course then there are other valid senses of “real” which everyone else is arguing below, in which there is the question of effects outside people’s actions, and whether the phenomenon showed up in people’s heads because an entity outside our scientific understanding called God put it there. Those are, of course, the tricky ones.
If you asked me if souls are real or if God is real, I’d say yes to both
Having read your explanation, I think you ought to say both are not real. Your description of God and souls as parallelized cognitive algorithms does not predict what “God is real, souls are real” predicts.
I think it would be more accurate to say “the belief that ‘God is real, souls are real’ is definitely real, and regardless of the truth value of the statement, the belief itself affects the world”. That makes the same predictions as your cognitive algorithm idea (which I quite like), but doesn’t cause misunderstandings with people who are using the word ‘real’ in very common ways.
If you asked me if souls are real or if God is real, I’d say yes to both, but the explanation thereof would be excruciatingly difficult, and I’d be tempted to label the question ‘not even wrong’,
Being narrow with your own conceptual framework is good, but I’m promoting being liberal when it comes to interpreting others’ concepts, when playing fast and loose in back-and-forth discourse, and when reasoning very abstractly in order to see connections. As long as you go back and make sure that everything connects precisely, and avoid affective death spirals around seemingly big insights about the fundamental nature of all things (which is somewhat difficult), it can be useful for getting new perspectives and for communicating concepts effectively.
ETA: With regards to communication, this only really works if each of the participants has some amount of faith in the epistemology of their conversation partner. If some random guy told me God exists, and I wanted to make him smarter, I wouldn’t go on about all the ways that God exists; I’d go on about the ways He doesn’t.
If some random guy told me God exists, and I wanted to make him smarter, I wouldn’t go on about all the ways that God exists; I’d go on about the ways He doesn’t.
When possible this is best, but some people at SIAI (cough Vassar cough) have conversational styles that are very fast so as to convey the most information in the shortest time, and it’s hard to do real-time transformations from ultra-abstract statements to reasonably-precise internal models and back as information is exchanged and people build up their ontologies on the fly. (Which is pretty awesome when it happens—one of the joys of being a Visiting Fellow. And of talking to Michael Vassar.)
I’d more or less agree with this, but would add that it’s important to flag the difference between asserting the existence of X, making decisions based on the existence of X, and supposing the existence of X. If I start using language in a way that elides those differences, I am doing nobody any favors, least of all myself.
your question is one of those “great unsolved problems in philosophy.”
Are there great solved problems in philosophy?
I think a good working definition of philosophy is “not science yet”—so the answer to this question is “yes, but we don’t call it philosophy any more”.
Your answer to the question, that X is real if it can be included as part of a coherent whole with the rest of science, is vaguely Quinean.
I agree with the resemblance to Quine; it could also be thought of as Philip Kitcher’s “unification” model of explanation.
And also the coherence theory of truth (replace “X is real” with “‘X exists’ is true”).
If a problem is solved in philosophy, but nobody reads it …
Of course, if all we care about are lay beliefs, the same could be said for physics, biology, and neuroscience.
Sure, unfortunately acting on the false beliefs that there is a God and you have a soul doesn’t leave the loud and fiery explosions that acting on false beliefs about physics does.
Unless you count religious warfare, that is.
Why not just the last of those? All dogs resemble one another because if they didn’t have a critical resemblance, we wouldn’t use the same label for them. Even today, we often have common-use terms for organisms, where the labels (taken literally) violate post-Darwinian understanding, and that’s because of what the layperson considers a relevant similarity.
In other cases (e.g. “is a whale a fish?”), a deeper awareness of the relevant similarities did cause us to change up our label.
That would only be a sufficient answer to the question “Why do we have a category called ‘dogs’ such that all of its members resemble one another?”. Genetics, evolution, etc. are indeed necessary to answer the question about the referent rather than the quotation.
Only because he picked a specific category where the (apparently-significant) physical resemblance did in fact coincide with a genetic resemblance. But because he picked a class of animals (“dogs”) due to other criteria, the answer to that question begins and ends with his classification algorithm and what his mind counts as “doglike”.
It’s quite common (as I made clear) for people to give the same name to genetically distant organisms or organs. The reason for physical similarity in that case is quite different from the reason in the case of the genetically similar organisms.
To base your answer to Plato on dogs’ genetic similarity, you would also have to “explain” sharks and dolphins as being the same species—the “species” of fish.
Here, too, one search out scientific explanations for how the similarities arose—this time having to do partly with how form is passed along within a species (genetics), and partly with convergent evolutionary pressures that lead sharks and dolphins to both have a streamlined shape, flippers, etc.
Yes, I get that. But, again, Plato didn’t create a category isomorphic to modern knowledge of genetic lines. He created a category based on what Greeks at the time deemed “doglike”. And the answer to that question is purely one of “why do you consider a boundary that includes only those things you call ‘dogs’ worthy of its own label?” Only later, as humans gained more knowledge, could they ask more complex questions about organisms that require knowledge of genetics, selection pressures, and convergent evolution. But the Greeks were not then at that point.
Also, explanations having to do with how humans deem something doglike are scientific.
Edit: To make the point clearer, consider ansewring Plato by saying “dogs are similar because genes determine what an animal looks like, animals reproduce by passing genes, and all dogs have similar genes”. Such an answer would be wrong (uninformative) because it uses the premise “animals you give the same label to are similar because they have genes proportionally similar”. This model is wrong, as it requires (per my above comment) you to also tell Plato that “shark-fish and dolphin-fish are similar because genes determine what an animal looks like, animals reproduce by passing genes, and all fish have similar genes.”
It’s not just a matter of labels. We can imagine a world in which every creature was a unique random mishmash of features without regard to any other creature. Empirically, we do not live in such a world; in our world, living organisms come in definite clusters with regularities to their properties. Evolution provides an explanation of why biology does objectively possess this feature.
I understand that. That still doesn’t mean Plato was in a position to be asking a question that requires understanding of evolutionary theory to answer. His question is not much different from him asking, had he lived in the world you posited, why all aerofauns are similar, where “aerofaun” is a label they innocuously came up with for “any creature that flies”.
In that case, as in the actual one, there are huge differences among the aerofauns, more so than there are among dogs or among flying creatures in this world. But, even if that world’s true explanation were “aliens regularly send their randomized automaton toys to earth”, that still wouldn’t mean you need aliens to answer the aerofaun question, because your question is already dissolved by understanding your own categorization system.
Edit: To further clarify the point: In your hypothetical world, the correct (informative, expectation-constraining) answer to a Plato asking “Why are all aerofauns similar?” would be:
“They’re not similar in any objective sense. They simply have one particular similarity that you deem salient—the fact of their flying—and this is obscured by your having been accustomed to using the same label, ‘aerofaun’ for all of them. And the reason for a word’s existence in the first place is because it calls out a human-relevant cluster. Because it matters to humans whether an animal flies or not, we have a word for it. But once you know whether an animal flies, there is no additional fact of the matter as to why the fliers are similar—that similarity is an artifact of the filtering applied before an animal is called an aerofaun.”
Similarly, you should answer Plato: “Dogs aren’t similar in any objective sense. They simply have a few similarities that you deem salient—how they’re adaptable to humans, work in packs, walk on four legs, like meat, bark, etc. -- and this is obscured by your having been accustomed to using the same label, ‘dog’, for all of them. And the reason for a word’s existence in the first place is because it calls out a human-relevant cluster. Because it matters to humans whether an animal has all the traits {friendly to us, works in packs, can’t stand for long, wants meat, and can emit a loud call}, there is no additional fact of the matter as to why dogs are similar—that similarity is an artifact of the filtering applied before an animal is called a dog. Maybe one day we’ll find that some of the things we were calling dogs differ in a critical way—maybe they can’t interbreed with most dogs? -- and we’ll have to change our labeling system.”
I agree with Silas. Talk of genetics and evolution here makes it look like Plato was actually concerned about dogs but that’s just an example of the problem. Plato was talking about the general question of which the following are also examples (tokens actually!), “Why do all triangles resemble each other?” “Why do all storms resemble each other?” “Why do all performances of Oedipus resemble each other?” and so on. And he’s not looking for a causal explanation, he’s trying to understand what our categories are doing and what it means to refer to different things by the same name.
Understanding how the human brain clusters perceptions helps us understand the question but it doesn’t really answer it- it just transforms it to a question about the reality of such categories. And this problem is far from solved.
In any case, if we’re counting philosophical problems which were transformed at some point into scientific problems then we might as well include the entirety of the sciences save Geometry, Music and Rhetoric as “solved philosophical problems”. I don’t say this to condemn philosophy either, on the contrary it was often philosophers who developed the methodology to answer these questions.
Well, I think that’s giving Plato too much credit—my claim is that, at the time, they weren’t even aware of how their categorizations were influencing their judgments. But your comparison to the triangle question is very apt. According what I read in Dennett’s Darwin’s Dangerous Idea, the Western ontology from Greeks through to the 19th century was that all animals represent a special, ideal, “platonic” form.
To claim, as Darwin did, that animals changed forms over time sounded to them, like it would sound to us if someone argued, “Okay, you know all those integers we use? Well, they weren’t always that way. They kinda changed over time. That 3 and 4 we have? See, they actually used to be a 3.5. Then over time it split into 3.2 and 3.8, eventually reaching the 3 and 4 we have today.”
In short, the Greeks didn’t recognize the hidden inferences that words were making and thought they were finding objective categories when really they were creating human-useful categories. EY goes into detail about this in the article AnnaSalamon referenced, Words as Hidden Inferences.
So what I think you’re saying is that Plato had so much map-territory confusion that what he had to say about forms isn’t even a meaningful question. Is that right?
I might agree. It’s hard to figure out how ancient philosophers were actually thinking about problems given that we only approach their work through modernized translations and with our own concepts and categories at hand.
I’m not sure I see Plato inferring from words, though. Maybe you can point out that step explicitly?
Part of the problem is that “Words as Hidden Inferences” doesn’t make that much sense to me as it stands, particularly as it relates to Greek philosophy. Eliezer’s example is at the very least poorly chosen. Aristotle didn’t even necessarily believe that humans are mortal, he seems agnostic on that question. The quote “All humans are mortal, Socrates is a human, therefore Socrates is mortal” isn’t an argument for anyone’s mortality. It’s an example of a logical syllogism. “All humans are mortal” and “Socrates is a human” are just premises designed to illustrate the form. They might as well be made in set notation.
Aristotle believed bodies inevitably die, if I recall. That maybe a wrong judgment but an inference based mostly on observation (or at least based on general theories which were based on observation but unfortunately not much experimentation). He thought that the part of the soul that thinks might be able to live on after the body but that at least some of the soul was dependent upon the body (note that Aristotle’s soul isn’t at all like the Platonic/Christian conception we’re familiar with and could charitably but plausibly be updated into something people here would be comfortable identifying as a person sans body).
No, I agree there was a meaningful question there: “why have the things we (historically) labeled as ‘dogs’ seem so similar to us?” And you can meaingfully answer that question, in a way that improves your map of the world, by looking at how things got into the dog category in the first place, and why that category (regardless of name) even exists.
While I admit I don’t have special expertise on Greek philosophy in this area, I do know that they had not gathered enough evidence at that point to even be asking questions that require knowledge of evolution to answer, and that they were hung up on idealism (as opposed to nominalism), which forces you to think in terms of ideal forms rather than models that identify relevant clusters.
So perhaps EY’s characterization of the situation misled me, but the essential features are still there to support my claim that Plato went astray by not recognizing the source of the classification-as-dog.
I see. I guess we were disagreeing with Anna for somewhat different reasons. Your point is that when Plato was considering the question “why do the things we call dogs resemble each other”, the concept the English word “dog” references was just a folk concept applied to some things that looked the same; the causal-historical story for how those things came to look the way they do is irrelevant to the fact that they’re called the same thing just because our brains classify them the same way.
I think that’s right. My point was that Plato didn’t really care about dogs so much. What he cared about was this phenomenon of resemblance. The question wasn’t so much how discrete individuals (Lassie and Snoopy) came to exist in a way that resembles each other. Rather, the question is “We call both Lassie and Snoopy ‘dogs’ and yet they are different individuals. What then is the relation between ‘dog’ and Lassie/Snoopy, and what are we doing when we call both Lassie and Snoopy dogs?” But that might be more the entire tradition of Western philosophy talking rather than Plato himself.
Plato’s answer, though, is that there are abstract objects, “forms”, which are imperfectly instantiated in Lassie and Snoopy. Both approximate ideal ‘dogness’. For Plato it was these forms that were ‘most real’, so to speak, because they were eternal and perfect. Plato, and especially some of his later followers, got really mystical about all this, and it got imported into Christianity. But we can excise the mysticism and silly talk about perfection and get a live philosophical question (the most notable Platonist of the 20th century was Bertrand Russell). A modern version of the question might be “what is the ontological status of abstract objects?” At best, evolution and genetics are only tangentially involved with that question, and only for a subset of abstract objects (things like species); as a whole, the question is generally considered unsolved. As it stands, nominalism and Platonism have about equal representation among philosophers as a whole, though Platonism has a slight advantage among those who do work in metaphysics.
These appear to be things that, once solved, aren’t “philosophy” any more. So what’s philosophy? What, in your view, is left?
Philosophy consists of the questions that we don’t understand well enough to even know how to go about answering them, but which, despite that (or because of that), are still really fun to argue about endlessly even in the absence of any new insights about the structure of the problem.
(Basically, I think describing a given problem as “philosophical” is mostly mind projection; from history, it seems that all the qualities that make a given problem a philosophical one have been properties of the people thinking about it rather than of the problem itself.)
Problems we don’t know the right questions for yet. When we have a good handle on a question, it becomes science. When we have a good answer for the question, it becomes settled science.
Er… I think a small number of people have made some progress, and I guess you could call that progress ‘good chunks’, but I get the feeling that the vast majority of rationalists are very confused about the first two questions (or would be if they noticed their confusion). Atheists and theists are both right and wrong in their own way, but neither have a solid understanding of the important underlying considerations. If you asked me if souls are real or if God is real, I’d say yes to both, but the explanation thereof would be excruciatingly difficult, and I’d be tempted to label the question ‘not even wrong’, akin to ‘If a tree falls in the forest...’. (And I’m not talking about trivially true ensemble universe stuff, either—I think there’s more to it than just being smugly meta-contrarian.) Your point stands that there are a lot of solved philosophy problems, I’m just disputing your first two examples. Free will is a good example, though.
Not to make things ‘excruciating’ for you but you can’t really leave that hanging.
Gah. I’ll stick my neck out a bit. Short barely-defensible version: sometimes your low-level-language/ontology should be bits, sometimes it should be gods. Souls are a pretty good model of how memetic cognitive algorithms make up about half of human experience and don’t reside in any one body. (You could remove all of the memes from someone’s body and put them in someone else’s body, and that’d be damn close to reincarnation. There are obvious objections here but I’m just going to plow ahead.) For instance, Wikipedia: “In philosophy of mind, dualism is a set of views about the relationship between mind and matter, which begins with the claim that mental phenomena are, in some respects, non-physical.”
‘Non-physical’ is the key concept. I like to model cognitive algorithms in terms of e.g. memetics and computer science and phenomenology, not in terms of atoms. So when the nasty monists come along and say ‘everything about this soul business can be explained in terms of atoms’, I say, well sure, the languages are Turing-equivalent, but who cares? There’s barely a difference in anticipated experiences, it’s just arguing about which ontology better carves reality at its joints. Personally, I’m just fine with using the ontology of souls and gods and magic. Yeah, half of it ‘reduces’ to the placebo effect and memetics and what not, but why choose that ontology? Use ontological pragmatism.
(I guess there’s an argument that you can have a speed prior over speed prior languages and should use low-level languages when all else is equal, but I find ‘algorithmic ontology’ to be simpler and easier to reason about than ‘atomic/physical ontology’ anyway, so once again I think I disagree with the monists.)
With regards to God in particular: God exists in a lot of people’s heads. He’s a massively parallel distributed cognitive algorithm that millions of people use and model. That’s more of an existence than your average person has, by far. What atheists mean when they claim He doesn’t exist is something else that no theists actually care about. He’s revealed Himself to them. Once you’ve personally experienced the God cognitive algorithm, are you going to listen to some snobby scientist who comes along and tells you that God doesn’t exist? But you directly experienced Him! And so did half the people at your church! Silly ignorant scientists.
In that sense, and it is an important sense, God is very real. More than that, all memes (memetic algorithms) are real. Now, it might be bad ontological pragmatism if this leads you to go ahead and start believing you’ll go to the Christian heaven after you die. And there are all sorts of just-plain-wrong things that theists believe. But I don’t think that they’re that much more wrong than your average atheist. Both are pretty damn wrong. But it doesn’t really matter, because most beliefs are clothes. It’s when people start taking things seriously that you run into trouble.
And I realize this comes across as just being pointlessly meta-contrarian, but it’s important to reason about these things correctly when you’re doing Friendliness philosophy.
But that’s not the sense that theists mean when they say “God is real”, and it’s definitely not the sense that atheists mean when they say “God isn’t real”. When someone says “God isn’t real”, it’s not like they’re saying that God is not a meme that exists in anybody’s mind — a person needs to have their own mental copy of the God algorithm, and the understanding that millions of people share it, in order to even bother being an atheist. It’s pretty clear that they mean that the God algorithm isn’t a model of any actual agent that created the universe or acts on it independently of the humans modeling him.
So I’d disagree with “In that sense, and it is an important sense, God is very real.” Clearly in that sense God is real, but it seems like a profoundly unimportant sense to me, particularly because I don’t think anyone actually uses “real” that way. It seems like a type error; a god is an extremely different sort of thing than the idea of a god.
Indeed. God is the omniscient, omnipresent, infinitely powerful and utterly non-existent creator of the universe! Cognitive algorithms are cognitive algorithms. Sometimes they make people say the word ‘God’.
You’re right.
I suppose I’m just ignoring the unimportant senses because I’m talking to rationalists about what ‘God’ could be thought of as, and, well, the other more common ways of thinking about it don’t convey much information. I was mostly trying to convey an ontology of cognitive algorithms, but got sidetracked into talking about this God business via a request from the audience. I honestly don’t care much about how typical theists or atheists use the words, because, well, I don’t care what they think. ;) I think I managed to get my points across despite defecting in the words game. Still, my apologies.
Also something very much like the actual God exists in a Tegmark multiverse, but that’s also pretty unimportant, decision theoretically speaking. He’s just another counterfactual terrorist.
Really? It sounds kinda like a self-defeating object. My guess is that there is an unending infinite hierarchy. But I don’t trust my intuitions about the large scale structure of the multiverse much.
Sure. And in that sense, Santa Claus is also real, and it’s entirely correct to say that “God is no more real than Santa Claus.” Or have I misunderstood you?
And yet, I suspect few theists would agree with that statement.
Allow me to link to this post on the social construction of Santa Claus.
I wouldn’t say that’s entirely correct. God is significantly more real than Santa Claus. He’s inspired all kinds of art and science and devotion and what not, to a much greater extent than Santa Claus. Plus, people don’t really talk to Santa Claus, whereas they often talk to God, and sometimes He answers. God is a much more complex algorithm.
Theists wouldn’t agree with your statement, but I wouldn’t either. And there are lots of statements that are true that theists would disagree with, just like there are lots of statements that are true that anyone would disagree with, because people suck at epistemology. But that’s kind of tangential to the main thrust of my argument.
I’m a little startled by you interpreting “more real” as a quantitative comparison, when I meant it as a qualitative one, so I have to back up a bit and ask you to unpack that.
Presumably you aren’t arguing that inspiring art, science, devotion and whatnot is what it means to be real, or it would follow that most of the atoms in the universe are non-real and are in non-real configurations, which is a decidedly odd use of that word.
You say later that God is “much more complex,” and I can’t really see what that has to do with anything… I mean, a tree is much more complex than a wooden pole, but I wouldn’t say that has anything to do with the reality of a tree or of a wooden pole.
Basically, I can’t quite figure out what you mean by “real,” and you seem to be using it in ways that are inconsistent with the way most people I know (including quite a few theists) would use it.
For my own part, what I would conclude from your argument is that God, independent of reality or non-reality, is more important than Santa Claus. Which I would agree with. If God is a reality, it’s a more important reality than Santa Claus. If God is a myth, it’s a more important myth than Santa Claus. Etc.
Incidentally, many people write letters to Santa Claus, and sometimes things happen that they experience as a reply from Santa Claus. If that is different from what you are referring to as an “answer” here, then I’ve continued to misunderstand you.
So, let me back up and try again. I’m currently imagining a purple dinosaur named Ansel with a built-in helicopter coming out of its skull and a refrigerator in its belly. Are you suggesting that Ansel is real, since it exists in my mind, and that it would become increasingly real if other people sat around imagining it too?
Yes. And if I imagined Ansel except green and not purple, then that adds a little bit to the realness of Ansel, unless we want to call the new green dinosaur Spinoz instead and have it be its own distinct cognitive algorithm.
Nah, I reason about it in terms of measure. You have one cognitive algorithm that’s being run on one mind. You have another cognitive algorithm that’s running redundantly on a hundred minds. I’d say the latter has about a hundred times as much measure as the former. I don’t know how else to reason about relative existence. (Realness?) I’m porting this sort of thinking over from reasoning about the universe being spatially infinite and there being an infinite number of TheOtherDaves all typing slightly different things. Some of those TheOtherDaves ‘exist’ more than others, especially if they’re doing very probable things.
If existence isn’t measured by number of copies, then what could it be measured by? The alternative I see is something like decision theoretic significance, which is why I was talking about what you called ‘importance’. But I’m wary of getting into cutting edge decision theory stuff that I don’t understand very well. Instead, can you tell me what you think ‘realness’ is, and whether or not you think God is real, and why or why not? We’re starting to argue over definitions, which is a common failure mode, but it’s cool as long as we realize we’re arguing over definitions.
I think that everything exists, by the way: there’s an ensemble universe, like Tegmark’s level 4 multiverse, and so we can only quibble about how existent something is, not whether or not it exists. I might be having trouble trying to translate commonsense definitions into and out of my ontology. My apologies.
I mean that people tend to use a lot more neurons to model God than to model Santa Claus, and thus by the redundant-copies argument hinted at above this means that God exists more. Relatedly...
You’re right, I forgot about this. Parents have to use lots of neurons to model Santa Claus when crafting the letters. Kids don’t tend to use as many neurons when writing letters to Santa, I think. But add up all of these neuron-computations and it’s still vastly less than the neuron-computations used by the many people having religious experiences and praying every day. (I’m using number-of-neurons-used as a proxy for strength/number of computations.)
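To be explicit about the bookkeeping I have in mind, here is a toy sketch; the figures are invented purely for illustration, and the only point is the shape of the comparison:

```python
# Toy sketch of "measure" as redundant computation: the numbers below are
# made up for illustration, not estimates of anything real.

def measure(num_minds: float, neurons_per_mind: float) -> float:
    """Crude proxy: total neuron-computations devoted to running the algorithm."""
    return num_minds * neurons_per_mind

santa = measure(num_minds=5e8, neurons_per_mind=1e5)  # mostly parents drafting letters
god = measure(num_minds=3e9, neurons_per_mind=1e7)    # daily prayer, religious experience

print(f"On this toy model, God has roughly {god / santa:.0f}x the measure of Santa Claus")
```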
Also, ‘people’ aren’t ontologically fundamental: they’re made of algorithms too, just like God. So I don’t see how you can say ‘God doesn’t exist’ without implying that Will Newsome doesn’t exist; Will Newsome is just a collection of human universal algorithms (facial recognition, object permanence) and culture-specific memetic contents (humanism, rationality, Buddhism). The body is just a computing substrate, and it’s not something I identify with all that much. And if I’m just a collection of algorithms running on some general computing hardware, well, the same is true of God. It’s just that he’s more parallel and I’m more serial. And I’m way smarter.
(Not that there is any such thing as ‘I’. ‘I’ am made of a kludge of algorithms, and we don’t always agree.)
I wonder whether you could comment on, and compare, the following statements:
God exists.
The “monster” sporadic simple group exists.
A non-trivial root of the zeta function not on the critical line exists.
A proper factor of 101 exists.
Components of an alien spacecraft that crashed in Roswell NM in 1947 exist (at Area 51 of Edwards AFB).
I don’t feel qualified to answer. If we’re talking about exists in a mathy sense, then any of those that can be represented mathematically exists. I’m not sure if there are universes where 5=4, and other logically impossible things. I’ve heard arguments to this effect but I don’t remember what they are. Surely you can have things that appear logically impossible, due to some contradiction hidden in the middle of a titanic proof; but whether things that are actually logically impossible can exist, I don’t know. ‘God’ is vague because ‘omnipresent’ and the like don’t really make sense; similar problems with proper factor of 101.
The last one, about Roswell, seems obviously true; it’s just not true in most universes we find ourselves in. But I mean, it’s a true statement in a trivial way. ‘We live in a spatially infinite universe and so there exists a copy of you that is the same in every way except with 20 foot long hair’ is also trivially true. But if you only care about worlds in which your hair is 20 feet long, then all of a sudden its truth is not trivial; it’s vitally important.
What implied questions did I miss?
I was trying to tease out whether your “God is real” is intended in the same sense as “the monster group exists” - neither exists in physical reality, but both exist in at least some minds; in a kind of mental reality. My other questions were intended to ferret out whether your idea of a mental or algorithmic “real” includes only well-defined and consistent ideas, or whether vague, incorrect, and impossible ideas also qualify as “real” in your sense.
Sorry I didn’t make that clear in the original comment—I didn’t mean to seem confrontational. I’m just trying to get a better understanding of your interesting suggestion. This appears to be one situation where more politeness might have helped. :)
As for where I am coming from, I’m one of those philosophical anti-realists mentioned earlier in this thread (and a big fan of van Fraassen). I am far from convinced that electrons are real. So I’m interested in the details when someone says, in effect, that God is just as real as electrons.
What’s the usefulness of “I think that everything exists, by the way: there’s an ensemble universe”? How does it constrain your expectations?
I don’t see how having specific beliefs either way about stuff outside the observable universe is useful.
Now, if you can show that whether the universe beyond the observable is infinite or non-infinite but much larger than the Hubble Volume constrains expectations about the contents of the observable universe, then it might be useful.
Off the top of my head: If the world is very big then there are more agents to trade with or be simulated by. Also I’m not sure what counts as the observable universe—we can’t see beyond the Hubble volume with our telescopes, but we can probabilistically model what different parts of the universe or different universes look like nonetheless. We also do not know what is ultimately observable. We currently lack the ability to observe mental phenomena but I still have specific beliefs about roughly what we’ll observe when we really understand consciousness.
It is useful to be curious about mysteries to which you believe there to be no answer; beliefs like that often turn out to be wrong.
OK, I think I’m following you now.
And, yes, we were talking about definitions: I wanted to make sure I understood what you were actually saying before I tried to respond to it.
I think we label something a “real X” to assert that it implements a deep structure that characterizes X, rather than merely having a superficial appearance of X.
For doing that to be meaningful we have to be prepared to cash it out in terms of the deep structure we’re asserting; if we can’t do that then we don’t mean anything by the phrase “real X.”
When someone says “Y is real,” I try to interpret that to mean “Y is [a real X]” for some plausible X. If someone says “The elephant I’m seeing is real,” I probably understand them to refer to (I1) a real elephant, which implies that it has mass and occupies volume and reflects light and radiates heat and so forth.
If they mean I1, and it turns out that what they are seeing doesn’t have those properties, then they are wrong. If they meant, instead, that it is (I2) a real activation of their retina, then questions of mass and volume are irrelevant… but if it turns out their retina isn’t being activated, then they’re wrong. If they mean, instead, that it’s (I3) a real activation of their visual cortex, then questions of retinal activation are irrelevant… but if it turns out that their visual cortex isn’t being activated, then they’re wrong.
Regardless of whether they’re right or wrong, these are all different claims, even though the same words are being used to express them. If they mean I3 and I understand I2, communication has failed.
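For instance, here is a toy sketch with invented property names, just to show how the same words cash out as different checks:

```python
# The same utterance, "the elephant I'm seeing is real", cashes out as different
# verification conditions depending on which claim (I1/I2/I3) is meant.
# Property names here are invented for illustration.

claims = {
    "I1 (real elephant)":            lambda w: w.get("has_mass", False) and w.get("reflects_light", False),
    "I2 (real retinal activation)":  lambda w: w.get("retina_active", False),
    "I3 (real cortical activation)": lambda w: w.get("visual_cortex_active", False),
}

# A hallucination: no elephant, no retinal input, but the visual cortex is firing.
world = {"has_mass": False, "reflects_light": False, "retina_active": False, "visual_cortex_active": True}

for name, check in claims.items():
    print(name, "->", check(world))  # only I3 comes out true
```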
If I’ve understood you: if I say “God is real,” you understand that to mean (J1) my neurons are being activated. And J1 is certainly true. But if I meant to express something else (J2) which implies the entity responsible for the creation of the universe once split the Red Sea in order to allow my ancestors to escape from the Egyptian army, then communication has failed.
Sure, we can get along just fine regardless, as long as we stay pretty vague. I can say “God is real” and you can reply “Yup, he sure is!” and we get lots of social bonding value out of it, but communication has nevertheless failed… unless, of course, our only goal was social bonding in the first place, in which case everything is fine.
So, back to whether I think God is real… I think the thing you’re asking about is real, yes. That is, there exist neurons that get activated when people talk about God, and those activation patterns are kinda-sorta isomorphic to one another.
As for why I think that… I don’t know how to begin answering that question in fewer than a thousand words. I don’t think it’s in the least bit controversial.
But I don’t think that’s what anyone else I’ve ever met would mean by the same question.
By the way, User:ata made this illuminating comment which I agree with; see my reply (where I admit to defecting when it comes to using words correctly).
(nods) Cool. This is essentially why I have been talking all along about the use of words, rather than talking about what kinds of things exist; it has seemed to me that our primary point of discontinuity was about the former rather than the latter.
By “real” I’m assuming you mean something like “a phenomenon that needs to be accounted for in order to make accurate predictions”. Specifically, predictions about what people will do. If so, absolutely.
Of course then there are other valid senses of “real” which everyone else is arguing below, in which there is the question of effects outside people’s actions, and whether the phenomenon showed up in people’s heads because an entity outside our scientific understanding called God put it there. Those are, of course, the tricky ones.
(God of the Gaps time!)
Having read your explanation, I think you ought to say both are not real. Your description of God and souls as parallelized cognitive algorithms does not predict what “God is real, souls are real” predicts.
I think it would be more accurate to say “the belief that ‘God is real, souls are real’ is definitely real, and regardless of the truth value of the statement, the belief itself affects the world”. That makes the same predictions as your cognitive algorithm idea (which I quite like), but doesn’t cause misunderstandings with people who are using the word ‘real’ in very common ways.
What about the virtue of narrowness?
Being narrow with your own conceptual framework is good, but I’m promoting being liberal when it comes to interpreting others’ concepts, when playing fast and loose in back-and-forth discourse, and when reasoning very abstractly in order to see connections. As long as you go back afterwards and make sure that everything connects precisely, and avoid affective death spirals around seemingly big insights about the fundamental nature of all things (which is somewhat difficult), it can be useful for getting new perspectives and for communicating concepts effectively.
ETA: With regards to communication, this only really works if each of the participants has some amount of faith in the epistemology of their conversation partner. If some random guy told me God exists, and I wanted to make him smarter, I wouldn’t go on about all the ways that God exists; I’d go on about the ways He doesn’t.
Or just teach him the Virtue of Narrowness.
True, that’s a better solution. But, but, but being contrarian is so much more fun!
You should only be liberal in what you accept, if you can transform it so that when you repeat it, you can still be conservative in what you say.
When possible this is best, but some people at SIAI (cough Vassar cough) have conversational styles that are very fast so as to convey the most information in the shortest time, and it’s hard to do real-time transformations from ultra-abstract statements to reasonably-precise internal models and back as information is exchanged and people build up their ontologies on the fly. (Which is pretty awesome when it happens—one of the joys of being a Visiting Fellow. And of talking to Michael Vassar.)
I’d more or less agree with this, but would add that it’s important to flag the difference between asserting the existence of X, making decisions based on the existence of X, and supposing the existence of X. If I start using language in a way that elides those differences, I am doing nobody any favors, least of all myself.
I think a good working definition of philosophy is “not science yet”—so the answer to this question is “yes, but we don’t call it philosophy any more”.