Philosophers working in decision theory are drastically worse at Newcomb than are other philosophers, two-boxing 70.38% of the time where non-specialists two-box 59.07% of the time (normalized after getting rid of ‘Other’ answers). Philosophers of religion are the most likely to get questions about religion wrong — 79.13% are theists (compared to 13.22% of non-specialists), and they tend strongly toward the Anti-Naturalism cluster. Non-aestheticians think aesthetic value is objective 53.64% of the time; aestheticians think it’s objective 73.88% of the time. Working in epistemology tends to make you an internalist, philosophy of science tends to make you a Humean, metaphysics a Platonist, ethics a deontologist.
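(For anyone puzzled by the normalization step: the idea is just to drop the ‘Other’ bucket and rescale the remaining answers so they sum to 100%. A toy sketch in Python, with invented counts rather than the survey’s actual data:)

```python
# Toy sketch of the normalization: drop 'Other', rescale the rest.
# The counts below are invented for illustration, not the survey's.
def normalize(answers):
    kept = {k: v for k, v in answers.items() if k != "Other"}
    total = sum(kept.values())
    return {k: 100.0 * v / total for k, v in kept.items()}

print(normalize({"Two-box": 50, "One-box": 21, "Other": 29}))
# -> roughly {'Two-box': 70.4, 'One-box': 29.6}
```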
If you don’t believe something exists it is unlikely that you are going to dedicate your life to studying it. This explains the theism, aesthetic objectivism and the Platonism. Similarly, if you believe a question has a very simple answer that does not need to be fleshed out you are unlikely to dedicate your life to answering it. This explains the deontology and the internalism. And Humeanism is still a minority view among philosophers of science (I also wonder if Humeans about laws exactly overlap with Humeans about causality—I suspect some of the former might not hold the latter view).
I would also be hesitant to assume LW is more likely to be right about these matters when they aren’t things LW has thought much about. E.g., I’m pretty sure modern Platonism is actually true.
It probably explains theism—if you don’t take the arguments seriously, you’ll be more likely to want to study religion anthropologically than to argue it out philosophically—but I don’t see why one couldn’t study aesthetics as ‘subjective’ (whatever precisely that means), or metaphysics as a skeptic. (In fact, many do each of those things. Just not most.) I guess I can see how devoting your whole life’s work to destroying illusions could be a downer for some, though.
I agree LW hasn’t thought enough about most of these issues to reach a solid, vetted assessment. I’m mostly interested in what these doctrines say about underlying methodology, as a canary in a coalmine. I’m rather less interested in seeing LW and Academic Philosophy duke it out to see who happens to be right on specialized, arcane, mostly not-very-important debates. How many philosophers are epistemic externalists only really matters inasmuch as it’s symptomatic of general professional standards and methodology.
but I don’t see why one couldn’t study aesthetics as ‘subjective’ (whatever precisely that means), or metaphysics as a skeptic. (In fact, many do each of those things. Just not most.) I guess I can see how devoting your whole life’s work to destroying illusions could be a downer, though.
Subjective aesthetics is probably more the realm of psychology (unless it is so subjective that you can’t study it). But I’m obviously not saying only Platonists would want to study metaphysics. I’m just saying that the selection effect is sufficient to explain the differences in positions between specialists and non-specialists.
Philosophers working in decision theory are drastically worse at Newcomb
Listen, this is like someone who believes the Axiom of Choice saying “constructivist mathematicians are drastically worse at set theory” (because they reject Choice). Newcomb is all about how you view free will. This is not a settled question yet.
Why does ‘free will’ make any difference? If Omega can only predict you with e.g. 60% accuracy, that’s still enough to generate the problem.
I’m not saying the right answer, i.e., the right decision theory, is a settled question. I’m just saying they lose. This matters. If their family members’ or friends’ welfare were on the line, as opposed to some spare cash, I strongly suspect philosophers would be less blasé about privileging their pet formal decision-making theory over actually making the world a better place. The units of value don’t matter; what matters is that causal decision theory loses, and loses by arbitrarily large amounts.
I once took a martial arts class (taught by a guy who once appeared on the “ninja episode” of Mythbusters, where they tried to figure out if a human can catch an arrow out of the air). He knew this trick called “choshi dori” (I think it roughly means ‘attention/initiative grabbing’). How exactly this trick works is a long story, but it has to do with “hacking the lower brain” of the opponent in various ways. One of the things he could do was have a guy punch him in the face and have the punch instead land on empty air, completely contrary to the volition of the puncher. Note: it would work even if he told you exactly what he was doing.
He could do this because of the way punch targeting works (the largely subconscious system responsible has certain rules it follows that could be influenced in a way that causes you to miss).
There are various ways to defeat “choshi dori,” although the gentleman in question could certainly get the vast majority of randomly chosen people to fall for it. Whatever “free will” is, it’s probably more complicated than just taking Omega at its word. Perhaps Omega achieved his accuracy by a similar defeatable hack. Omega claims to “open up the agent,” and my response is to try to “open up Omega,” to see what’s behind his prediction %.
I don’t see why it would be at all difficult or mysterious for Omega to predict that I one-box. I mean, it’s not like my thought processes there are at all difficult to understand or predict.
My point is exactly that it is not mysterious. Omega used some concrete method to win his game, much in the same way that the fellow in question uses a particular method to win the punching game. The interesting question in the Newcomb problem is (a) what is the method, and (b) is the method defeatable. The punching game is defeatable. Giving up too early on the punching game is a missed chance to learn something about volition.
The right response to a “magic trick” is to try to learn how the trick works, not go around for the rest of one’s life assuming strangers can always pick out the ace of spades.
Omega’s not dumb. As soon as Omega knows you’re trying to “come up with a method to defeat him”, Omega knows your conclusion—coming to it by some clever line of reasoning isn’t going to change anything. The trick can’t be defeated by some future insight because there’s nothing mysterious about it.
Free-will-based causal decision theory: The simultaneous belief that two-boxing is the massively obvious, overdetermined answer output by a simple decision theory that everyone should adopt for reasons which seem super clear to you, and that Omega isn’t allowed to predict how many boxes you’re going to take by looking at you.
I am not saying anything weird, merely that the statements of Newcomb’s problem I’ve heard do not specify how Omega wins the game, merely that it wins a high percentage (all?) of the previous attempts. The same can be said for the punching game, played by a human (who, while quite smart about the volition of punching, is still defeatable).
There are algorithms that Omega could follow that are not defeatable (people like to discuss simulating players, and some others are possible too). Others might be defeatable. The correct decision theory in the punching game would learn how to defeat the punching game and walk away with $$$. The right decision theory in Newcomb’s problem ought to first try to figure out if Omega is using a defeatable algorithm, and only one-box if it is not, or if it is not possible to figure this out.
Okay, let’s try and defeat Omega. The goal is to do better than Eliezer Yudkowsky, who seems to be trustworthy about doing what he publicly says all over the place. Omega will definitely predict that Eliezer will one-box, and Eliezer will get the million.
The only way to do better is to two-box while making Omega believe that we will one-box, so we can get the $1,001,000 with more than 99.9% certainty. And of course:

- Omega has access to our brain schematics.
- We don’t have access to Omega’s schematics. (optional)
- Omega has way more processing power than we do.
Err, short of building an AI to beat the crap out of Omega, that looks pretty impossible. $1000 is not enough to make me do the impossible.
Omega used some concrete method to win his game, much in the same way that the fellow in question uses a particular method to win the punching game.
A crucial difference is that the punching game is real, while Newcomb’s problem is fiction, a thought experiment.
In the punching game, you can try to learn how the trick is done and how to defeat the opponent, and you are still engaged in the punching game.
In Newcomb’s problem, Omega is not a real thing that you could discover something about, in the way that there is something to discover about a real choshi dori master. There is no such thing as what Omega is really doing. If you think up different things that an Omega-like entity might be doing, and how these might be defeated to win $1,001,000, then you are no longer thinking about Newcomb’s problem, but about a different thought experiment in some class of Newcomb-like problems. I expect a lot of such thinking goes on at MIRI, and is more useful than endlessly debating the original problem, but it is not the sort of thing that you are doing to defeat choshi dori.
The right response to a “magic trick” is to try to learn how the trick works, not go around for the rest of one’s life assuming strangers can always pick out the ace of spades.
Here is a trivial model of the “trick” being fool-proof (and I do mean “fool” literally), which I believe has been discussed here a time or ten. Omega runs a perfect simulation of you, terminates it right after you make your selection or if you refuse to choose (he is a mean one), checks what it outputs, uses it to place money in the boxes. Omega won’t even offer the real you the game if you are one of those stubborn non-choosers. The termination clause is to prevent you from enjoying the spoils in case YOU are that simulation, so only the “real you” will know if he won or not. And to avoid any basilisk-like acausal trade. He is not that mean.
EDIT: if you think that the termination is a cruel cold-blooded murder, note that you do that all the time when evaluating what other people would do, then stop thinking about it, once you have your answer. The only difference is the fidelity level. If you don’t require 100% accuracy, you don’t need a perfect simulation.
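For concreteness, the whole model fits in a few lines of Python (a toy sketch; every name here is mine, not part of the problem statement):

```python
# Toy model of the simulation story above: Omega runs your decision
# procedure, discards the simulation, and fills the boxes accordingly.
def omega_fills_boxes(decide):
    prediction = decide()                  # the terminated simulation
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque, 1_000                   # (opaque box, transparent box)

def player():                              # the real you runs this same code
    return "one-box"

opaque, transparent = omega_fills_boxes(player)
payoff = opaque if player() == "one-box" else opaque + transparent
print(payoff)                              # 1000000
```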
Do you think that gets rid of the problem? ‘It might be possible to outsmart Omega’ strikes me as fairly irrelevant. As long as it’s logically possible that you don’t successfully outsmart Omega, the original problem can still be posed. You still have to make a decision, in those cases where you don’t catch Omega in a net.
I am not saying there isn’t a problem, I am saying the problem is about clarifying volition (in a way not too dissimilar to the “choshi dori” trick in my anecdote). Punching empty air is “losing.” Does this then mean we should abstain from punching? Seems a bit drastic.
Many problems/paradoxes are about clarification. For example, Simpson’s paradox is about clarifying causal vs statistical intuitions.
More specifically, what I am saying is that depending on what commitments you want to make about volition, you would either want to one box, or two box in such a way that Omega can be defeated. The problem is “non-identified” as stated. This is equivalent to choosing axioms in set theory. You don’t get to say someone fails set theory if they don’t like Choice.
1 - Supposing I have no philosophical views at all about volition, I would be rationally obliged to one-box. In a state of ignorance, the choice is clear simply provided that I value whatever is being offered. Why should I then take the time to form a theory of volition, if you’re right and at most it can only make me lose more often?
We don’t know what the right answer to Newcomb-like problems will look like, but we do know what the wrong answers will look like.
2 - Supposing I do have a view about volition that makes me think I should two-box, I’ll still be rationally obliged to one-box in any case where my confidence in that view is low enough relative to the difference between the options’ expected values.
For instance, if we assign to two-boxing the value ‘every human being except you gets their skin ripped off and is then executed, plus you get $10’ and assign to one-boxing the value ‘nobody gets tortured or killed, but you miss out on the $10’, no sane and reasonable person would choose to two-box, no matter how confident they (realistically) are that they have a clever impossibility proof. But if two-boxing is the right answer sometimes, then, pace Nozick, it should always be the right answer, at least in cases where the difference between the 2B and 1B outcomes is dramatic enough to even register as a significant decision. Every single one of the arguments for two-boxing generalizes to the skin-ripping-off case, e.g., ‘I can’t help being (causal-decision-theory-)rational!’ and ‘it’s unfair to punish me for liking CDT; I protest by continuing to employ CDT’.
3 - You seem to be under the impression that there’s something implausible or far-fetched about the premise of Newcomb’s Problem. There isn’t. If you can’t understand a 100% success rate on Omega’s part, then imagine a 99% success rate, or a 50% one. The problem isn’t altered in substance by this.
Edit: and come to think of it I am somewhat less sure about the lower success rates in general. If I can roughly estimate Omega’s prediction about me that would seem to screen off any timeless effect. Like, you could probably pretty reliably predict how someone would answer this question based on variables like Less Wrong participation and having a PhD in philosophy. Using this information, I could conclude that an Omega with 60% accuracy is probably going to classify me as a one-boxer no matter what I decide… and in that case why not two box?
Sorry, by a 50% success rate I meant that Omega correctly predicts your action 50% of the time, and the other half of the time just guesses. Guessing can also yield the right answer, so this isn’t equivalent to a 50% success rate in the sense you meant, which was simply ‘Does Omega put the money in the box he would have wished to?’
If you know that Omega will take into account that you’re a LessWronger, but also know that he won’t take into account any other information about you (including not taking into account the fact that you know that he knows you’re a LessWronger!), then yes, you should two-box. But that’s quite different from merely knowing that Omega has a certain success rate. Let’s suppose we know that 60% of the time Omega makes the decision it would have wished were it omniscient. Then we get:
If I one-box: 60% chance of $1,000,000, 40% chance of $1,000.
If I two-box: 60% chance of $1,000, 40% chance of $1,001,000.
Then the expected value of one-boxing is $600,400. Expected value of two-boxing is $401,000. So you should one-box in this situation.
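Spelling the arithmetic out:

```python
# p = chance Omega makes the decision it would have wished were it omniscient
p = 0.6
ev_one_box = p * 1_000_000 + (1 - p) * 1_000      # 600,400
ev_two_box = p * 1_000 + (1 - p) * 1_001_000      # 401,000
print(ev_one_box, ev_two_box)
```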
You are not listening to me. Suppose this fellow comes by and offers to play a game with you. He asks you to punch him in the face, where he is not allowed to dodge or push your hand. If you hit him, he gives you 1000 dollars, if you miss, you give him 1000 dollars. He also informs you that he has a success rate of over 90% playing this game with randomly sampled strangers. He can show you videos of previous games, etc.
This game is not a philosophical contrivance. There are people who can do this here in physical reality where we both live.
Now, what is the right reaction here? My point is that if your right reaction is to not play then you are giving up too soon. Reacting by not playing means assuming a certain model of the situation and leaving it there. In fact, all models are wrong, and there is much to be learned about, e.g., how punching works by digging deeper into how this fellow wins this game. To not play and leave it at that is incurious.
Certainly the success rate this fellow has with the punching game has nothing to do with any grand philosophical statement about the lack of physical volition by humans.
Learning about how punching works, rather than winning 1000 dollars, is the entire point of this game.
My answer to Newcomb’s problem is to one-box if and only if Omega is not defeatable and two-box in a way that defeats Omega otherwise. Omega can be non-defeatable only if certain things hold. For example if it is possible to fully simulate in physical reality a given human’s decision process at a particular point in time, and have this simulation be “referentially transparent.”
My answer to Newcomb’s problem is to one-box if and only if Omega is not defeatable and two-box in a way that defeats Omega otherwise
But now you’ve laid out your decision-making process, so all Omega needs to do now is to predict whether you think he’s defeatable. ;-)
In general, I expect Omega could actually be implemented just by being able to tell whether somebody is likely to overthink the problem, and if so, predict they will two-box. That might be sufficient to get better-than-chance predictions.
To put it yet another way: if you’re trying to outsmart Omega, that means you’re trying to figure out a rationalization that will let you two-box… which means Omega should predict you’ll two-box. ;-)
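Something like this would already do it (a deliberately crude sketch; the feature name is invented):

```python
# A predictor that never models your reasoning at all, only whether
# you are the sort of agent who tries to outsmart predictors.
def omega_predicts(profile):
    return "two-box" if profile.get("tries_to_outsmart_omega") else "one-box"

print(omega_predicts({"tries_to_outsmart_omega": True}))   # two-box
print(omega_predicts({"tries_to_outsmart_omega": False}))  # one-box
```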
There are various ways to defeat “choshi dori,” although the gentleman in question could certainly get the vast majority of randomly chosen people to fall for it. Whatever “free will” is, it’s probably more complicated than just taking Omega at its word. Perhaps Omega achieved his accuracy by a similar defeatable hack.
Omega claims to “open up the agent,” and my response is to try to “open up Omega,” to see what’s behind his prediction %.
Let’s try using your martial arts analogy. Consider the following:
You find yourself in a real world physical confrontation with a ninja who demands your wallet. You have seen this ninja fight several other ninjas, a pirate and a Jedi in turn and each time he used “choshi dori” upon them then proceeded to break both of their legs and take their wallet. What do you do?
1. Punch the ninja in the face.
2. Shout “I have free will!” and punch the ninja in the face.
3. Think “I want to open up the ninja and see how his choshi dori works” then try to punch the ninja in the face.
4. Toss your wallet to the ninja and then run away.
This isn’t a trick question. All the answers that either punch the ninja in the face or take two boxes are wrong. They leave you with two broken legs or an otherwise less desirable outcome.
Sometimes people fight a hypothetical because the hypothetical is problematic. I lean toward two-boxing in Newcomb’s problem, basically because I can’t not fight this hypothetical. My reasoning is more or less as follows. If the being claiming to be Omega actually exists and can in fact instantly model my mental processes, then I’m almost certainly a simulation. One-boxing would reveal that I know that and risk getting me turned off, making the money in the box rather beside the point, so I two-box. If I’m not a simulation, I don’t accept the possibility of Omega existing in the first place, so I two-box. Basically, I think Newcomb’s problem is not a particularly useful hypothetical, because I don’t see it as predictive of decision-making in other circumstances.
One-boxing would reveal that I know that and risk getting me turned off, making the money in the box rather beside the point, so I two-box.
It seems to me that if Omega concludes that you are aware that you are in a simulation based on the fact that you take one box then Omega is systematically wrong when reasoning about a broad class of agents that happens to include all the rational agents (and some others). This is rather a significant flaw in an Omega implementation.
Basically, I think Newcomb’s problem is not a particularly useful hypothetical, because I don’t see it as predictive of decision-making in other circumstances.
For agents with coherent decision making procedures it is equivalent to playing a Prisoner’s Dilemma against a clone of yourself. That is something that feels closer to a real world scenario for some people. It is similarly equivalent to Parfit’s Hitchhiker when said hitchhiker is at the ATM.
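To see why the clone version collapses the way it does (toy payoffs; the code is mine, not a canonical formulation):

```python
# Against an exact copy, both players necessarily output the same move,
# so only (C, C) and (D, D) are reachable -- and (C, C) pays more.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play_against_clone(strategy):
    return payoff[(strategy(), strategy())]   # the clone runs identical code

print(play_against_clone(lambda: "C"))  # 3
print(play_against_clone(lambda: "D"))  # 1
```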
That’s why I don’t like Newcomb’s problem. In a prisoner’s dilemma with myself, I’d cooperate (I trust me to cooperate with myself). Throwing Omega in confuses this pointlessly. I suspect if people substituted “God” for “Omega” I’d get more sympathy on this.
Are you suggesting that if you are a simulation, two-boxing reduces your risk of being turned off? If not, I don’t understand your reasoning at all. If so, I guess I understand your reasoning from that point on (presumably you feel no particular loyalty to the entity you’re simulating?), but I don’t understand how you arrive at that point.
At a minimum, I can’t see how two-boxing could be worse in terms of risk of being turned off. I suppose Omega could think I was trying to be tricky by two-boxing specifically to avoid giving away my awareness that I’m being simulated, but at that point the psychology becomes infinitely recursive. I’ll take my chances while the simulator puzzles that out.
I’m not sure I understand your parenthetical. Does the existence of a simulation imply the existence of an outside entity being simulated?
can’t see how two-boxing could be worse in terms of risk of being turned off.
Neither can I. Nor can I see how it could be better. In fact, I see no likely correlation between one/two-boxing and likelihood of being turned off at all. But if my chances of being turned off aren’t affected by my one/two-box choice, then “One-boxing would [..] risk getting me turned off [..] so I two-box” doesn’t make much sense.
You clearly have a scenario in mind wherein I get turned off if my simulator is aware that I’m aware that I’m being simulated and not otherwise, but I don’t understand why I should expect that.
Does the existence of a simulation imply the existence of an outside entity being simulated?
To be honest, I’ve never quite understood what the difference is supposed to be between the phrases “existing in a simulation” and “existing”.
But regardless, my understanding of “If the being claiming to be Omega actually exists and can in fact instantly model my mental processes, then I’m almost certainly a simulation” had initially been something like “If Omega can perfectly model Dave’s mental processes in order to determine Dave’s likely actions, then Omega will probably create lots of simulated Daves in the process. Since those simulated Daves will think they are Dave, and there are many more of them than there are of Dave, and I think I’m Dave, the odds are (if Omega exists and can do this stuff) that I’m in a simulation.”
All of which also implies that there’s an outside entity being simulated in this scenario, in which case if I feel loyalty to that entity (or otherwise have some basis for caring about how my choices affect it) then whether I get turned off or not isn’t my only concern anyway.
I infer from your question that I misunderstood you in the first place, though, in which case you can probably ignore my parenthetical. Let me back up and ask, instead, why if the being claiming to be Omega actually exists and can in fact instantly model my mental processes, then I’m almost certainly a simulation?
My thinking here is that if a being suddenly shows up and can perfectly model me, despite not having scanned my neural pathways, taken any tissue samples, observed my life history, or gathered any other data whatsoever, then it’s cheating somehow—i.e. I’m a simulation and it has my source code.
This doesn’t require there to be a more real Prismattic one turtle down, as it were. I could be a simulation created to test a set of parameters, not necessarily a model of another entity.
In general, you can’t make people miss or fall over without touching them unless they know you can make them miss or fall over when touching is allowed.
I don’t think controversies over the Axiom of Choice are similar in the right ways to controversies over Newcomb’s Problem. In pragmatic terms, we know that true two-boxers will willingly take on arbitrarily large disutility (or give up arbitrarily large utility), inasmuch as they’re confident that two-boxing is the right answer. The point can even be put psychologically: To the extent that it’s a psychological fact that humans don’t assign infinite value to being Causal Decision Theorists, the utility (relative to people’s actual values) of following CDT can’t outweigh the bad consequences of consistently two-boxing.
I know of no correspondingly strong demonstration that weakening Choice or eliminating LEM leads demonstrably to irrationality (relative to how the world actually is and, in particular, what preferences people actually have).
In pragmatic terms, we know that true two-boxers will willingly take on arbitrarily large disutility
This is only the case in a world-view that accepts that Omega cannot be tricked. How do you know Omega cannot be tricked? This view corresponds to a certain view of how choices get made, how the choice making algorithm is simulated, and various properties of this simulation as embodied in physical reality. Absent an actual proof, this view is just that—a view.
Two-boxers aren’t (necessarily!) stupid, they simply adhere to commitments that make it possible to fool Omega.
Two-boxers aren’t (necessarily!) stupid, they simply adhere to commitments that make it possible to fool Omega.
No, they don’t. You seem to be confused not just about Newcomb’s Problem but also about why the (somewhat educated subset of) people who Two-Box make that choice. They emphatically do not do it because they believe they are able to fool Omega. They expect to lose (ie. not get the $1,000,000).
This is only the case in a world-view that accepts that Omega cannot be tricked. How do you know Omega cannot be tricked?
By hypothesis, this is how it works. Omega can predict your choice with >0.5 accuracy (strictly more than half the time). Regardless of Free Will or Word of God or trickery or Magic.
The whole point of the thought experiment is to analyze a choice under some circumstances where the choice causes the outcomes to have been laid out differently.
If you fight the hypothesis by asserting that some other worldviews grant players Magical Powers From The Beyond to deceive Omega (who is just a mental tool for the thought experiment), then I can freely assert that Omega has Magical Powers From The Outer Further Away Beyond that can neutralize those lesser powers or predict them altogether. Or maybe Omega just has a time machine. Or maybe Omega just fucking can, don’t fight the premises damnit!
And as wedrifid pointed out, this is not even the main reason why the smarter two-boxers two-box. It’s certainly one of the common reasons why the less-smart ones do though, in my experience. (Since they never read the Sequences, aren’t scientists, and never learned to not fight the premises! Ahem.)
I think the ease with which this community adopts one-boxing has to do with us having internalized a computationalist view of the mind and the person. This has a lot in common with the psychological view of personhood. Basically, we treat agents as decision algorithms, which makes it much easier to see how decisions could have non-causal properties.
This is, incidentally, related to my platonism you asked me about. Computationalism leads to a Platonic view of personhood (where who you are is basically an algorithm that can have multiple instantiations). One-boxing falls right out of this theory. The decision you make in Newcomb’s problem is determined by your decision algorithm. Your decision algorithm can be wholly or partly instantiated by Omega and that’s what allows Omega to predict your behavior.
My problem with thinking of Newcomb’s paradox this way is that it is possible that my decision algorithm will be “try to predict what Omega does, and....” For Omega to predict my behavior by running through my algorithm will involve a self-reference paradox; it may be literally impossible, even in principle, for Omega to predict what I do.
Of course, you can always say “well, maybe you can’t predict what Omega does”, but the problem as normally posed implies that there’s an algorithm for producing the optimal result and that I am capable of running such an algorithm; if there are some algorithms I can’t run, I may be incapable of properly choosing whether to one-box or two-box at all.
Your prediction of what Omega does is just as recursive as Omega’s prediction. But if you actually make a decision at some point that means that your decision algorithm has an escape clause (ow! my brain hurts!) which means that Omega can predict what you’re going to do (by modelling all the recursions you did).
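Here’s that regress as a toy program; the depth limit plays the role of the escape clause, and everything else is invented for illustration:

```python
# The player tries to predict Omega; Omega predicts by running the
# player's code. The recursion only terminates because the player's
# algorithm gives up at some depth -- and once it terminates, Omega's
# run of that same code necessarily reaches the same answer.
def player(depth=0, limit=5):
    if depth >= limit:
        return "one-box"                                  # escape clause
    return "two-box" if omega(depth + 1) == "one-box" else "one-box"

def omega(depth):
    return player(depth)                                  # just run your code

print(player())   # what you actually do
print(omega(0))   # Omega's prediction: identical by construction
```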
but the problem as normally posed implies that there’s an algorithm for producing the optimal result and that I am capable of running such an algorithm
It doesn’t actually. The optimal result is two-boxing when Omega thinks you are going to one-box. But since Omega is a God-like supercomputer and you aren’t, that isn’t going to happen. If you happen to have more information about Omega than it has about you and the hardware to run a simulation of Omega then you can win like this. But that isn’t the thought experiment.
My point (or the second part of it) is that simply by asking “what should you do to achieve an optimal result”, the question assumes that your reasoning capacity is good enough to compute the optimal result. If computing the optimal result requires being able to simulate Omega, then the original question implicitly assumes that you are able to simulate Omega.
Where does the question assume that you can compute the optimal result? Newcomb’s Problem simply poses a hypothetical and asks ‘What would you do?‘. Some people think they’ve gotten the right answer; others are less confident. But no answer should need to presuppose at the outset that we can arrive at the very best answer no matter what; if it did, that would show the impossibility of getting the right answer, not the trustworthiness of the ‘I can optimally answer this question’ postulate.
I once had a man walk up to me and ask me if I had the correct time. I looked at my watch and told him the time. But it seemed a little odd that he asked for the correct time. Did he think that if he didn’t specify the qualifier “correct”, I might be uncertain whether I should give him the correct or incorrect time?
I think that asking what you would do, in the context of a reasoning problem, carries the implication “figure out the correct choice” even if you are not being explicitly asked what is correct. Besides, the problem is seldom worded exactly the same way each time and some formulations of it do ask for the correct answer.
For the record, I would one-box, but I don’t actually think that finding the correct answer requires simulating Omega. But I can think of variations of the problem where finding the correct answer does require being able to simulate Omega (or worse yet, produces a self-reference paradox without anyone having to simulate Omega.)
When you suggest someone read three full length posts in response to a single sentence some context is helpful, especially if they weren’t upvoted. Maybe summarize their point or something.
If it was easy to summarize, it wouldn’t have required a three-part sequence. :-)
However, perhaps one relevant point from it is:
For the purposes of Newcomb’s problem, and the rationality of Fred’s decisions, it doesn’t matter how close to that level of power Omega actually is. What matters, in terms of rationality, is the evidence available to Fred about how close Omega is to having that level of power; or, more precisely, the evidence available to Fred relevant to Fred making predictions about Omega’s performance in this particular game.
Since this is a key factor in Fred’s decision, we ought to be cautious. Rather than specify when setting up the problem that Fred knows with a certainty of 1 that Omega does have that power, it is better to specify a concrete level of evidence that would lead Fred to assign a probability of (1 − δ) to Omega having that power, then examine the effect upon which option in the box problem it is rational for Fred to pick, as δ tends towards 0.
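To make that concrete, here is one possible way to model it (my own toy assumption about how Omega’s failures work, not anything from the quoted post): with probability (1 − δ) Omega predicts perfectly, and with probability δ it merely guesses at chance.

```python
# EVs for Fred as delta -> 0, under the toy error model stated above.
def evs(delta):
    ev_one = (1 - delta) * 1_000_000 + delta * 0.5 * 1_000_000
    ev_two = (1 - delta) * 1_000 + delta * (0.5 * 1_000_000 + 1_000)
    return ev_one, ev_two

for d in (0.5, 0.1, 0.01, 0.0):
    print(d, evs(d))   # one-boxing dominates long before delta reaches 0
```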
Listen, this is like someone who believes the Axiom of Choice saying “constructivist mathematicians are drastically worse at set theory” (because they reject Choice). Newcomb is all about how you view free will. This is not a settled question yet.
To the extent that Newcomb’s Problem is ‘about how you view free will’ people who two box on Newcomb’s Problem are confused about free will.
This isn’t like constructivist mathematicians being worse at set theory because they reject choice. It’s closer to a kindergarten child scribbling in crayon on a Math exam then insisting “other people are bad at Math too therefore you should give me full marks anyway”.
To the extent that Newcomb’s Problem is ‘about how you view free will’ people who two box on Newcomb’s Problem are confused about free will.
I don’t think that’s fair (though I also don’t think Newcomb’s problem has anything to do with free will either). The question is whether one-boxing or two-boxing is rational. It’s not fair to respond simply with ‘One-boxing is rational because you get more money’, because two-boxers know one-boxing yields more money. They still say it’s irrational. It would be question begging to try to dismiss this view because rationality is just whatever gets you more money, since that’s exactly what the argument is about.
To the extent that Newcomb’s Problem is ‘about how you view free will’ people who two box on Newcomb’s Problem are confused about free will.
If you say so. If I learn enough about “choshi dori” to fool the punch-avoiding algorithm and win 1000 dollars, and you don’t play, who is confused? Rationalists are supposed to win, remember, not stick to a particular view of a problem.
If you say so. If I learn enough about “choshi dori” to fool the punch-avoiding algorithm and win 1000 dollars, and you don’t play, who is confused? Rationalists are supposed to win, remember, not stick to a particular view of a problem.
Rational agents who play Newcomb’s Problem one box. Rational agents who are in entirely different circumstances make entirely different decisions as determined by said circumstances. They also tend to have a rudimentary capability of noticing the difference between problems.
(a) You are being a dick. I certainly did not insult anyone in this thread.
(b) The isomorphism is exact. The point is granularity. If the guy can avoid the punch 90% of the time (or more precisely guess what your punch decision algorithm will do in response to some inputs 90% of the time), and Omega guesses what you will do correctly 90% of the time, that ought to be sufficient to do the math on expected values, if you want to leave it there.
Or, alternatively, you can try to “open up the agent you are playing against” and try to trick it. It’s certainly possible in the punching game. It may or may not be possible in the game with Omega—the problem doesn’t specify.
If you say “well, rational people do X and not Y, end of story” that’s fine. I am going to make my updates on you and move on.
A typical example of irrational behavior is intransitive preference. As the money pump thread shows, people often don’t actually fall for money pumping, even if they have intransitive preferences. In other words, the map doesn’t fully reflect the territory of what people actually do.
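(Concretely, a three-good money pump looks like this in miniature; the goods and fee are invented:)

```python
# An agent with intransitive preferences A > B > C > A pays a small fee
# for each 'upgrade' and ends up holding its original good, poorer.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # (x, y): x preferred to y

def pump(holds, cash, fee=1):
    for offer in ("C", "B", "A"):                # offer the preferred swap
        if (offer, holds) in prefers:
            holds, cash = offer, cash - fee
    return holds, cash

print(pump("A", 100))   # ('A', 97): same good, three fees poorer
```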
Another example is gwern’s example with correlation and causation. Correlation does not imply causation, says gwern, but if we knew how often it does imply it, we may well be rational to conclude the latter from the former if the odds are good enough. He’s right—but no one does this (I don’t think!).
I used the example of the punching game on purpose—it makes the theoretical situation with Omega practical, as in you can go and try this game if you wanted. My response to trying the game was to learn how it works, rather than give up playing it. This is what people actually do. If your model doesn’t capture it, it’s not a good model.
A broader comment: I do math for a living. The issues of applicability of math to practical problems, and changing math models around is something I think about quite a bit.
It took a non-trivial exertion in the direction of politeness to refrain from answering the rhetorical question “who is confused?” with a literal answer.
I certainly did not insult anyone in this thread.
Arguable. I would concede at least that you did not say anything insulting that you do not sincerely believe is warranted.
(b) The isomorphism is exact. The point is granularity. If the guy can avoid the punch 90% of the time (or more precisely guess what your punch decision algorithm will do in response to some inputs 90% of the time), and Omega guesses what you will do correctly 90% of the time, that ought to be sufficient to do the math on expected values, if you want to leave it there.
Doing expected value calculations on probabilistic variants of Newcomb’s problem is also old news, and results in one-boxing unless the probability gets quite close to random guessing. Once again, if you choose a sufficiently different problem than Newcomb’s (such as by choosing an accuracy sufficiently close to 0.5, reducing the payoff ratio or by positing that you are in fact more intelligent than Omega) then you have failed to respond to a relevant question (or an interesting question, for that matter).
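For reference, under the standard payoffs the crossover point works out like this:

```python
# With accuracy p: EV(one-box) = p * 1,000,000
#                  EV(two-box) = 1,000 + (1 - p) * 1,000,000
# The two are equal when 2,000,000 * p = 1,001,000.
threshold = 1_001_000 / 2_000_000
print(threshold)   # 0.5005 -- one-box whenever accuracy exceeds this
```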
If you say “well, rational people do X and not Y, end of story” that’s fine. I am going to make my updates on you and move on.
Please do. I have likewise updated. Evidence suggests you are ill suited to considering counterfactual problems and unlikely to learn. My only recourse here is to minimize the damage you can do to the local sanity waterline. I’ll leave further attempts at verbal interaction to the half a dozen others who have been attempting to educate you, assuming they have more patience than I.
A broader comment: I do math for a living. The issues of applicability of math to practical problems, and changing math models around is something I think about quite a bit.
I would be interested in seeing how philosophers do on tests of analytical versus intuitive reasoning (I forget the name of the test normally used for gauging this) and ability to narrow down hypotheses when the answers are known and easily verifiable.
Quinean reasons. Tegmark’s position, as far as I can tell, is that all abstract objects are also physically instantiated (or that the only difference between concrete and abstract objects is indexical). Which I think is plausible—but I think abstract objects could be an entirely different sort of thing from concrete, physically existing objects, and still exist.
Do you think abstract objects have anything causally to do with the things (about our universe, or about mathematical practice) that convinced you they exist? My worry is that in the absence of a causal connection, if there weren’t such abstract objects, mathematics would be just as ‘unreasonably effective’. The numbers aren’t doing anything to us to make mathematics work, so their absence wouldn’t deprive us of anything (causally). If a hypothesis can’t predict the data any more reliably than its negation can, then the data can’t be used to support the hypothesis.
In general, I’d like to hear more talk about what sorts of relations these number things enter into with our own world.
Do you think abstract objects have anything causally to do with the things (about our universe, or about mathematical practice) that convinced you they exist?
No. But that is essentially true by definition. On the other hand, I think all causal claims are claims about abstract facts. E.g. when you say “The match caused the barn to burn to the ground” you’re invoking a causal model of the world and models of the world are abstractions (though obviously they can be represented).
My worry is that in the absence of a causal connection, if there weren’t such abstract objects, mathematics would be just as ‘unreasonably effective’.
To me this is like hearing “If mass and velocity didn’t exist Newtonian physics would be just as ‘unreasonably effective’.” Mathematical objects are part of mathematics. The fact that math is unreasonably effective is why we can say mathematical facts are true and mathematical entities exist. Just like the fact that quantum theory is unreasonably effective is the reason we can say that quarks exist. This is true of everyday objects too. We say your chair exists because the chair is the best way of explaining some of your sensory impressions. It just happens that not all entities are particulars embedded in the causal world.
No. But that is essentially true by definition. On the other hand, I think all causal claims are claims about abstract facts. E.g. when you say “The match caused the barn to burn to the ground” you’re invoking a causal model of the world and models of the world are abstractions (though obviously they can be represented).
Causal claims may be expressed with abstract models, but that does not mean they are about abstract models. Causal models do not refer to themselves (in which case they would be about the abstract); they refer to whatever real-world thing they refer to.
To me this is like hearing “If mass and velocity didn’t exist Newtonian physics would be just as ‘unreasonably effective’.” Mathematical objects are part of mathematics. The fact that math is unreasonably effective is why we can say mathematical facts are true and mathematical entities exist.
Maths isn’t unreasonably effective at understanding the world in the sense that any given mathematical truth is automatically also a physical truth. If one mathematical statement (e.g. an inverse square law of gravity) is physically true, an infinity of others (inverse cube law, inverse power of four...) is automatically false. So when we reify our best theories, we are reifying a small part of maths for reasons which aren’t purely mathematical. There is no path from the effectiveness of some maths at describing the physical universe to the reification of all maths, because physical truth is a selection of the physically applicable parts of maths.
Sure, but it’s not true by definition that numbers are abstract. Given your analogy to mass and velocity, and your view that mathematical objects help explain the unreasonable effectiveness of mathematics, it seems to me that it would make much more sense to treat these number things as playing a causal or constitutive role in the makeup of our universe itself, e.g., as universals. Then it would no longer just be a coincidence that our world conveniently accompanies a causally dislocated Realm of correlates for our mathematical discourse.
To me this is like hearing “If mass and velocity didn’t exist Newtonian physics would be just as ‘unreasonably effective’.”
But it makes a difference to how our world is that objects have velocity and mass. By hypothesis, it doesn’t make a difference to how our world is that there are numbers. (And from this it follows that it wouldn’t make a difference if there weren’t numbers.) If numbers do play a role as worldly ‘difference-makers’ of some special sort, then could you explain more clearly what that role is, since it’s not causal?
Mathematical objects are part of mathematics.
I don’t know what that means. If by ‘mathematics’ you have in mind a set of human behaviors or mental states, then mathematics isn’t abstract, so its objects are neither causally nor constitutively in any relation to it. On the other hand, if by ‘mathematics’ you have in mind another abstract object, then your statement may be true, but I don’t see the explanatory relevance to mathematical practice.
The fact that math is unreasonably effective is why we can say mathematical facts are true and mathematical entities exist.
Sure, but it’s also why we can assert doctrines like mathematical fictionalism and nominalism. A condition for saying anything at all is that our world exhibit the basic features (property repetition, spatiotemporal structure...) that suffice for there to be worldly quantities at all. I can make sense of the idea that we need to posit something number-like to account in some causality-like way for things like property repetition and spatiotemporal structure themselves. But I still haven’t wrapped my head around why assuming numbers are not difference-makers for the physical world (unlike the presence of e.g. velocity), we should posit them to explain the efficacy of theories whose efficacy they have no impact upon.
Just like the fact that quantum theory is unreasonably effective is the reason we can say that quarks exist.
The properties of quarks causally impact our quantum theorizing. In a world where there weren’t quarks, we’d be less likely to have the evidence for them that we do. If that isn’t true of mathematics (or, in some ways even worse, if we can’t even coherently talk about ‘mathless worlds’), then I don’t see the parity.
Sure, but it’s not true by definition that numbers are abstract.
Huh?
it seems to me that it would make much more sense to treat these number things as playing a causal or constitutive role in the makeup of our universe itself, e.g., as universals.
I don’t recognize a difference between universals and abstract objects but neither plays a causal role in the make up of the universe.
Then it would no longer just be a coincidence that our world conveniently accompanies a causally dislocated Realm of correlates for our mathematical discourse.
You’re taking metaphors way too literally. There is no “Realm”.
The properties of quarks causally impact our quantum theorizing. In a world where there weren’t quarks, we’d be less likely to have the evidence for them that we do. If that isn’t true of mathematics (or, in some ways even worse, if we can’t even coherently talk about ‘mathless worlds’), then I don’t see the parity.
It’s not that complicated. We have successful theories that posit certain entities. I think believing in those theories requires believing in those entities. Some of those entities figure causally and spatio-temporally in our theories. Some don’t. When you say “in a world where there weren’t quarks” I have no idea what you’re talking about. It appears to be some kind of possible world where the laws of physics are different. But now we’re making statements of fact about abstract objects. It is very difficult to say this about mathematics since math appears likely to work the same way in all possible worlds. But that’s a really strange reason to conclude mathematical objects don’t exist. Numbers and quarks are both theoretically posited entities that we need to explain our world.
As far as I can tell everything you have said is just different forms of “but mathematical objects aren’t causal!”. I readily agree with this but since abstract objects aren’t causal by definition and the entire question is about abstract objects it seems like you’re begging the question.
If in axiomatizing arithmetic we are ontologically committed to saying that 1 exists, 2 exists, 3 exists, etc., then we may say that there are numbers even if it is not axiomatic that 1, 2, 3, etc. are causally inert, nonphysical, etc.
Instead of being a platonist and treating numbers as abstract, you could treat them as occupying spacetime (like immanent universals or tropes), you could treat them as non-spatiotemporal but causally efficacious (like the actual Forms of Plato), or you could assert both. (You could also treat them as useful fictions, but I’ll assume that fictionalism is an error theory of mathematics.)
I think many of the views on which mathematical objects have some causal (or, if you prefer, ‘difference-making’) effect on our mathematical discourse are reasonable. The views on which it’s just a coincidence are not reasonable, and I don’t think abstract numbers can easily escape the ‘just a coincidence’ concern (unless, perhaps, accompanied by a larger Tegmark-style framework).
I don’t recognize a difference between universals and abstract objects but neither plays a causal role in the make up of the universe.
Let’s take the property ‘electrically charged’ as an example. If charge is a universal, then it’s something wholly and constitutively shared in common between every charged thing; universals occur exactly in the spatiotemporal locations where their instances are, and they are exhausted by these worldly things. So there’s no need to posit anything outside our universe to believe in universals. Redness is, as it were, ‘in’ every red rose. Generally, universals are assumed to play causal roles (it’s because roses instantiate redness that I respond to them as I do), though in principle you could posit a causally inert one. (Such a universal still wouldn’t be abstract, because it would still occur in our universe.)
If electric charge is instead an abstract object, then it exists outside space and time, and has no effect at all on the electrically charged things in our world. (So abstract electric charge serves absolutely no explanatory role in trying to understand how things in our world are charged.) However, it might be a useful posit for the nominalist about universals, just to provide a (non-nominalistic) correlate for our talk in terms of abstract nouns like ‘charge’.
A third option would be to treat electric charge as a Platonic Form, i.e., something outside spacetime but causally responsible for the distribution of charge instances in our universe. (This is confusing, because Platonic Forms aren’t ‘platonic’ in the sense in which mathematical platonism is ‘platonic’. Plato himself was a nominalist about abstract objects, and also a nominalist about universals. His Forms are a totally different thing from the sorts of posits philosophers these days generally entertain.)
A natural way to think of bona-fide ancient Platonism (as opposed to the lowercase-p ‘platonism’ of modern mathematicians) is as cellular automata; for Plato, our universe is an illusion-like epiphenomenon arising from much simpler, lower-level relationships that are not temporal. (Space still plays a role, but as an empty geometry that comes to bear properties only in a derivative way, via its relationships to particular Forms.)
You’re taking metaphors way too literally. There is no “Realm”.
Hm? How do you know I’m taking it too literally? First, how do you know that ‘Realm’ isn’t just part of the metaphor for me? What signals to you when I stop talking about ‘objects’ and start talking about ‘Realms’ that I’ve crossed some line? (Knowing this might help tell me about which parts of your talk you take seriously, and which you don’t.)
Second, as long as we don’t interpret ‘Realm’ spatially, what’s wrong with speaking of a Realm of abstract objects, literally? Physical things occur in spacetime; abstract things exist just as physical ones do, but outside spacetime. Perhaps they occupy their own non-spatial structure, or perhaps they can’t be said to ‘occupy’ anything at all. Either way, we’ve complicated our ontology quite a bit.
If in axiomatizing arithmetic we are ontologically committed to saying that 1 exists, 2 exists, 3 exists,etc., then we may say that there are numbers even if it is not axiomatic that 1, 2, 3, etc. are causally inert, nonphysical, etc.
I’m still lost here.
Instead of being a platonist and treating numbers as abstract, you could treat them as occupying spacetime (like immanent universals or tropes), you could treat them as non-spatiotemporal but causally efficacious (like the actual Forms of Plato), or you could assert both. (You could also treat them as useful fictions, but I’ll assume that fictionalism is an error theory of mathematics.)
I’m not sure I would say Plato’s forms are causally efficacious in the way we understand that concept—but that isn’t really important. Anyway, I have issues with the various alternatives to modern Platonism, immanent realism, trope theory etc. -- though not the time to go into each one. If I were to make a general criticism I would say all involve different varieties of torturous philosophizing and the invention of new concepts to solve different problems. Platonism is easier and doesn’t cost me anything.
I think many of the views on which mathematical objects have some causal (or, if you prefer, ‘difference-making’) effect on our mathematical discourse are reasonable. The views on which it’s just a coincidence are not reasonable, and I don’t think abstract numbers can easily escape the ‘just a coincidence’ concern (unless, perhaps, accompanied by a larger Tegmark-style framework).
Ah! This seems like a point of traction. I certainly don’t think there is anything coincidental about the fact that mathematical truths tell us things about physical truths. I just don’t think the relationship is causal. I believe causal facts are facts about possible interventions on variables. Since there is no sense in which we can imagine intervening on mathematical objects, I don’t see how that relationship can be causal. But that doesn’t mean it is a coincidence or isn’t sense-making. Mathematics is effective because everything in the natural world is an instantiation of an abstract object. Instantiations have the properties of the abstract object they’re instantiating. This kind of information can be used in a straightforward, explanatory way.
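To illustrate the interventionist idea with the match example (a toy Pearl-style sketch; the model is invented):

```python
# 'The match caused the fire' cashes out as: setting the match variable
# by fiat changes the fire variable.
def fire(match_lit, oxygen=True):
    return match_lit and oxygen

print(fire(match_lit=True))    # do(match := lit)   -> fire
print(fire(match_lit=False))   # do(match := unlit) -> no fire
# There is no analogous operation that sets the number 7 to something
# else, which is the point about mathematical objects above.
```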
universals occur exactly in the spatiotemporal locations where their instances are, and they are exhausted by these worldly things.
This is a particular way of understanding universals. You need to specify immanent realism. Plenty of philosophers believe in universals as abstract objects.
We have successful theories that posit certain entities. I think believing in those theories requires believing in those entities. Some of those entities figure causally and spatio-temporally in our theories. Some don’t
We think the ones that don’t figure causally or spatio-temporally aren’t actually being posited at all. That’s how you read physics. If you know how to read a map, you know that rivers and mountains on the map are supposed to be in the territory, but lines of latitude and contour lines aren’t.
When you say “in a world where there weren’t quarks” I have no idea what you’re talking about. It appears to be some kind of possible world where the laws of physics are different. But now we’re making statements of fact about abstract objects.
No, when I say ‘in a world where there weren’t quarks’ I mean in an imagined scenario in which quarks are imagined not to occur. I’m not committed to real non-actual worlds. (If possible worlds were abstract, then they’d have no causal relation to my thoughts about them, so I’d have no reason to think my thoughts about modality were at all on the right track. It’s because modality is epistemic and cognitive and ‘in the head’ that I can reason about hypothetical and counterfactual situations productively.) I’m a modal fictionalist, and a mathematical fictionalist.
In imagined scenarios where we sever the causal links between agents and quarks, e.g., by replacing quarks with some other mechanism that can produce reasoning agents, it seems less likely that the agents would have hypothesized quarks. When we remove abstract numbers from a hypothetical scenario, on the other hand, nothing about the physical world seems to be affected (since, inasmuch as they are causally inert, abstract numbers are in no way responsible for the way our world is).
That suggests that positing numbers is wholly unexplanatory. It might happen to be the case that there are such things, but it can’t do anything to account for the unreasonable effectiveness of mathematics, because of the lack of any causal link.
Abstract objects play a similar role in current physical theories to that which luminiferous aether used to play. The problem with aether isn’t just that it was theoretically dispensable; it was that, even if we weren’t smart enough to figure out how to reformulate our theories without assuming aether, it would still be obvious that the theoretical successes that actually motivated us to form such theories would have arisen in exactly the same way even if there were no aether. Aether doesn’t predict aether-theories like ours, because our aether theory is not based on empirical evidence of aether.
(Aether might still be reasonable to believe in, but only if it deserves a very high prior, such that the lack of direct empirical confirmation is OK. But you haven’t argued for platonism based on high priors, e.g., via a Tegmark hypothesis; you’ve argued for it empirically, based on the real-world successes of mathematicians. That doesn’t work, unless you add some kind of link between the successes and the things you’re positing to explain those successes.)
Modern-day platonists try to make their posits appear ‘metaphysically innocent’ by depriving them of causal roles, but in the process they do away with the only features that could have given us positive reasons to believe such things. It would be like if someone objected to string theory because it’s speculative and lacks evidence, and string theorists responded by replacing strings with non-spatiotemporal, causally inert structures that happen to resemble the physical world’s structures. The whole point of positing strings is that they be causally or constitutively linked to our beliefs about strings, so that the success of our string theory won’t just be a coincidence; likewise, the whole point of reifying mathematical objects should be to treat them as causally or constitutively responsible for the success of mathematics. Without that responsibility, the posit is unmotivated.
math appears likely to work the same way in all possible worlds.
What do you mean by “work the same way”? I can pretty easily imagine worlds where mathematicians consistently fail to get reliable results. There may even be actual planets like that in the physical universe, if genetic drift eroded the mathematical reasoning capabilities of some species, or if there are aliens who rely heavily on math but don’t relate it to empirical reality in sensible ways. If such occurrences don’t falsify platonism, then our own mathematicians’ remarkable successes don’t verify platonism. So what phenomenon is it that you’re really claiming we need platonism to explain? What kind of ‘unreasonable effectiveness’ is relevant?
When we remove abstract numbers from a hypothetical scenario, on the other hand, nothing about the physical world seems to be affected (since, inasmuch as they are causally inert, abstract numbers are in no way responsible for the way our world is).
I can come up with possible worlds without quarks (in a vague, non-specific way). I have no idea what it means to “remove abstract numbers from a hypothetical scenario”. I don’t think abstract objects have modal variation, which is closely related to their (not) being causal. But insofar as mathematics posits abstract entities and mathematics is explanatory, I don’t think there is anything mysterious about the sense in which abstract objects are explanatory.
Abstract objects play a similar role in current physical theories to that which luminiferous aether used to play. The problem with aether isn’t just that it was theoretically dispensable; it was that, even if we weren’t smart enough to figure out how to reformulate our theories without assuming aether, it would still be obvious that the theoretical successes that actually motivated us to form such theories would have arisen in exactly the same way even if there were no aether. Aether doesn’t predict aether-theories like ours, because our aether theory is not based on empirical evidence of aether.
I disagree. I think the problem with aether is entirely just that it was theoretically dispensable. And I think the sentences that follow that are just a way of saying “aether was theoretically dispensable”.
Modern-day platonists try to make their posits appear ‘metaphysically innocent’ by depriving them of causal roles, but in the process they do away with the only features that could have given us positive reasons to believe such things.
Their utility in our explanations is sufficient reason to believe they exist even if their role in those explanations is not causal. Your string theory comparison doesn’t sound like it describes a successful scientific theory.
What do you mean by “work the same way”?
As in we can’t develop models of possible worlds in which mathematics works differently. This has nothing to do with the abilities of hypothetical mathematicians.
As in we can’t develop models of possible worlds in which mathematics works differently.
Or we can’t develop models of mathematically possible worlds where maths works differently. Or maybe we can, since we can imagine the AoC being either true or false. Actually, it is easier for realists to imagine maths being different in different possible worlds, since, for realists, the existence of numbers makes an epistemic difference. For them, some maths that is formally valid (deducible from axioms) might be transcendentally incorrect (e.g., the AoC was assumed but is actually false in Plato’s Heaven).
but I think abstract objects could be an entirely different sort of thing from concrete, physically existing objects, and still exist.
It’s logically possible... like so many things.
Either these non physical things interact with matter (eg the brains of mathematicians) or they don’t.
If they do, that is supernaturalism. If they don’t, they succumb to Occam’s razor.
I didn’t say delete numbers from theories. I meant don’t reify them. There is stuff in theories that you are supposed not to reify, like centres of gravity.
Centers of gravity are an even better example of a real abstract object. I’m definitely not reifying anything according to the dictionary definition of that word: neither numbers nor centers of gravity are at all concrete. They’re abstract.
OK. So, in what sense do these “still exist”, and in what sense are they “entirely different” from concrete objects? And are common-or-garden numbers included?
I think it might be best if you read the above-linked SEP article and some of the related pieces. But here is the short form:
1. We should believe our best scientific theories.
2. Our best scientific theories make reference to/quantify over abstract objects—mathematical objects like numbers, sets and functions, and non-mathematical abstract objects like types, forces and relations. Entities that theories refer to/quantify over are called their ontic commitments.
3. Belief in our best scientific theories means belief in their ontic commitments.
C: We should believe in the existence of the abstract objects in our best scientific theories.
One and two seem uncontroversial. 3 can certainly be quibbled with, and I spent a few years as a nominalist trying to think of ways to paraphrase out or find reasons to ignore the abstract objects among science’s ontic commitments. Lots of people have done this and have occasionally demonstrated a bit of success. A guy named Hartry Field wrote a pretty cool book in which he axiomatizes Newtonian mechanics without reference to numbers or functions. But he was still incredibly far away from getting rid of abstract objects altogether (lots of second-order logic) and the resulting theory is totally unwieldy. At some point, personally, I just stopped seeing any reason to deny the existence of abstract objects. Letting them exist costs me nothing. It doesn’t lead to false beliefs and requires far less philosophizing.
The concrete-abstract distinction still gets debated but a good first approximation is that concrete objects can be part of causal chains and are spatio-temporal while abstract objects are not. As for common-or-garden numbers: I see no reason to exclude them.
Quine has a logician’s take on physics—he assumes that the formal expression of a physical law is complete in itself, and therefore seeks a purely formal criterion of ontological commitment, or objecthood. However, physics doesn’t work like that. Physical formalisms have semantic implications that aren’t contained in the formalism itself: for instance, f=ma is mathematically identical to p=qr or a=bc, or whatever. But the f, the m and the a all have their own meaning, their own relation to measurement, as far as a physicist is concerned.
I spent a few years as a nominalist trying to think of ways to paraphrase out or find reasons to ignore the abstract objects among science’s ontic commitments.
The reasons are already part of the theory, in the sense that the theory is more than the written formalism. Physics students are taught that centers of gravity should not be reified—that is part of the theory. No physics student is taught that any pure number is a reifiable object, and few hit upon the idea themselves.
Letting them exist costs me nothing. It doesn’t lead to false beliefs and requires far less philosophizing.
No philosophizing is required to get rid of abstract objects; one only needs to follow the instructions about what is reifiable that are already part of the informal part of a theory.
I can’t see how you can claim that Platonism doesn’t lead to false beliefs without implicitly claiming omniscience. If abstract entities do not exist, then belief in them is false, by a straightforward correspondence theory. Moreover, if Platonism is true, then some common formulations of physicalism, such as “everything that exists, exists spatio-temporally”, are false. Perhaps you meant Platonism doesn’t lead to false beliefs with any practical upshot, but violations of Occam’s razor generally don’t.
The concrete-abstract distinction still gets debated but a good first approximation is that concrete objects can be part of causal chains and are spatio-temporal while abstract objects are not.
OK, but that means that centres of gravity aren’t abstract: the center of gravity of the Earth has a location. That doesn’t mean they are fully concrete either. Jerrold Katz puts them into a third category, that of the mixed concrete-and-abstract. (His favoured example is the equator.)
As for common-or-garden numbers: I see no reason to exclude them.
If you are going to include centers of gravity, and Katz’s categorisation is correct, then there is still no reason to include fully abstract entities. And there is a reason to exclude centers of gravity, which is the informal semantics of physics.
The reasons are already part of the theory, in the sense that the theory is more than the written formalism. Physics students are taught that centers of gravity should not be reified—that is part of the theory. No physics student is taught that any pure number is a reifiable object, and few hit upon the idea themselves.
There’s that word again. I’m not reifying numbers. Abstract objects aren’t “things”. They aren’t concrete. Platonists don’t want to reify centers of gravity or numbers.
I can’t see how you can claim that Platonism doesn’t lead to false beliefs without implicitly claiming omniscience. If abstract entities do not exist, then belief in them is false, by a straightforward correspondence theory. Moreover, if Platonism is true, then some common formulations of physicalism, such as “everything that exists, exists spatio-temporally”, are false. Perhaps you meant Platonism doesn’t lead to false beliefs with any practical upshot, but violations of Occam’s razor generally don’t.
Platonism and nominalism don’t differ in anticipations of future sensory experiences. The difference is entirely about theory and methodology. I’ve already replied to the Occam’s razor thing: our theories that include abstract objects are radically simpler and easier to use than the attempts that exclude abstract objects.
OK, but that means that centres of gravity aren’t abstract: the center of gravity of the Earth has a location. That doesn’t mean they are fully concrete either. Jerrold Katz puts them into a third category, that of the mixed concrete-and-abstract. (His favoured example is the equator.)
I’m not sure they have a location in the same way that is generally meant by spatio-temporal; but the exact classification of centers of gravity isn’t that important to me. I’m not claiming to have the details of that figured out.
There’s that word again. I’m not reifying numbers. Abstract objects aren’t “things”. They aren’t concrete. Platonists don’t want to reify centers of gravity or numbers.
There has to be some content to Platonism. You seem to be assuming that by “reifying” I must mean “treat as concretely existent”. In context, what I mean is “treat as being existent in whatever sense Platonists think abstracta are existent”. I am not sure what that is, but there has to be something to it, or there is no content to Platonism, and in any case it is not my job to explain it.
Platonism and nominalism don’t differ in anticipations of future sensory experiences. The difference is entirely about theory and methodology.
I am not sure what you mean by that. The difference is about ontology. If two theories make the same predictions, and one of them has more entities, one of them is multiplying entities unnecessarily.
I’ve already replied to the Occam’s razor thing: our theories that include abstract objects are radically simpler and easier to use than the attempts that exclude abstract objects.
And I have replied to the reply. The Quinean approach incorrectly takes a scientific theory to be a formalism. It is only methodologically simpler to reify whatever is quantified over, formally, but that approach is too simple because it leaves out the semantics of physics—it doesn’t distinguish between f=ma and p=qr.
I’m not sure they have a location in the same way that is generally meant by spatio-temporal; but the exact classification of centers of gravity isn’t that important to me. I’m not claiming to have the details of that figured out.
You seem to be assuming that by “reifying” I must mean “treat as concretely existent”.
Oh come on now. That’s literally what the word means. It’s the dictionary definition. Don’t complain about me assuming things if you’re using words contrary to their dictionary definition and not explaining what you mean.
In context, what I mean is “treat as being existent in whatever sense Platonists think abstracta are existent”.
As I’ve said a thousand times I think all there is to “being existent” is to be an entity quantified over in our best scientific theories. So in this case treating abstract objects as being existent requires scientists to literally do nothing different.
I am not sure what you mean by that. The difference is about ontology. If two theories make the same predictions, and one of them has more entities, one of them is multiplying entities unnecessarily.
Neither nominalism nor platonism make predictions. Scientific theories make predictions and there are no nominalist scientific theories.
The Quinean approach incorrectly takes a scientific theory to be a formalism. It is only methodologically simpler to reify whatever is quantified over, formally, but that approach is too simple because it leaves out the semantics of physics—it doesn’t distinguish between f=ma and p=qr.
Honestly, I don’t see how this is relevant. I don’t agree that the Quinean approach leaves out the semantics of physics and I don’t see how including the semantics would let you have a simple scientific theory that didn’t reference abstract objects.
Such details are what could bring Platonism down.
Obviously it is possible that there are arguments that could convince me I’m wrong. I’m not obligated to have a preemptive reply to all of them.
As I’ve said a thousand times I think all there is to “being existent” is to be an entity quantified over in our best scientific theories.
The point of Quinean Platonism is to inflate the formal criterion of quantification into an ontological claim of existence, not to deflate existence into a mere formalism.
So in this case treating abstract objects as being existent requires scientists to literally do nothing different.
It requires them to ignore part of the informal interpretation of a theory.
Neither nominalism nor platonism make predictions.
Then one of them is unnecessarily complicated as an ontology. You seem to think Platonism isn’t ontology. I have no idea what you would then think it is.
there are no nominalist scientific theories.
Whether theories are nominalist, or whatever, depends on how you read them. They don’t have their own interpretation built-in, as I have pointed out a thousand times.
I don’t agree that the Quinean approach leaves out the semantics of physics and I don’t see how including the semantics would let you have a simple scientific theory that didn’t reference abstract objects.
Theories can include numbers and centers of gravity, and reference them in that sense, and that is not the slightest argument for Platonism. Platonism requires that certain symbols have real referents—which is another sense of “reference”. Looking at a symbol on a piece of paper doesn’t tell you that the symbol has a real referent. Non-Platonism isn’t the claim that such symbols need to be deleted; it is an interpretation whereby some symbols get reified—have real-world referents—and others don’t. Platonism is not the claim that there are abstract symbols in formalisms; it is an ontological claim about what exists.
Doesn’t this imply that equivalent scientific theories may have quite different implications wrt. what abstract objects exist, depending on how exactly they are formulated (i.e. the extent to which they rely on quantifying over variables)?
Also, given the context, it’s not clear that rejecting theories which rely on second-order and higher-order logics makes sense. The usual justification for dismissing higher-order logics is that you can always translate such theories to first-order logic, and doing so is a way of “staying honest” wrt. their expressiveness. But any such translation is going to affect how variables are quantified over in the theory, hence what ‘commitments’ are made.
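To make the worry concrete (my illustration, not something from the thread): the second-order induction axiom quantifies over properties directly, while a standard first-order translation quantifies over sets instead, so the translated theory picks up set-theoretic commitments the original formulation never displayed.

    % Second-order induction quantifies over properties P:
    \forall P\, \bigl( P(0) \land \forall n\,(P(n) \rightarrow P(n+1)) \rightarrow \forall n\, P(n) \bigr)
    % A first-order set-theoretic rendering quantifies over sets S instead:
    \forall S\, \bigl( 0 \in S \land \forall n\,(n \in S \rightarrow n+1 \in S) \rightarrow \forall n\, (n \in S) \bigr)

On a quantification criterion of commitment, these two renderings of the ‘same’ theory disagree about whether sets are among its posits.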
Doesn’t this imply that equivalent scientific theories may have quite different implications wrt. what abstract objects exist, depending on how exactly they are formulated (i.e. the extent to which they rely on quantifying over variables)?
I’m not sure what you mean by “equivalent” here. If you mean “makes the same predictions” then yes—but that isn’t really an interesting fact. There are empirically equivalent theories that quantify over different concrete objects too. Usually we can and do adjudicate between empirically equivalent theories using additional criteria: generality, parsimony, ease of calculation etc.
I think Jack meant the sort of modern platonism that philosophers believe, not Tegmark-style platonism. Modern platonism is the position that, as Wikipedia says, abstract objects exist in a sense “distinct both from the sensible external world and from the internal world of consciousness”, while in Tegmark’s platonism, abstract objects exist in the same sense as the external world, and the external world is a mathematical structure.
This seems to be a question of “How are we allowed to use the word ‘exist’ in this conversational context without being confusing?” or “What sort of definition do we care to assign to the word ‘exist’?” rather than an unquoted question of what exists.
In other words, I would be comfortable saying that my office chair and the number 3 both plexist (Platonic-exist), while my office chair mexists (materially exists) and 3 does not.
Well it is certainly the case that knowing how to use the word “exist” is helpful for answering the question: “what exists?” And a consistent application of the usage of the word “exist” is how the modern platonic argument gets its start. We look at universally agreed-upon cases of the usage of “exist”, formulate criteria for something to exist and apply those criteria. The modern Platonist generally has a criterion along the lines of “If and only if an entity is quantified over by our best scientific theories then it exists.” Since our best scientific theories quantify over abstract objects the modern Platonist concludes that abstract objects exist.
One can deny the criterion and come up with a different one, or deny that abstract objects meet the criterion. But what advantage do these neologisms give us? Does using two different words, plexist and mexist, do anything more than recognize that material objects and abstract objects are two different kinds of things? If so, why isn’t calling one “material” and the other “abstract” sufficient for making that distinction? Presumably we wouldn’t want to come up with a different word for every way something might exist: quark-exist, chair-exist, triangle-exist, three-exist and so on.
Why not just have one word and distinguish entities from each other with adjectives?
Why not just have one word and distinguish entities from each other with adjectives?
Because what we’re saying about our descriptions of things is different. For some nouns, saying that it “exists” means that it has mass and takes up space, can be bumped into and such. For other nouns, “exists” means it can be defined without contradiction, or some such.
The verb “exist” is being used polysemously, even metaphorically — in the manner that “run” is used of sprinters, computer programs, and the dyed color of a laundered shirt. A sprinter, program, and dye are not actually doing anything like the same thing when they “run”, but we use the same word for them. This is a fact about our language, not about the things those three entities are doing. If there were any confusion what we meant, we would not hesitate to say that the program is “executing” and the dye is “spreading” or some such.
For some nouns, saying that it “exists” means that it has mass and takes up space, can be bumped into and such. For other nouns, “exists” means it can be defined without contradiction, or some such.
The whole Platonist position begins from a definition of “exists” that works equally well for abstract and concrete objects. Your alternative definitions are bad: “has mass and takes up space, can be bumped into and such” isn’t even a necessary set of criteria for a wide variety of concrete objects. Photons and gluons, for instance.
We don’t know that it “works equally well”, since we don’t have omniscient knowledge about the existence of abstract objects. If abstract objects don’t exist, then the quantification criterion is too broad, and therefore does not work.
This straightforwardly begs the question. I say “What it means to exist is to be quantified over in our best scientific theories”. Your reply is basically “If you’re wrong about the definition then you’re wrong about the definition.”
The whole Platonist position begins from a definition of “exists” that works equally well for abstract and concrete objects.
I’ve yet to see such a definition. Do you mean the “definition” (a postulate, really) such as the one on Wikipedia? (SEP isn’t any better.)
With a lower case “p”, “platonism’ refers to the philosophy that affirms the existence of abstract objects, which are asserted to “exist” in a “third realm distinct both from the sensible external world and from the internal world of consciousness...”
If so, then it’s a separate definition, not something that “works equally well”. Besides, I have trouble understanding why one needs to differentiate between the abstract world and “the world of consciousness”.
No, I don’t mean that. I’ve given a definition/criterion like eight times in this thread, including two comments up :-).
The modern Platonist generally has a criterion along the lines of “If and only if an entity is quantified over by our best scientific theories then it exists.”
In other words, theories about the world generally make reference to entities of various kinds. They say “Some x are y” or “There is an x that y’s” etc. These x’s are a theory’s ontological commitments. To say “the number 3 is prime” implies 3 exists just as “some birds can fly” implies birds exist. Existence is simply being an entity posited by a true scientific theory. Making anything more out of “existence” gives it a metaphysical woo-ness the concept isn’t entitled to.
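Rendered in first-order notation (my gloss, using standard symbols), the parallel is explicit—both claims have the same existential form:

    % Some birds can fly:
    \exists x\, (\mathrm{Bird}(x) \land \mathrm{CanFly}(x))
    % The number 3 is prime:
    \exists x\, (x = 3 \land \mathrm{Prime}(x))

On the quantification criterion, a theory is committed to whatever must lie in the range of its bound variables for its sentences to be true: the first sentence commits us to birds, and the second, read the same way, commits us to the number 3.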
“Sherlock Holmes is a bachelor” implies that Sherlock Holmes exists. But when you say that you’re simply taking part in a fictitious story. It’s storytelling and everyone knows you’re not trying to describe the universe. If the fiction of Arthur Conan Doyle turned out to be a good theory of something—say it was an accurate description of events that really took place in the late 19th century—and accurately predicted lots of historic discoveries and Sherlock Holmes and the traits attributed to him were essential for that theory, then we would say Sherlock Holmes existed.
A lot of lifting seems to be being done by the “scientific” in “scientific theory”.
I am rightly shifting the criteria of “what exists” to people who actually seem to know what they’re doing.
“Sherlock Holmes is a bachelor” implies that Sherlock Holmes exists.
That is not uncontentious.
But when you say that you’re simply taking part in a fictitious story.
In which case SH is not implied to exist. But I knew that it is a fictitious story. The point was that “the number 3 is prime” doesn’t imply that 3 exists, since properties can be correctly or incorrectly ascribed to fictive entities. There is no obvious implication from a statement being true to a statement involving entities that actually exist. Mathematical formalism and fictivism hold 3 to be no more existent than SH, and are not obviously false.
I am rightly shifting the criteria of “what exists” to people who actually seem to know what they’re doing.
You are not, because you are ignoring them when they say centres don’t exist. You are trying to read ontology from formalism, without taking into account the interpretation of the formalism, the semantics.
You are not, because you are ignoring them when they say centres don’t exist.
I don’t agree that I am.
In which case SH is not implied to exist. But I knew that it is a fictitious story. The point was that “the number 3 is prime” doesn’t imply that 3 exists, since properties can be correctly or incorrectly ascribed to fictive entities. There is no obvious implication from a statement being true to a statement involving entities that actually exist. Mathematical formalism and fictivism hold 3 to be no more existent than SH, and are not obviously false.
I don’t understand what you’re trying to accomplish with this line of reasoning. Obviously, “truths” about fictitious stories do not imply the existence of the entities they quantify over. A fiction is a sort of mutually agreed-upon lie. (I don’t agree, btw, that a statement about Sherlock Holmes is true in the same way that “There are white swans” is true.) But it is nonetheless the case that the assertion “Sherlock Holmes is a bachelor” implies the existence of Sherlock Holmes. It just so happens that everyone plays along with the story. But unlike the stories of Sherlock Holmes I really do believe in quantum mechanics and so take the theory’s word for it that the entities it implies exist actually do exist.
I’m obviously aware there are alternatives to Platonism and that there is plenty of debate. I presumably have reasons for rejecting the alternatives. But instead of actually asserting a positive case for any alternative you seem to just be picking at things and disagreeing with me without explaining why (plus a decent amount of misunderstanding the position). If you’d like to continue this discussion please do that instead of just complaining about my position. It’s unpleasant and not productive.
So do I. But I take “the entities it implies” to mean “the entities that you are supposed to believe in according to the informal interpretation of the formalism”, not “the entities quantified over”.
“Maddy’s first objection to the indispensability argument is that the actual attitudes of working scientists towards the components of well-confirmed theories vary from belief, through tolerance, to outright rejection (Maddy 1992, p. 280). The point is that naturalism counsels us to respect the methods of working scientists, and yet holism is apparently telling us that working scientists ought not have such differential support to the entities in their theories. Maddy suggests that we should side with naturalism and not holism here. Thus we should endorse the attitudes of working scientists who apparently do not believe in all the entities posited by our best theories. We should thus reject P1.”
I’ve given a definition/criterion like eight times in this thread, including two comments up :-).
Sorry, I should have looked first.
The modern Platonist generally has a criterion along the lines of “If and only if an entity is quantified over by our best scientific theories then it exists.” Since our best scientific theories quantify over abstract objects the modern Platonist concludes that abstract objects exist.
Ah, I see. How is it different from “we define stuff we think about that is not found in nature as ‘abstract’”?
To say “the number 3 is prime” implies 3 exists just as “some birds can fly” implies birds exist.
I guess that’s where I am having problems with this approach. “Number 3 is prime” is a well-formed string in a suitable mathematical model, whereas “some birds can fly” is an observation about the external world. Basically, it seems to me that the term “exist” is redundant in it. Everything you can talk about “exists” in Platonism, so the term is devoid of meaningful content.
Hmm, where do pink unicorns exist? Not in the external world, so somewhere in the internal world then? Or do they not exist at all? Then what definition of existence do they fail? For example, “our best scientific theories” imply that people can think about pink unicorns as if they were experimental facts. Thus they must exist in our imagination. Which seems uncontroversial, but vacuous and useless.
I don’t think the hypothesis that there is an independent conscious person existing along with you in your mind (or whatever those people think they’re doing) is the best explanation for the experiences they’re describing. If they just want to use it as shorthand for a set of narratively consistent hallucinations then I suppose I could be okay with saying a tulpa exists. But either way: I don’t think a tulpa is an abstract object. It’s a mental object, like an imaginary friend or a hallucination. Like any entity, I think the test for existence is how it figures in scientific explanation, but I think Platonists and non-Platonists are logically free to admit or deny tulpas’ existence.
Really? The ‘existence’ status of that kind of mental entity seems to be an orthogonal issue to what (I am guessing) you mean by Tegmarkian considerations.
Tegmarkia includes every possible arrangement of physical law, including forms of psycho-physical parallelism whereby what is thought automatically becomes real.
Ah, fair point. I went too far. Still, I’m dubious about conflating the logical and the physical definition of existence. But hey, go wild, it’s of no consequence.
Have you noticed that, although you and Jack have completely opposite (minimal and maximal) ontologies, you both have the same motivation, of avoiding “philosophising”? Well, I suppose “everything exists” and “nothing exists” both impose minimal cognitive burden—if you believe some non-trivial subset exists, you have to put effort into populating it.
I haven’t noticed that Jack has a motivation of “avoiding philosophizing”. And I don’t say that “nothing exists”, I just avoid the term as mostly vacuous, except in specific narrow cases, like math.
I would say pink unicorns do not exist at all. The term, for me, describes a concrete entity that does not exist. “The Unicorn” could be type-language—types are abstract objects, like “the Indian Elephant” or “the Higgs Boson”—but unlike the Indian Elephant, the Unicorn is not something quantified over in zoology, and it is hard to think of a useful scientific process which would ever involve an ontological commitment to unicorns (aside from studying the mythology of unicorns, which is clearly something quite different). “3 is prime” is a well-formed string in a suitable mathematical model—which is to say a system of manipulating symbols. But this particular method of symbol manipulation is utterly essential to the scientific enterprise, and it is trivial to construct methods of symbol manipulation that are not.
Our best scientific theories imply that people can think about pink unicorns as if they were experimental facts. So thoughts about pink unicorns certainly exist. It may also be the case that unicorns possibly exist. But our best scientific theories certainly do not imply the actual existence of unicorns. So pink unicorns do not exist (bracketing modal concerns).
How is it different from “we define stuff we think about that is not found in nature as ‘abstract’”?
So to conclude: it’s different in that the criterion for existence requires that the entity actually figures in scientific explanation, in our accurate model of the universe, not simply that it is something we can think about.
So, if a theory of pink unicorns was useful to construct an “accurate model of the universe” (presumably not including the part of the universe that is you and me discussing pink unicorns?) these imaginary creatures would be as real as imaginary numbers?
A lot of lifting is being done by “scientific” here. It’s uncontroversial that scientific theories have to be about the real world in some sense, but it doesn’t follow from that that every term mentioned in them successfully refers to something real.
But if “plexists” means something like “I have an idea of it in my head”, then there is no substance to the claim that 3 plexists… 3 is then no more real than a unicorn.
The number 3 has well-defined properties; such that I can be pretty sure that if I talk about 3 and you talk about 3, we’re talking about the same sort of thing. Sources on unicorns vary rather more broadly on the properties ascribed to them.
In other words, I would be comfortable saying that my office chair and the number 3 both plexist (Platonic-exist), while my office chair mexists (materially exists) and 3 does not.
I agree that this is useful, but it is essential to recognize that these words are just wrapping up our confusion, and that there are other questions that are still left unanswered when we have answered yours. It can sometimes help to determine which things plexist and which mexist, but we still don’t really know what we mean when we say these, and having words for them can sometimes cause us to forget that. (I suppose I should refer to phlogiston here.) I think that Tegmark-platonism is probably a step towards resolving that confusion, but I doubt that any current metaphysical theory has completed the job; I certainly don’t know of any that doesn’t leave me confused.
I don’t think we really can. The categories of concrete and abstract objects are supposed to carve reality at its joints: I see a chair, I prove a theorem. You can’t really do this sort of analysis without reference to the chairs and the theorems, and if you do make those references, you must have already settled the question of whether a chair is concrete, and a fortiori whether concrete objects exist. The alternative, studying concepts that were originally intended to carve reality at its joints without intending to do so yourself, has historically been unproductive, except to some extent in math.
Right, so accept that both abstract and concrete objects exist. While you’re not doing science, feel free to think about what abstraction is, what concrete means and so on.
I don’t think I’ve been clear. I’m saying that the categories of abstract and concrete objects are themselves generated by experience and are intended to reflect natural categories, and that it’s not useful to think about what abstraction is without thinking about particular abstract objects and what makes us consider them abstract.
Wikipedia’s fine, but I’d rely more on SEP for quick stuff like this. The question of what makes something ‘mathematical’ is a difficult one, but it’s not important for evaluating abstract-object realism. What makes something abstract is just that it’s causally inert and non-spatiotemporal. Tegmark’s MUH asserts things like that. Sparser mathematical platonisms also assert things like that. For present purposes, their salient difference is how they motivate realism about abstract objects, not how they conceive of the nature of our own world.
If I understand this correctly, I disagree. Modern philosophical platonism means different things by ‘abstract’ than Tegmark’s platonism. In philosophical platonism, I accept your definition that something is abstract if it is causally inert and non-spatiotemporal. For Tegmark, this doesn’t really make sense though, since the universe is causal in the same sense that a mathematical model of a dynamical system is causal, and it is spatiotemporal in the same sense that the mathematical concept of Minkowski spacetime is spatiotemporal, since the universe is just (approximately) a dynamical system on (approximately) Minkowski spacetime. The usual definition of an abstract object implies that physical, spatiotemporal objects are not abstract, which contradicts the MUH. I don’t think we really have a precise definition of abstract object that makes sense in Tegmark’s platonism, since something like ‘mathematical structure’ is obviously imprecise.
For Tegmark, this doesn’t really make sense though, since the universe is causal in the same sense that a mathematical model of a dynamical system is causal, and it is spatiotemporal in the same sense that the mathematical concept of Minkowski spacetime is spatiotemporal
I don’t think that means that abstract objects in the ordinary sense don’t make sense. It just means that he counts a lot of things as concrete that most people might think of as abstract. We don’t need a definition of ‘mathematical structure’ for present purposes, just mathematically precise definitions of ‘causal’ and ‘spatiotemporal’.
The abstract/concrete distinction is actually a separate ontic axis from the mathematical/physical one. You can have abstract (platonic) physical objects, and concrete mathematical objects.
Example of abstract physical objects: Fields
Example of concrete mathematical objects: Software
My definitions:
Abstract: universal, timeless and acausal (always everywhere true, outside time and space, and not causally connected to concrete things).
Concrete: can be located in space and time, is causal, has moving parts.
Mathematical: concerned with categories, logics and models.
Physical: concerned with space, time, and matter.
My take on modern Platonism is that abstract objects are considered the only real (fundamental) objects. Abstract objects can’t interact with concrete objects, because concrete objects don’t actually exist! Rather, concrete things should be thought of as particular parts (cross-sections, aspects of) abstract things. Abstract objects encompass concrete objects. But the so-called concrete objects are really just categories in our own minds (a feature of the way we have chosen to ‘carve reality at the joints’).
My take on modern Platonism is that abstract objects are considered the only real (fundamental) objects. Abstract objects can’t interact with concrete objects, because concrete objects don’t actually exist!
This isn’t modern Platonism.
Example of concrete mathematical objects: Software
A program is an abstract object. Particular copies of a program stored on your hard drive are concrete.
Ok, then it’s Geddesian Platonism ;) The easiest solution is to do away with the concrete dynamic objects as anything fundamental and just regard reality as a timeless Platonia. I thought that’s more or less what Julian Barbour suggests.
A program is an abstract object. Particular copies of a program stored on your hard drive are concrete.
The actual timeless (abstract) math objects are the mathematical relations making up the algorithm in question. But the particular model or representation of a program stored on a computer can be regarded as a concrete math object. And an instantiated (running) program can be viewed as a concrete math object also (a dynamical system with input, processing and output).
These analogies are exact:
Space is to physics as categories are to math
Time is to physics as dynamical systems (running programs) are to math
Similarly, if you believe a question has a very simple answer that does not need to be fleshed out you are unlikely to dedicate your life to answering it.
And you are unlikely to be able to make discussing the simple solution with others into a viable career in academic publishing.
Listen, this is like someone who believes the Axiom of Choice saying “constructivist mathematicians are drastically worse at set theory” (because they reject Choice). Newcomb is all about how you view free will. This is not a settled question yet.
Why does ‘free will’ make any difference? If Omega can only predict you with e.g. 60% accuracy, that’s still enough to generate the problem.
I’m not saying the right answer, i.e., the right decision theory, is a settled question. I’m just saying they lose. This matters. If their family members’ or friends’ welfare were on the line, as opposed to some spare cash, I strongly suspect philosophers would be less blasé about privileging their pet formal decision-making theory over actually making the world a better place. The units of value don’t matter; what matters is that causal decision theory loses, and loses by arbitrarily large amounts.
I once took a martial arts class (taught by a guy who once appeared on the “ninja episode” of Mythbusters, where they tried to figure out if a human can catch an arrow out of the air). He knew this trick called “choshi dori” (I think it roughly means ‘attention/initiative grabbing’). How exactly this trick works is a long story, but it has to do with “hacking the lower brain” of the opponent in various ways. One of the things he could do was have a guy punch him in the face and have the punch instead land on empty air, completely contrary to the volition of the puncher. Note: it would work even if he told you exactly what he was doing.
He could do this because of the way punch targeting works (the largely subconscious system responsible has certain rules it follows that could be influenced in a way that causes you to miss).
There are various ways to defeat “choshi dori,” although the gentleman in question could certainly get the vast majority of randomly chosen people to fall for it. Whatever “free will” is, it’s probably more complicated than just taking Omega at its word. Perhaps Omega achieved his accuracy by a similar defeatable hack. Omega claims to “open up the agent,” and my response is to try to “open up Omega,” to see what’s behind his prediction %.
I don’t see why it would be at all difficult or mysterious for Omega to predict that I one-box. I mean, it’s not like my thought processes there are at all difficult to understand or predict.
My point is exactly that it is not mysterious. Omega used some concrete method to win his game, much in the same way that the fellow in question uses a particular method to win the punching game. The interesting question in the Newcomb problem is (a) what is the method, and (b) is the method defeatable. The punching game is defeatable. Giving up too early on the punching game is a missed chance to learn something about volition.
The right response to a “magic trick” is to try to learn how the trick works, not go around for the rest of one’s life assuming strangers can always pick out the ace of spades.
Omega’s not dumb. As soon as Omega knows you’re trying to “come up with a method to defeat him”, Omega knows your conclusion—coming to it by some clever line of reasoning isn’t going to change anything. The trick can’t be defeated by some future insight because there’s nothing mysterious about it.
Free-will-based causal decision theory: The simultaneous belief that two-boxing is the massively obvious, overdetermined answer output by a simple decision theory that everyone should adopt for reasons which seem super clear to you, and that Omega isn’t allowed to predict how many boxes you’re going to take by looking at you.
I am not saying anything weird, merely that the statements of the Newcomb’s problem I heard do not specify how Omega wins the game, merely that it wins a high percentage (all?) of the previous attempts. The same can be said for the punching game, played by a human (who, while quite smart about the volition of punching, is still defeatable).
There are algorithms that Omega could follow that are not defeatable (people like to discuss simulating players, and some others are possible too). Others might be defeatable. The correct decision theory in the punching game would learn how to defeat the punching game and walk away with $$$. The right decision theory in the Newcomb’s problem ought to first try to figure out if Omega is using a defeatable algorithm, and only one box if it is not, or if it is not possible to figure this out.
Okay, let’s try and defeat Omega. The goal is to do better than Eliezer Yudkowsky, who seems to be trustworthy about doing what he publicly says all over the place. Omega will definitely predict that Eliezer will one-box, and Eliezer will get the million.
The only way to do better is to two-box while making Omega believe that we will one-box, so we can get the $1,001,000 with more than 99.9% certainty. And of course,
Omega has access to our brain schematics
We don’t have access to Omega’s schematics. (optional)
Omega has way more processing power than we do.
Err, short of building an AI to beat the crap out of Omega, that looks pretty impossible. $1000 is not enough to make me do the impossible.
A crucial difference is that the punching game is real, while Newcomb’s problem is fiction, a thought experiment.
In the punching game, you can try to learn how the trick is done and how to defeat the opponent, and you are still engaged in the punching game.
In Newcomb’s problem, Omega is not a real thing that you could discover something about, in the way that there is something to discover about a real choshi dori master. There is no such thing as what Omega is really doing. If you think up different things that an Omega-like entity might be doing, and how these might be defeated to win $1,001,000, then you are no longer thinking about Newcomb’s problem, but about a different thought experiment in some class of Newcomb-like problems. I expect a lot of such thinking goes on at MIRI, and is more useful than endlessly debating the original problem, but it is not the sort of thing that you are doing to defeat choshi dori.
Here is a trivial model of the “trick” being fool-proof (and I do mean “fool” literally), which I believe has been discussed here a time or ten. Omega runs a perfect simulation of you, terminates it right after you make your selection or if you refuse to choose (he is a mean one), checks what it outputs, uses it to place money in the boxes. Omega won’t even offer the real you the game if you are one of those stubborn non-choosers. The termination clause is to prevent you from enjoying the spoils in case YOU are that simulation, so only the “real you” will know if he won or not. And to avoid any basilisk-like acausal trade. He is not that mean.
EDIT: if you think that the termination is a cruel cold-blooded murder, note that you do that all the time when evaluating what other people would do, then stop thinking about it, once you have your answer. The only difference is the fidelity level. If you don’t require 100% accuracy, you don’t need a perfect simulation.
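For concreteness, here is a minimal sketch of the simulation model just described (my code, not anything from the thread; the function names and dollar amounts are illustrative stand-ins):

    # A toy simulation-based Omega. `agent_policy` stands in for a perfect
    # simulation of the player's decision procedure.
    def omega_fill_boxes(agent_policy):
        predicted = agent_policy()  # 'one-box', 'two-box', or None (refuses to choose)
        if predicted is None:
            return None  # Omega never offers the game to non-choosers
        opaque = 1_000_000 if predicted == 'one-box' else 0
        return {'opaque': opaque, 'transparent': 1_000}

    def payoff(actual_choice, boxes):
        # By construction the real player's choice matches the simulation's.
        if actual_choice == 'one-box':
            return boxes['opaque']
        return boxes['opaque'] + boxes['transparent']

Since the real player runs the same decision procedure as the simulation, one-boxers collect $1,000,000 and two-boxers collect $1,000; there is no input on which this Omega is defeatable.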
Do you think that gets rid of the problem? ‘It might be possible to outsmart Omega’ strikes me as fairly irrelevant. As long as it’s logically possible that you don’t successfully outsmart Omega, the original problem can still be posed. You still have to make a decision, in those cases where you don’t catch Omega in a net.
I am not saying there isn’t a problem, I am saying the problem is about clarifying volition (in a way not too dissimilar to the “choshi dori” trick in my anecdote). Punching empty air is “losing.” Does this then mean we should abstain from punching? Seems a bit drastic.
Many problems/paradoxes are about clarification. For example, Simpson’s paradox is about clarifying causal vs. statistical intuitions.
More specifically, what I am saying is that depending on what commitments you want to make about volition, you would either want to one box, or two box in such a way that Omega can be defeated. The problem is “non-identified” as stated. This is equivalent to choosing axioms in set theory. You don’t get to say someone fails set theory if they don’t like Choice.
1 - Supposing I have no philosophical views at all about volition, I would be rationally obliged to one-box. In a state of ignorance, the choice is clear simply provided that I value whatever is being offered. Why should I then take the time to form a theory of volition, if you’re right and at most it can only make me lose more often?
We don’t know what the right answer to Newcomb-like problems will look like, but we do know what the wrong answers will look like.
2 - Supposing I do have a view about volition that makes me think I should two-box, I’ll still be rationally obliged to one-box in any case where my confidence in that view is low enough relative to the difference between the options’ expected values.
For instance, if we assign to two-boxing the value ‘every human being except you gets their skin ripped off and is then executed, plus you get $10’ and assign to one-boxing the value ‘nobody gets tortured or killed, but you miss out on the $10’, no sane and reasonable person would choose to two-box, no matter how confident they (realistically) thought they were that they have a clever impossibility proof. But if two-boxing is the right answer sometimes, then, pace Nozick, it should always be the right answer, at least in cases where the difference between the 2B and 1B outcomes is dramatic enough to even register as a significant decision. Every single one of the arguments for two-boxing generalize to the skin-ripping-off case, e.g., ‘I can’t help being (causal-decision-theory-)rational!’ and ‘it’s unfair to punish me for liking CDT; I protest by continuing to employ CDT’.
3 - You seem to be under the impression that there’s something implausible or far-fetched about the premise of Newcomb’s Problem. There isn’t. If you can’t understand a 100% success rate on Omega’s part, then imagine a 99% success rate, or a 50% one. The problem isn’t altered in substance by this.
A 50% success rate would recommend two-boxing.
Edit: and come to think of it I am somewhat less sure about the lower success rates in general. If I can roughly estimate Omega’s prediction about me, that would seem to screen off any timeless effect. Like, you could probably pretty reliably predict how someone would answer this question based on variables like Less Wrong participation and having a PhD in philosophy. Using this information, I could conclude that an Omega with 60% accuracy is probably going to classify me as a one-boxer no matter what I decide… and in that case why not two-box?
Sorry, by a 50% success rate I meant that Omega correctly predicts your action 50% of the time, and the other half of the time just guesses. Guessing can also yield the right answer, so this isn’t equivalent to a 50% success rate in the sense you meant, which was simply ‘Does Omega put the money in the box he would have wished to?’
If you know that Omega will take into account that you’re a LessWronger, but also know that he won’t take into account any other information about you (including not taking into account the fact that you know that he knows you’re a LessWronger!), then yes, you should two-box. But that’s quite different from merely knowing that Omega has a certain success rate. Let’s suppose we know that 60% of the time Omega makes the decision it would have wished were it omniscient. Then we get:
If I one-box: 60% chance of $1,000,000, 40% chance of $0 (the opaque box is empty when Omega guesses wrong).
If I two-box: 60% chance of $1,000, 40% chance of $1,001,000.
Then the expected value of one-boxing is $600,000. Expected value of two-boxing is $401,000. So you should one-box in this situation.
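A throwaway script to check the arithmetic (assuming, as above, that 60% of the time Omega makes the decision it would have wished):

    # Expected values against a 60%-accurate Omega.
    accuracy = 0.60

    # One-boxing: $1,000,000 when Omega predicted correctly, $0 otherwise.
    ev_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0

    # Two-boxing: $1,000 when Omega predicted correctly, $1,001,000 otherwise.
    ev_two_box = accuracy * 1_000 + (1 - accuracy) * 1_001_000

    print(ev_one_box, ev_two_box)  # 600000.0 401000.0 -> one-boxing wins

One-boxing keeps its edge until Omega’s accuracy falls to roughly 50.05%, i.e., barely better than a coin flip.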
This makes sense.
You are not listening to me. Suppose this fellow comes by and offers to play a game with you. He asks you to punch him in the face, where he is not allowed to dodge or push your hand. If you hit him, he gives you 1000 dollars, if you miss, you give him 1000 dollars. He also informs you that he has a success rate of over 90% playing this game with randomly sampled strangers. He can show you videos of previous games, etc.
This game is not a philosophical contrivance. There are people who can do this here in physical reality where we both live.
Now, what is the right reaction here? My point is that if your reaction is to not play then you are giving up too soon. To not play is to assume a certain model of the situation and leave it there. In fact, all models are wrong, and there is much to be learned about e.g. how punching works by digging deeper into how this fellow wins this game. To not play and leave it at that is incurious.
Certainly the success rate this fellow has with the punching game has nothing to do with any grand philosophical statement about the lack of physical volition by humans.
Learning about how punching works, rather than winning 1000 dollars, is the entire point of this game.
My answer to Newcomb’s problem is to one-box if and only if Omega is not defeatable and two-box in a way that defeats Omega otherwise. Omega can be non-defeatable only if certain things hold. For example if it is possible to fully simulate in physical reality a given human’s decision process at a particular point in time, and have this simulation be “referentially transparent.”
edit: fixed a typo.
There is a typo here.
But now you’ve laid out your decision-making process, so all Omega needs to do now is to predict whether you think he’s defeatable. ;-)
In general, I expect Omega could actually be implemented just by being able to tell whether somebody is likely to overthink the problem, and if so, predict they will two-box. That might be sufficient to get better-than-chance predictions.
To put it yet another way: if you’re trying to outsmart Omega, that means you’re trying to figure out a rationalization that will let you two-box… which means Omega should predict you’ll two-box. ;-)
You are (merely) fighting the hypothetical.
Let’s try using your martial arts analogy. Consider the following:
You find yourself in a real-world physical confrontation with a ninja who demands your wallet. You have seen this ninja fight several other ninjas, a pirate, and a Jedi in turn, and each time he used “choshi dori” on them, then proceeded to break both of their legs and take their wallet. What do you do?
Punch the ninja in the face.
Shout “I have free will!” and punch the ninja in the face.
Think “I want to open up the ninja and see how his choshi dori works” then try to punch the ninja in the face.
Toss your wallet to the ninja and then run away.
This isn’t a trick question. All the answers that either punch the ninja in the face or take two boxes are wrong. They leave you with two broken legs or an otherwise less desirable outcome.
Sometimes people fight a hypothetical because the hypothetical is problematic. I lean toward two-boxing in Newcomb’s problem, basically because I can’t not fight this hypothetical. My reasoning is more or less as follows. If the being claiming to be Omega actually exists and can in fact instantly model my mental processes, then I’m almost certainly a simulation. One-boxing would reveal that I know that and risk getting me turned off, making the money in the box rather beside the point, so I two-box. If I’m not a simulation, I don’t accept the possibility of Omega existing in the first place, so I two-box. Basically, I think Newcomb’s problem is not a particularly useful hypothetical, because I don’t see it as predictive of decision-making in other circumstances.
It seems to me that if Omega concludes that you are aware that you are in a simulation based on the fact that you take one box then Omega is systematically wrong when reasoning about a broad class of agents that happens to include all the rational agents (and some others). This is rather a significant flaw in an Omega implementation.
For agents with coherent decision making procedures it is equivalent to playing a Prisoner’s Dilemma against a clone of yourself. That is something that feels closer to a real world scenario for some people. It is similarly equivalent to Parfit’s Hitch-hiker when said hitch-hiker is at the ATM.
That’s why I don’t like Newcomb’s problem. In a prisoner’s dilemma with myself, I’d cooperate (I trust me to cooperate with myself). Throwing Omega in confuses this pointlessly. I suspect if people substituted “God” for “Omega” I’d get more sympathy on this.
Are you suggesting that if you are a simulation, two-boxing reduces your risk of being turned off?
If not, I don’t understand your reasoning at all.
If so, I guess I understand your reasoning from that point on (presumably you feel no particular loyalty to the entity you’re simulating?), but I don’t understand how you arrive at that point.
At a minimum, I can’t see how two-boxing could be worse in terms of risk of being turned off. I suppose Omega could think I was trying to be tricky by two-boxing specifically to avoid giving my awareness that I’m being simulated away, but at that point the psychology becomes infinitely recursive. I’ll take my chances while the simulator puzzles that out.
I’m not sure I understand your parenthetical. Does the existence of a simulation imply the existence of an outside entity being simulated?
Neither can I. Nor can I see how it could be better. In fact, I see no likely correlation between one/two-boxing and likelihood of being turned off at all. But if my chances of being turned off aren’t affected by my one/two-box choice, then “One-boxing would [..] risk getting me turned off [..] so I two-box” doesn’t make much sense.
You clearly have a scenario in mind wherein I get turned off if my simulator is aware that I’m aware that I’m being simulated and not otherwise, but I don’t understand why I should expect that.
To be honest, I’ve never quite understood what the difference is supposed to be between the phrases “existing in a simulation” and “existing”.
But regardless, my understanding of “If the being claiming to be Omega actually exists and can in fact instantly model my mental processes, then I’m almost certainly a simulation” had initially been something like “If Omega can perfectly model Dave’s mental processes in order to determine Dave’s likely actions, then Omega will probably create lots of simulated Daves in the process. Since those simulated Daves will think they are Dave, and there are many more of them than there are of Dave, and I think I’m Dave, the odds are (if Omega exists and can do this stuff) that I’m in a simulation.”
All of which also implies that there’s an outside entity being simulated in this scenario, in which case if I feel loyalty to that entity (or otherwise have some basis for caring about how my choices affect it), then whether I get turned off or not isn’t my only concern anyway.
I infer from your question that I misunderstood you in the first place, though, in which case you can probably ignore my parenthetical. Let me back up and ask, instead, why if the being claiming to be Omega actually exists and can in fact instantly model my mental processes, then I’m almost certainly a simulation?
My thinking here is that if a being suddenly shows up and can perfectly model me, despite not having scanned my neural pathways, taken any tissue samples, observed my life history, or gathered any other data whatsoever, then it’s cheating somehow—i.e. I’m a simulation and it has my source code.
This doesn’t require there to be a more real Prismattic one turtle down, as it were. I could be a simulation created to test a set of parameters, not necessarily a model of another entity.
Ah, I see.
OK, thanks for clarifying.
I would like to know more about this “choshi dori”. Do you know of videos or useful write-ups of the technique?
Discussion from a ninjutsu (Bujinkan) forum
Discussion from a general martial arts forum
A fat Russian guy demonstrating the same thing, from a different system.
In general, you can’t make people miss or fall over without touching them unless they know you can make them miss or fall over when touching is allowed.
I don’t think controversies over the Axiom of Choice are similar in the right ways to controversies over Newcomb’s Problem. In pragmatic terms, we know that true two-boxers will willingly take on arbitrarily large disutility (or give up arbitrarily large utility), inasmuch as they’re confident that two-boxing is the right answer. The point can even be put psychologically: To the extent that it’s a psychological fact that humans don’t assign infinite value to being Causal Decision Theorists, the utility (relative to people’s actual values) of following CDT can’t outweigh the bad consequences of consistently two-boxing.
I know of no correspondingly strong demonstration that weakening Choice or eliminating LEM leads demonstrably to irrationality (relative to how the world actually is and, in particular, what preferences people actually have).
This is only the case in a world-view that accepts that Omega cannot be tricked. How do you know Omega cannot be tricked? This view corresponds to a certain view of how choices get made, how the choice making algorithm is simulated, and various properties of this simulation as embodied in physical reality. Absent an actual proof, this view is just that—a view.
Two-boxers aren’t (necessarily!) stupid, they simply adhere to commitments that make it possible to fool Omega.
No, they don’t. You seem to be confused not just about Newcomb’s Problem but also about why the (somewhat educated subset of) people who Two-Box make that choice. They emphatically do not do it because they believe they are able to fool Omega. They expect to lose (ie. not get the $1,000,000).
By hypothesis, this is how it works. Omega can predict your choice with >0.5 accuracy (strictly more than half the time). Regardless of Free Will or Word of God or trickery or Magic.
The whole point of the thought experiment is to analyze a choice under some circumstances where the choice causes the outcomes to have been laid out differently.
If you fight the hypothesis by asserting that some other worldviews grant players Magical Powers From The Beyond to deceive Omega (who is just a mental tool for the thought experiment), then I can freely assert that Omega has Magical Powers From The Outer Further Away Beyond that can neutralize those lesser powers or predict them altogether. Or maybe Omega just has a time machine. Or maybe Omega just fucking can, don’t fight the premises damnit!
And as wedrifid pointed out, this is not even the main reason why the smarter two-boxers two-box. It’s certainly one of the common reasons why the less-smart ones do though, in my experience. (Since they never read the Sequences, aren’t scientists, and never learned to not fight the premises! Ahem.)
I would say Newcomb is all about how you view personal identity. But I’m not sure why this comment was directed at me.
Why would you say personal identity is relevant?
I think the ease with which this community adopts one boxing has to do with us having internalized a computationalist view of the mind and the person. This has a lot in common with the psychological view of person-hood. Basically, we treat agents as decision algorithms which makes it much easier to see how decisions could have non-causal properties.
This is, incidentally, related to the platonism you asked me about. Computationalism leads to a Platonic view of personhood (where who you are is basically an algorithm that can have multiple instantiations). One-boxing falls right out of this theory. The decision you make in Newcomb’s problem is determined by your decision algorithm. Your decision algorithm can be wholly or partly instantiated by Omega, and that’s what allows Omega to predict your behavior.
My problem with thinking of Newcomb’s paradox this way is that it is possible that my decision algorithm will be “try to predict what Omega does, and....” For Omega to predict my behavior by running through my algorithm will involve a self-reference paradox; it may be literally impossible, even in principle, for Omega to predict what I do.
Of course, you can always say “well, maybe you can’t predict what Omega does”, but the problem as normally posed implies that there’s an algorithm for producing the optimal result and that I am capable of running such an algorithm; if there are some algorithms I can’t run, I may be incapable of properly choosing whether to one-box or two-box at all.
Your prediction of what Omega does is just as recursive as Omega’s prediction. But if you actually make a decision at some point, that means your decision algorithm has an escape clause (ow! my brain hurts!), which means Omega can predict what you’re going to do (by modelling all the recursions you did).
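A toy model of that recursion, as an illustration only (nobody’s actual decision theory; the depth cap plays the role of the escape clause):

    def agent(depth=0, max_depth=3):
        if depth >= max_depth:          # the escape clause
            return "one-box"            # fallback decision
        prediction = omega(depth + 1, max_depth)
        return "two-box" if prediction == "one-box" else "one-box"

    def omega(depth=0, max_depth=3):
        # Omega predicts the agent by simulating it, recursions and all
        return agent(depth, max_depth)

    print(agent())   # terminates only because of the escape clause
    print(omega())   # and Omega's simulation necessarily matches it

Both calls terminate, and Omega’s prediction agrees with the agent’s actual choice, precisely because the agent’s recursion bottoms out somewhere Omega can model.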
It doesn’t, actually. The optimal result is two-boxing when Omega thinks you are going to one-box. But since Omega is a God-like supercomputer and you aren’t, that isn’t going to happen. If you happen to have more information about Omega than it has about you, and the hardware to run a simulation of Omega, then you can win like this. But that isn’t the thought experiment.
My point (or the second part of it) is that simply by asking “what should you do to achieve an optimal result”, the question assumes that your reasoning capacity is good enough to compute the optimal result. If computing the optimal result requires being able to simulate Omega, then the original question implicitly assumes that you are able to simulate Omega.
Right, I just don’t agree that the question assumes that.
Where does the question assume that you can compute the optimal result? Newcomb’s Problem simply poses a hypothetical and asks ‘What would you do?’. Some people think they’ve gotten the right answer; others are less confident. But no answer should need to presuppose at the outset that we can arrive at the very best answer no matter what; if it did, that would show the impossibility of getting the right answer, not the trustworthiness of the ‘I can optimally answer this question’ postulate.
I once had a man walk up to me and ask me if I had the correct time. I looked at my watch and told him the time. But it seemed a little odd that he asked for the correct time. Did he think that if he didn’t specify the qualifier “correct”, I might be uncertain whether I should give him the correct or incorrect time?
I think that asking what you would do, in the context of a reasoning problem, carries the implication “figure out the correct choice” even if you are not being explicitly asked what is correct. Besides, the problem is seldom worded exactly the same way each time and some formulations of it do ask for the correct answer.
For the record, I would one-box, but I don’t actually think that finding the correct answer requires simulating Omega. But I can think of variations of the problem where finding the correct answer does require being able to simulate Omega (or worse yet, produces a self-reference paradox without anyone having to simulate Omega.)
See the sequence:
A solvable Newcomb-like problem—part 1 of 3
A solvable Newcomb-like problem—part 2 of 3
A solvable Newcomb-like problem—part 3 of 3
When you suggest someone read three full length posts in response to a single sentence some context is helpful, especially if they weren’t upvoted. Maybe summarize their point or something.
If it was easy to summarize, it wouldn’t have required a three parter sequence. :-)
However, perhaps one relevant point from it is:
For the purposes of Newcomb’s problem, and the rationality of Fred’s decisions, it doesn’t matter how close to that level of power Omega actually is. What matters, in terms of rationality, is the evidence available to Fred about how close Omega is to having that level of power; or, more precisely, the evidence available to Fred relevant to Fred making predictions about Omega’s performance in this particular game.
Since this is a key factor in Fred’s decision, we ought to be cautious. Rather than specify when setting up the problem that Fred knows with a certainty of 1 that Omega does have that power, it is better to specify a concrete level of evidence that would lead Fred to assign a probability of (1 − δ) to Omega having that power, then examine the effect on which option in the box problem it is rational for Fred to pick, as δ tends towards 0.
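A minimal sketch of that exercise (assuming the standard payoffs, and that an Omega without the power just guesses at 50%):

    def accuracy(d):
        # With credence (1 - d) Omega predicts perfectly; with credence d it guesses
        return (1 - d) * 1.0 + d * 0.5

    def ev_one_box(d):
        p = accuracy(d)
        return p * 1000000 + (1 - p) * 1000

    def ev_two_box(d):
        p = accuracy(d)
        return p * 1000 + (1 - p) * 1001000

    for d in (0.5, 0.1, 0.01, 0.001):
        print(d, ev_one_box(d) > ev_two_box(d))   # True for all of these

One-boxing already dominates long before δ gets anywhere near 0.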
To the extent that Newcomb’s Problem is ‘about how you view free will’, people who two-box on Newcomb’s Problem are confused about free will.
This isn’t like constructivist mathematicians being worse at set theory because they reject Choice. It’s closer to a kindergarten child scribbling in crayon on a Math exam and then insisting “other people are bad at Math too, therefore you should give me full marks anyway”.
I don’t think that’s fair (though I also don’t think Newcomb’s problem has anything to do with free will either). The question is whether one-boxing or two-boxing is rational. It’s not fair to respond simply with ‘One-boxing is rational because you get more money’, because two-boxers know one-boxing yields more money. They still say it’s irrational. It would be question begging to try to dismiss this view because rationality is just whatever gets you more money, since that’s exactly what the argument is about.
If you say so. If I learn enough about “choshi dori” to fool the punch-avoiding algorithm and win 1000 dollars, and you don’t play, who is confused? Rationalists are supposed to win, remember, not stick to a particular view of a problem.
Rational agents who play Newcomb’s Problem one box. Rational agents who are in entirely different circumstances make entirely different decisions as determined by said circumstances. They also tend to have a rudimentary capability of noticing the difference between problems.
(a) You are being a dick. I certainly did not insult anyone in this thread.
(b) The isomorphism is exact. The point is granularity. If the guy can avoid the punch 90% of the time (or more precisely guess what your punch decision algorithm will do in response to some inputs 90% of the time), and Omega guesses what you will do correctly 90% of the time, that ought to be sufficient to do the math on expected values, if you want to leave it there.
Or, alternatively, you can try to “open up the agent you are playing against” and try to trick it. It’s certainly possible in the punching game. It may or may not be possible in the game with Omega—the problem doesn’t specify.
If you say “well, rational people do X and not Y, end of story” that’s fine. I am going to make my updates on you and move on.
A typical example of irrational behavior is intransitive preference. As the money pump thread shows, people often don’t actually fall for money pumping, even if they have intransitive preferences. In other words, the map doesn’t fully reflect the territory of what people actually do.
Another example is gwern’s example with correlation and causation. Correlation does not imply causation, says gwern, but if we knew how often it does imply it, we may well be rational to conclude the latter from the former if the odds are good enough. He’s right—but no one does this (I don’t think!).
I used the example of the punching game on purpose—it makes the theoretical situation with Omega practical, as in you can go and try this game if you wanted. My response to trying the game was to learn how it works, rather than give up playing it. This is what people actually do. If your model doesn’t capture it, it’s not a good model.
A broader comment: I do math for a living. The issues of applicability of math to practical problems, and changing math models around is something I think about quite a bit.
It took a non-trivial exertion in the direction of politeness to refrain from answering the rhetorical question “who is confused?” with a literal answer.
Arguable. I would concede at least that you did not say anything insulting that you do not sincerely believe is warranted.
Doing expected value calculations on probabilistic variants of Newcomb’s problem is also old news. And it results in one-boxing unless the probability gets quite close to random guessing. Once again, if you choose a sufficiently different problem than Newcomb’s (such as by choosing an accuracy sufficiently close to 0.5, reducing the payoff ratio, or by positing that you are in fact more intelligent than Omega) then you have failed to respond to a relevant question (or an interesting question, for that matter).
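For reference, the break-even point with the standard payoffs really is barely above chance: setting p·1,000,000 + (1−p)·1,000 equal to p·1,000 + (1−p)·1,001,000 gives p = 1,000,000/1,999,000 ≈ 0.50025, so one-boxing has higher expected value for any accuracy above roughly 50.03%.

    # Accuracy at which one-boxing and two-boxing break even (standard payoffs)
    p = 1000000 / 1999000
    print(p)   # ~0.50025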
Please do. I have likewise updated. Evidence suggests you are ill suited to considering counterfactual problems and unlikely to learn. My only recourse here is to minimize the damage you can do to the local sanity waterline. I’ll leave further attempts at verbal interaction to the half a dozen others who have been attempting to educate you, assuming they have more patience than I.
See.
I would be interested in seeing how philosophers do on tests of analytical versus intuitive reasoning (I forget the name of the test normally used for gauging this) and ability to narrow down hypotheses when the answers are known and easily verifiable.
Cognitive Reflection Test?
That was the one, thanks.
We do pretty well, actually (pdf). (Though I think this is a selection effect, not a positive effect of training.)
Upvoted for understatement.
Out of curiosity, do you think mathematical platonism is true for Tegmark-style reasons? Or some other reason?
Quinean reasons. Tegmark’s position, as far as I can tell, is that all abstract objects are also physically instantiated (or that the only difference between concrete and abstract objects is indexical). Which I think is plausible—but I think abstract objects could be an entirely different sort of thing from concrete, physically existing objects, and still exist.
Do you think abstract objects have anything causally to do with the things (about our universe, or about mathematical practice) that convinced you they exist? My worry is that in the absence of a causal connection, if there weren’t such abstract objects, mathematics would be just as ‘unreasonably effective’. The numbers aren’t doing anything to us to make mathematics work, so their absence wouldn’t deprive us of anything (causally). If a hypothesis can’t predict the data any more reliably than its negation can, then the data can’t be used to support the hypothesis.
In general, I’d like to hear more talk about what sorts of relations these number things enter into with our own world.
No. But that is essentially true by definition. On the other hand, I think all causal claims are claims about abstract facts. E.g. when you say “The match caused the barn to burn to the ground” you’re invoking a causal model of the world and models of the world are abstractions (though obviously they can be represented).
To me this is like hearing “If mass and velocity didn’t exist, Newtonian physics would be just as ‘unreasonably effective’.” Mathematical objects are part of mathematics. The fact that math is unreasonably effective is why we can say mathematical facts are true and mathematical entities exist. Just like the fact that quantum theory is unreasonably effective is the reason we can say that quarks exist. This is true of everyday objects too. We say your chair exists because the chair is the best way of explaining some of your sensory impressions. It just happens that not all entities are particulars embedded in the causal world.
Causal claims may be expressed with abstract models, but that does not mean they are about abstract models. Causal models do not refer to themselves (in which case they would be about the abstract); they refer to whatever real-world thing they refer to.
Maths isn’t unreasonably effective at understanding the world in the sense that any given mathematical truth is automatically also a physical truth. If one mathematical statement (e.g. an inverse square law of gravity) is physically true, an infinity of others (an inverse cube law, an inverse fourth power law...) is automatically false. So when we reify our best theories, we are reifying a small part of maths for reasons which aren’t purely mathematical. There is no path from the effectiveness of some maths at describing the physical universe to the reification of all maths, because physical truth is a selection of the physically applicable parts of maths.
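To illustrate with the example given: Newton’s inverse square law F = Gm1m2/r^2 is (to good approximation) physically true, while the formally equally legitimate alternatives F = Gm1m2/r^3, F = Gm1m2/r^4, and so on are all physically false. The mathematics alone doesn’t tell you which of them the world instantiates.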
Sure, but it’s not true by definition that numbers are abstract. Given your analogy to mass and velocity, and your view that mathematical objects help explain the unreasonable effectiveness of mathematics, it seems to me that it would make much more sense to treat these number things as playing a causal or constitutive role in the makeup of our universe itself, e.g., as universals. Then it would no longer just be a coincidence that our world conveniently accompanies a causally dislocated Realm of correlates for our mathematical discourse.
But it makes a difference to how our world is that objects have velocity and mass. By hypothesis, it doesn’t make a difference to how our world is that there are numbers. (And from this it follows that it wouldn’t make a difference if there weren’t numbers.) If numbers do play a role as worldly ‘difference-makers’ of some special sort, then could you explain more clearly what that role is, since it’s not causal?
I don’t know what that means. If by ‘mathematics’ you have in mind a set of human behaviors or mental states, then mathematics isn’t abstract, so its objects are neither causally nor constitutively in any relation to it. On the other hand, if by ‘mathematics’ you have in mind another abstract object, then your statement may be true, but I don’t see the explanatory relevance to mathematical practice.
Sure, but it’s also why we can assert doctrines like mathematical fictionalism and nominalism. A condition for saying anything at all is that our world exhibit the basic features (property repetition, spatiotemporal structure...) that suffice for there to be worldly quantities at all. I can make sense of the idea that we need to posit something number-like to account in some causality-like way for things like property repetition and spatiotemporal structure themselves. But I still haven’t wrapped my head around why assuming numbers are not difference-makers for the physical world (unlike the presence of e.g. velocity), we should posit them to explain the efficacy of theories whose efficacy they have no impact upon.
The properties of quarks causally impact our quantum theorizing. In a world where there weren’t quarks, we’d be less likely to have the evidence for them that we do. If that isn’t true of mathematics (or, in some ways even worse, if we can’t even coherently talk about ‘mathless worlds’), then I don’t see the parity.
Huh?
I don’t recognize a difference between universals and abstract objects but neither plays a causal role in the make up of the universe.
You’re taking metaphors way too literally. There is no “Realm”.
It’s not that complicated. We have successful theories that posit certain entities. I think believing in those theories requires believing in those entities. Some of those entities figure causally and spatio-temporally in our theories. Some don’t. When you say “in a world where there weren’t quarks” I have no idea what you’re talking about. It appears to be some kind of possible world where the laws of physics are different. But now we’re making statements of fact about abstract objects. It is very difficult to say this about mathematics since math appears likely to work the same way in all possible worlds. But that’s a really strange reason to conclude mathematical objects don’t exist. Numbers and quarks are both theoretically posited entities that we need to explain our world.
As far as I can tell everything you have said is just different forms of “but mathematical objects aren’t causal!”. I readily agree with this but since abstract objects aren’t causal by definition and the entire question is about abstract objects it seems like you’re begging the question.
If in axiomatizing arithmetic we are ontologically committed to saying that 1 exists, 2 exists, 3 exists, etc., then we may say that there are numbers even if it is not axiomatic that 1, 2, 3, etc. are causally inert, nonphysical, etc.
Instead of being a platonist and treating numbers as abstract, you could treat them as occupying spacetime (like immanent universals or tropes), you could treat them as non-spatiotemporal but causally efficacious (like the actual Forms of Plato), or you could assert both. (You could also treat them as useful fictions, but I’ll assume that fictionalism is an error theory of mathematics.)
I think many of the views on which mathematical objects have some causal (or, if you prefer, ‘difference-making’) effect on our mathematical discourse are reasonable. The views on which it’s just a coincidence are not reasonable, and I don’t think abstract numbers can easily escape the ‘just a coincidence’ concern (unless, perhaps, accompanied by a larger Tegmark-style framework).
Let’s take the property ‘electrically charged’ as an example. If charge is a universal, then it’s something wholly and constitutively shared in common between every charged thing; universals occur exactly in the spatiotemporal locations where their instances are, and they are exhausted by these worldly things. So there’s no need to posit anything outside our universe to believe in universals. Redness is, as it were, ‘in’ every red rose. Generally, universals are assumed to play causal roles (it’s because roses instantiate redness that I respond to them as I do), though in principle you could posit a causally inert one. (Such a universal still wouldn’t be abstract, because it would still occur in our universe.)
If electric charge is instead an abstract object, then it exists outside space and time, and has no effect at all on the electrically charged things in our world. (So abstract electric charge serves absolutely no explanatory role in trying to understand how things in our world are charged. However, it might be a useful posit for the nominalist about universals, just to provide a (non-nominalistic) correlate for our talk in terms of abstract nouns like ‘charge’.)
A third option would be to treat electric charge as a Platonic Form, i.e., something outside spacetime but causally responsible for the distribution of charge instances in our universe. (This is confusing, because Platonic Forms aren’t ‘platonic’ in the sense in which mathematical platonism is ‘platonic’. Plato himself was a nominalist about abstract objects, and also a nominalist about universals. His Forms are a totally different thing from the sorts of posits philosophers these days generally entertain.)
A natural way to think of bona-fide ancient Platonism (as opposed to the lowercase-p ‘platonism’ of modern mathematicians) is as cellular automata; for Plato, our universe is an illusion-like epiphenomenon arising from much simpler, lower-level relationships that are not temporal. (Space still plays a role, but as an empty geometry that comes to bear properties only in a derivative way, via its relationships to particular Forms.)
Hm? How do you know I’m taking it too literally? First, how do you know that ‘Realm’ isn’t just part of the metaphor for me? What signals to you when I stop talking about ‘objects’ and start talking about ‘Realms’ that I’ve crossed some line? (Knowing this might help tell me about which parts of your talk you take seriously, and which you don’t.)
Second, as long as we don’t interpret ‘Realm’ spatially, what’s wrong with speaking of a Realm of abstract objects, literally? Physical things occur in spacetime; abstract things exist just as physical ones do, but outside spacetime. Perhaps they occupy their own non-spatial structure, or perhaps they can’t be said to ‘occupy’ anything at all. Either way, we’ve complicated our ontology quite a bit.
I’m still lost here.
I’m not sure I would say Plato’s forms are causally efficacious in the way we understand that concept—but that isn’t really important. Anyway, I have issues with the various alternatives to modern Platonism (immanent realism, trope theory, etc.), though not the time to go into each one. If I were to make a general criticism, I would say all involve different varieties of torturous philosophizing and the invention of new concepts to solve different problems. Platonism is easier and doesn’t cost me anything.
Ah! This seems like a point of traction. I certainly don’t think there is anything coincidental about the fact that mathematical truths tell us things about physical truths. I just don’t think the relationship is causal. I believe causal facts are facts about possible interventions on variables. Since there is no sense in which we can imagine intervening on mathematical objects, I don’t see how that relationship can be causal. But that doesn’t mean it is a coincidence or isn’t sense-making. Mathematics is effective because everything in the natural world is an instantiation of an abstract object. Instantiations have the properties of the abstract object they’re instantiating. This kind of information can be used in a straightforward, explanatory way.
This is a particular way of understanding universals. You need to specify immanent realism. Plenty of philosophers believe in universals as abstract objects.
We think the ones that don’t figure causally or spatio-temporally aren’t actually being posited at all. That’s how you read physics. If you know how to read a map, you know that rivers and mountains on the map are supposed to be in the territory, but lines of latitude and contour lines aren’t.
No, when I say ‘in a world where there weren’t quarks’ I mean in an imagined scenario in which quarks are imagined not to occur. I’m not committed to real non-actual worlds. (If possible worlds were abstract, then they’d have no causal relation to my thoughts about them, so I’d have no reason to think my thoughts about modality were at all on the right track. It’s because modality is epistemic and cognitive and ‘in the head’ that I can reason about hypothetical and counterfactual situations productively.) I’m a modal fictionalist, and a mathematical fictionalist.
In imagined scenarios where we sever the causal links between agents and quarks, e.g., by replacing quarks with some other mechanism that can produce reasoning agents, it seems less likely that the agents would have hypothesized quarks. When we remove abstract numbers from a hypothetical scenario, on the other hand, nothing about the physical world seems to be affected (since, inasmuch as they are causally inert, abstract numbers are in no way responsible for the way our world is).
That suggests that positing numbers is wholly unexplanatory. It might happen to be the case that there are such things, but it can’t do anything to account for the unreasonable effectiveness of mathematics, because of the lack of any causal link.
Abstract objects play a similar role in current physical theories to that which luminiferous aether used to play. The problem with aether isn’t just that it was theoretically dispensable; it was that, even if we weren’t smart enough to figure out how to reformulate our theories without assuming aether, it would still be obvious that the theoretical successes that actually motivated us to form such theories would have arisen in exactly the same way even if there were no aether. Aether doesn’t predict aether-theories like ours, because our aether theory is not based on empirical evidence of aether.
(Aether might still be reasonable to believe in, but only if it deserves a very high prior, such that the lack of direct empirical confirmation is OK. But you haven’t argued for platonism based on high priors, e.g., via a Tegmark hypothesis; you’ve argued for it empirically, based on the real-world successes of mathematicians. That doesn’t work, unless you add some kind of link between the successes and the things you’re positing to explain those successes.)
Modern-day platonists try to make their posits appear ‘metaphysically innocent’ by depriving them of causal roles, but in the process they do away with the only features that could have given us positive reasons to believe such things. It would be like if someone objected to string theory because it’s speculative and lacks evidence, and string theorists responded by replacing strings with non-spatiotemporal, causally inert structures that happen to resemble the physical world’s structures. The whole point of positing strings is that they be causally or constitutively linked to our beliefs about strings, so that the success of our string theory won’t just be a coincidence; likewise, the whole point of reifying mathematical objects should be to treat them as causally or constitutively responsible for the success of mathematics. Without that responsibility, the posit is unmotivated.
What do you mean by “work the same way”? I can pretty easily imagines world where mathematicians consistently fail to get reliable results. There may even be actual planets like that in the physical universe, if genetic drift eroded the mathematical reasoning capabilities of some species, or if there are aliens who rely heavily on math but don’t relate it to empirical reality in sensible ways. If such occurrences don’t falsify platonism, then our own mathematicians’ remarkable successes don’t verify platonism. So what phenomenon is it that you’re really claiming we need platonism to explain? What kind of ‘unreasonable effectiveness’ is relevant?
I can come up with possible worlds without quarks (in a vague, non-specific way). I have no idea what it means to “remove abstract numbers from a hypothetical scenario”. I don’t think abstract objects have modal variation, which is closely related to their not being causal. But insofar as mathematics posits abstract entities and mathematics is explanatory, I don’t think there is anything mysterious about the sense in which abstract objects are explanatory.
I disagree. I think the problem with aether is entirely just that it was theoretically dispensable. And I think the sentences that follow that are just a way of saying “aether was theoretically dispensable”.
Their utility in our explanations is sufficient reason to believe they exist even if their role in those explanations is not causal. Your string theory comparison doesn’t sound like a successful scientific theory.
As in we can’t develop models of possible worlds in which mathematics works differently. This has nothing to do with the abilities of hypothetical mathematicians.
Or we can’t develop models of mathematically possible worlds where maths works differently. Or maybe we can, since we can imagine the AoC being either true or false. Actually, it is easier for realists to imagine maths being different in different possible worlds, since, for realists, the existence of numbers makes an epistemic difference. For them, some maths that is formally valid (deducible from axioms) might be transcendentally incorrect (e.g., the AoC was assumed but is actually false in Plato’s Heaven).
It’s logically possible… like so many things.
Either these non physical things interact with matter (eg the brains of mathematicians) or they don’t. If they do, that is supernaturalism. If they don’t, they succumb to Occam’s razor.
No. They don’t. Stating scientific theories without abstract objects makes theories vastly more complicated when they can even be stated at all.
I didn’t say delete numbers from theories. I meant don’t reify them. There is stuff in theories that you are supposed not to reify, like centres of gravity.
Centers of gravity are an even better example of a real abstract object. I’m definitely not reifying anything according to the dictionary definition of that word: neither numbers nor centers of gravity are at all concrete. They’re abstract.
OK. So, in what sense do these “still exist”, and in what sense are they “entirely different” from concrete objects? And are common-or-garden numbers included?
I think it might be best if you read the above-linked SEP article and some of the related pieces. But here’s the short form:
1. We should believe our best scientific theories.
2. Our best scientific theories make reference to/quantify over abstract objects—mathematical objects like numbers, sets and functions, and non-mathematical abstract objects like types, forces and relations. (Entities that theories refer to/quantify over are called their ontic commitments.)
3. Belief in our best scientific theories means belief in their ontic commitments.
C: We should believe in the existence of the abstract objects in our best scientific theories.
One and two seem uncontroversial. 3 can certainly be quibbled with, and I spent a few years as a nominalist trying to think of ways to paraphrase out, or find reasons to ignore, the abstract objects among science’s ontic commitments. Lots of people have done this and have occasionally demonstrated a bit of success. A guy named Hartry Field wrote a pretty cool book in which he axiomatizes Newtonian mechanics without reference to numbers or functions. But he was still incredibly far away from getting rid of abstract objects altogether (lots of second-order logic) and the resulting theory is totally unwieldy. At some point, personally, I just stopped seeing any reason to deny the existence of abstract objects. Letting them exist costs me nothing. It doesn’t lead to false beliefs and requires far less philosophizing.
The concrete-abstract distinction still gets debated but a good first approximation is that concrete objects can be part of causal chains and are spatio-temporal while abstract objects are not. As for common-or-garden numbers: I see no reason to exclude them.
Quine has a logician’s take on physics—he assumes that the formal expression of a physical law is complete in itself, and therefore seeks a purely formal criterion of ontological commitment, or objecthood. However, physics doesn’t work like that. Physical formalisms have semantic implications that aren’t contained in the formalism itself: for instance, f=ma is mathematically identical to p=qr or a=bc, or whatever. But the f, the m and the a all have their own meaning, their own relation to measurement, as far as a physicist is concerned.
The reasons are already part of the theory, in the sense that the theory is more than the written formalism. Physics students are taught that centers of gravity should not be reified—that is part of the theory. No physics student is taught that any pure number is a reifiable object, and few hit upon the idea themselves.
No philosophizing is required to get rid of abstract objects; one only needs to follow the instructions about what is reifiable that are already part of the informal part of a theory.
I can’t see how you can claim that Platonism doesn’t lead to false beliefs without implicitly claiming omniscience. If abstract entities do not exist, then belief in them is false, by a straightforward correspondence theory. Moreover, if Platonism is true, then some common formulations of physicalism, such as “everything that exists, exists spatio-temporally”, are false. Perhaps you meant Platonism doesn’t lead to false beliefs with any practical upshot, but violations of Occam’s razor generally don’t.
OK, but that means that centres of gravity aren’t abstract: the center of gravity of the Earth has a location. That doesn’t mean they are fully concrete either. Jerrold Katz puts them into a third category, that of the mixed concrete-and-abstract. (His favoured example is the equator.)
If you are going to include centers of gravity, and Katz’s categorisation is correct, then there is still no reason to include fully abstract entities. And there is a reason to exclude centers of gravity, which is the informal semantics of physics.
There’s that word again. I’m not reifying numbers. Abstract objects aren’t “things”. They aren’t concrete. Platonists don’t want to reify centers of gravity or numbers.
Platonism and nominalism don’t differ in anticipations of future sensory experiences. The difference is entirely about theory and methodology. I’ve already replied to the Occam’s razor thing: our theories that include abstract objects are radically simpler and easier to use than the attempts that exclude them.
I’m not sure they have a location in the same way that is generally meant by spatio-temporal: but the exact classification of centers of gravity isn’t that important to me. I’m not claiming to have the details of that figured out.
There has to be some content to Platonism. You seem to be assuming that by “reifying” I must mean “treat as concretely existent”. In context, what I mean is “treat as being existent in whatever sense Platonists think abstracta are existent”. I am not sure what that is, but there has to be something to it, or there is no content to Platonism, and in any case it is not my job to explain it.
I am not sure what you mean by that. The difference is about ontology. If two theories make the same predictions, and one of them has more entities, one of them is multiplying entities unnecessarily.
And I have replied to the reply. The Quinean approach incorrectly takes a scientific theory to be a formalism. It is only methodologically simpler to reify whatever is quantified over, formally, but that approach is too simple because it leaves out the semantics of physics—it doesn’t distinguish between f=ma and p=qr.
Such details are what could bring Platonism down.
Oh come on now. That’s literally what the word means. It’s the dictionary definition. Don’t complain about me assuming things if you’re using words contrary to their dictionary definition and not explaining what you mean.
As I’ve said a thousand times I think all there is to “being existent” is to be an entity quantified over in our best scientific theories. So in this case treating abstract objects as being existent requires scientists to literally do nothing different.
Neither nominalism nor platonism make predictions. Scientific theories make predictions and there are no nominalist scientific theories.
Honestly, I don’t see how this is relevant. I don’t agree that the Quinean approach leaves out the semantics of physics and I don’t see how including the semantics would let you have a simple scientific theory that didn’t reference abstract objects.
Obviously it is possible that there are arguments that could convince me I’m wrong. I’m not obligated to have a preemptive reply to all of them.
The point of Quinean Platonism is to inflate the formal criterion of quantification into an ontological claim of existence, not to deflate existence into a mere formalism.
It requires them to ignore part of the informal interpretation of a theory.
Then one of them is unnecessarily complicated as an ontology. You seem to think Platonism isn’t ontology. I have no idea what you would then think it is.
Whether theories are nominalist, or whatever, depends on how you read them. They don’t have their own interpretation built-in, as I have pointed out a 1000 times.
Theories can include numbers and centers of gravity, and reference them in that sense, and that is not the slightest argument for Platonism. Platonism requires that certain symbols have real referents—which is another sense of “reference”.
Looking at a symbol on a piece of paper doesn’t tell you that the symbol has a real referent. Non-Platonism isn’t the claim that such symbols need to be deleted; it is an interpretation whereby some symbols get reified—have real-world referents—and others don’t. Platonism is not the claim that there are abstract symbols in formalisms; it is an ontological claim about what exists.
Doesn’t this imply that equivalent scientific theories may have quite different implications wrt. what abstract objects exist, depending on how exactly they are formulated (i.e. the extent to which they rely on quantifying over variables)?
Also, given the context, it’s not clear that rejecting theories which rely on second-order and higher-order logics makes sense. The usual justification for dismissing higher-order logics is that you can always translate such theories to first-order logic, and doing so is a way of “staying honest” wrt. their expressiveness. But any such translation is going to affect how variables are quantified over in the theory, hence what ‘commitments’ are made.
I’m not sure what you mean by “equivalent” here. If you mean “makes the same predictions” then yes—but that isn’t really an interesting fact. There are empirically equivalent theories that quantify over different concrete objects too. Usually we can and do adjudicate between empirically equivalent theories using additional criteria: generality, parsimony, ease of calculation etc.
I think Jack meant the sort of modern platonism that philosophers believe, not Tegmark-style platonism. Modern platonism is the position that, as Wikipedia says, abstract objects exist in a sense “distinct both from the sensible external world and from the internal world of consciousness”, while in Tegmark’s platonism, abstract objects exist in the same sense as the external world, and the external world is a mathematical structure.
This seems to be a question of “How are we allowed to use the word ‘exist’ in this conversational context without being confusing?” or “What sort of definition do we care to assign to the word ‘exist’?” rather than an unquoted question of what exists.
In other words, I would be comfortable saying that my office chair and the number 3 both plexist (Platonic-exist), whereas my office chair mexists (materially exists) and 3 does not.
Well, it is certainly the case that knowing how to use the word “exist” is helpful for answering the question “what exists?”. And a consistent application of the usage of the word “exist” is how the modern platonist argument gets its start. We look at universally agreed-upon cases of the usage of “exist”, formulate criteria for something to exist, and apply those criteria. The modern Platonist generally has a criterion along the lines of “an entity exists if and only if it is quantified over by our best scientific theories.” Since our best scientific theories quantify over abstract objects, the modern Platonist concludes that abstract objects exist.
One can deny the criterion and come up with a different one, or deny that abstract objects meet the criterion. But what advantage do these neologisms give us? Does using two different words, plexist and mexist, do anything more than recognize that material objects and abstract objects are two different kinds of things? If so, why isn’t calling one “material” and the other “abstract” sufficient for making that distinction? Presumably we wouldn’t want to come up with a different word for every way something might exist: quark-exist, chair-exist, triangle-exist, three-exist and so on.
Why not just have one word and distinguish entities from each other with adjectives?
Because what we’re saying about our descriptions of things is different. For some nouns, saying that it “exists” means that it has mass and takes up space, can be bumped into and such. For other nouns, “exists” means it can be defined without contradiction, or some such.
The verb “exist” is being used polysemously, even metaphorically — in the manner that “run” is used of sprinters, computer programs, and the dyed color of a laundered shirt. A sprinter, program, and dye are not actually doing anything like the same thing when they “run”, but we use the same word for them. This is a fact about our language, not about the things those three entities are doing. If there were any confusion what we meant, we would not hesitate to say that the program is “executing” and the dye is “spreading” or some such.
The whole Platonist position begins from a definition of “exists” that works equally well for abstract and concrete objects. Your alternative definitions are bad: “has mass and takes up space, can be bumped into and such” isn’t even a necessary set of criteria for a wide variety of concrete objects. Photons and gluons, for instance.
We don’t know that it “works equally well”, since we don’t have omniscient knowledge about the existence of abstract objects. If abstract objects don’t exist, then the quantification criterion is too broad, and therefore does not work.
This straight-forwardly begs the question. I say “What it means to exist is to be quantified over in our best scientific theories”. Your reply is basically “If you’re wrong about the definition then you’re wrong about the definition.”
Your claim was “If we are right about the definition, we are right about the definition”.
I’m yet to see such a definition. Do you mean the “definition” (a postulate, really) such as the one on Wikipedia? (SEP isn’t any better.)
If so, then it’s a separate definition, not something that “works equally well”. Besides, I have trouble understanding why one needs to differentiate between the abstract world and “the world of consciousness”.
It’s just a way of categorising Platonists. Conceptualists think 3 is just a concept in their mind; Platonists don’t.
No, I don’t mean that. I’ve given a definition/criterion like eight times in this thread, including two comments up :-).
In other words, theories about the world generally make reference to entities of various kinds. They say “Some x are y” or “There is an x that y’s”, etc. These x’s are a theory’s ontological commitments. To say “the number 3 is prime” implies 3 exists just as “some birds can fly” implies birds exist. Existence is simply being an entity posited by a true scientific theory. Making anything more out of “existence” gives it a metaphysical woo-ness the concept isn’t entitled to.
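To make the criterion concrete in first-order notation (a sketch, not anything stated explicitly above): regimenting “the number 3 is prime” as ∃x (x = 3 ∧ Prime(x)) wears its existential commitment on its sleeve, just as “some birds can fly” becomes ∃x (Bird(x) ∧ CanFly(x)). Whatever the bound variables must range over for the theory to come out true is what the theory is ontologically committed to.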
What does “Sherlock Holmes is a bachelor” imply?
“Sherlock Holmes is married” is false. But the truth of “Sherlock Holmes is a bachelor” doesn’t imply much about his existence.
A lot of lifting seems to be being done by the “scientific” in “scientific theory”.
“Sherlock Holmes is a bachelor” implies that Sherlock Holmes exists. But when you say that, you’re simply taking part in a fictitious story. It’s storytelling, and everyone knows you’re not trying to describe the universe. If the fiction of Arthur Conan Doyle turned out to be a good theory of something—say it was an accurate description of events that really took place in the late 19th century—and accurately predicted lots of historic discoveries, and Sherlock Holmes and the traits attributed to him were essential for that theory, then we would say Sherlock Holmes existed.
I am rightly shifting the criteria of “what exists” to people who actually seem to know what they’re doing.
That is not uncontentious.
In which case SH is not implied to exist. But I knew that it is a fictitious story. The point was that “the number 3 is prime” doesn’t imply that 3 exists, since properties can be correctly or incorrectly ascribed to fictive entities. There is no obvious implication from a statement’s being true to the entities it involves actually existing. Mathematical formalism and fictivism hold 3 to be no more existent than SH, and are not obviously false.
You are not, because you are ignoring them when they say centres don’t exist. You are trying to read ontology from formalism, without taking into account the interpretation of the formalism, the semantics.
I don’t agree that I am.
I don’t understand what you’re trying to accomplish with this line of reasoning. Obviously, “truths” about fictitious stories do not imply the existence of the entities they quantify over. A fiction is a sort of mutually agreed upon lie. (I don’t agree, btw, that a statement about Sherlock Holmes is true in the same way that “There are white Swans” is true). But it is none the less the case that the assertion “Sherlock Holmes is a bachelor” implies the existence of Sherlock Holmes. It just so happens that everyone plays along with the story. But unlike the stories of Sherlock Holmes I really do believe in quantum mechanics and so take the theory’s word for it that the entities it implies exist actually do exist.
I’m obviously aware there are alternatives to Platonism and that there is plenty of debate. I presumably have reasons for rejecting the alternatives. But instead of actually asserting a positive case for any alternative you seem to just be picking at things and disagreeing with me without explaining why (plus a decent amount of misunderstanding the position). If you’d like to continue this discussion please do that instead of just complaining about my position. It’s unpleasant and not productive.
“Maddy’s first objection to the indispensability argument is that the actual attitudes of working scientists towards the components of well-confirmed theories vary from belief, through tolerance, to outright rejection (Maddy 1992, p. 280). The point is that naturalism counsels us to respect the methods of working scientists, and yet holism is apparently telling us that working scientists ought not have such differential support to the entities in their theories. Maddy suggests that we should side with naturalism and not holism here. Thus we should endorse the attitudes of working scientists who apparently do not believe in all the entities posited by our best theories. We should thus reject P1.”
SEP
Sorry, I should have looked first.
Ah, I see. How is it different from “we define stuff we think about that is not found in nature as ‘abstract’”?
I guess that’s where I am having problems with this approach. “Number 3 is prime” is a well-formed string in a suitable mathematical model, whereas “some birds can fly” is an observation about external world. Basically, it seems to me that the term “exist” is redundant in it. Everything you can talk about “exists” in Platonism, so the term is devoid of meaningful content.
Hmm, where do pink unicorns exist? Not in the external world, so somewhere in the internal world then? Or do they not exist at all? Then what definition of existence do they fail? For example, “our best scientific theories” imply that people can think about pink unicorns as if they were experimental facts. Thus they must exist in our imagination. Which seems uncontroversial, but vacuous and useless.
I can talk about a Highest Prime. Specifically, I can say it doesn’t exist.
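In the same quantificational terms, that denial is itself a statement about numbers: ¬∃x (Prime(x) ∧ ∀y (Prime(y) → y ≤ x)). Quantifying over numbers in order to deny that any of them is a highest prime is exactly the kind of talk the criterion has to interpret.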
Would a Platonist think that a tulpa exists?
I don’t think the hypothesis that there is an independent conscious person existing along with you in your mind (or whatever those people think they’re doing) is the best explanation for the experiences they’re describing. If they just want to use it as shorthand for a set of narratively consistent hallucinations, then I suppose I could be okay with saying a tulpa exists. But either way: I don’t think a tulpa is an abstract object. It’s a mental object, like an imaginary friend or a hallucination. Like any entity, I think the test for existence is how it figures in scientific explanation, but I think Platonists and non-Platonists are logically free to admit or deny tulpas’ existence.
A Tegmarkian would.
Really? The ‘existence’ status of that kind of mental entity seems to be an orthogonal issue to what (I am guessing) you mean by Tegmarkian considerations.
Tegmarkia includes every possible arrangement of physical law, including forms of psycho-physical parallelism whereby what is thought automatically becomes real.
Ah, fair point. I went too far. Still, I’m dubious about conflating the logical and the physical definitions of existence. But hey, go wild, it’s of no consequence.
Have you noticed that, although you and Jack have completely opposite (minimal and maximal) ontologies, you both have the same motivation of avoiding “philosophising”? Well, I suppose “everything exists” and “nothing exists” both impose minimal cognitive burden: if you believe some non-trivial subset exists, you have to put effort into populating it.
I haven’t noticed that Jack has a motivation of “avoiding philosophizing”. And I don’t say that “nothing exists”; I just avoid the term as mostly vacuous, except in specific narrow cases, like math.
I would say pink unicorns do not exist at all. The term, for me, describes a concrete entity that does not exist. “The Unicorn” could be read as type-language, and types are abstract objects, like “the Indian Elephant” or “the Higgs Boson”. But unlike the Indian Elephant, the Unicorn is not something quantified over in zoology, and it is hard to think of a useful scientific process that would ever involve an ontological commitment to unicorns (aside from studying the mythology of unicorns, which is clearly something quite different). “3 is prime” is a well-formed string in a suitable mathematical model, which is to say a system of manipulating symbols. But this particular method of symbol manipulation is utterly essential to the scientific enterprise, whereas it is trivial to construct methods of symbol manipulation that are not (see the toy example below).
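As an illustration of that last point, here is a toy, hypothetical symbol game (the names Sym and Derivable are made up for this sketch). It is every bit as well-formed as arithmetic, but no scientific explanation ever quantifies over its strings:

```lean
-- A toy symbol game: strings over {A, B}, one axiom, one rewrite rule.
-- Perfectly well-formed, but nothing in science quantifies over it.
inductive Sym
  | A
  | B

inductive Derivable : List Sym → Prop
  | start : Derivable [Sym.A]                              -- axiom: "A" is derivable
  | appendB {s} : Derivable s → Derivable (s ++ [Sym.B])   -- rule: may append a "B"

example : Derivable [Sym.A, Sym.B] :=
  Derivable.appendB Derivable.start
```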
Our best scientific theories imply, as a matter of experimental fact, that people can think about pink unicorns. So thoughts about pink unicorns certainly exist. It may also be the case that unicorns possibly exist. But our best scientific theories certainly do not imply the actual existence of unicorns. So pink unicorns do not exist (bracketing modal concerns).
So to conclude: it’s different in that the criterion for existence requires that the entity actually figures in scientific explanation, in our accurate model of the universe, not simply that it is something we can think about.
So, if a theory of pink unicorns was useful to construct an “accurate model of the universe” (presumably not including the part of the universe that is you and me discussing pink unicorns?) these imaginary creatures would be as real as imaginary numbers?
Sure! Another way of saying that: If we discovered pink unicorns on another planet they would be as real as imaginary numbers.
A lot of lifting is being done by “scientific” here. It’s uncontroversial that scientific theories have to be about the real world in some sense, but it doesn’t follow that every term mentioned in them successfully refers to something real.
But if “plexists” means something like “I have an idea of it in my head”, then there is no substance to the claim that 3 plexists. 3 is then no more real than a unicorn.
The number 3 has well-defined properties, such that I can be pretty sure that if I talk about 3 and you talk about 3, we’re talking about the same sort of thing. Sources on unicorns vary rather more broadly in the properties ascribed to them.
I don’t see what that has to do with existence. We could cook up a well-defined fubarosco-juno unicorn.
I agree that this is useful, but it is essential to recognize that these words are just wrapping up our confusion, and that there are other questions still left unanswered once we have answered yours. It can sometimes help to determine which things plexist and which mexist, but we still don’t really know what we mean when we say these, and having words for them can sometimes cause us to forget that. (I suppose I should refer to phlogiston here.) I think Tegmark-platonism is probably a step toward resolving that confusion, but I doubt that any current metaphysical theory has completed the job; I certainly don’t know of any that doesn’t leave me confused.
We can wonder about the nature of concrete objects and the nature of abstract objects without quarreling about whether or not one exists.
I don’t think we really can. The categories of concrete and abstract objects are supposed to carve reality at its joints: I see a chair, I prove a theorem. You can’t really do this sort of analysis without reference to the chairs and the theorems, and if you do make those references, you must have already settled the question of whether a chair is concrete, and a fortiori whether concrete objects exist. The alternative, studying concepts that were originally intended to carve reality at its joints without intending to do so yourself, has historically been unproductive, except to some extent in math.
Right, so accept that both abstract and concrete objects exist. While you’re not doing science, feel free to think about what abstraction is, what “concrete” means, and so on.
I don’t think I’ve been clear. I’m saying that the categories of abstract and concrete objects are themselves generated by experience and are intended to reflect natural categories, and that it’s not useful to think about what abstraction is without thinking about particular abstract objects and what makes us consider them abstract.
Wikipedia’s fine, but I’d rely more on SEP for quick stuff like this. The question of what makes something ‘mathematical’ is a difficult one, but it’s not important for evaluating abstract-object realism. What makes something abstract is just that it’s causally inert and non-spatiotemporal. Tegmark’s MUH asserts things like that. Sparser mathematical platonisms also assert things like that. For present purposes, their salient difference is how they motivate realism about abstract objects, not how they conceive of the nature of our own world.
If I understand this correctly, I disagree. Modern philosophical platonism means different things by ‘abstract’ than Tegmark’s platonism. In philosophical platonism, I accept your definition that something is abstract if it is causally inert and non-spatiotemporal. For Tegmark, this doesn’t really make sense though, since the universe is causal in the same sense that a mathematical model of a dynamical system is causal, and it is spatiotemporal in the same sense that the mathematical concept of Minkowski spacetime is spatiotemporal, since the universe is just (approximately) a dynamical system on (approximately) Minkowski spacetime. The usual definition of an abstract object implies that physical, spatiotemporal objects are not abstract, which contradicts the MUH. I don’t think we really have a precise definition of abstract object that makes sense in Tegmark’s platonism, since something like ‘mathematical structure’ is obviously imprecise.
I don’t think that means that abstract objects in the ordinary sense don’t make sense. It just means that he counts a lot of things as concrete that most people might think of as abstract. We don’t need a definition of ‘mathematical structure’ for present purposes, just mathematically precise definitions of ‘causal’ and ‘spatiotemporal’.
The abstract/concrete distinction is actually a separate ontic axis from the mathematical/physical one. You can have abstract (platonic) physical objects, and concrete mathematical objects.
Example of an abstract physical object: a field
Example of a concrete mathematical object: software
My definitions:
Abstract: universal, timeless, and acausal (true always and everywhere, outside time and space, and not causally connected to concrete things)
Concrete: can be located in space and time, is causal, has moving parts
Mathematical: concerned with categories, logics and models
Physical: concerned with space, time, and matter
My take on modern Platonism is that abstract objects are considered the only real (fundamental) objects. Abstract objects can’t interact with concrete objects, because concrete objects don’t actually exist! Rather, concrete things should be thought of as particular parts (cross-sections, aspects) of abstract things. Abstract objects encompass concrete objects. But the so-called concrete objects are really just categories in our own minds (a feature of the way we have chosen to ‘carve reality at the joints’).
This isn’t modern Platonism.
A program is an abstract object. Particular copies of a program stored on your hard drive are concrete.
Ok, then it’s Geddesian Platonism ;) The easiest solution is to do away with concrete dynamic objects as anything fundamental and just regard reality as a timeless Platonia. I thought that’s more or less what Julian Barbour suggests.
http://en.wikipedia.org/wiki/Platonia_(philosophy)
The actual timeless (abstract) math objects are the mathematical relations making up the algorithm in question. But the particular model or representation of a program stored on a computer can be regarded as a concrete math object. And an instantiated (running) program can be viewed as a concrete math object also (a dynamical system with input, processing, and output).
These analogies are exact:
Space is to physics as categories are to math
Time is to physics as dynamical systems (running programs) are to math
Matter is to physics as data models are to math
And you are unlikely to be able to turn discussing the simple solution with others into a viable career in academic publishing.