Could you humor me with an example? What would the universe look like if “deontology is true” versus a universe where “deontology is false”?
What would it be like if utilitarianism is true? Or the axiom of choice? Or the continuum hypothesis?
I don’t see how a deontological system would prevent number-crunching.
I don’t see how a description of the neurology of moral reasoning tells you how to crunch the numbers—which decision theory you need to use to implement which moral theory to resolve conflicts in the right way.
This statement seems meaningless to me. As in “Utilitarianism is true” computes in my mind the exact same way as “Politics is true” or “Eggs are true”.
The term “utilitarianism” encompasses a broad range of philosophies, but seems more commonly used on lesswrong as meaning roughly some sort of mathematical model for computing the relative values of different situations based on certain value assumptions about the elements of those situations and a thingy called “utility function”.
If this latter meaning is used, “utilitarianism is true” is a complete type error, just like “Blue is true” or “Eggs are loud”. You can’t say that the mathematical formulas and formalisms of utilitarianism are “true” or “false”, they’re just formulas. You can’t say that “x = 5” is “true” or “false”. It’s just a formula that doesn’t connect to anything, and its “x” isn’t related to anything physical—I just pinpointed “x” as a variable, “5” as a number, and then declared them equivalent for the purposes of the rest of this comment.
This is also why I requested an example for deontology. To me, “deontology is true” sounds just like those examples. Neither “utilitarianism is true” nor “deontology is true” corresponds to well-formed statements or sentences or propositions or whatever the “correct” philosophical term is for this.
but seems more commonly used on lesswrong as meaning roughly some sort of mathematical model for computing the relative values of different situations based on certain value assumptions about the elements of those situations and a thingy called “utility function”.
Wait, seriously? That sounds like a gross misuse of terminology, since “utilitarianism” is an established term in philosophy that specifically talks about maximising some external aggregative value such as “total happiness”, or “total pleasure minus suffering”. Utility functions are a lot more general than that (ie. need not be utilitarian, and can be selfish, for example).
Wait, seriously? That sounds like a gross misuse of terminology, since “utilitarianism” is an established term in philosophy that specifically talks about maximising some external aggregative value such as “total happiness”, or “total pleasure minus suffering”.
To an untrained reader, this would seem as if you’d just repeated in different words what I said ;)
I don’t see “utilitarianism” itself used all that often, to be honest. I’ve seen the phrase “in utilitarian fashion”, usually referring more to my description than the traditional meaning you’ve described.
“Utility function”, on the other hand, gets thrown around a lot with a very general meaning that seems to be “If there’s something you’d prefer over maximizing your utility function, then that wasn’t your real utility function”.
I think one important source of confusion is that LWers routinely use concepts that were popularized or even invented by prominent utilitarians (or so I’m guessing, since these concepts come up on the wikipedia page for utilitarianism), and then some reader assumes they’re using utilitarianism as a whole in their thinking, and the discussion drifts from “utility” and “utility function” to “in utilitarian fashion” and “utility is generally applicable” to “utilitarianism is true” and “(global, single-variable-per-population) utility is the only thing of moral value in the universe!”.
Everywhere outside of LW, utilitarianism means a moral theory. It, or some specific variation of it, is therefore capable of being true or false. The point could as well have been made with some less mathematical moral theory. The truth or falsehood of moral theories doesn’t have direct empirical consequences, any more than the truth or falsehood of abstract mathematical claims does. Shut-up-and-calculate doesn’t work here, because one is not using utilitarianism or any other moral theory to predict what will happen; one is using it to plan what one will do.
You can’t say that the mathematical formulas and formalisms of utilitarianism are “true” or “false”, they’re just formulas. You can’t say that “x = 5” is “true” or “false”. It’s just a formula that doesn’t connect to anything, and its “x” isn’t related to anything physical—I just pinpointed “x” as a variable, “5” as a number, and then declared them equivalent for the purposes of the rest of this comment.
And I can’t say that f, m and a mean something in f=ma? When you apply maths, the variables mean something. That’s what application is. In U-ism, the input is happiness, or life-years, or something, and the output is a decision that is put into practice.
This is also why I requested an example for deontology. To me, “deontology is true” sounds just like those examples. Neither “utilitarianism is true” nor “deontology is true” corresponds to well-formed statements or sentences or propositions or whatever the “correct” philosophical term is for this.
I don’t know why you would want to say you have an explanation of morality when you are an error theorist.
I also don’t know why you are an error theorist. U-ism and D-ology are rival answers to the question “what is the right way to resolve conflicts of interest?”. I don’t think that is a meaningless or unanswerable question.
I don’t see why anyone would want to pluck a formula out of the air, number-crunch using it, and then make it policy. Would you walk into a suicide booth because someone had calculated, without justifying the formula used, that you were a burden to society?
I think you are making a lot of assumptions about what I think and believe. I also think you’re coming dangerously close to being perceived as a troll, at least by me.
U-ism and D-ology are rival answers to the question “what is the right way to resolve conflicts of interest?”
Oh! So that’s what they’re supposed to be? Good, then clearly neither—rejoice, people of the Earth, the answer has been found! Mathematically you literally cannot do better than Pareto-optimal choices.
The real question, of course, is how to put meaningful numbers into the game theory formula, how to calculate the utility of the agents, how to determine the correct utility functions for each agent.
My answer to this is that there is already a set of utility functions implemented in each human’s brain, and this set of utility functions can itself be considered a separate sub-game, and if you find solutions to all the problems in this subgame you’ll end up with a reflectively coherent CEV-like (“ideal” from now on) utility function for this one human, and then that’s the utility function you use for that agent in the big game board / decision tree / payoff matrix / what-have-you of moral dilemmas and conflicts of interest.
So now what we need is better insights and more research into what these sets of utility functions look like, how close to completion they are, and how similar they are across different humans.
Note that I’ve never even heard of a single human capable of knowing or always acting on their “ideal utility function”. All sample humans I’ve ever seen also have other mechanisms interfering or taking over which makes it so that they don’t always act even according to their current utility set, let alone their ideal one.
I don’t know why you would want to say you have an explanation of morality when you are an error theorist. (...) I also don’t know why you are an error theorist.
I don’t know what being an “error theorist” entails, but you claiming that I am one seems valid evidence that I am one so far, so sure. Whatever labels float your boat, as long as you aren’t trying to sneak in connotations about me or committing the noncentral fallacy. (notice that I accidentally snuck in the connotation that, if you are committing this fallacy, you may be using “worst argument in the world”)
And I can’t say that f, m and a mean something in f=ma? When you apply maths, the variables mean something. That’s what application is. In U-ism, the input is happiness, or life-years, or something, and the output is a decision that is put into practice.
Sure. Now to re-state my earlier question: which formulations of U and D can have truth values, and what pieces of evidence would falsify each?
The formulation for f=ma is that the force applied to an object is equal to the product of the object’s mass and its acceleration, for appropriate units of measurement. You can experimentally verify this by pushing objects, literally. If for some reason we ran a well-designed, controlled experiment and suddenly more massive objects started accelerating more than less massive objects with the same amount of force, or more generally the physical behavior didn’t correspond to that equation, the equation would be false.
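To make the falsification test concrete, here is a minimal sketch; the record type, field names and tolerance are made up for illustration:

```haskell
-- One measured trial from the hypothetical pushing experiment.
data Trial = Trial { force :: Double, mass :: Double, accel :: Double }

-- A trial is consistent with f = ma if force and mass * acceleration
-- agree within some measurement tolerance.
obeysNewton :: Double -> Trial -> Bool
obeysNewton tolerance t = abs (force t - mass t * accel t) <= tolerance

-- If any well-controlled trial fails the check, the equation is falsified.
falsifiesFequalsMA :: [Trial] -> Bool
falsifiesFequalsMA = any (not . obeysNewton 0.01)
```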
Oh! So that’s what they’re supposed to be? Good, then clearly neither—rejoice, people of the Earth, the answer has been found! Mathematically you literally cannot do better than Pareto-optimal choices
Assuming that everything of interest can be quantified, that the quantities can be aggregated and compared, and assuming that anyone can take any amount of loss for the greater good... i.e. assuming all the stuff that utilitarians assume and that their opponents don’t.
My answer to this is that there is already a set of utility functions implemented in each human’s brain, and this set of utility functions can itself be considered a separate sub-game, and if you find solutions to all the problems in this subgame you’ll end up with a reflectively coherent CEV-like (“ideal” from now on) utility function for this one human, and then that’s the utility function you use for that agent in the big game board / decision tree / payoff matrix / what-have-you of moral dilemmas and conflicts of interest.
No. You can’t leap from “a reflectively coherent CEV-like [..] utility function for this one human” to a solution of conflicts of interest between agents. All you have is a set of exquisite models of individual interests, and no way of combining them, or trading them off.
I don’t know what being an “error theorist” entails,
Strictly speaking, you are a metaethical error theorist. You think there is no meaning to the truth or falsehood of metaethical claims.
Now to re-state my earlier question: which formulations of U and D can have truth values, and what pieces of evidence would falsify each?
Any two theories which have differing logical structure can have truth values, since they can be judged by coherence, etc., and any two theories which make different object-level predictions can likewise have truth values. U and D pass both criteria with flying colours.
And if CEV is not a meaningful metaethical theory, why bother with it? If you can’t say that the output of a grand CEV number crunch is what someone should actually do, what is the point?
The formulation for f=ma is that the force applied to an object is equal to the product of the object’s mass and its acceleration, for appropriate units of measurement. You can experimentally verify this by pushing objects, literally.
I know. And you determine the truth factors of other theories (eg maths) non-empirically. Or you can use a mixture. How were you proposing to test CEV?
Assuming that everything of interest can be quantified, that the quantities can be aggregated and compared, and assuming that anyone can take any amount of loss for the greater good... i.e. assuming all the stuff that utilitarians assume and that their opponents don’t.
That is simply false.
No. You can’t leap from “a reflectively coherent CEV-like [..] utility function for this one human” to a solution of conflicts of interest between agents. All you have is a set of exquisite models of individual interests, and no way of combining them, or trading them off.
Two individual interests: Making paperclips and saving human lives. Prisoners’ dilemma between the two. Is there any sort of theory of morality that will “solve” the problem or do better than number-crunching for Pareto optimality?
Even things that cannot be quantified can be quantified. I can quantify non-quantifiable things with “1” and “0”. Then I can count them. Then I can compare them: I’d rather have Unquantifiable-A than Unquantifiable-B, unless there’s also Unquantifiable-C, so B < A < B+C. I can add any number of unquantifiables and/or unbreakable rules, and devise a numerical system that encodes all my comparative preferences in which higher numbers are better. Then I can use this to find numbers to put on my Prisoners Dilemma matrix or any other game-theoretic system and situation.
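To make that encoding concrete, a toy sketch; the numeric values are arbitrary, chosen only so that the stated ordering B < A < B+C comes out:

```haskell
-- Arbitrary unit-style values attached to "unquantifiable" goods.
valueA, valueB, valueC :: Double
valueA = 2
valueB = 1
valueC = 2

-- The encoded preferences: A alone beats B alone, but B plus C beats A.
orderingHolds :: Bool
orderingHolds = valueB < valueA && valueA < valueB + valueC  -- 1 < 2 < 3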
Relevant claim from an earlier comment of mine, reworded: There does not exist any “objective”, human-independent method of comparing and trading the values within human morality functions.
Game Theory is the science of figuring out what to do in case you have different agents with incompatible utility functions. It provides solutions and formalisms both when comparisons between agents’ payoffs are impossible and when they are possible. Isn’t this exactly what you’re looking for? All that’s left is applied stuff—figuring out what exactly each individual cares about, which things all humans care about so that we can simplify some calculations, and so on. That’s obviously the most time-consuming, research-intensive part, too.
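For concreteness, a hedged sketch of the paperclipper-versus-lifesaver dilemma above, with invented payoff numbers that give the standard Prisoners’ Dilemma structure, plus the Pareto-optimality check; it only rules out dominated outcomes, it doesn’t pick a single “moral” one:

```haskell
data Move = Cooperate | Defect deriving (Eq, Show)

-- Payoffs as (paperclipper utility, lifesaver utility); the numbers are
-- made up purely to reproduce the usual dilemma structure.
payoff :: (Move, Move) -> (Double, Double)
payoff (Cooperate, Cooperate) = (3, 3)
payoff (Cooperate, Defect)    = (0, 5)
payoff (Defect,    Cooperate) = (5, 0)
payoff (Defect,    Defect)    = (1, 1)

-- Outcome p strictly Pareto-dominates q if it is at least as good for
-- both agents and strictly better for at least one.
dominates :: (Double, Double) -> (Double, Double) -> Bool
dominates (a1, b1) (a2, b2) = a1 >= a2 && b1 >= b2 && (a1 > a2 || b1 > b2)

outcomes :: [(Move, Move)]
outcomes = [(x, y) | x <- [Cooperate, Defect], y <- [Cooperate, Defect]]

-- Pareto-optimal outcomes: those no other outcome dominates.
-- Here that is every outcome except (Defect, Defect).
paretoOptimal :: [(Move, Move)]
paretoOptimal =
  [ o | o <- outcomes
      , not (any (\o' -> payoff o' `dominates` payoff o) outcomes) ]
```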
any two theories which make different object-level predictions can likewise have truth values.
Would you mind giving three examples of cases where Deontology being true gives different predictions than Consequentialism being true? This is another extension of the original question posed, which you’ve been dodging.
Would you mind giving three examples of cases where Deontology being true gives different predictions than Consequentialism being true?
Deontology says you should push the fat man under the trolley, and various other examples that are well known in the literature.
This is another extension of the original question posed, which you’ve been dodging.
I have not been “dodging” it. The question seemed to frame the issue as one of being able to predict events that are observed passively. That misses the point on several levels. For one thing, it is not the case that empirical proof is the only kind of proof. For another, no moral theory “does” anything unless you act on it. And that includes CEV.
Deontology says you should push the fat man under the trolley, and various other examples that are well known in the literature.
This would still be the case, even if Deontology was false. This leads me to strongly suspect that whether or not it is true is a meaningless question. There is no test I can think of which would determine its veracity.
Actually, deontology says you should NOT push the fat man. Consequentialism says you should.
This would still be the case, even if Deontology was false
It is hard to make sense of that. If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indifferent with regard to your rather firm intuition about this case.
There is no test I can think of which would determine its veracity.
Once again I will state: moral theories are tested by their ability to match moral intuition, and by their internal consistency, etc.
Once again, I will ask: how would you test CEV?
Compute CEV. Then actually do learn and become this better person that was modeled to compute the CEV. See if you prefer the CEV or any other possible utility function.
Asymptotic estimations could also be made IFF utility function spaces are continuous and can be mapped by similarity: If as you learn more true things from a random sample and ordering of all possible true things you could learn, gain more brain computing power, and gain a better capacity for self-reflection, your preferences tend towards CEV-predicted preferences, then CEV is almost certainly true.
If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indifferent with regard to your rather firm intuition about this case.
D(x) and U(y) make opposite recommendations. x and y are different intuitions from different people, and these intuitions may or may not match the actual morality functions inside the brains of their proponents.
I can find no measure of which recommendation is “correct” other than inside my own brain somewhere. This directly implies that it is “correct for Frank’s Brain”, not “correct universally” or “correct across all humans”.
Based on this reasoning, if I use my moral intuition to reason about the fat man trolley problem using D() and find the conclusion correct within context, then D is correct for me, and the same goes for U(). So let’s try it!
My primary deontological rule: When there exist counterfactual possible futures where the expected number of deaths is lower than in all other possible futures, always take the course of action which leads to this less-expected-deaths future. (In simple words: Do Not Kill, where inaction that leads to death is considered Killing).
A train is going to hit five people. There is a fat man whom I can push down to save the five people with 90% probability. (let’s just assume I’m really good at quickly estimating this kind of physics within this thought experiment)
If I don’t push the fat man, 5 people die with 99% probability (shit happens). If I push the fat man, 1 person dies with 99% probability (shit happens), and the 5 others still die with 10% probability.
Expected deaths of not-pushing: 4.95.
Expected deaths of pushing: 1.49.
I apply the deontological rule. That fat man is doomed.
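The arithmetic above, spelled out; the probabilities are the made-up ones from the thought experiment:

```haskell
-- Expected deaths under each option, using the stated probabilities.
expectedDeathsNotPushing :: Double
expectedDeathsNotPushing = 5 * 0.99               -- 4.95

expectedDeathsPushing :: Double
expectedDeathsPushing = 1 * 0.99 + 5 * 0.10       -- 0.99 + 0.50 = 1.49

-- The rule "take the action with the fewest expected deaths" says push.
ruleSaysPush :: Bool
ruleSaysPush = expectedDeathsPushing < expectedDeathsNotPushing
```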
Now let’s try the utilitarian vers—Oh wait. That’s already what we did. We created a deontological rule that says to pick the highest expected utility action, and that’s also what utilitarianism tells me to do.
See what I mean when I say there is no meaningful distinction? If you calibrate your rules consistently, all “moral theories” I see philosophers arguing about produce the same output. Equal output, in fact.
So to return to the earlier point: D(trolley, Frank’s Rule) is correct where trolley is the problem and Frank’s Rule is the rule I find most moral. U(trolley, Frank’s Utility Function) is also correct. D(trolley, ARBITRARY RULE PICKED AT RANDOM FROM ALL POSSIBLE RULES) is incorrect for me. Likewise, U(trolley, ARBITRARY UTILITY FUNCTION PICKED AT RANDOM FROM ALL POSSIBLE UTILITY FUNCTIONS) is incorrect for me.
This means that U(trolley) and D(trolley) cannot be “correct” or “incorrect”, because in typical Functional Programming fashion, U(trolley) and D(trolley) return curried functions, that is, they return a function of a certain type which takes a rule (for D) or a utility function (for U) and returns a recommendation based on this for the trolley problem.
To reiterate some previous claims of mine, in which I am fairly confident, in the above jargon: There do not exist any single-parameter U(x) or D(x) functions that return a single truth-valuable recommendation without any rules or utility functions as input. All deontological systems rely on the rules supplied to them, and all utilitarian systems rely on the utility functions supplied to them. There exists a utility function equivalent to each possible rule, and there exists a rule equivalent to each possible utility function.
The rules or the utility functions are inside human brains. And either can be expressed in terms of the other interchangeably—which we use is merely a matter of convenience as one will correspond to the brain’s algorithm more easily than the other.
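Here is a minimal sketch of the currying point, with every type a hypothetical stand-in (a “rule” here is just a yes/no judgment on the dilemma, which is of course a simplification):

```haskell
type Dilemma = String
type Rule = Dilemma -> Bool              -- "is the proposed action permitted?"
type UtilityFunction = Dilemma -> Double -- utility of the proposed action
data Verdict = Push | DontPush deriving (Eq, Show)

d :: Dilemma -> Rule -> Verdict
d dilemma rule = if rule dilemma then Push else DontPush

u :: Dilemma -> UtilityFunction -> Verdict
u dilemma utility = if utility dilemma > 0 then Push else DontPush

-- Applying D or U to the trolley problem alone yields no verdict at all,
-- just a function still waiting for a rule or a utility function.
dTrolley :: Rule -> Verdict
dTrolley = d "trolley"

uTrolley :: UtilityFunction -> Verdict
uTrolley = u "trolley"
```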
My primary deontological rule: When there exist counterfactual possible futures where the expected number of deaths is lower than in all other possible futures, always take the course of action which leads to this less-expected-deaths future. (In simple words: Do Not Kill, where inaction that leads to death is considered Killing).
I suspect that defining deontology as obeying the single rule “maximize utility” would be a non-central redefinition of the term, something most deontologists would find unacceptable.
The simplified “Do Not Kill” formulation sounds very much like most deontological rules I’ve heard of (AFAIK, “Do not kill.” is a bread-and-butter standard deontological rule). It also happens to be a rule which I explicitly attempt to implement in my everyday life in exactly the format I’ve exposed—it’s not just a toy example, this is actually my primary “deontological” rule as far as I can tell.
And to me there is no difference between “Pull the trigger” or “Remain immobile” when both are extremely likely to lead to the death of someone. To me, both are “Kill”. So if not pulling the trigger leads to one death, and pulling the trigger leads to three deaths, both options are horrible, but I still really prefer not pulling the trigger.
So if for some inexplicable reason it’s really certain that the fat man will save the workers and that there is no better solution (this is an extremely unlikely proposition, and by default I would not trust myself to have searched the whole space of possible options), then I would prefer pushing the fat man.
If I considered standing by and watching people die because I did nothing to not be “Kill”, then I would enforce that rule, and my utility function would also be different. And then I wouldn’t push the fat man either way, whether I calculate it with utility functions or whether I follow the rule “Do Not Kill”.
I agree that it’s non-central, but IME most “central” rules I’ve heard of are really simple wordings that obfuscate the complexity and black boxes that are really going on in the human brain. At the base level, “do not kill” and “do not steal” are extremely complex. I trust that this part isn’t controversial except in naive philosophical journals of armchair philosophizing.
to me there is no difference between “Pull the trigger” or “Remain immobile” when both are extremely likely to lead to the death of someone. To me, both are “Kill”.
I believe that this is where many deontologists would label you a consequentialist.
most “central” rules I’ve heard of are really simple wordings that obfuscate the complexity and black boxes that are really going on in the human brain. At the base level, “do not kill” and “do not steal” are extremely complex. I trust that this part isn’t controversial except in naive philosophical journals of armchair philosophizing.
There are certainly the complex edge cases, like minimum necessary self-defense and such, but in most scenarios the application of the rules is pretty simple. Moreover, “inaction = negative action” is quite non-standard. In fact, even if I believe that in your example pushing the fat man would be the “right thing to do”, I do not alieve it (i.e. I would probably not do it if push came to shove, so to speak).
I believe that this is where many deontologists would label you a consequentialist.
With all due respect to all parties involved, if that’s how it works I would label the respective hypothetical individuals who would label me that “a bunch of hypocrites”. They’re no less consequentialist, in my view, since they hide behind words the fact that they have to make the assumption that pulling a trigger will lead to the consequence of a bullet coming out of it which will lead to the complex consequence of someone’s life ending.
I wish I could be more clear and specific, but it is difficult to discuss and argue all the concepts I have in mind as they are not all completely clear to me, and the level of emotional involvement I have in the whole topic of morality (as, I expect, do most people) along with the sheer amount of fun I’m having in here are certainly not helping mental clarity and debiasing. (yes, I find discussions, arguments, debates etc. of this type quite fun, most of the time)
In fact, even if I believe that in your example pushing the fat man would be the “right thing to do”, I do not alieve it (i.e. I would probably not do it if push came to shove, so to speak).
I’m not sure it’s just a question of not alieving it. There are many good reasons not to believe evidence that this will work, and even more good reasons to believe there is probably a better option, and many reasons why it could be extremely detrimental to you in the long term to push down a fat man onto train tracks, and if push comes to shove it might end up being the more rational action in a real-life situation similar to the thought experiment.
Actually, deontology says you should NOT push the fat man. Consequentialism says you should.
I’m quite aware of that.
It is hard to make sense of that. If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indifferent with regard to your rather firm intuition about this case.
At this point, I simply must tap out. I’m at a loss at how else to explain what you seem to be consistently missing in my questions, but DaFranker is doing a very good job of it, so I’ll just stop trying.
moral theories are tested by their ability to match moral intuition,
Really? This is news to me. I guess Moore was right all along...
TL;DR: You should push the fat man if and only if X. You should not push the fat man if and only if ¬X.
X can be derived into a rule to use with D(X’) to compute whether you should push or not. X can also be derived into a utility function to use with U(X’) to compute whether you should push or not. The answer in either case doesn’t depend on U or D, it depends on your derivation of X, which itself depends on X.
This is shown by the assumption that for all reasonable a, there exists a g(a) where U(a) = D(g(a)). Since, by their ambiguity and vague definitions, both U() and D() seem to cover an infinite domain and are the equivalent of Turing-complete, this assumption seems very natural.
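A toy illustration of the U(a) = D(g(a)) assumption; it is nearly definitional (g just wraps the utility function a into the rule “take the option a ranks highest”), so it illustrates the claim rather than proving it:

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

type Utility option = option -> Double
type Rule option = [option] -> option   -- a rule picks the permitted action

-- The utilitarian recommendation: maximize a.
uRecommend :: Utility option -> [option] -> option
uRecommend a = maximumBy (comparing a)

-- g derives a deontological rule from a utility function.
g :: Utility option -> Rule option
g a = uRecommend a

-- The deontological recommendation: follow the rule.
dRecommend :: Rule option -> [option] -> option
dRecommend rule = rule

-- By construction, uRecommend a opts == dRecommend (g a) opts for any opts.
```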
I don’t know why you would want to say you have an explanation of morality when you are an error theorist.
Error theorists are cognitivists. The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist); he is precisely asking you what it would mean for U or D to have truth values.
I also don’t know why you are an error theorist. U-ism and D-ology are rival answers to the question “what is the right way to resolve conflicts of interest?”.
When they are both trying to give accounts of what it would mean for something to be “right”, it seems this question becomes pretty silly.
The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist)
I’m not sure at all what those mean. If they mean that I think there don’t exist any sentences about morality that can have truth values, that is false. “DaFranker finds it immoral to coat children in burning napalm” is true, with more confidence than I can reasonably express (I’m about as certain of this belief about my moral system as I am in things like 2 + 2 = 4).
However, the sentence “It is immoral to coat children in burning napalm” returns an error for me.
You could say I consider the function “isMoral?” to take as input a morality function, a current worldstate, and an action to be applied to this worldstate that one wants to evaluate whether it is moral or not. A wrapper function “whichAreMoral?” exists to check more complicated scenarios with multiple possible actions and other fun things.
See, if the “morality function” input parameter is omitted, the function just crashes. If the current worldstate is omitted, the morality function gets run with empty variables and 0s, which means that the whole thing is meaningless and not connected to anything in reality.
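A sketch of what I mean, with every name and type hypothetical (and the “?” dropped so it parses); the point is that leaving out the morality-function argument isn’t a “false” result, it simply doesn’t type-check:

```haskell
type WorldState = [(String, Double)]     -- toy stand-in for a world state
data Action = CoatChildInNapalm | PushFatMan deriving (Eq, Show)

-- A morality function scores an action applied to a world state.
type MoralityFunction = WorldState -> Action -> Double

isMoral :: MoralityFunction -> WorldState -> Action -> Bool
isMoral moralityFn world action = moralityFn world action >= 0

-- The wrapper for more complicated scenarios with several candidate actions.
whichAreMoral :: MoralityFunction -> WorldState -> [Action] -> [Action]
whichAreMoral moralityFn world = filter (isMoral moralityFn world)
```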
he is precisely asking you what it would mean for U or D to have truth values.
Yes.
In the example above, my “isMoral?” function can only return a truth-value when you give it inputs and run the algorithm. You can’t look at the overall code defining the function and give it a truth-value. That’s just completely meaningless. My current understanding of U and D is that they’re fairly similar to this function.
When they are both trying to give accounts of what it would mean for something to be “right”, it seems this question becomes pretty silly.
I agree somewhat. To use another code analogy, here I’ve stumbled upon the symbol “Right”, and then I look back across the code for this discussion and I can’t find any declarations or “Right = XXXXX” assignment operations. So clearly the other programmers are using different linked libraries that I don’t have access to (or they forgot that “Right” doesn’t have a declaration!)
If they mean that I think there don’t exist any sentences about morality that can have truth values, that is false. “DaFranker finds it immoral to coat children in burning napalm” is true, with more confidence than I can reasonably express
An error theorist could agree with that. It isn’t really a statement about morality, it is about belief. Consider “Eudoximander considers it prudent to refrain from rape, so as to avoid being torn apart by vengeful harpies”.
That isn’t a true statement about harpies.
See, if the “morality function” input parameter is omitted, the function just crashes. If the current worldstate is omitted, the morality function gets run with empty variables and 0s, which means that the whole thing is meaningless and not connected to anything in reality.
And it doesn’t matter what the morality function is? Any mapping from input to output will do?
You can’t look at the overall code defining the function and give it a truth-value. That’s just completely meaningless.
So is it meaningless that
some simulations do (not) correctly model the simulated system
some commercial software does (not) fulfil a real-world business requirement
some algorithms do (not) correctly compute mathematical functions
some games are (not) entertaining
some trading software does (not) return a profit
I agree somewhat. To use another code analogy, here I’ve stumbled upon the symbol “Right”, and then I look back across the code for this discussion and I can’t find any declarations or “Right = XXXXX” assignment operations.
It’s worth noting that natural language is highly contextual. We know in broad terms what it means to get a theory “right”. That’s “right” in one context. In this context we want a “right” theory of morality, that is a theoretically-right theory of the morally-right.
And it doesn’t matter what the morality function is? Any mapping from input to output will do?
Yes.
I have a standard library in my own brain that determines what I think looks like a “good” or “useful” morality function, and I only send morality functions that I’ve approved into my “isMoral?” function. But “isMoral?” can take any properly-formatted function of the right type as input.
And I have no idea yet what it is that makes certain morality functions look “good” or “useful” to me. Sometimes, to try and clear things up, I try to recurse “isMoral?” on different parameters.
e.g.: “isMoral? defaultMoralFunc w1 (isMoral? newMoralFunc w1 BurnBabies)” would tell me whether my default morality function considers moral the evaluation and results of whether the new morality function considers burning babies moral or not.
An error theorist could agree with that. It isn’t really a statement about morality, it is about belief. Consider “Eudoximander considers it prudent to refrain from rape, so as to avoid being torn apart by vengeful harpies”. That isn’t a true statement about harpies.
I’m not sure what you mean by “it isn’t really a statement about morality, it is about belief.”
Yes, I have the belief that I consider it immoral to coat children in napalm. This previous sentence is certainly a statement about my beliefs. “I consider it immoral to coat children in napalm” certainly sounds like a statement about my morality though.
“isMoral? DaFranker_IdealMoralFunction Universe coatChildInNapalm = False” would be a good way to put it.
It is a true statement about my ideal moral function that it considers it better not to coat a child in burning napalm. The declaration and definition of “better” here are inside the source code of DaFranker_IdealMoralFunction, and I don’t have access to that source code (it’s probably not even written yet).
Also note that “isMoral? MoralIntuition w a” =/= “isMoral? [MoralFunctionsInBrain] w a” =/= “isMoral? DominantMoralFunctionInBrain w a” =/= “isMoral? CurrentMaxMoralFunctionInBrain w a” =/= “isMoral? IdealMoralFunction w a”.
In other words, when one thinks of whether or not to coat a child in burning napalm, many functions are executed in the brain, and some of them may disagree on the betterness of some details of the situation. One of those functions usually takes the lead and becomes what the person actually does when faced with that situation (this dominance is dynamically computed at runtime, so at each evaluation the result may be different if, for instance, one’s moral intuitions have changed the internal power balance within the brain). One could in theory make up a function that represents the pareto-optimal compromise of all those functions, and all of this is reviewed in very synthesized form by the conscious mind to generate a Moral Intuition. All of which is very different from what would happen if the conscious mind could read the source code for the set of moral functions in the brain and edit things to be the way it prefers, recursively, towards a unique ideal moral function.
So is it meaningless that
some simulations do (not) correctly model the simulated system
some commercial software does (not) fulfil a real-world business requirement
some algorithms do (not) correctly compute mathematical functions
some games are (not) entertaining
some trading software does (not) return a profit
Not quite, but those are different questions. Is the trading software itself “true” or “false”? No. Is my approximate model of how the trading software works “true” or “false”? No.
Is it “true” or “false” that my approximate model of how the trading software works is better than competing alternatives? Yes, it is true (or false). Is it “true” or “false” that the trading software returns a profit? Yes, it is.
See, there’s an element of context that lets us ask true/false questions about things. “Politics is true” is meaningless. “Politics is the most efficient method of managing a society” is certainly not meaningless, and with more formal definitions of “efficient” and “managing” one could even produce experimental tests to determine by observations whether that is true or false.
However, when one says “utilitarianism is true”, I just don’t know what observations to make. “utilitarianism accurately models DaFranker’s ideal moral function” is much better—I can compare the two, I can try to refine what is meant by “utilitarianism” here exactly, and I could in principle determine whether this is true or false.
“as per utilitarianism’s claim, what is morally best is to maximize the sum of x where each x is a measure u() of each agent’s ideal morality function” sounds like it also makes sense. But then you run into a snag while trying to evaluate the truth-value of this. What is “morally best” here? According to what principle? It seems this “morally best” depends on the reader, or myself, or some other point of reference.
We could decide that this “morally best” means that it is the optimal compromise between all of our morality functions, the optimal way to resolve conflicts of interest with the least total loss in utility and highest total gain.
We could assign a truth-value to that; compute all possible forms of social agreement about morality, all possible rule systems, and if the above utilitarian claim is among the pareto-optimal choices on the game payoff matrix, then the statement is true, and if it is strictly dominated by some other outcome, then it is false. Of course, actually running this computation would require solving all kinds of problems and getting various sorts of information that I don’t even know how to find ways to solve or get. And it might require a Halting Oracle or some form of hypercomputer.
At any rate, I don’t think “as per utilitarianism’s claim, it is pareto-optimal across all humans to maximize the sum of x where each x is a measure u() of each agent’s ideal morality function” is what you meant by “utilitarianism is true”.
Error theorists are cognitivists. The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist); he is precisely asking you what it would mean for U or D to have truth values.
By comparing them to abstract formulas, which don’t have truth values … as opposed to equations, which do, and to applied maths, which does, and theories, which do...
When they are both trying to give accounts of what it would mean for something to be “right”, it seems this question becomes pretty silly.
I have no idea why you would say that. Belief in objective morality is debatable but not silly in the way belief in unicorns is.
The question of what is right is also about the most important question there is.
By comparing them to abstract formulas, which don’t have truth values … as opposed to equations, which do, and to applied maths, which does, and theories, which do...
My main point is that I haven’t the slightest clue as to what kind of applied math or equations U and D could possibly be equivalent to. That’s why I was asking you, since you seem to know.
I am not assuming they have to be implemented mathematically. And I thought your problem was that you didn’t have a procedure for identifying correct theories of morality?
By comparing them to abstract formulas, which don’t have truth values … as opposed to equations, which do, and to applied maths, which does, and theories, which do...
I’ll concede I may have misinterpreted them. I guess we shall wait and see what DF has to say about this.
I have no idea why you would say that. Belief in objective morality is debatable but not silly in the way belief in unicorns is. The question of what is right is also about the most important question there is.
I never said belief in “objective morality” was silly. I said that trying to decide whether to use U or D by asking “which one of these is the right way to resolve conflicts of interest?” when accepting one or the other necessarily changes variables in what you mean by the word ‘right’ and also, maybe even, the word ‘resolve’, sounds silly.
I said that trying to decide whether to use U or D by asking “which one of these is the right way to resolve conflicts of interest?” when accepting one or the other necessarily changes variables in what you mean by the word ‘right’ and also, maybe even, the word ‘resolve’, sounds silly.
That would be the case if “right way” meant “morally-right way”. But metaethical theories aren’t compared by object-level moral rightness, exactly. They can be compared by coherence, practicality, etc. If metaethics were just obviously unsolvable, someone would have noticed.
That would be the case if “right way” meant “morally-right way”.
That’s just how I understand that word. ‘Right for me to do’ and ‘moral for me to do’ refer to the same things, to me. What differs in your understanding of the terms?
If metaethics were just obviously unsolvable, someone would have noticed.
Remind me what it would look like for metaethics to be solved?
That’s just how I understand that word. ‘Right for me to do’ and ‘moral for me to do’ refer to the same things, to me. What differs in your understanding of the terms?
eg. mugging an old lady is the instrumentally-right way of scoring my next hit of heroin, but it isn’t morally-right.
Remind me what it would look like for metaethics to be solved?
Unsolved-at-time-T doesn’t mean unsolvable. Ask Andrew Wiles.
eg. mugging an old lady is the instrumentally-right way of scoring my next hit of heroin, but it isn’t morally-right
Just like moving queen to E3 is instrumentally-right when playing chess, but not morally right. The difference is that in the chess and heroin examples, a specific reference point is being explicitly plucked out of thought-space (Right::Chess; Right::Scoring my next hit), which doesn’t refer to me at all. Mugging an old woman may or may not be moral, but deciding that solely based on whether or not it helps me score heroin is a category error.
Unsolved-at-time-T doesn’t mean unsolvable. Ask Andrew Wiles.
I’m no good at math, but it’s my understanding that there was an idea of what it would look like for someone to solve Fermat’s Problem even before someone actually did so. I’m skeptical that ‘solving metaethics’ is similar in this respect.
Just like moving queen to E3 is instrumentally-right when playing chess, but not morally right. The difference is that in the chess and heroin examples, a specific reference point is being explicitly plucked out of thought-space (Right::Chess; Right::Scoring my next hit), which doesn’t refer to me at all. Mugging an old woman may or may not be moral, but deciding that solely based on whether or not it helps me score heroin is a category error.
You seem to have interpreted that the wrong way round. The point was that there are different and incompatible notions of “right”. Hence “the right theory of what is right to do” is not circular, so long as the two “rights” mean different things. Which they do (theoretical correctness and moral obligation, respectively).
I’m no good at math, but it’s my understanding that there was an idea of what it would look like for someone to solve Fermat’s Problem even before someone actually did so. I’m skeptical that ‘solving metaethics’ is similar in this respect.
No one knows what a good explanation looks like? But then why even bother with things like CEV, if we can’t say what they are for?
What would it be like if utilitarianism is true? Or the axiom of choice? Or the continuum hypothesis?
I don’t see how a description of the neurology of moral reasoning tells you how to crunch the numbers—which decision theory you need to use to implement which moral theory to resolve conflicts in the right way.
This statement seems meaningless to me. As in “Utilitarianism is true” computes in my mind the exact same way as “Politics is true” or “Eggs are true”.
The term “utilitarianism” encompasses a broad range of philosophies, but seems more commonly used on lesswrong as meaning roughly some sort of mathematical model for computing the relative values of different situations based on certain value assumptions about the elements of those situations and a thinghy called “utility function”.
If this latter meaning is used, “utilitarianism is true” is a complete type error, just like “Blue is true” or “Eggs are loud”. You can’t say that the mathematical formulas and formalisms of utilitarianism are “true” or “false”, they’re just formulas. You can’t say that “x = 5″ is “true” or “false”. It’s just a formula that doesn’t connect to anything, and that “x” isn’t related to anything physical—I just pinpointed “x” as a variable, “5” as a number, and then declared them equivalent for the purposes of the rest of this comment.
This is also why I requested an example for deontology. To me, “deontology is true” sounds just like those examples. Neither “utilitarianism is true” or “deontology is true” correspond to well-formed statements or sentences or propositions or whatever the “correct” philosophical term is for this.
Wait, seriously? That sounds like a gross misuse of terminology, since “utilitarianism” is an established term in philosophy that specifically talks about maximising some external aggregative value such as “total happiness”, or “total pleasure minus suffering”. Utility functions are a lot more general than that (ie. need not be utilitarian, and can be selfish, for example).
To an untrained reader, this would seem as if you’d just repeated in different words what I said ;)
I don’t see “utilitarianism” itself used all that often, to be honest. I’ve seen the phrase “in utilitarian fashion”, usually referring more to my description than the traditional meaning you’ve described.
“Utility function”, on the other hand, gets thrown around a lot with a very general meaning that seems to be “If there’s something you’d prefer than maximizing your utility function, then that wasn’t your real utility function”.
I think one important source of confusion is that LWers routinely use concepts that were popularized or even invented by primary utilitarians (or so I’m guessing, since these concepts come up on the wikipedia page for utilitarianism), and then some reader assumes they’re using utilitarianism as a whole in their thinking, and the discussion drifts from “utility” and “utility function” to “in utilitarian fashion” and “utility is generally applicable” to “utilitarianism is true” and “(global, single-variable-per-population) utility is the only thing of moral value in the universe!”.
Everywhere outside of LW , utilitarianism means a a moral theory. It, or some specific variation of it is therefore capable of being true or false. The point could have as well been made with some less mathematical moral theory. The truth or falsehood or moral theories doesn’t have direct empirical consequences, and more than the truth or falsehood of abstract mathematical claims. Shut-up-and-calculate doesn’t work here, because one is not using utilitarianism or any other moral theury for predictingwhat will happen, one is using to plan what one will do.
And I can’t say that f, ma and a mean something in f=ma? When you apply maths, the variables mean something. That’s what application is. In U-ism, the input it happiness, or lifeyears, or soemthig, and the output is a decision that is put into practice.
I don’t know why you would want to say you have an explanation of morality when you are an error theorist..
I also don’t know why you are an error theorist. U-ism and D-ology are rival answers to the question “what is the right way to resolve conflicts of interest?”. I don’t think that is a meaningless or unanswerable question. I don’t see why anyone would want to pluck a formula out of the air, number-crunch using it, and then make it policy. Would you walk into a suicide booth because someone had calculated, without justifying the formula used that you were a burden to society?
I think you are making a lot of assumptions about what I think and believe. I also think you’re coming dangerously close to being perceived as a troll, at least by me.
Oh! So that’s what they’re supposed to be? Good, then clearly neither—rejoice, people of the Earth, the answer has been found! Mathematically you literally cannot do better than Pareto-optimal choices.
The real question, of course, is how to put meaningful numbers into the game theory formula, how to calculate the utility of the agents, how to determine the correct utility functions for each agent.
My answer to this is that there is already a set of utility functions implemented in each humans’ brains, and this set of utility functions can itself be considered a separate sub-game, and if you find solutions to all the problems in this subgame you’ll end up with a reflectively coherent CEV-like (“ideal” from now on) utility function for this one human, and then that’s the utility function you use for that agent in the big game board / decision tree / payoff matrix / what-have-you of moral dilemmas and conflicts of interest.
So now what we need is better insights and more research into what these sets of utility functions look like, how close to completion they are, and how similar they are across different humans.
Note that I’ve never even heard of a single human capable of knowing or always acting on their “ideal utility function”. All sample humans I’ve ever seen also have other mechanisms interfering or taking over which makes it so that they don’t always act even according to their current utility set, let alone their ideal one.
I don’t know what being an “error theorist” entails, but you claiming that I am one seems valid evidence that I am one so far, so sure. Whatever labels float your boat, as long as you aren’t trying to sneak in connotations about me or committing the noncentral fallacy. (notice that I accidentally snuck in the connotation that, if you are committing this fallacy, you may be using “worst argument in the world”)
Sure. Now to re-state my earlier question: which formulations of U and D can have truth values, and what pieces of evidence would falsify each?
The formulation for f=ma is that the force applied to an object is equal to the product of the object’s mass and its acceleration, for certain appropriate units of measurements. You can experimentally verify this by pushing objects, literally. If for some reason we ran a well-designed, controlled experiment and suddenly more massive objects started accelerating more than less massive objects with the same amount of force, or more generally the physical behavior didn’t correspond to that equation, the equation would be false.
Assuming that everything of interest can be quantified,that the quantities can be aggregated and compated, and assuming that anyone can take any amount of loss for the greater good...ie assuming all the stuff that utiliatarins assume and that their opponents don’t.
No. You cant leap from “a reflectively coherent CEV-like [..] utility function for this one human” to a solution of conflicts of interest between agents. All you have is a set of exquisite model of individual interests, and no way of combining them, or trading them off.
Strictly speaking, you are a metaethical error theorist. You think there is no meaning to the truth of falsehood of metathical claims.
Any two theoris which have differing logical structure can have truth values, since they can be judged by coherence, etc, and any two theories which make differnt objectlevle predictions can likelwise have truth values. U and D pass both criteria with flying colours.
And if CEV is not a meaningul metaethical theory, why bother with it? If you can’t say that the output of a grand CEV number crunch is what someone should actually do, what is the point?
I know. And you detemine the truth factors of other theories (eg maths) non-empirically. Or you can use a mixture. How were you porposing to test CEV?
That is simply false.
Two individual interests: Making paperclips and saving human lives. Prisoners’ dilemma between the two. Is there any sort of theory of morality that will “solve” the problem or do better than number-crunching for Pareto optimality?
Even things that cannot be quantified can be quantified. I can quantify non-quantifiable things with “1” and “0″. Then I can count them. Then I can compare them: I’d rather have Unquantifiable-A than Unquantifiable-B, unless there’s also Unquantifiable-C, so B < A < B+C. I can add any number of unquantifiables and/or unbreakable rules, and devise a numerical system that encodes all my comparative preferences in which higher numbers are better. Then I can use this to find numbers to put on my Prisoners Dilemma matrix or any other game-theoretic system and situation.
Relevant claim from an earlier comment of mine, reworded: There does not exist any “objective”, human-independent method of comparing and trading the values within human morality functions.
Game Theory is the science of figuring out what to do in case you have different agents with incompatible utility functions. It provides solutions and formalisms both when comparisons between agents’ payoffs are impossible and when they are possible. Isn’t this exactly what you’re looking for? All that’s left is applied stuff—figuring out what exactly each individual cares about, which things all humans care about so that we can simplify some calculations, and so on. That’s obviously the most time-consuming, research-intensive part, too.
Would you mind giving three examples of cases where Deontology being true gives different predictions than Consequentialism being true? This is another extension of the original question posed, which you’ve been dodging.
Deontology says you should push the fat man under the trolley, and various other examples that are well known in the literature.
I have not been “dodging” it. The question seemed to frame the issue as one of being able to predict events that are observed passively. That misses the point on several levels. For one thing, it is not the case that empirical proof is the only kind of proof. For another, no moral theory “does” anything unless you act on it. And that includes CEV.
This would still be the case, even if Deonotology was false. This leads me to strongly suspect that whether or not it is true is a meaningless question. There is no test I can think of which would determine its veracity.
Actually you deontology says you should NOT push the fat man . Consequentialism says you should.
it is hard to make sense of that. If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indiffernt with regard to your rather firm intuition about this case.
Once again I will state: moral theories are tested by their ability to match moral intuition, and by their internal consistency, etc.
Once again, I will ask: how would you test CEV?
Compute CEV. Then actually do learn and become this better person that was modeled to compute the CEV. See if you prefer the CEV or any other possible utility function.
Asymptotic estimations could also be made IFF utility function spaces are continuous and can be mapped by similarity: If as you learn more true things from a random sample and ordering of all possible true things you could learn, gain more brain computing power, and gain a better capacity for self-reflection, your preferences tend towards CEV-predicted preferences, then CEV is almost certainly true.
D(x) and U(y) make opposite recommendations. x and y are different intuitions from different people, and these intuitions may or may not match the actual morality functions inside the brains of their proponents.
I can find no measure of which recommendation is “correct” other than inside my own brain somewhere. This directly implies that it is “correct for Frank’s Brain”, not “correct universally” or “correct across all humans”.
Based on this reasoning, if I use my moral intuition to reason about the the fat man trolley problem problem using D() and find the conclusion correct within context, then D is correct for me, and the same goes for U(). So let’s try it!
My primary deontological rule: When there exist counterfactual possible futures where the expected number of deaths is lower than all other possible futures, always take the course of action which leads to this less-expected-deaths future. (In simple words: Do Not Kill, where inaction that leads to death is considered Killing).
A train is going to hit five people. There is a fat man which I can push down to save the five people with 90% probability. (let’s just assume I’m really good at quickly estimating this kind of physics within this thought experiment)
If I don’t push the fat man, 5 people die with 99% probability (shit happens). If I push the fat man, 1 person dies with 99% probabilty (shit happens), and the 5 others still die with 10% probability.
Expected deaths of not-pushing: 4.95.
Expected deaths of pushing: 1.49.
I apply the deontological rule. That fat man is doomed.
Now let’s try the utilitarian vers—Oh wait. That’s already what we did. We created a deontological rule that says to pick the highest expected utility action, and that’s also what utilitarianism tells me to do.
See what I mean when I say there is no meaningful distinction? If you calibrate your rules consistently, all “moral theories” I see philosophers arguing about produce the same output. Equal output, in fact.
So to return to the earlier point: D(trolley, Frank’s Rule) is correct where trolley is the problem and Frank’s is the rules I find most moral. U(trolley, Frank’s Utility Function) is also correct. D(trolley, ARBITRARY RULE PICKED AT RANDOM FROM ALL POSSIBLE RULES) is incorrect for me. Likewise, U(trolley, ARBITRARY UTILITY FUNCTION PICKED AT RANDOM FROM ALL POSSIBLE UTILITY FUNCTION) is incorrect for me.
This means that U(trolley) and D(trolley) cannot be “correct” or “incorrect, because in typical Functional Programming fashion, U(trolley) and D(trolley) return curried functions, that is, they return a function of a certain type which takes a rule (for D) or a utility function (for U) and returns a recommendation based on this for the trolley problem.
To reiterate some previous claims of mine, in which I am fairly confident, in the above jargon: there is no single-parameter U(x) or D(x) function that returns a single truth-valuable recommendation without any rules or utility functions as input. All deontological systems rely on the rules supplied to them, and all utilitarian systems rely on the utility functions supplied to them. There exists a utility function equivalent to each possible rule, and there exists a rule equivalent to each possible utility function.
The rules or the utility functions are inside human brains. And either can be expressed in terms of the other interchangeably—which we use is merely a matter of convenience as one will correspond to the brain’s algorithm more easily than the other.
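A toy sketch of the currying point, for concreteness. Everything here (the Problem type, the example rule, the example utility function) is invented for illustration; the only point is that `d trolley` and `u trolley` are partially applied functions still waiting for a rule or a utility function, and have no truth value on their own:

```haskell
data Problem = Problem { deathsIfAct :: Double, deathsIfWait :: Double }
data Recommendation = Act | Wait deriving Show

type Rule    = Problem -> Recommendation
type Utility = Problem -> Double   -- utility of acting minus utility of waiting

-- D takes a problem, then a rule; U takes a problem, then a utility function.
d :: Problem -> Rule -> Recommendation
d problem rule = rule problem

u :: Problem -> Utility -> Recommendation
u problem utility = if utility problem > 0 then Act else Wait

trolley :: Problem
trolley = Problem { deathsIfAct = 1.49, deathsIfWait = 4.95 }

main :: IO ()
main = do
  -- Only once a rule or a utility function is supplied do we get a recommendation.
  print (d trolley (\p -> if deathsIfAct p < deathsIfWait p then Act else Wait))  -- Act
  print (u trolley (\p -> deathsIfWait p - deathsIfAct p))                        -- Act
```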
I suspect that defining deontology as obeying the single rule “maximize utility” would be a non-central redefinition of the term, something most deontologists would find unacceptable.
The simplified “Do Not Kill” formulation sounds very much like most deontological rules I’ve heard of (AFAIK, “Do not kill” is a bread-and-butter standard deontological rule). It also happens to be a rule which I explicitly attempt to implement in my everyday life in exactly the form I’ve described—it’s not just a toy example; this is actually my primary “deontological” rule as far as I can tell.
And to me there is no difference between “Pull the trigger” or “Remain immobile” when both are extremely likely to lead to the death of someone. To me, both are “Kill”. So if not pulling the trigger leads to one death, and pulling the trigger leads to three deaths, both options are horrible, but I still really prefer not pulling the trigger.
So if for some inexplicable reason it’s really certain that the fat man will save the workers and that there is no better solution (this is an extremely unlikely proposition, and by default I would not trust myself to have searched the whole space of possible options), then I would prefer pushing the fat man.
If I considered standing by and watching people die through my inaction not to be “Kill”, then I would enforce that rule, and my utility function would also be different. And then I wouldn’t push the fat man either way, whether I calculate it with utility functions or whether I follow the rule “Do Not Kill”.
I agree that it’s non-central, but IME most “central” rules I’ve heard of are really simple wordings that obfuscate the complexity and black boxes that are really going on in the human brain. At the base level, “do not kill” and “do not steal” are extremely complex. I trust that this part isn’t controversial except in naive philosophical journals of armchair philosophizing.
I believe that this is where many deontologists would label you a consequentialist.
There are certainly the complex edge cases, like minimum necessary self-defense and such, but in most scenarios the application of the rules is pretty simple. Moreover, “inaction = negative action” is quite non-standard. In fact, even if I believe that in your example pushing the fat man would be the “right thing to do”, I do not alieve it (i.e. I would probably not do it if push came to shove, so to speak).
With all due respect to all parties involved, if that’s how it works I would label the respective hypothetical individuals who would label me that “a bunch of hypocrites”. They’re no less consequentialist, in my view, since they hide behind words the fact that they have to make the assumption that pulling a trigger will lead to the consequence of a bullet coming out of it which will lead to the complex consequence of someone’s life ending.
I wish I could be more clear and specific, but it is difficult to discuss and argue all the concepts I have in mind as they are not all completely clear to me, and the level of emotional involvement I have in the whole topic of morality (as, I expect, do most people) along with the sheer amount of fun I’m having in here are certainly not helping mental clarity and debiasing. (yes, I find discussions, arguments, debates etc. of this type quite fun, most of the time)
I’m not sure it’s just a question of not alieving it. There are many good reasons not to believe evidence that this will work, even more good reasons to believe there is probably a better option, and many reasons why it could be extremely detrimental to you in the long term to push a fat man onto train tracks; if push came to shove, not pushing might well end up being the more rational action in a real-life situation similar to the thought experiment.
I’m quite aware of that.
At this point, I simply must tap out. I’m at a loss at how else to explain what you seem to be consistently missing in my questions, but DaFranker is doing a very good job of it, so I’ll just stop trying.
Really? This is news to me. I guess Moore was right all along...
You have proof that you should push the fat man?
Lengthy breakdown of my response.
TL;DR: You should push the fat man if and only if X. You should not push the fat man if and only if ¬X.
X can be derived into a rule to use with D(X’) to compute whether you should push or not. X can also be derived into a utility function to use with U(X’) to compute whether you should push or not. The answer in either case doesn’t depend on U or D; it depends on your derived X’, which itself depends on X.
This follows from the assumption that for every reasonable a there exists a g(a) such that U(a) = D(g(a)). Since, by their ambiguity and vague definitions, both U() and D() seem to cover an infinite domain and to be the equivalent of Turing-complete, this assumption seems very natural.
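Here is a minimal sketch of that assumed equivalence, with U as “pick the option your utility function scores highest” and g turning any utility function into the corresponding rule. All names and types are hypothetical scaffolding for the argument:

```haskell
data Option = Push | DontPush deriving (Eq, Show)

type UtilityFn = Option -> Double
type Rule      = [Option] -> Option

-- The utilitarian evaluator: pick the option with the highest utility.
u :: UtilityFn -> [Option] -> Option
u a = foldr1 (\x y -> if a x >= a y then x else y)

-- The deontological evaluator: just follow the rule.
d :: Rule -> [Option] -> Option
d rule = rule

-- g turns a utility function into the equivalent rule.
g :: UtilityFn -> Rule
g a = u a

main :: IO ()
main = do
  let a opt = if opt == Push then -1.49 else -4.95   -- negated expected deaths
  print (u a [Push, DontPush] == d (g a) [Push, DontPush])  -- True
```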
Error theorists are cognitivists. The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist); he is precisely asking you what it would mean for U or D to have truth values.
When they are both trying to give accounts of what it would mean for something to be “right”, it seems this question becomes pretty silly.
I’m not sure at all what those mean. If they mean that I think there don’t exist any sentences about morality that can have truth values, that is false. “DaFranker finds it immoral to coat children in burning napalm” is true, with more confidence than I can reasonably express (I’m about as certain of this belief about my moral system as I am of things like 2 + 2 = 4).
However, the sentence “It is immoral to coat children in burning napalm” returns an error for me.
You could say I consider the function “isMoral?” to take as input a morality function, a current worldstate, and an action to be applied to this worldstate that one wants to evaluate whether it is moral or not. A wrapper function “whichAreMoral?” exists to check more complicated scenarios with multiple possible actions and other fun things.
See, if the “morality function” input parameter is omitted, the function just crashes. If the current worldstate is omitted, the morality function gets run with empty variables and 0s, which means that the whole thing is meaningless and not connected to anything in reality.
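A minimal sketch of that signature, assuming made-up types (and spelling it isMoral, since Haskell identifiers can’t contain “?”). In this sketch, omitting the morality function doesn’t crash at runtime; the type checker simply refuses the call, which is the analogous failure:

```haskell
newtype WorldState = WorldState String
data Action        = CoatChildInNapalm | DonateToCharity deriving (Eq, Show)

type MoralityFunction = WorldState -> Action -> Bool

-- isMoral needs a morality function, a worldstate, and an action to return anything.
isMoral :: MoralityFunction -> WorldState -> Action -> Bool
isMoral moralityFn world action = moralityFn world action

-- A made-up example morality function.
exampleMoralityFn :: MoralityFunction
exampleMoralityFn _ action = action /= CoatChildInNapalm

main :: IO ()
main = print (isMoral exampleMoralityFn (WorldState "here and now") CoatChildInNapalm)  -- False
```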
Yes.
In the example above, my “isMoral?” function can only return a truth-value when you give it inputs and run the algorithm. You can’t look at the overall code defining the function and give it a truth-value. That’s just completely meaningless. My current understanding of U and D is that they’re fairly similar to this function.
I agree somewhat. To use another code analogy, here I’ve stumbled upon the symbol “Right”, and then I look back across the code for this discussion and I can’t find any declarations or “Right = XXXXX” assignment operations. So clearly the other programmers are using different linked libraries that I don’t have access to (or they forgot that “Right” doesn’t have a declaration!)
An error theorist could agree with that. It isn’t really a statement about morality; it is about belief. Consider “Eudoximander considers it prudent to refrain from rape, so as to avoid being torn apart by vengeful harpies”. That isn’t a true statement about harpies.
And it doesn’t matter what the morality function is? Any mapping from input to output will do?
So is it meaningless that
some simulations do (not) correctly model the simulated system
some commercial software does (not) fulfil a real-world business requirement
some algorithms do (not) correctly compute mathematical functions
some games are (not) entertaining
some trading software does (not) return a profit?
It’s worth noting that natural language is highly contextual. We know in broad terms what it means to get a theory “right”. That’s “right” in one context. In this context we want a “right” theory of morality, that is, a theoretically-right theory of the morally-right.
Yes.
I have a standard library in my own brain that determines what I think looks like a “good” or “useful” morality function, and I only send morality functions that I’ve approved into my “isMoral?” function. But “isMoral?” can take any properly-formatted function of the right type as input.
And I have no idea yet what it is that makes certain morality functions look “good” or “useful” to me. Sometimes, to try and clear things up, I try to recurse “isMoral?” on different parameters.
e.g.: “isMoral? defaultMoralFunc w1 (isMoral? newMoralFunc w1 BurnBabies)” would tell me whether my default morality function considers moral the evaluation and results of whether the new morality function considers burning babies moral or not.
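A toy sketch of that recursive call: the outer morality function is judging a verdict (the Bool produced by the inner call) rather than a plain action. Every name and behaviour here (defaultMoralFunc approving condemnations, and so on) is a made-up stand-in:

```haskell
data World  = W1
data Action = BurnBabies

-- A "morality function" judges some thing (an action, or another verdict) in a world.
type MoralFunc thing = World -> thing -> Bool

isMoral :: MoralFunc thing -> World -> thing -> Bool
isMoral f w x = f w x

newMoralFunc :: MoralFunc Action
newMoralFunc _ BurnBabies = False

-- Toy behaviour: the default function endorses judgements that condemn what they judge.
defaultMoralFunc :: MoralFunc Bool
defaultMoralFunc _ verdict = not verdict

main :: IO ()
main = print (isMoral defaultMoralFunc W1 (isMoral newMoralFunc W1 BurnBabies))  -- True
```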
I’m not sure what you mean by “it isn’t really a statement about morality, it is about belief.”
Yes, I have the belief that I consider it immoral to coat children in napalm. This previous sentence is certainly a statement about my beliefs. “I consider it immoral to coat children in napalm” certainly sounds like a statement about my morality though.
“isMoral? DaFranker_IdealMoralFunction Universe coatChildInNapalm = False” would be a good way to put it.
It is a true statement about my ideal moral function that it considers it better not to coat a child in burning napalm. The declaration and definition of “better” here are inside the source code of DaFranker_IdealMoralFunction, and I don’t have access to that source code (it’s probably not even written yet).
Also note that “isMoral? MoralIntuition w a” =/= “isMoral? [MoralFunctionsInBrain] w a” =/= “isMoral? DominantMoralFunctionInBrain w a” =/= “isMoral? CurrentMaxMoralFunctionInBrain w a” =/= “isMoral? IdealMoralFunction w a”.
In other words, when one thinks of whether or not to coat a child in burning napalm, many functions are executed in the brain, and some of them may disagree on the betterness of some details of the situation. One of those functions usually takes the lead and becomes what the person actually does when faced with that situation (this dominance is dynamically computed at runtime, so at each evaluation the result may be different if, for instance, one’s moral intuitions have shifted the internal power balance within the brain). One could in theory make up a function that represents the pareto-optimal compromise of all those functions. All of this is reviewed, in very synthesized form, by the conscious mind to generate a Moral Intuition. And all of these are very different from what would happen if the conscious mind could read the source code for the set of moral functions in the brain and edit things to be the way it prefers, recursing towards a unique ideal moral function.
Not quite, but those are different questions. Is the trading software itself “true” or “false”? No. Is my approximate model of how the trading software works “true” or “false”? No.
Is it “true” or “false” that my approximate model of how the trading software works is better than competing alternatives? Yes, it is true (or false). Is it “true” or “false” that the trading software returns a profit? Yes, it is.
See, there’s an element of context that lets us ask true/false questions about things. “Politics is true” is meaningless. “Politics is the most efficient method of managing a society” is certainly not meaningless, and with more formal definitions of “efficient” and “managing” one could even produce experimental tests to determine by observations whether that is true or false.
However, when one says “utilitarianism is true”, I just don’t know what observations to make. “utilitarianism accurately models DaFranker’s ideal moral function” is much better—I can compare the two, I can try to refine what is meant by “utilitarianism” here exactly, and I could in principle determine whether this is true or false.
“as per utilitarianism’s claim, what is morally best is to maximize the sum of x where each x is a measure u() of each agent’s ideal morality function” sounds like it also makes sense. But then you run into a snag while trying to evaluate the truth-value of this. What is “morally best” here? According to what principle? It seems this “morally best” depends on the reader, or myself, or some other point of reference.
We could decide that this “morally best” means that it is the optimal compromise between all of our morality functions, the optimal way to resolve conflicts of interest with the least total loss in utility and highest total gain.
We could assign a truth-value to that: compute all possible forms of social agreement about morality, all possible rule systems, and if the above utilitarian claim is among the pareto-optimal choices on the game payoff matrix, then the statement is true; if it is strictly dominated by some other outcome, then it is false. Of course, actually running this computation would require solving all kinds of problems and getting various sorts of information that I don’t even know how to find ways to solve or get. It might also require a Halting Oracle or some form of hypercomputer.
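For the pareto-optimality part specifically, the check itself is simple even if enumerating the outcomes is the impossible part. A toy sketch, with invented payoff vectors (one number per agent):

```haskell
type Outcome = [Double]   -- one payoff per agent

-- a dominates b if everyone does at least as well and someone does strictly better.
dominates :: Outcome -> Outcome -> Bool
dominates a b = and (zipWith (>=) a b) && or (zipWith (>) a b)

-- An outcome is pareto-optimal if no other outcome dominates it.
paretoOptimal :: [Outcome] -> Outcome -> Bool
paretoOptimal allOutcomes o = not (any (`dominates` o) allOutcomes)

main :: IO ()
main = do
  let outcomes = [[3, 3], [4, 1], [1, 4], [2, 2]]
  -- [2,2] is strictly dominated by [3,3]; the other three are pareto-optimal.
  print (map (paretoOptimal outcomes) outcomes)  -- [True,True,True,False]
```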
At any rate, I don’t think “as per utilitarianism’s claim, it is pareto-optimal across all humans to maximize the sum of x where each x is a measure u() of each agent’s ideal morality function” is what you meant by “utilitarianism is true”.
By comparing them to abstract formulas, which don’t have truth values … as opposed to equations, which do, and to applied maths, which does, and theories, which do...
I have no idea why you would say that. Belief in objective morality is debatable but not silly in the way belief in unicorns is. The question of what is right is also about the most important question there is.
My main point is that I haven’t the slightest clue as to what kind of applied math or equations U and D could possibly be equivalent to. That’s why I was asking you, since you seem to know.
I am not assuming they have to be implemented mathematically. And I thought your problem was that you didn’t have a procedure for identifying correct theories of morality?
I’ll concede I may have misinterpreted them. I guess we shall wait and see what DF has to say about this.
I never said belief in “objective morality” was silly. I said that trying to decide whether to use U or D by asking “which one of these is the right way to resolve conflicts of interest?” when accepting one or the other necessarily changes variables in what you mean by the word ‘right’ and also, maybe even, the word ‘resolve’, sounds silly.
That would be the case if “right way” meant “morally-right way”. But metaethical theories aren’t compared by object-level moral rightness, exactly. They can be compared by coherence, practicality, etc. If metaethics were just obviously unsolvable, someone would have noticed.
That’s just how I understand that word. ‘Right for me to do’ and ‘moral for me to do’ refer to the same things, to me. What differs in your understanding of the terms?
Remind me what it would look like for metaethics to be solved?
E.g. mugging an old lady is the instrumentally-right way of scoring my next hit of heroin, but it isn’t morally-right.
Unsolved-at-time-T doesn’t mean unsolvable. Ask Andrew Wiles.
Just like moving queen to E3 is instrumentally-right when playing chess, but not morally right. The difference is that in the chess and heroin examples, a specific reference point is being explicitly plucked out of thought-space (Right::Chess; Right::Scoring my next hit), which doesn’t refer to me at all. Mugging an old woman may or may not be moral, but deciding that solely based on whether or not it helps me score heroin is a category error.
I’m no good at math, but it’s my understanding that there was an idea of what it would look like for someone to solve Fermat’s Last Theorem even before someone actually did so. I’m skeptical that ‘solving metaethics’ is similar in this respect.
You seem to have interpreted that the wrong way round. The point was that there are different and incompatible notions of “right”. Hence “the right theory of what is right to do” is not circular, so long as the two “rights” mean different things. Which they do (theoretical correctness and moral obligation, respectively).
No one knows what a good explanation looks like? But then why even bother with things like CEV, if we can’t say what they are for?
I think you’ve just repeated his question.