I’m not offering a higher price, since it seems cost-ineffective compared to other opportunities, but I’m curious what your price would be for a year of 98% veganism. (The 98% means that 2 non-vegan meals per month are tolerated.)
YMMV, but the argument that did it for me was Mylan Engel Jr.’s argument, as summarized and nicely presented here.
On the assumption that the figures given by the OP are approximately right, with my adjustments for personal values, it would be cost-effective for me to pay you $18 (via BTC) to go from habitual omnivory to 98% ovo-lacto-vegetarianism for a year, or $24 (via BTC) to go from habitual omnivory to 98% veganism for a year, both prorated by month, of course with some modicum of evidence that the change was real. Let me know if you want to take up the offer.
Or somewhat more realistically:
Is there a way in Scheme for a function to detect whether it is being run by an (eval x) form? Or to use a macro to do something like change all the 'Ds to 'Cs in the parent function?
If so, then an amusing case would be a PoltergeistBot, which only ever Defects, but if another bot tries to evaluate it, it “possesses” the function and forces it to Cooperate.
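For illustration of what I mean, here is a rough sketch in Python rather than Scheme. The simulator function name is made up, and whether a real tournament harness would be detectable this way at all is exactly my question:

```python
import inspect

def poltergeist_bot(opponent_source):
    """Only ever Defects when actually played, but 'haunts' any bot that
    tries to evaluate it: if it detects that it is being run from inside
    another bot's simulation, the evaluation comes out as Cooperate."""
    # 'simulate' is a made-up name for whatever function an opposing bot
    # would use to run our code; a real harness may not be detectable at all.
    for frame in inspect.stack()[1:]:
        if frame.function == "simulate":
            return "C"
    return "D"
```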
Well, yes, but then, as stated earlier, I think desirism bites the bullet on “dust speck”, too, given more dust specks. For a quick Fermi estimate, if I suppose that the fly-buzz-scenario takes about 5 seconds and is 1/1000th as strong (in some sense) as the desire not to be tortured for 5 seconds, then the number of people at which the fly-buzz-scenarios outweigh the torture is about a half trillion.
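Spelling out that Fermi arithmetic (taking a year as roughly 3.15 × 10^7 seconds):

$$N \approx \frac{50\ \text{yr} \times 3.15\times10^{7}\ \text{s/yr}}{5\ \text{s}} \times 1000 \approx 3\times10^{11},$$

i.e. a few hundred billion people, which is where the “about a half trillion” comes from.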
Granted, for people who don’t find desirism intuitive, this altered scenario changes nothing about the argument. I personally do find desirism intuitive, though unlikely to be a complete theory of ethics. So for me, given the dilemma between 50 years of torture of one individual and one dust-speck-in-eye or one fly-buzz-distraction for each of 3^^^3 people, I have a strong gut reaction of “Hell yes!” to preferring the specks and “Hell no!” to preferring the distractions.
I have a high tolerance for chaotic surroundings, but even so I occasionally experience a weak, fleeting desire to impose greater order on other people’s belongings in my physical environment. It could be thwarted by an event like a fly buzzing around my head once, which, though not painful at all, would divert my attention long enough to ensure that the desire died without having been successfully acted on.
Ah, yeah, that could be a problematic assumption. The grounds for my claim were a generalization from my own experience: I have no consciously accessible desires that are affected by barely noticeable dust specks.
There are many ways of approaching this question. One that I think is valuable, and which I can’t find mentioned anywhere on this page of comments, is the desirist approach.
Desirism is an ethical theory also sometimes called desire utilitarianism. It has many details you can Google for, but in general it is a form of consequentialism in which the relevant consequences are desire-satisfaction and desire-thwarting.
Fifty years of torture satisfies none and thwarts virtually all desires, especially the most intense ones, for fifty years of one individual’s life, and, because of extreme psychological damage, for most of the subsequent years of that life as well. Barely noticeable dust specks neither satisfy nor thwart any desires, so in a population of any finite size the minor pain is of no account whatever in desirist terms. So a desirist would prefer the dust specks.
The Repetition Objection: If this choice were repeated, say, a billion times, then the lives of the 3^^^3 people would become unlivable due to constant dust specks, and so at some point an additional individual tortured must become preferable to another dust speck in 3^^^3 eyes.
The desirist response bites the bullet. Dust specks in eyes may increase linearly, but their effect on desire-satisfaction and desire-thwarting is highly nonlinear. It’s probably the case that an additional torture becomes preferable as soon as the next dust speck is expected to thwart a few million desires at the margin, and certainly the case once it is expected to thwart a few billion.
Perhaps then we should speak of what we want in terms of “terminal values”? For example, I might say that it is a terminal value of mine that I should not murder, or that freedom from authority is good.
But what does “terminal value” mean? Usually it means that the value of something is not contingent on or derived from other facts or situations; for example, I may value beautiful things in a way that is not derived from what they get me. The recursive chain of valuableness terminates at some set of values.
… if even the most fundamental-seeming moral feelings are subject to argument, I wonder if there is any coherent sense in which I could be said to have terminal values at all.
TimS mentioned moral anti-realism as one possibility. I have a favorable opinion of desire utilitarianism (search for pros and cons), which is a system that would be compatible with another possibility: real and objective values, but not necessarily any terminal values.
By analogy, such a situation would describe moral values the way epistemological coherentism (versus foundationalism) describes knowledge. The mental model could be a web rather than a hierarchy. At least it’s a possibility; I don’t intend to argue for or against it right now, as I have minimal evidence.
If you don’t understand why that is so, read the articles about the t-test and the F-test. The tests compute a difference in magnitude of response such that, 95% of the time, if the measured effect difference is that large, the null hypothesis (that the responses of all subjects in both groups were drawn from the same distribution) is false.
No, the correct form is:
The tests compute a difference in magnitude of response such that if the null hypothesis is true, then 95% of the time the measured effect is not that large. The form you quoted is a deadly undergraduate mistake.
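To make the direction of that conditional concrete, here is a toy sketch with made-up group sizes, using scipy:

```python
from scipy import stats

# Say two groups of 20 subjects each: 38 degrees of freedom for a two-sample t-test.
df = 38
alpha = 0.05

# The test fixes a cutoff c from the NULL distribution of the t statistic,
# so that P(|T| >= c | null hypothesis true) = alpha.
c = stats.t.ppf(1 - alpha / 2, df)   # about 2.02

# It does NOT give anything of the form "if the observed difference exceeds c,
# then the null hypothesis is false 95% of the time" -- that would need a prior.
```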
I read through most of the comments and was surprised that so little was made of this. Thanks, VincentYu. For anyone who could use a more general wording, it’s the difference between:
P(E≥S|H), the probability of observing evidence E at least as extreme as the test statistic S, given that the hypothesis H is true, and
P(H|E), the probability of the hypothesis H being true, given the evidence E.
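The two are related only through Bayes’ theorem, which is why you can’t get from one to the other without a prior:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}.$$

A small tail probability under H says nothing about P(H|E) until you plug in P(H).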
You can only conclude that food dye affects behavior with 84% confidence, rather than the 95% you desired.
Or rather, you can conclude that, if there were no effect of food dye on hyperactivity and we ran this test a whole lotta times, then we’d get data at least this extreme 16% of the time, rather than beneath the 5%-of-the-time maximum cutoff you were hoping for.
It’s not so easy to jump from frequentist confidence intervals to confidence for or against a hypothesis. We’d need a bunch of assumptions. I don’t have access to the original article so I’ll just make shit up. Specifically, if I assume that the 84% confidence interval was a central, two-tailed interval from a normal distribution, then the corresponding minimum Bayes factor is 0.37 for the model {mean hyperactivity = baseline} versus the model {mean hyperactivity = baseline + food dye effect}. Getting to an actual confidence level in the hypothesis requires having a prior. Since I’m too ignorant of the subject material to have an intuitive sense of the appropriate prior, I’ll go with my usual here, which is to charge 1 nat per parameter as a complexity penalty. And that weak complexity prior wipes out the evidence from this study.
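For anyone who wants to check that arithmetic, here is my reconstruction, assuming the minimum Bayes factor comes from the usual bound exp(−z²/2) for a two-tailed normal test:

```python
from math import exp
from scipy.stats import norm

p_two_sided = 0.16                       # the "84% confidence" reported above
z = norm.ppf(1 - p_two_sided / 2)        # ~1.41

min_bf_null_vs_effect = exp(-z**2 / 2)   # ~0.37: the data favor the food-dye-effect
                                         # model over the null by at most ~2.7x

prior_odds_effect = exp(-1)              # 1-nat complexity charge for the extra parameter
posterior_odds_effect = (1 / min_bf_null_vs_effect) * prior_odds_effect
# ~1.0: even on the most favorable reading, the effect model ends up at
# roughly even odds with the null, i.e. the evidence is wiped out.
```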
So given these assumptions, the original article’s claim...
The results of this study indicate that artificial food colorings do not affect the behavior of school-age children who are claimed to be sensitive to these agents
...would be correct.
But you are a human and you don’t obey the axioms … Suppose further that you can quantify each item on that list
Thanks for the interesting read. FWIW, this human isn’t convinced that becoming a human approximation to an optimizer is worthwhile. What happens if, as is more realistic, I can’t quantify any item on my list? (Or perhaps I can, but with three large error terms: environmental noise in the signal, temporal drift of the signal, and systematic errors in converting different classes of value to a common unit.)
I disagree. The downsides greatly outweigh the upsides from my perspective.
I’m skeptical that the behaviors people engage in to eke out a little more social status among people they don’t value are anything more than resources wasted with high opportunity cost.
And, at 30 years of age, I’m already starting to notice that recovery from minor injuries and illnesses takes longer than it used to—if I kept expecting and desiring perfect health, I’d get only disappointment from here on out. As much as I can choose it, I’ll choose to desire only a standard of health that is realistically achievable.
However, the question of moral ontology remains...do objective moral values exist? Is there anything I (or anyone) should do, independent from what I desire?
Thanks for bringing up that point! You mentioned below your appreciation for desirism, which says inter alia that there are no intrinsic values independent of what agents desire. Nevertheless, I think there is another way of looking at it under desirism that is almost like saying that there are intrinsic values.
Pose the question this way: If I could choose my desires in whole or in part, what set of desires would I be most satisfied with? In general, an agent will be more satisfied with a larger number of satisfiable desires and a smaller number of unsatisfiable desires. Then the usual criteria of desirism apply as a filter.
To the very limited extent that I can modify my desires, I take that near-tautology to mean that, independently of what I currently desire, I should change my mind and learn to enjoy and desire things I never used to: professional sports, crime novels, and fashion, to take some popular examples. It would also mean that I should enjoy and desire a broad variety of music and food, and generally be highly curious. And it would mean I should reduce my desires for social status, for perfect health as I age, and for resolution of difficult philosophical problems.
“pre-1980” = “pre-lukeprog”, and thus, the ancient days
(kidding)
I downvoted on account of the use of “The Way” as a name for a set of useful techniques in the art of human rationality. It won’t be understood by casual readers, and it sounds very cultish.
Incidentally, the article would be greatly improved by the addition of specific examples of the “huge range of benefits” supposedly available to people with mastery of the popular rationality techniques promoted on LessWrong, but not to those struggling at the narrow end of the horn.
What exactly are you calculating?
I expected Texas voter turnout for a Presidential election to be about 8,075,000. Assuming everyone votes either for Obama or Romney, averaging the polls gives a probability for each vote of about 0.415 for Obama and 0.585 for Romney. That story fits a binomial distribution, and my vote would be critical if the votes were split evenly.
binopdf(0.5*8075000, 8075000, 0.415) evaluated to approximately 10^-51120, and at that point I just upped the exponent by one rather than trying to figure out the electoral college details.
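For reference, the same calculation in Python, done in log space since the probability underflows a double; the exact exponent is quite sensitive to how the poll average is rounded:

```python
import numpy as np
from scipy.stats import binom

n = 8_075_000       # expected Texas turnout
p_obama = 0.415     # two-party poll average from above
k = n // 2          # an exact tie, the case where one more vote is decisive

log10_prob = binom.logpmf(k, n, p_obama) / np.log(10)
# A base-10 exponent in the -51,000s: the same astronomically small ballpark
# as the 10^-51120 figure above.
```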
Forget 1 in 10 million. For my state (Texas), a simple binomial model suggests my chance of being the critical vote is about 1 in 10^51121. I did vote, but it wasn’t due to an instrumental pursuit of expected value. For me, it was an intrinsic value of civic participation and a feeling of connection with the political struggles of past generations. These things are part of my notion of the good life. If they’re not part of yours, that’s fine.
I took it.
For the P(Warming) question, you might get people answering different versions of the question. For example, my personal evaluation of the probability that warming is occurring and that humans are a major cause is very, very high, but my evaluation of the probability that humans are the primary cause is much lower.
This will be done unblinded, because Kurzweil’s predictions are so well known that it would be infeasible to find large numbers of people who are technologically aware but ignorant of them.
Is this true? It could be, or alternatively it could simply appear true from your perspective of familiarity. I’m only vaguely aware of Kurzweil and have never heard any mention of him among my group of largely grad student / geek friends.
Based on your description here of your reaction, I get the impression that you mistook the structure of the argument. Specifically, you note, as if it were sufficient, that you disagree with several of the premises. Engel was not attempting to build on the conjunction (p1*p2*...*p16) of the premises; he was building on their disjunction (p1+p2+...+p16). Your credence in p1 through p16 would have to be uniformly very low to keep their disjunction also low. Personally, I give high credence to p1, p9, and p10, and varying lower degrees of assent to the other premises, so the disjunction is also quite high for me, and therefore the conclusion has a great deal of strength; but even if I later rejected p1, p9, and p10, the disjunction of the others would still be high. It’s that robustness of the argument, drawing more on many weak points than on one strong one, that convinced me.
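To put a number on that: if, purely for illustration, you treated the premises as independent and gave each one only 20% credence, the disjunction would still be

$$P(p_1 \lor \dots \lor p_{16}) = 1 - \prod_{i=1}^{16}\bigl(1 - P(p_i)\bigr) \approx 1 - 0.8^{16} \approx 0.97.$$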
I don’t understand your duck/troll response to the quote from Engel. Everything he has said in that paragraph is straightforward. It is important that beliefs be true, not merely consistent. That does mean you oughtn’t simply reject whichever premises get in the way of the conclusions you value. p1-p16 are indeed entangled with many other beliefs, and propagating the belief and value updates that come from rejecting more of them is likely, for most people, to be a more severe change than becoming vegetarian. Really, if you find yourself suspecting that a professional philosopher is trolling people in one of his most famous arguments, that’s a prime example of a moment to notice the fact that you’re confused. It’s possible you were reading him as saying something he wasn’t saying.
Regarding the edit: the argument does not assume that you care about animal suffering. I brought it up precisely because it didn’t make that assumption. If you want something specifically about animal suffering, presumably a Kantian argument is the way to go: You examine why you care about yourself and you find it is because you have certain properties; so if something else has the same properties, to be consistent you should care about it also. (Obviously this depends on what properties you pick.)