Extremely temporary friendships. I suspect, without demonstrable evidence beyond stories from friends and myself, that location-based networking applications have led us to develop better skills for managing temporary group friendships among travelers and locals. CouchSurfing, AirBnB, Grindr, etc., started out fairly awkward for all involved several years ago, but now it seems to me that people are comfortable and adept with the norms.
Nah, they’re welcome to use whichever statistics they like. We might point out interpretation errors, though, if they make any.
Under the assumptions I described, a p-value of 0.16 is about 0.99 nats of evidence which is essentially canceled by the 1 nat prior. A p-value of 0.05 under the same assumptions would be about 1.92 nats of evidence, so if there’s a lot of published science that matches those assumptions (which is dubious), then they’re merely weak evidence, not necessarily wrong.
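For concreteness, here is a minimal sketch of the bookkeeping. The conversion from p-values to nats depends on the assumptions I mentioned, so I'm just reusing the figures above, and I'm reading the "1 nat prior" as 1 nat against the claimed effect:

```python
import math

def posterior_probability(prior_log_odds_nats, evidence_nats):
    """Combine prior log-odds and evidence, both measured in nats,
    and return the posterior probability of the hypothesis."""
    posterior_log_odds = prior_log_odds_nats + evidence_nats
    return 1.0 / (1.0 + math.exp(-posterior_log_odds))

# A 1-nat prior against the hypothesis, essentially canceled by
# 0.99 nats of evidence for it:
print(posterior_probability(-1.0, 0.99))  # ~0.50
# The same prior combined with 1.92 nats of evidence:
print(posterior_probability(-1.0, 1.92))  # ~0.72
```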
It’s not the job of the complexity penalty to “prove the null hypothesis is correct”. Proving what’s right and what’s wrong is a job for evidence. The penalty was merely a cheap substitute for an informed prior.
I think we can combine your [cousin_it’s] suggestion with MrMind’s for an Option 2 scenario.
Suppose Bob finds that he has a stored belief in Bright with an apparent memory of having based it on evidence A, but no memory of what evidence A was. That does constitute some small evidence in favor of Bright existing.
But if Bob then goes out in search of evidence about whether Bright exists, and finds some evidence A in favor, he is unable to know whether it is the same evidence he had previously forgotten or different evidence. Another way of saying that is that Bob can’t tell whether the newly found A and the remembered A are independent. I suppose the ideal reasoner’s response would be to assign a probability density distribution over the range from full independence to full dependence and proceed with any belief updates taking that distribution into account.
The distribution should be formed by considering how Bob got the evidence. If Bob found his new evidence A in some easily repeatable way, like hearing it from Bright apologists, then Bob would probably think dependence is much more likely than independence, and so he would count the evidence only once. But if Bob got A by some means that he probably wouldn’t have had access to in the past, like an experiment requiring brand-new technology to perform, then he would probably think independence was more likely, and so he would count the new evidence and the old evidence mostly separately.
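A crude sketch of that update, collapsing the full distribution over dependence down to a single weight (all the numbers here are made up for illustration):

```python
import math

def logistic(log_odds):
    return 1.0 / (1.0 + math.exp(-log_odds))

def update_with_uncertain_dependence(prior_log_odds, evidence_nats, p_independent):
    """Bob's stored belief (prior_log_odds, in nats) already reflects the
    forgotten evidence.  If the newly found evidence is independent of it,
    its strength (evidence_nats) should be added; if it is the very same
    evidence, it has already been counted.  Mix the two cases according to
    Bob's probability that the new evidence is independent."""
    if_independent = logistic(prior_log_odds + evidence_nats)
    if_same = logistic(prior_log_odds)
    return p_independent * if_independent + (1 - p_independent) * if_same

# Evidence heard from easily available apologists: probably the same
# evidence as before, so it barely moves the belief.
print(update_with_uncertain_dependence(0.0, 1.0, p_independent=0.1))  # ~0.52
# Evidence from an experiment needing brand-new technology: probably
# independent, so it counts nearly in full.
print(update_with_uncertain_dependence(0.0, 1.0, p_independent=0.9))  # ~0.71
```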
I answered that section quickly and on the basis of intuition in the hope that those questions were chosen because there is some interesting cognitive bias affecting the answers that I was unaware of. :D
Which Medfusion? Google finds several organizations by that name, and all seem like implausible referents to me.
Another valuable service, if you (ChrisHallquist) decide to write the proposed article, is to provide a glossary translating between LW idiom and conventional terminology.
I realize that this is counterintuitive. Do you think I have to be clearer about it in the post?
Yes, please.
What works for me to prevent this worry is to announce aloud as I lock and check the door, “I am locking the door. The door is now locked.” Similarly for other tasks that are repeated so often that they can be easily forgotten. If you have multiple such tasks, then instead of mindfulness of each task you should use a checklist.
Most reliable: Make the interruption a compliment, saying something along the lines of, “Oh, the way you describe it sounds amazing. Thank you so much for the reference! Give me a moment to write it down.”
Less reliable: Use a peg list (or memory palace) and attach it to the next free peg (or put it in the next open alcove).
Put a sign on the door that says “Push hard to close” or similar. Or fix the door and remove the need for memory.
No-memory method: Ask for something near your destination that you can use the GPS to locate, then find another local to ask.
Most reliable memory method: Write them down.
Less reliable: Use chunking or a memory palace.
That is exactly the traditional purpose of a memory palace.
Use the Major System and find some colorful phrase for your ID. e.g. 000458789625 becomes “000-relieve-a-cough-by-chewing-holly”.
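For anyone unfamiliar with it, the standard Major System digit-to-consonant assignments are below; the snippet is just a minimal sketch, and the example phrase decodes via the consonant sounds of its words:

```python
# The standard Major System digit-to-consonant-sound assignments.
MAJOR_SYSTEM = {
    "0": "s, z",
    "1": "t, d",
    "2": "n",
    "3": "m",
    "4": "r",
    "5": "l",
    "6": "j, sh, ch, soft g",
    "7": "k, hard g",
    "8": "f, v",
    "9": "p, b",
}

def sounds_for(digits):
    """List the consonant sounds available for encoding each digit;
    vowels carry no digits, so you can pad freely to make words."""
    return [MAJOR_SYSTEM[d] for d in digits]

# e.g. "relieve" covers 458 (r, l, v) and "cough" covers 78 (k, f).
print(sounds_for("458789625"))
```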
What works for me is to convince myself that (1) if a work-thought is actually a good one, I’ll have it again when I’m at work; (2) sleep is higher-valued than the thought at this time; and (3) in my experience, most late-night thoughts that I have written down have not actually been helpful come the next day. So there’s no need to rouse myself and write it down.
Memory competitors use memory palaces for this task.
Learn the disconnected facts/jargon/procedures with Anki. But also do try to find connections, and if at all possible create a visualization showing the relationships.
You may be interested in the book Moonwalking with Einstein by Joshua Foer.
I think you reached the wrong conclusion in your final paragraphs. Can you show how the expected value calculation could be different for “the amount of money contained in the other envelope expressed in terms of the amount of money in this envelope [which is in dollars, BTW]” versus “the amount of money in the other envelope expressed in dollars”?
I see that Wikipedia says that there is no generally accepted solution to the paradox. That almost certainly means people are interpreting it differently. I’ll give my opinion. But let me recast the problem in more pointed terms.
Mr Moneybags writes a check for some real positive amount of money and puts it in an envelope. Then he writes a check for double the amount of the previous check and puts it in another envelope. He shuffles the envelopes. You pick one and open it, and you find $N. He offers to let you switch to the other just this once. Should you?
On first glance it appears that the other envelope could have $N/2 or $N*2, and that you are entirely ignorant about which is the case, so your ignorance priors are 1⁄2 for each possibility. Then the expected value calculation comes up all wrong! Call the hypothesis that you opened the lower value envelope L and the hypothesis that you opened the higher value envelope H.
E(switching)=P(H)*(N/2)+P(L)*(N*2)=(1/2)*(N/2)+(1/2)*(N*2)=(5/4)*N.
But that’s nonsense. My opinion as to what went wrong was that it’s false that you are entirely ignorant after opening the envelope, because you can now do an update on your ignorance prior.
We’ll start, of course, with Bayes’ formula P(H|N)/P(L|N) = P(N|H)/P(N|L) * P(H)/P(L) * P(N)/P(N). The last ratio cancels. The second-to-last ratio also cancels because the shuffling of the envelopes made it equally likely you’d open either one. We know basically nothing about the distribution that the lower value was first drawn from, so a convenient ignorance prior on it is that it was some uniform distribution over the interval $A to $B. Consequently, the higher value would be described by a uniform distribution over the interval 2*$A to 2*$B. The distributions may or may not overlap. I’ll take each case separately.
Common start: A=0.
P(N=k|H) = {1/3 for 0<k<=B; 1 for B<k<=2B}
P(N=k|L) = {2/3 for 0<k<=B; 0 for B<k<=2B}
Integrate each with respect to k from 0 to 2B, then plug in P(H|N)/P(L|N) = P(N|H)/P(N|L) = (4B/3)/(2B/3) = 2 and convert back to probabilities: P(H|N)=2/3, P(L|N)=1/3.
Separate start: A>0 and 2A<B.
P(N=k|H) = {0 for A<k<2A; 1/3 for 2A<=k<=B; 1 for B<k<=2B}
P(N=k|L) = {1 for A<k<2A; 2/3 for 2A<=k<=B; 0 for B<k<=2B}
Integrate each with respect to k from A to 2B, then plug in P(H|N)/P(L|N) = P(N|H)/P(N|L) = (4B/3 - 2A/3)/(2B/3 - A/3) = 2 and convert back to probabilities: P(H|N)=2/3, P(L|N)=1/3.
No overlap: B<2A.
P(N=k|H) = {0 for A<=k<=B; 1 for 2A<=k<=2B}
P(N=k|L) = {1 for A<=k<=B; 0 for 2A<=k<=2B}
Integrate each with respect to k from A to 2B, then plug in P(H|N)/P(L|N) = P(N|H)/P(N|L) = (2B-2A)/(B-A) = 2 and convert back to probabilities: P(H|N)=2/3, P(L|N)=1/3.
So viewing your prize, you suddenly feel it’s twice as likely that you got the higher amount. Bizarre, huh? And your expected value calculation becomes
E(switching)=P(H|N)*(N/2)+P(L|N)*(N*2)=(2/3)*(N/2)+(1/3)*(N*2)=N,
so there’s no point switching after all. Even weirder is that, before you open the envelope, this is a situation where you know for sure what you will believe in the future, but you nevertheless can’t make that update till then.
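If you don’t trust the algebra, here is a minimal Monte Carlo sketch under the same assumptions (lower amount drawn uniformly from (0, B)); it only checks the bottom line that always switching gains nothing on average, not the intermediate posterior calculation:

```python
import random

def simulate(trials=1_000_000, B=100.0):
    """Mr Moneybags draws the lower amount uniformly from (0, B), doubles
    it for the second envelope, and shuffles.  Compare the average payoff
    of never switching with that of always switching."""
    keep_total = 0.0
    switch_total = 0.0
    for _ in range(trials):
        lower = random.uniform(0.0, B)
        envelopes = [lower, 2.0 * lower]
        random.shuffle(envelopes)
        opened, other = envelopes
        keep_total += opened
        switch_total += other
    return keep_total / trials, switch_total / trials

print(simulate())  # both averages come out near 75, i.e. (B/2 + B) / 2
```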
The article’s conclusion is that “people decide they want to convert for emotional reasons, but some can’t believe it at first, so they use apologetics as a tool to get themselves to believe what they’ve decided they want to believe.”
So we expect apologetic literature and speakers as a market niche wherever there are emotionally manipulative (claimed) rewards and punishments attendant on belief. Some rewards and punishments are quite real, like social status, praise, and condemnation. Others are fictional, like afterlives and the deep satisfaction of living according to divine law.
Similarly to mainstream religion, there is plentiful apologetic literature, speakers, and films for political ideologies. The social rewards of being in a political group are real; the future consequences that are promised if only enough elections can be won may or may not be real.
Given religions where beliefs are not rewarded or punished, we’d expect little or no consumption of apologetics. Shinto, neopaganism, and Unitarian Universalism fit that. However, there is certainly plenty of apologetic literature for secular humanist atheism, which also lacks the rewards/punishments. That looks to almost entirely undermine the hypothesis.
There is also basically no apologetic literature for believing in the greatness of particular sports teams, despite the large social rewards of being in a fanbase and the promised vicarious glory of psyching your team up for a win by your fervent support. OK, so to me the hypothesis is dead. Something more is going on than simple market response to rewarded/punished belief.
Any ideas what?
There were three times in my life when I consumed apologetics. First was when I was evangelical Protestant and it was a tool for the religious imperative of winning converts. Second was when I could no longer believe my childhood religion, but still believed in God and the importance of Jesus, and so I read the apologetics of other religions to see which was most likely true, and I ended up converting Catholic for a while. Third was when I became infatuated with the principled style of libertarian political ideology and needed the apologetics to “understand” why nothing fit.
Based on my own anecdotal experience, then, my next hypothesis would be that apologetic argument and literature is demanded when people are (1) committed to a theory (for any reasons, good or bad) and (2) also committed to acknowledging the facts, and (3) the facts don’t fit the theory in a straightforward way, and (4) complex fits of facts to theory are tolerated.
Religions that propose explanations would then be expected to have apologetics, and religions that don’t propose explanations would not. All political ideologies would be expected to have apologetics, because it’s an unfortunate fact of life that the consequences of politics are very complicated. Secular humanist atheists, insofar as they propose explanations for life, the universe, and everything, similarly end up occasionally faced with bizarre and extraordinary scenarios that defy simple explanation, and so they have apologetics. Some sports fans may, after a loss, blame the coach, the refs, the weather, and other factors, but at least in my experience most are willing to believe the other team played better. Oddly, we even end up with pro-science apologetics sometimes; at least I remember my physics and chemistry professors spending inordinate time mis-explaining phenomena when they were committed to the phenomena being explainable primarily by that week’s lesson.
It seems to fit. And it suggests that the process leading to apologetics can be interrupted at two places, as described elsewhere by Eliezer. First, don’t be committed to a theory. Don’t make a belief part of your identity. Let your beliefs be faithless and blown about by the winds of evidence. Second, count facts that require detailed explanations as contrary evidence even if the explanation is adequate. (This is not strictly Bayesianly correct but it seems like a good approximation.)
Regarding geography and polarization, have a look at the Cook Partisan Voting Index.
To address your correct criticism, how about we modify apophenia’s “15” words to:
• If two things are reliably correlated, there is causation. Either A causes B, B causes A, they have a common cause, or they have a common effect you’re conditioning on.
A 15-word version is possible but awkward:
• Reliable correlation implies causation: one causes the other, or there’s common cause, or common effect.
Potentially a great deal of complexity is smuggled into the word “reliable”.
--
Edit: A friend pointed out to me that the above sentences provide unbalanced guidance for intuitions. A more evenly balanced version is:
• Reliable correlation implies causation and unreliable correlation does not.
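To illustrate the “common effect you’re conditioning on” clause, here is a minimal sketch in which two independent variables become correlated once you select on their common effect (the variables and threshold are made up for illustration):

```python
import random

def correlation(xs, ys):
    """Pearson correlation, computed by hand to keep this self-contained."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

random.seed(0)
a = [random.gauss(0, 1) for _ in range(100_000)]   # independent cause A
b = [random.gauss(0, 1) for _ in range(100_000)]   # independent cause B
c = [x + y for x, y in zip(a, b)]                  # common effect C = A + B

print(correlation(a, b))  # ~0: no correlation overall
# Condition on the common effect by keeping only cases where C is large:
kept = [(x, y) for x, y, z in zip(a, b, c) if z > 1.0]
print(correlation([x for x, _ in kept], [y for _, y in kept]))  # clearly negative
```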
Perhaps you and they are just focusing on different stages of reasoning. The difference in utility that you’ve described is a temporal asymmetry that sure looks at first glance like a flaw. But that’s because it’s an unnecessary complexity to add it as a root principle when explaining morality up to now. Each of us desires not to be a victim of murder sprees (when there are too many people) or to have to care for dozens of babies (when there are too few people), and the simplest way for a group of people to organize to enforce satisfaction of that desire is for them to guarantee the state does not victimize any member of the group. So on desirist grounds I’d expect the temporal asymmetry to tend to emerge strategically as the conventional morality applying only among the ruling social class of a society: only humans and not animals in a modern democracy, only men when women lack suffrage, only whites when blacks are subjugated, only nobles in aristocratic society, and so on. (I can readily think of supporting examples, but I’m not confident in my inability to think of contrary examples, so I do not yet claim that history bears out desirism’s prediction on this matter.)
Of course, if you plan to build an AI capable of acquiring power over all current life, you may have strong reason to incorporate the temporal asymmetry as a root principle. It wouldn’t likely emerge out of unbalanced power relations. And similarly, if you plan on bootstrapping yourself as an em into a powerful optimizer, you have strong reason to precommit to the temporal asymmetry so the rest of us don’t fear you. :D
Yes and no. Yes in that the timeless view is timeless in both directions. No in that for decisionmaking we can only take into account predictions of the future and not the future itself.
For intuitive purposes, consider the current political issues of climate change and economic bubbles. It might be the case that we who are now alive could have a better quality of life if we used up the natural resources and if we had the government propagate a massive economic bubble that wouldn’t burst until after we died. If we don’t value the welfare of possible future generations, we should do those things. If we do value the welfare of possible future generations, we should not do those things.
For technical purposes, suppose we have an AIXI-bot with a utility function that values human welfare. Examination of the AIXI definition makes it clear that the utility function is evaluated over the (predicted) total future. (Entertaining speculation: If the utility function was additive, such an optimizer might kill off those of us using more than our share of resources to ensure we stay within Earth’s carrying capacity, making it able to support a billion years of humanity; or it might enslave us to build space colonies capable of supporting unimaginable throngs of future happier humans.)
For philosophical purposes, there’s an important sense in which my brainstates change so much over the years that I can meaningfully, if not literally, say “I’m not the same person I was a decade ago”, and expect that the same will be true a decade from now. So if I want to value my future self, there’s a sense in which I necessarily must value the welfare of some only-partly-known set of possible future persons.
I’m not aware of any Western religion that says cruelty to animals is a sin.
FWIW I’ll provide some institutional references:
The current Catechism of the Catholic Church section 2418 reads, in part: “It is contrary to human dignity to cause animals to suffer or die needlessly.” The 1908 Catholic Encyclopedia goes into more detail.
I also searched for statements by the largest Protestant denominations. I found nothing by the EKD. The SBC doesn’t take official positions but the Humane Society publishes a PDF presenting Baptist thinking that is favorable to animals.
The United Synagogue of Conservative Judaism website has lots of minor references to animal welfare. One specific example is that they appear to endorse the Humane Farm Animal Care Standards.
The largest Muslim organization that I found reference to, the Nahdlatul Ulama, does not appear to have any official stance on treatment of animals.
political opinion: While singleton government is certainly an area of interest, it looks implausible to me that we can get there from here without either war or massive worldwide propagandistic/educational effort. If your goal is to reduce existential risk, I think this is not a cost-effective way to do it.
Of course, if your goal is to study fascinating research questions in this area that may also help reduce existential risk, then go for it!
As far as I’m aware, that’s exactly how logical arguments work, formally. See the second paragraph here.
Certainly not.
OK, since you are rejecting formal logic I’ll agree we’ve reached a point where no further agreement is likely.
What I was having a hard time taking at face value was the notion of reasoning about moral propositions using this sort of probabilistic logic. That is to say: what, exactly, does it mean to say that you believe “We ought to take steps to make the world a better place” with P = 0.3? Like, maybe we should and maybe we shouldn’t? Probabilities are often said to be understandable as bets; what would you be betting on, in this case? How would you settle such a bet?
I’d be betting on whether or not the proposition would follow from the relevant moral theory if I were in possession of all the relevant facts. The bet would be settled by collecting additional facts and updating. I incline toward consequentialist moral theories in which practicality requires that I can never possess all the relevant facts. So it is reasonable for me to evaluate situational moral rules and claims in probabilistic terms based on how confident I am that they will actually serve my overarching moral goals.
I don’t know how to interpret that. It seems strange. Logical arguments do not generally work this way, wherein you just have an unordered heap of undifferentiated, independent propositions, which you add up in any old order, and build up some conclusion from them like assembling a big lump of clay from smaller lumps of clay. I don’t rightly know what it would mean for an argument to work like that.
As far as I’m aware, that’s exactly how logical arguments work, formally. See the second paragraph here.
Why do you characterize the quoted belief as “motivated”?
Meat tastes good and is a great source of calories and nutrients. That’s powerful motivation for bodies like us. But you can strike that word if you prefer.
And, in any case, why are we singling out this particular belief for consistency-checking?
We aren’t. We’re requiring only and exactly that it not be singled out for immunity to consistency-checking.
I think it would be far more productive (as these things go) to just look at that list of propositions, see which we accept, and then see if vegetarianism follows reasonably from that
That’s it! That’s exactly the structure of Engel’s argument, and what he was trying to get people to do. :)
(Hi, sorry for the delayed response. I’ve been gone.)
And I’m not sure what sort of logic you’re using wherein you believe p1 with low probability, p2 with low probability, p3 … etc., and their disjunction ends up being true. (Really, that wasn’t sarcasm. What kind of logic are you applying here...?)
Just the standard stuff you’d get in high school or undergrad college. Suppose we have independent statements S1 through Sn, and you assign each a subjective probability of P(Si). Then you have the probability of the disjunction P(S1+S2+S3+...+Sn) = 1-P(~S1)*P(~S2)*P(~S3)*...*P(~Sn). So if in a specific case you have n=10 and P(Si)=0.10 for all i, then even though you’re moderately disposed to reject every statement, you’re weakly disposed to accept the disjunction, since P(disjunction)=0.65. This is closely related to the preface paradox.
You’re right, of course, that Engel’s premises are not all independent. The general effect on probability of disjunctions remains always in the same direction, though, since P(A+B)≥P(A) for all A and B.
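For concreteness, the independent-statements calculation from above:

```python
def p_disjunction(probabilities):
    """Probability that at least one of several independent statements is
    true: one minus the product of the individual falsity probabilities."""
    p_all_false = 1.0
    for p in probabilities:
        p_all_false *= 1.0 - p
    return 1.0 - p_all_false

# Ten independent statements, each believed with probability 0.10:
print(round(p_disjunction([0.10] * 10), 2))  # 0.65
```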
“After all, as a philosopher [assuming we all love wisdom and want to know the best way in which to live], you are interested in more than mere consistency; you are interested in truth. Consequently, you will not reject just any belief(s) you think most likely to be false.
You’re right, I guess I have no idea what he’s saying here, because this seems to me blatantly absurd on its face. If you’re interested in truth, of course you’re going to reject those beliefs most likely to be false. That’s exactly what you’re going to do. The opposite of that is what you would do if you were, in fact, interested in mere consistency rather than truth.
OK, yes, you’ve expressed yourself well and it’s clear that you’re interpreting him as having claimed the opposite of what he meant. Let me try to restate his paragraph in more LW-ish phrasing:
“As a rationalist, you are highly interested in truth, which requires consistency but also requires a useful correspondence between your beliefs and reality. Consequently, when you consider that you believe it is not worthwhile for you to value animal interests and you discover that this belief is inconsistent with other of your beliefs, you will not reject just any of those other beliefs you think most likely to be false. (You will subject the initial, motivated belief to equal, unprivileged scrutiny along with the others, and tentatively accept the mutually consistent set of beliefs with the highest probability given your current evidence.)”
If you’re interested in reconsidering Engel’s argument given his intended interpretation of it, I’d like to hear your updated reasons for/against it.
Similarly to what some others have written, my attitude toward LessWrong is that it would best thrive with this model:
1. Embrace the Eternal September.
If LessWrong is successful at encouraging epistemic and especially instrumental rationality, people who have benefited from the material here will find less value in staying and greater opportunities elsewhere. LessWrong doesn’t need to be a place to stay any more than does a schoolhouse. Its purpose could be to teach Internet users rationality skills they don’t learn in ordinary life or public school, and to help them transition into whatever comes next after they have done so.
Since culture is always changing, to best aid new waves of people, the Sequences will need to be scrapped and crafted anew on occasion.
2. Aim lower.
Eliezer had motives in writing the Sequences in the way he did, and he also had a very narrow background. It has often been noticed that the demographics here are absurdly skewed toward high-IQ people. My presumption is that our demographics are a consequence of how things like the Sequences are written. For example, Eliezer’s supposedly “excruciatingly gentle” introduction to Bayesianism is in fact inaccessible for most people; at least it was difficult for me as a high-but-not-very-high-IQ person with (not-recent) years of statistics training, and friends I pointed toward it simply gave up, unable to make progress with it. A new Sequences could do well to have multiple entry points for people of different backgrounds (i.e. abandon the programmer jargon) and ordinary IQs.
3. Extend higher.
If we want to keep longtime participants from moving on, then we have to give them additional value here. I can’t give advice here; I feel I’ve already learned more theoretical rationality here than I can effectively ingrain into habit.