That’s interesting, but how much money is needed to solve “most of the world’s current problems”?
To forestall an objection: I think investing with a goal of improving the world, as opposed to maximizing income, is basically the same as giving, so it falls under the category of how to spend, not how much money to allocate. If you were investing rather than giving, and had income from it, you’d simply allocate it back into that category.
That’s a very useful point. I do have an employer match, and it is likely to be an inflection point for the effectiveness of any money I give.
I apologize for being unclear in my description. At the moment, after all my bills, I have money left over. This implicitly goes toward retirement. So it wouldn’t be slighting my family to give some more to charity. I also have enough saved to semi-retire today (e.g. if I chose to move to a cheap area, I could live like a lower-middle-class person on my savings alone), and my regular 401(k) contributions (assuming I don’t retire) mean that I’ll have plenty of income if I retire at 65 or so.
I was hoping that answering “How did you decide how much of your income to give to charity?” is obviously one way of answering my original question, and so some people would answer that. But you may be right that it’s too ambiguous.
I don’t mean that I have one that’s superior to anyone else’s, but there are tools to deal with this problem, various numbers that indicate risk, waste level, impact, etc. I can also decide what areas to give in based on personal preferences/biases.
This thread is interesting, but off-topic. There is lots of useful discussion on the most effective ways to give, but that wasn’t my question.
I see what you mean now, I think. I don’t have a good model of dealing with a situation where someone can influence the actual updating process either. I was always thinking of a setup where the sorcerer affects something other than this.
By the way, I remember reading a book which had a game-theoretical analysis of games where one side had god-like powers (omniscience, etc), but I don’t remember what it was called. Does anyone reading this by any chance know which book I mean?
For this experiment, I don’t want to get into the social aspect. Suppose they aren’t aware of each other, or it’s very impolite to talk about sorcerers, or whatever. I am curious about their individual minds, and about an outside observer who can observe both (i.e. me).
How about this: Bob has a sort of “sorcerous experience,” which is kind of like an epiphany. I don’t want to go off to Zombie-land with this, but let’s say it could be caused either by his brain doing its mysterious thing, or by a sorcerer. Does that still count as “moving things around in the world”?
I am not certain that it’s the same A. Suppose I say to you: here’s a book that proves that P=NP. You go and read it, and it’s full of math, and you can’t fully process it. Later, you come back and read it again, and this time you are actually able to fully comprehend it. Even later you come back again, and not only comprehend it, but are able to prove some new facts, using no external sources, just your mind. Those are not all the same “A”. So, you may have some evidence for/against a sorcerer, but are not able to accurately estimate the probability. After some reflection, you derive new facts, and then update again. Upon further reflection, you derive more facts, and update. Why should this process stop?
It’s not that different from saying “I believe it will rain tomorrow, and the fact that I believe that is evidence that it will rain tomorrow, so I’ll increase my degree of belief. But wait, that makes the evidence even stronger!”.
This is completely different. My belief about the rain tomorrow is in no way evidence for actual rain tomorrow, as you point out—it’s already factored in. Tomorrow’s rain is in no way able to affect my beliefs, whereas a sorcerer can, even without mind tampering. He can, for instance, manufacture evidence so as to mislead me, and if he is sufficiently clever, I’ll be misled. But I am also aware that my belief state about sorcerers is not as reliable because of possible tampering.
Here, by me, I mean a person living in Faerie, not “me” as in the original post.
That’s a very interesting analysis. I think you are taking the point of view that sorcerers are rational, or that they are optimizing solely for proving or disproving their existence. That wasn’t my assumption. Sorcerers are mysterious, so people can’t expect their cooperation in an experiment designed for this purpose. Even under your assumption, you can never distinguish between Bright existing and Dark existing: they could behave identically, each trying to convince you that Bright exists. Dark would sort the deck whenever you query for Bright, for instance.
The way I was thinking about it is that you have other beliefs about sorcerers, and your evidence for their existence is established primarily on other grounds (e.g. see my comment about kittens in another thread). Then Bob and Daisy take into account the fact that Bright and Dark have these additional peculiar preferences about people’s belief in them.
I don’t think I completely follow everything you say, but let’s take a concrete case. Suppose I believe that Dark is extremely powerful and clever and wishes to convince me he doesn’t exist. I think you can conclude from this that if I believe he exists, he can’t possibly exist (because he’d find a way to convince me otherwise), so I conclude he can’t exist (or at least the probability is very low). Now I’ve convinced myself he doesn’t exist. But maybe that’s how he operates! So I have new evidence that he does in fact exist. I think there’s some sort of paradox in this situation. You can’t say that this evidence is screened off, since I haven’t considered the result of my reasoning until I have arrived at it. It seems to me that either your belief oscillates between two numbers, or your updates get smaller and you converge to some number in between.
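To make the “oscillate vs. converge” distinction concrete, here is a toy sketch. Everything in it is my own invention (the update map, the strength parameter, the starting point), so it only illustrates the dynamics, not the actual reasoning about Dark:

```python
# A toy model (entirely my own construction): treat "reflecting on my current
# belief about Dark" as applying an update map to it. If the map is a
# contraction, the iterates converge to a fixed point; if it just flips the
# belief, they oscillate between two values.

def reflect(p, strength=0.8):
    """One round of reflection. The more I believe Dark exists (high p), the
    more I expect he would have hidden himself, pushing my belief down; the
    more I believe he doesn't exist (low p), the more that looks like his
    handiwork, pushing it up. 'strength' (invented) is how hard each round
    of reflection pushes toward the opposite conclusion."""
    return (1 - strength) * p + strength * (1 - p)

p = 0.9  # start out fairly convinced that Dark exists
for step in range(10):
    p = reflect(p)
    print(f"after reflection {step + 1}: P(Dark exists) = {p:.3f}")

# With strength < 1 the updates shrink each round and p converges to 0.5;
# with strength = 1 the belief just flips between 0.9 and 0.1 forever:
# exactly the two behaviours described above.
```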
I am not assuming they are Bayesians necessarily, but I think it’s fine to take this case too. Let’s suppose that Bob finds that whenever he calls upon Bright for help (in his head, so nobody can observe this), he gets an unexpectedly high success rate in whatever he tries. Let’s further suppose that it’s believed that Dark hates kittens (and this matters more to him than trying to hide his existence), and that Daisy is Faerie’s chief veterinarian and is aware of a number of mysterious kitten deaths that she can’t rationally explain. She is afraid to discuss this with anyone, so it’s private. For numeric probabilities, you can take, say, 0.7 for each.
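For the outside observer who can see both Bob’s and Daisy’s private evidence, here is a toy pooling sketch. The shared prior, the independence assumption, and the reading of the 0.7s as each person’s posterior are all my own additions for illustration:

```python
# A toy sketch of how I, the outside observer, might pool Bob's and Daisy's
# private evidence. Heavy assumptions on my part (none are part of the setup):
# a shared prior of 0.5 that sorcerers exist, each 0.7 read as that person's
# posterior that "a sorcerer exists", and the two observations being
# independent given the truth.

def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (1 + o)

prior = 0.5
bob_posterior = 0.7    # after his private successes when calling on Bright
daisy_posterior = 0.7  # after her private run of unexplained kitten deaths

# Each posterior implies a likelihood ratio relative to the shared prior.
lr_bob = odds(bob_posterior) / odds(prior)
lr_daisy = odds(daisy_posterior) / odds(prior)

# Independent pieces of evidence multiply on the odds scale.
combined = prob(odds(prior) * lr_bob * lr_daisy)
print(f"outside observer's P(sorcerers exist) = {combined:.3f}")  # about 0.845
```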
Thanks. I am of course assuming they lack common knowledge. I understand what you are saying, but I am interested in a qualitative answer (for #2): does the fact that they have updated their beliefs according to this meta-reasoning process affect my own update on the evidence, or not?
If you don’t have a bachelor’s degree, that makes it rather unlikely that you could get a PhD. I agree with folks that you shouldn’t bother—if you are right, you’ll get your honorary degrees and Nobel prizes, and if not, then not. (I know I am replying to a five-year-old comment).
I also think you are too quick to dismiss the point of getting these degrees, since you in fact have no experience in what that involves.
That’s the standard scientific point of view, certainly. But would an Orthodox Bayesian agree? :) Isn’t there a very strong prior?
If cognitive biases/sociology provide a substantial portion of (or even all of) the explanation for creationists talking about irreducible organs, then their actual counterarguments are screened off by your prior knowledge of what causes them to deploy those counterarguments; you should be less inclined to consider their arguments than a random string generator that happened to output a sentence that reads as a counterargument against natural selection.
I’ve just discovered Argument Screens Off Authority by EY, so it seems I’ve got an authority on my side too. :) You can’t dismiss an argument even if it’s presented by untrustworthy people.
Maybe I am amoral, but I don’t value myself the same as a random person, even in a theoretical sense. What I do is recognize that in some sense I am no more valuable to humanity than any other person. But I am way more valuable to me: if I die, my utility drops to zero, and in some circumstances it can even go negative (i.e. a life not worth living). A random person’s death clearly cannot do that. People are constantly dying in huge numbers, and while the cost of each death to me is non-zero, it must be relatively small; otherwise I would easily be in negative territory, and I am not.