I was surprised to see mention of MIRI and Existential Risk. That means that they did a little research. Without that, I’d be >99% sure it was a scam.
I wonder if this hints at their methodology. Assuming it is a scam, I’d guess they find small but successful charities, then find small tight-knit communities organized around them and target those communities. Broad, catch-all nets may catch a few gullible people, but if enough people have caught on then perhaps a more targeted approach is actually more lucrative?
Really, it’s a shame to see this happen even if no one here fell for it, because now we’re all a little less likely to be receptive to weird requests/offers. I suspect it’s useful for EAs to be able to make random requests of specific people. For example, I can imagine needing a couple hours or days of consulting work from a domain expert. In that situation, I’d be tempted to PM someone knowledgeable in that area, and offer to pay them for some consulting work on the side.
I can actually think of 2 instances where this community has done things like this out in the open (not PM), so it wouldn’t surprise me if there are occasional private transactions. (I’d link to examples, but I’d rather not help a potential scammer improve on their methods.) Perhaps a solution would be to route anything that looks suspicious through Bitcoin, so that the transaction can’t be cancelled? I wouldn’t want to add trivial inconveniences to non-suspicious things, though.
Yes, scammers do the homework needed for this kind of project. I know someone who lost around $8,000 to a scheme like this, via a letter that showed thorough familiarity with my friend’s interests. However, when I saw the letter (after the money was already lost), I told him it should have been evident from the beginning that it was a scam.
It does mean that not-scams should find ways to signal that they aren’t scams, and that the absence of such a signal is itself strong evidence of a scam.
Surely scammers will be more motivated to find good signals, and will have more opportunity to experiment with what works and what doesn’t. Effectively signaling that you are a non-scam should be a hallmark of a scam… which is why smart people like us need a long thread like this to explain to us how the scam works.
It might not be easy to figure out good signals that can’t be replicated by scammers, though. More importantly, and what I think MarsColony_in10years is getting at: even if you can find hard-to-copy signals, they are unlikely to be without costs of their own, and it is unfortunate that scammers are forcing these costs on legitimate charities.
Just mentioning, but it’s a good policy to avoid feeling good about figuring out answers you already knew: http://lesswrong.com/lw/il/hindsight_bias/