Facebook completely sucks at removing even the most obvious spam. It also disincentivizes reporting—as you mentioned, reporting spam takes many clicks and the probability of a useful outcome is low, while blocking only takes a few clicks and solves the problem of the specific spammer forever, but only for you.
By “even the most obvious spam” I mean something like:
the only English comment in a non-English thread, often in a non-English discussion group;
on a completely unrelated topic, e.g. asking people to click on something;
several identical copies posted in the same comment thread (e.g. someone posts something, they get five replies, and this spam is posted as a reply to each of the five replies);
...in short, something that anyone with an IQ of 80 would immediately recognize as spam, even if they couldn’t speak the language...
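The three signals above are mechanical enough to sketch in code. This is a toy illustration of the heuristics described in the list, not Facebook's actual system; the word list, thresholds, and function names are all my own assumptions.

```python
# Toy heuristics for the "obvious spam" signals described above.
# Everything here (word list, thresholds) is an illustrative assumption,
# not a real spam filter.

ENGLISH_STOPWORDS = {"the", "a", "to", "click", "here", "free", "and", "of", "is"}

def looks_english(text: str) -> bool:
    """Crude language check: does the comment use common English words?"""
    words = text.lower().split()
    if not words:
        return False
    hits = sum(1 for w in words if w in ENGLISH_STOPWORDS)
    return hits / len(words) > 0.2

def is_obvious_spam(comment: str, sibling_comments: list[str],
                    thread_is_english: bool) -> bool:
    # Signal 1: the only English comment in a non-English thread.
    language_mismatch = looks_english(comment) and not thread_is_english
    # Signal 2: identical copies posted as replies all over the same thread.
    duplicated = sibling_comments.count(comment) >= 2
    # Signal 3: off-topic call to action ("click on something") with a link.
    call_to_action = "click" in comment.lower() and "http" in comment.lower()
    return language_mismatch or duplicated or call_to_action
```

Even heuristics this crude would flag the examples in the list, which is what makes the failure to remove them so striking.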
And yet, whenever I report it, it takes a dozen clicks, then I get a notification saying “we have received your report”, and the next day I get a notification saying “we have examined your report and concluded that the reported comment does not violate our community guidelines”.
I strongly doubt that any human has ever seen the report. Most likely the entire thing is automated, and it probably takes many reports of the same content before someone actually looks at it. But given how many clicks it takes to report spam, most people don’t bother.
According to some metric, this is probably a great success. I imagine something like: “Since the introduction of our Great Anti-Spam Algorithm 3.0, the number of reported spam comments has decreased by 93%, so it clearly works!”
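The arithmetic behind that imagined metric is worth spelling out: reports received depend on both the amount of spam and the fraction of users willing to report it, so raising the reporting friction alone can produce a dramatic "improvement". The numbers below are invented for illustration.

```python
# Toy illustration (my numbers, not Facebook's) of why "reported spam
# down 93%" can mean reporting got harder, not that spam decreased.

def expected_reports(spam_count: int, report_rate: float) -> int:
    """Reports received = actual spam * fraction of users who bother."""
    return round(spam_count * report_rate)

before = expected_reports(10_000, 0.30)  # easy one-click reporting
after = expected_reports(10_000, 0.02)   # dozen-click reporting flow
drop = 1 - after / before                # ~93% fewer reports, same spam
```

The spam count never changes in this sketch; only the willingness to file reports does.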
Yes, it’s amazing how bad Facebook is at spam detection.
There was a time when I was getting roughly one spam message per day, with a notification, from people I don’t know. Meanwhile, actually important messages, like a journalist wanting to contact me, were often sorted so that they produced no notification and required clicking on “message requests”.
I feel like I have gotten less spam in the last year, so they are improving somewhat, but it’s still not a good state of affairs.
I have a vague memory, which might or might not be accurate, that the humans who do the spam review don’t see the context of the message.
That would explain a thing or two.
I suppose that, from Facebook’s perspective, blocking is the right solution (because it costs them nothing). Everyone moderates their own statuses; groups are moderated by their admins.
The report functionality is probably there only for political reasons: yes, it exists, and yes, it can remove porn or literal Nazi messages (at least the ones written in English).