More Examples of Facebook Being a Cesspool

A couple of years ago I did a video about how Meta/Facebook operates a cesspool where they look the other way as international criminals exploit unsuspecting users on their platform.

Here’s another example: Take a look at any public Facebook post of a car wreck or other local tragic event and you’ll often find posts like this:

This, of course, links to a scammer account that will do who-knows-what with a victim’s account or personal information.

As the dutiful netizen that I am, I decided to report this post to Facebook to see what happens.

It took over a week for them to review it, but guess what? Facebook’s moderators allowed the scam post, and its author, to continue exploiting people on the platform. What’s crazy is how formulaic these posts are and how easy it would (conceivably) be to block them programmatically. But they allow this to persist.
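To illustrate how simple that programmatic blocking could be: the sketch below flags posts that match rigid scam templates. The exact wording of the real scam posts isn’t reproduced here, so the patterns are illustrative stand-ins, not Facebook’s actual data or any real detection system.

```python
import re

# Illustrative patterns only -- real scam posts follow similarly rigid
# templates (e.g. "full video" bait plus an off-platform link).
SCAM_PATTERNS = [
    re.compile(r"full\s+(hd\s+)?video", re.IGNORECASE),
    re.compile(r"(watch|see)\s+(the\s+)?(video|footage)", re.IGNORECASE),
    # Any link that points away from facebook.com is suspicious in this context.
    re.compile(r"https?://(?!(www\.)?facebook\.com)\S+", re.IGNORECASE),
]

def looks_like_scam(post_text: str, min_hits: int = 2) -> bool:
    """Flag a post when it matches at least `min_hits` template patterns."""
    hits = sum(1 for pattern in SCAM_PATTERNS if pattern.search(post_text))
    return hits >= min_hits

# Made-up example posts:
print(looks_like_scam("So sad! Watch the full HD video: http://badsite.example/clip"))
print(looks_like_scam("Thoughts and prayers to the family."))
```

Even a crude filter like this catches the formulaic posts; a platform with Meta’s resources could do far better.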

Why is that? The answer is shareholders.

Facebook reports a metric called “family daily active people” to shareholders: the number of active users across its platforms (Facebook, Instagram, WhatsApp, etc.). They have likely hit critical mass for new users, so the activity of existing users is now the metric that matters.

On this post about a recent tragic accident, 8 of the 24 comments are these spam posts – a full third. Given the volume of spam I see, it’s not out of the realm of possibility that scam accounts make up a significant portion of the “family daily active people.” A more aggressive effort to remove them would eat into a key shareholder metric.

With no consequences for inaction and money to lose by acting, why deal with the problem? For Meta it’s far more profitable to look the other way. Just like UPS is doing with the recent scam I uncovered coming from one of their local retail outlets.