The internet is a breeding ground for scams, and among the most powerful and destructive are those built around celebrity faces. These “celeb-bait” scams trick people into clicking on ads that appear to be endorsed by well-known figures, only to send them to phishing sites or other schemes. Meta, which owns Facebook and Instagram, has been criticized for years for not doing enough to stop these scams. But its recent deployment of facial recognition technology is starting to turn things around.

Scammers frequently use celebrity images to make their advertisements look legitimate. These ads may peddle dodgy investments or ask users for personal information and money. The challenge for Meta is that not all celebrity advertisements are fraudulent: many are genuine partnerships, and the fake ones are crafted to be virtually indistinguishable from the real thing.
To fight back, Meta began using facial recognition as part of its automated ad review process. When an ad raises suspicion, whether from automated systems or from user reports, the platform now compares the faces in the ad against the official Facebook and Instagram profile photos of public figures. If a match is found and the ad is confirmed to be a scam, Meta blocks it before anyone has a chance to see it.
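For illustration only, here is a minimal sketch of what such a review step could look like. The helpers (`face_model`, `scam_classifier`, `enrolled_figures`), the similarity threshold, and the overall structure are assumptions for the sake of the example, not Meta's actual internals.

```python
from dataclasses import dataclass
from math import sqrt

MATCH_THRESHOLD = 0.85  # assumed similarity cutoff; not a published figure


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two face embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0


@dataclass
class ReviewResult:
    blocked: bool
    matched_figure: str | None = None


def review_flagged_ad(ad_image, ad_text, enrolled_figures, face_model, scam_classifier) -> ReviewResult:
    """Compare faces in a flagged ad against reference embeddings of enrolled
    public figures; block only when a face matches AND the ad looks like a scam.

    `enrolled_figures` maps a public figure's name to precomputed embeddings of
    their official profile photos (a hypothetical data structure)."""
    for face in face_model.detect_faces(ad_image):
        embedding = face_model.embed(face)
        for name, references in enrolled_figures.items():
            similarity = max(cosine_similarity(embedding, ref) for ref in references)
            if similarity >= MATCH_THRESHOLD and scam_classifier.is_scam(ad_text, ad_image):
                return ReviewResult(blocked=True, matched_figure=name)
    # Per Meta's stated policy, any face data generated during the check
    # would be deleted immediately, whether or not a match was found.
    return ReviewResult(blocked=False)
```

The key design point the article describes is that a face match alone is not enough: the ad must also be judged a scam before it is blocked, which is why both conditions are checked together in the sketch.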
Monika Bickert, Meta’s VP of Content Policy, explains that this technology is used only to detect scams. Any facial data generated in the process is deleted immediately, whether or not a match is found, and none of it is stored or reused for any other purpose.
Meta initially rolled out the system to a small set of public figures and celebrities. The results were positive, with faster and more accurate scam detection. Following that early success, Meta expanded the program to cover almost half a million public figures, focusing on regions heavily targeted by scam ads, such as the EU, the UK, and South Korea.
Public figures enrolled in the system receive in-app notifications and can opt out at any time through their settings. Meta is also rolling out these protections on Instagram for broader coverage across its services.
Since launching the program, Meta has observed a 22% global reduction in user reports about celeb-bait ads. The company’s automated systems now flag and remove more than twice as many scam ads as before.
Scams don’t stop at phishing ads; scammers also target accounts. Having a Facebook or Instagram account taken over by a hacker or a phishing operation can be a nightmare. To make recovery easier and keep accounts safe, Meta has rolled out video selfie verification in the UK, the EU, and South Korea.
This voluntary feature lets users upload a short video selfie, which is matched against the profile pictures on their account. The data is encrypted, used solely for verification, and deleted immediately afterward. The process, Meta says, takes about a minute and offers a more secure alternative to sharing ID documents, which are easier for hackers to fake.
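A rough sketch of how such a check might work is below, reusing the hypothetical `cosine_similarity` helper and `face_model` object from the earlier example. The frame sampling, threshold, and cleanup step are assumptions meant only to illustrate the described flow of "match against profile photos, then discard the data".

```python
VERIFY_THRESHOLD = 0.8  # assumed cutoff; not a published figure


def verify_video_selfie(video_frames, profile_embeddings, face_model) -> bool:
    """Compare faces sampled from an uploaded selfie video against embeddings
    of the account's existing profile photos (all names are hypothetical)."""
    # Embed every face found across the sampled frames of the selfie video.
    selfie_embeddings = [
        face_model.embed(face)
        for frame in video_frames
        for face in face_model.detect_faces(frame)
    ]
    try:
        scores = [
            cosine_similarity(selfie, profile)
            for selfie in selfie_embeddings
            for profile in profile_embeddings
        ]
        return max(scores, default=0.0) >= VERIFY_THRESHOLD
    finally:
        # Mirror the stated policy: the selfie data is used once for
        # verification and then erased.
        selfie_embeddings.clear()
```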
Facial recognition is a sensitive topic, particularly when deployed by large tech firms. Meta’s track record here is far from spotless: in 2021 it shut down its photo-tagging facial recognition system after legal challenges and privacy concerns, and it has paid significant settlements over the misuse of biometric data.
To avoid repeating those mistakes, Meta has built strong privacy safeguards into the new system. Facial recognition is used only for specific purposes, namely scam detection and account recovery, and the data it generates is deleted immediately after use. Public figures and users remain fully in control of whether they take part.
Meta also worked with regulators in the UK and the EU, regions with some of the strictest privacy rules in the world. After demonstrating that the system follows privacy-by-design principles and that participation is always voluntary, the company secured approval to proceed.
Meta’s use of facial recognition is already having a visible effect. Complaints about fake celebrity advertisements are down, public figures are better protected from having their faces misused, and account recovery is faster and more secure with video selfie verification.
Nevertheless, the battle is far from over. Scammers are resourceful and constantly evolving. David Agranovich, Meta’s director of global threat disruption, acknowledges that some scams will continue to slip through and that new threats will keep emerging.
Meta’s gradual rollout illustrates both the promise and the risk of facial recognition. By limiting its use to scam detection and account recovery, the company is trying to strike a balance between protecting users and respecting their privacy in an increasingly dangerous online environment.



