For years, large digital platforms have tried to maintain an impossible balance: proclaiming that they fight fraud while continuing to make money, a lot of money, from it. The story they have repeated rests on a very convenient premise: that they operate like a telephone carrier and therefore cannot be held responsible for what others do on their lines.
But that comparison, so useful to them, never withstood even minimally serious scrutiny. Not because they must detect absolutely everything, but because they do have the technical capacity and internal knowledge to identify obvious patterns of abuse and to act when there is real will to do so. In the era of artificial intelligence, pretending that fraud is impossible to detect is completely absurd.
It is true that the digital advertising ecosystem is gigantic and that no one can expect absolute supervision over millions of ads. However, the problem is not omniscience but prioritization. These companies are capable of analyzing user behavior down to the last millisecond to maximize their income, yet become willfully short-sighted when that same analysis reveals clear signs of fraud.
Google has known for more than a decade that fake locksmith ads, aimed at people in emergency situations, generated constant scams. It was aware of the anomaly in their conversion rates and the suspicious patterns in their bidding.
Even so, its reaction was systematically slow, reactive and insufficient. We are not talking about ambiguities or gray areas: we are talking about repeated, known and documented fraud.
TikTok could allege the difficulty of distinguishing between aggressive marketing and organized scamming in an environment with enormous volumes of content. But that difficulty becomes an alibi when fake financial gurus, miraculous investments and non-existent products proliferate and remain active for weeks despite multiple complaints.
The platform itself boasts of automatic systems capable of detecting viral trends in minutes and of analyzing thousands of internal signals to predict behavior. If it can accurately identify a dance, a meme or a consumption pattern, it can also detect that an ad is using celebrity images without permission or promising impossible returns. The difference is not technical; it is strategic.
Meta, for its part, has been promising improvements to its verification mechanisms for years, but its record shows that it has not prioritized the fight against fraud. Fake cryptocurrency campaigns, brand imitations, non-existent investments and miracle products have circulated massively on Facebook and Instagram despite warnings from regulators and national bodies.
The company may argue that algorithms do not always distinguish between incompetence and deliberate deception, but when an advertiser racks up hundreds of complaints, impersonates third parties or belongs to networks of repeat offenders, the "hard to detect" excuse evaporates. What remains is a platform that decides not to cut off a source of income, regardless of whether what lies behind it is a scam, electoral manipulation or genocide.
Some advocates of these companies warn of the risks of overly strict regulation: the possible removal of legitimate ads, a brake on innovation or excessive compliance burdens. But that argument ignores an obvious fact: self-regulation failed precisely because platforms never had real incentives to act diligently.
When a country requires verifiable identification in sensitive sectors, results improve immediately. When there is no obligation, diligence decreases. The new European regulations do not ask for miracles or infallibility: they ask for reasonable action in the face of clear signs of abuse. They do not require platforms to detect everything, only to stop ignoring the obvious.
It is true that fraud evolves rapidly, often faster than control mechanisms. But that does not exempt from responsibility those who profit directly from the traffic these frauds generate. No other economic sector is allowed to profit from harmful activities by claiming that they are difficult to control.
If a bank detects suspicious activity, it takes action. If a store receives counterfeit products, it removes them. If a media outlet publishes a misleading advertisement, it answers in court. Only digital platforms had managed to establish the idea that their scale made them separate entities, immune to the standards applicable to the rest of the economic fabric. Europe has just reminded them that their size does not exempt them: it obliges them.
The new European legislation will not eradicate all fraud or turn the internet into a walled garden. But it introduces an essential principle: if you profit from misleading ads and don’t act when you have sufficient evidence, you will pay for it.
It is not about holding the platforms responsible for everything that happens on them, but rather demanding consistency between what they know and what they do. They cannot boast of algorithmic intelligence when they want to sell advertising and feign clumsiness when they must protect their users.
In the end, what falls apart is not a technical model but a legal fiction. The platforms were never innocent, and no one asked them to be: they were only asked to assume the responsibilities that correspond to any relevant economic actor.
Now, finally, they will have to decide whether to continue being machines for monetizing any behavior, however harmful, or to behave like the critical infrastructure they have claimed to be for years. Indifference, from now on, will have a price.
Enrique Dans is Professor of Innovation at IE University.
