Google said Wednesday it suspended more than 39 million advertiser accounts on its platform last year, more than triple the previous year's total, as the company intensified efforts to combat advertising fraud with artificial intelligence tools.
The search giant has leveraged large language models (LLMs) to detect fraudulent advertisers quickly, examining signals such as business impersonation and invalid payment information. With these systems, Google halted the overwhelming majority of fraudulent ad accounts before they could publish a single ad, the company said.
In 2024 alone, Google rolled out more than 50 LLM-based enhancements to strengthen its safeguards across its platforms. Despite the increased reliance on artificial intelligence, Google's General Manager for Ads Safety, Alex Rodriguez, emphasized that human judgment still plays a crucial role in overseeing and refining the process.
According to Rodriguez, a cross-company team of more than 100 experts, drawing personnel from Ads Safety, Trust and Safety, and researchers from DeepMind, was assembled specifically to investigate deepfake advertisements impersonating public figures, a particularly deceptive and harmful form of ad fraud.
Their combined effort led to the introduction of more than 30 updates to ad policies and technical protection measures, resulting in the suspension of roughly 700,000 fraudulent accounts associated with deepfake scams. These actions translated into a 90% decrease in public complaints regarding the problematic ads, according to Google.
In the United States alone, Google suspended 39.2 million accounts and removed approximately 1.8 billion ads in 2024, citing violations such as trademark misuse, false healthcare claims, deceptive financial offers, and ad network abuse. India, the world's most populous nation and Google's second-largest user market, saw substantial enforcement as well, with 2.9 million accounts suspended and more than 247 million ads removed. The leading policy violations there involved financial services misconduct, trademark misuse, personalized advertising infractions, ad network fraud, and unauthorized gambling ads.
In its global anti-fraud push, Google reported suspending nearly 5 million accounts specifically for scams and removing about half a billion scam-related advertisements last year. The company also applied rigorous verification measures to more than 8,900 new political advertisers during 2024, a year in which elections involved half of the world's population, and removed 10.7 million election-related ads. Rodriguez noted, however, that election-related ads make up only a small fraction of Google's advertising ecosystem and thus have minimal impact on its broader safety metrics.
All told, Google's enforcement blocked 5.1 billion harmful advertisements and removed 1.3 billion harmful webpages last year. Both figures declined from 2023, when the company blocked 5.5 billion ads and took action against 2.1 billion webpages. Google attributed the drop to improved early detection, with malicious accounts increasingly blocked before they could become fully operational on the advertising network.
In addition, Google placed restrictions on around 9.1 billion ads throughout the year, reflecting continued vigilance over potential policy violations.
Acknowledging concerns about fairness and transparency in mass account suspensions, Rodriguez said Google has strengthened its appeal process so that human reviewers provide oversight and a clear rationale behind enforcement actions. He conceded past shortcomings in transparency and promised ongoing improvements: clearer communication with advertisers about why their accounts were suspended, and more detailed guidance on policy compliance.