Google may have finally found an application for large language models (LLMs) that even AI skeptics can get behind. The company just released its 2024 Ads Safety Report, confirming that it used a set of newly upgraded AI models to scan for bad advertising. The result is a huge increase in suspended spammer and scammer accounts, which means fewer malicious ads in front of your eyes.
Emphasizing that it has not been sitting idle over the past year, Google says it deployed more than 50 enhanced LLMs to help enforce its advertising policies in 2024. About 97 percent of Google's ad enforcement involved these AI models, which the company says require less data to flag suspected violations, making it possible to keep pace with rapidly evolving scams.
Google says that as a result of these efforts, 39.2 million US advertiser accounts were suspended for fraudulent activity in 2024, more than three times the 12.7 million accounts suspended in 2023. Common reasons for suspension include ad network abuse, misuse of personal data, false medical claims, trademark violations, or a combination of these offenses.
Despite these efforts, some bad ads still get through. Google says it removed 1.8 billion bad ads in the United States and 5.1 billion globally. That is a slight decrease from the 5.5 billion ads removed in 2023, but Google suggests it had fewer ads to remove because it stopped fraudulent accounts before they could spread. The company claims that most of the 39.2 million suspended accounts were caught before they ran a single ad.