Google released its Ads Safety Report for 2023, which says the company blocked or removed more than 5.5 billion ads, slightly up from the prior year, and suspended 12.7 million advertiser accounts, nearly double the previous year's figure, for policy violations.
The use of generative artificial intelligence (AI), both as a tool for creating bad ads and as a means of identifying them, was the major highlight of the report.
“Similarly, we work to protect advertisers and people by removing our ads from publisher pages and sites that violate our policies, such as sexually explicit content or dangerous products. In 2023, we blocked or restricted ads from serving on more than 2.1 billion publisher pages, up slightly from 2022,” the company said in a blog post on its latest ad safety report.
“We are also getting better at tackling pervasive or egregious violations. We took broader site-level enforcement action on more than 395,000 publisher sites, up markedly from 2022,” it added.
Furthermore, Google stated that its safety teams have long used AI-powered machine learning to enforce its policies at scale. That is how, for years, the company has been able to detect and block billions of bad ads before a person ever sees them.
But while still highly sophisticated, these machine learning models have historically needed extensive training, often relying on hundreds of thousands, if not millions, of examples of violative content.
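As a rough illustration of that dependence on labelled data, the sketch below trains a conventional supervised text classifier of the kind such pipelines typically build on. The corpus, labels and blocking threshold are invented for illustration and do not reflect Google's actual systems.

```python
# Minimal sketch of a conventional supervised ad-policy classifier.
# Everything here (dataset, labels, threshold) is illustrative; it is
# not Google's actual enforcement system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In practice this corpus would hold hundreds of thousands to millions
# of labelled ad texts (1 = violates policy, 0 = compliant).
ad_texts = [
    "Guaranteed 500% returns, wire money today!",
    "Spring sale: 20% off running shoes",
]
labels = [1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(ad_texts, labels)  # quality scales with the size of the labelled set

# Score a new ad before it is served; flag it if risk exceeds a threshold.
risk = model.predict_proba(["Double your crypto in 24 hours"])[0][1]
if risk > 0.5:
    print("hold ad for review")
```

A model like this only performs well once the labelled corpus reaches the scale the report describes, which is precisely the limitation the next passage contrasts with LLMs.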
“LLMs, on the other hand, are able to rapidly review and interpret content at a high volume, while also capturing important nuances within that content. These advanced reasoning capabilities have already resulted in larger-scale and more precise enforcement decisions on some of our more complex policies,” Google stated.
“We’ve only just begun to leverage the power of LLMs for ads safety. Gemini, launched publicly last year, is Google’s most capable AI model. We’re excited to have started bringing its sophisticated reasoning capabilities into our ads safety and enforcement efforts,” it added.
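By contrast, here is a minimal sketch of what prompt-based policy review with an LLM might look like. The `call_llm` helper is a stand-in for whatever model endpoint is used (the report names Gemini, but no specific API is shown here), and the policy text is invented for illustration.

```python
# Sketch of prompt-based policy review with an LLM, in contrast to the
# supervised classifier above. `call_llm` is a placeholder, not a real API.
POLICY = "Ads must not promise guaranteed financial returns."

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM endpoint and return its reply."""
    raise NotImplementedError("wire up a real model client here")

def review_ad(ad_text: str) -> str:
    # The policy text itself, plus the model's reasoning, drives the
    # decision; no large labelled corpus is required up front.
    prompt = (
        f"Policy: {POLICY}\n"
        f"Ad: {ad_text}\n"
        "Does this ad violate the policy? Answer VIOLATION or OK, "
        "then briefly explain your reasoning."
    )
    return call_llm(prompt)
```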
Google's work on fraud and scam prevention
According to Google, in 2023, scams and fraud across all online platforms were on the rise. Bad actors are constantly evolving their tactics to manipulate digital advertising in order to scam people and legitimate businesses alike.
To counter these ever-shifting threats, the tech giant quickly updated policies, deployed rapid-response enforcement teams and sharpened its detection techniques.
“In November, we launched our Limited Ads Serving policy, which is designed to protect users by limiting the reach of advertisers with whom we are less familiar. Under this policy, we’ve implemented a ‘get-to-know-you’ period for advertisers who don’t yet have an established track record of good behaviour, during which impressions for their ads might be limited in certain circumstances,” Google said.
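Google has not published the mechanics of Limited Ads Serving; purely as an illustration, a "get-to-know-you" impression cap might look something like the following sketch, in which the probation window, daily cap and advertiser fields are all invented values.

```python
# Hypothetical sketch of a "get-to-know-you" impression limiter; the
# thresholds and fields are invented for illustration and do not
# reflect Google's actual Limited Ads Serving implementation.
from dataclasses import dataclass
from datetime import date, timedelta

RAMP_UP_DAYS = 90             # assumed probation window
DAILY_IMPRESSION_CAP = 1_000  # assumed cap for unproven advertisers

@dataclass
class Advertiser:
    first_approved: date
    impressions_today: int
    in_good_standing: bool

def may_serve(ad: Advertiser, today: date) -> bool:
    """Serve freely once the advertiser has an established track record;
    otherwise cap daily impressions during the ramp-up period."""
    established = ad.in_good_standing and (
        today - ad.first_approved > timedelta(days=RAMP_UP_DAYS)
    )
    if established:
        return True
    return ad.impressions_today < DAILY_IMPRESSION_CAP
```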
Furthermore, it stated that towards the end of 2023 and into 2024, the company faced a targeted campaign of ads featuring the likeness of public figures to scam users, often through the use of deepfakes.
Upon detecting this threat, Google created a dedicated team to respond immediately, pinpointed patterns in the bad actors’ behaviour, trained its automated enforcement models to detect similar ads and began removing them at scale. The tech giant also updated its misrepresentation policy to better enable the company to rapidly suspend the accounts of bad actors.
“Overall, we blocked or removed 206.5 million advertisements for violating our misrepresentation policy, which includes many scam tactics, and 273.4 million advertisements for violating our financial services policy. We also blocked or removed over 1 billion advertisements for violating our policy against abusing the ad network, which includes promoting malware,” Google said in its blog post.
Google's efforts on political advertising
Google has highlighted that election advertisers must complete identity verification and that all election ads must include a “paid for by” disclosure. In 2023, Google verified more than 5,000 new election advertisers and removed more than 7.3 million election ads from advertisers who had not completed verification.
“Last year, we were the first tech company to launch a new disclosure requirement for election ads containing synthetic content. As more advertisers leverage the power and opportunity of AI, we want to make sure we continue to provide people with greater transparency and the information they need to make informed decisions. Additionally, we’ve continued to enforce our policies against ads that promote demonstrably false election claims that could undermine trust or participation in democratic processes,” Google stated.
Google emphasised that when it comes to ads safety, a lot can change over the course of a year, from the introduction of new technology such as generative AI to novel abuse trends and global conflicts, and the digital advertising space has to be nimble and ready to react.
That’s why the company is focused on continuously developing new policies, strengthening its enforcement systems, deepening cross-industry collaboration and offering more control to people, publishers and advertisers.