Play Store’s AI security blocks almost one million policy-violating apps

Google’s AI-powered Play Store security has blocked almost one million policy-violating apps from reaching users.

In a blog post, Google detailed what it’s been doing to protect the billions of Android users and millions of developers creating apps for the world’s largest mobile platform.

2020 was a year when many of us sacrificed freedoms to protect not just ourselves, but those around us. Unfortunately, criminals sought to take advantage as more people relied on their connected devices than ever for work, play, and access to vital help and information.

Three new policies were also introduced to help tackle some of the challenges the world has faced in recent times:

  • COVID-19 app requirements: Google introduced specific requirements for COVID-19 apps to ensure public safety, information integrity, and privacy. Apps providing critical information, such as details about testing, had to be endorsed by official governmental entities or healthcare organisations and meet a high standard for user data privacy. 
  • News policy: Google introduced minimum requirements that apps must meet in order for developers to declare their app as a “News” app on Google Play as part of a bid to promote transparency in news publishing and counter misinformation.
  • Election support: Specific teams and processes were created across Google Play to provide additional support and adapt to the changing landscape around important elections.

In 2020, Google says it scanned over 100 billion installed apps each day to detect malware.

Krish Vitaldevara, Director of Product Management Trust & Safety at Google Play, wrote in the post:

“Our core efforts around identifying and mitigating bad apps and developers continued to evolve to address new adversarial behaviors and forms of abuse. Our machine-learning detection capabilities and enhanced app review processes prevented over 962k policy-violating app submissions from getting published to Google Play.

We also banned 119k malicious and spammy developer accounts. Additionally, we significantly increased our focus on SDK enforcement, as we’ve found these violations have an outsized impact on security and user data privacy.”

Google says it has also enhanced its processes for when enforcement action is required against developers, to help build trust. More relevant information is now provided about why action has been taken, which Google claims has resulted in a “significant reduction” in appeals and increased developer satisfaction.

Over the coming year, Google says Android developers can expect further improvements to the speed and quality of communication in addition to more initiatives to engage and elevate trusted devs.

Despite the clear progress Google has made in tackling malicious apps, around 29 percent of known malware-infected apps are still getting through. As Developer reported this week, this “mobile malware pandemic” disproportionately affects emerging markets, driven predominantly by the appetite for low-end Android devices.

(Image Credit: Google)

