AI outperforms other methods of finding fraudulent apps on Google Play by a long shot.

Pam Baker

July 1, 2019


Artificial intelligence (AI) outperforms other methods in finding fraudulent apps on Google Play, according to a study done by the University of Sydney and CSIRO’s Data61.

Data61 was officially formed in 2016 when CSIRO, Australia’s innovation catalyst, merged its Digital Productivity flagship with the Australian government’s National ICT Australia Ltd (NICTA). Among the dangerous apps the AI found, the report cites “1,565 potential counterfeits asking for at least five additional dangerous permissions than the original app and 1,407 potential counterfeits having at least five extra third-party advertisement libraries.”
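That permission criterion can be made concrete. Below is a minimal, hypothetical Python sketch (not the researchers’ code) of such a flagging rule: compare a candidate app’s requested permissions against the original’s and count how many of the extras fall into Android’s “dangerous” category. The app names, permission lists and abbreviated dangerous-permission set are invented for illustration; only the five-permission threshold comes from the report.

    # Hypothetical permission lists, as they might be extracted from each app's manifest.
    original_app = {"INTERNET", "ACCESS_NETWORK_STATE"}
    candidate_app = {"INTERNET", "ACCESS_NETWORK_STATE", "READ_SMS", "READ_CONTACTS",
                     "RECORD_AUDIO", "ACCESS_FINE_LOCATION", "READ_CALL_LOG"}

    # Abbreviated subset of Android's documented "dangerous" permissions.
    DANGEROUS = {"READ_SMS", "READ_CONTACTS", "RECORD_AUDIO",
                 "ACCESS_FINE_LOCATION", "READ_CALL_LOG", "CAMERA"}

    extra_dangerous = (candidate_app - original_app) & DANGEROUS
    if len(extra_dangerous) >= 5:  # threshold described in the report
        print(f"Potential counterfeit: {len(extra_dangerous)} extra dangerous permissions")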

The researchers used AI to evaluate apps on Google Play and identify counterfeits. They found that the AI “outperforms many baseline image retrieval methods for the task of detecting visually similar app icons.” The researchers say the AI delivered 8-12% higher precision rates than previous, non-AI-driven efforts.
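The study’s exact model isn’t reproduced here, but the general technique of comparing icons by neural-network embeddings can be sketched. The following hypothetical Python example, assuming PyTorch and torchvision are available, embeds two icons with a pretrained ResNet and flags them when their cosine similarity is high; the 0.9 threshold and file names are arbitrary illustrations, not values from the study.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Pretrained CNN with the classification head removed, used as an icon embedder.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = torch.nn.Identity()
    model.eval()

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def icon_embedding(path):
        # Return a feature vector for one icon image.
        with torch.no_grad():
            return model(preprocess(Image.open(path).convert("RGB")).unsqueeze(0)).squeeze(0)

    similarity = torch.nn.functional.cosine_similarity(
        icon_embedding("original_icon.png"), icon_embedding("candidate_icon.png"), dim=0)
    if similarity > 0.9:  # arbitrary threshold for illustration
        print("Visually similar icon - check permissions and ad libraries next")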

The prevalence of fraudulent apps is alarming to enterprises, which worry about the apps their employees install on dual-purpose personal and company-owned devices.


OneSpan’s Sam Bakken

“We just can’t trust that Apple and Google can keep us safe from every mobile threat, and this study is just one example. The official app stores are the ideal distribution channel for criminals as they attempt to infect as many users as possible with their malware, and they’ll spare no expense in getting their apps published,” said Sam Bakken, senior product marketing manager at OneSpan.

Clearly, malware developers are finding ways around Google’s vetting processes, and that presents a major problem for legitimate developers, too.

“Ideally, more and more developers of sensitive apps – like those for banking and payments – will leverage in-app protection technology that ensures those apps are safe for consumers to use and are protected against other malicious software that might reside on their users’ devices,” said Bakken.

Bakken recommends app shielding, for example, which detects malicious behavior targeting the app it protects and blocks that activity before it starts.

“As an added bonus, app shielding can also detect when an app has been repackaged – a method used in counterfeiting operations such as this one – so that the forged app cannot execute if it’s been tampered with,” said Bakken.
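Repackaging detection itself usually happens at runtime inside the protected app, but the underlying idea, checking whether the package was re-signed with a different certificate, can be illustrated offline. The sketch below is a simplified assumption rather than any vendor’s implementation: it hashes the v1 signature block inside an APK (which is just a ZIP archive) and compares it with a digest recorded from the developer’s genuine release build.

    import hashlib
    import zipfile

    def signing_block_digest(apk_path):
        """SHA-256 of the first v1 signature block shipped inside the APK."""
        with zipfile.ZipFile(apk_path) as apk:
            blocks = [n for n in apk.namelist()
                      if n.startswith("META-INF/") and n.endswith((".RSA", ".DSA", ".EC"))]
            if not blocks:
                raise ValueError("no v1 signature block found")
            return hashlib.sha256(apk.read(blocks[0])).hexdigest()

    KNOWN_GOOD = "..."  # placeholder: digest from the developer's genuine signing certificate
    if signing_block_digest("suspect.apk") != KNOWN_GOOD:
        print("APK was re-signed - likely repackaged")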


Juniper Networks’ Laurence Pitt

There are other actions consumers, employers and MSSPs can take to prevent harm from rogue apps; however, most security experts think more can and should be done from multiple angles.

“There is clearly an issue with applications on Google Play and the Apple App Store, to a lesser degree, suffering from counterfeit applications. What would be helpful is a feature in the operating system that, periodically, alerts the user to installed applications that have not been used for a given period of time and makes the recommendations they could be uninstalled. It could even include data on which apps have been withdrawn from the store. We have ScreenTime for iOS; why not OldAppRemove?” said Laurence Pitt, strategic security director at Juniper Networks.

While this result from using AI on app security is a strong plus for security teams, AI isn’t always so friendly and helpful.

A malicious developer used AI in a deepfake app to turn photos of women into nudes, essentially creating nonconsensual porn. The app was free, easy to use and fast. Fortunately, the developer pulled the app shortly after releasing it. The developer said he created the app for “entertainment purposes,” but as Sigal Samuel reported in her article in Vox, the AI-powered app “had all the ingredients necessary to turn an unsuspecting woman’s existence into a living hell.” Potential targets include female CEOs and other high-ranking officials in thousands of organizations, and apps like this could be used to pressure, manipulate and blackmail women leaders. That makes it yet another threat for security professionals to face.

Other deepfake videos and apps, all powered by AI, are already posing serious threats to organization leaders. One of the best-known examples is a deepfake video of Facebook’s Mark Zuckerberg, an AI master in his own right.

Clearly, as this study shows, AI is an important tool for cybersecurity pros; however, it’s just as clear that AI-powered attacks could become one of the most serious threats cybersecurity pros have ever had to combat. If MSSPs aren’t already working on this problem, they should be. The future will require greater skills and smarter strategies from MSSPs to protect their clients.


About the Author(s)

Pam Baker

A prolific writer and analyst, Pam Baker’s published work appears in many leading print and online publications including Security Boulevard, PCMag, Institutional Investor magazine, CIO, TechTarget, Linux.com and InformationWeek, as well as many others. Her latest book is “Data Divination: Big Data Strategies.” She’s also a popular speaker at technology conferences as well as specialty conferences such as the Excellence in Journalism events and a medical research and healthcare event at the NY Academy of Sciences.
