The other day we noted that 21 apps which had infected 50,000 Android devices were pulled from the market, and we’ve since learned that the problem is bigger, impacting over 250,000 phones and 58 apps. Google has released a statement about this, and they don’t come off as overly cautious. They note that they “believe that the only information the attacker(s) were able to gather was device-specific (IMEI/IMSI, unique codes which are used to identify mobile devices, and the version of Android running on your device).” Of course, they also state that other data could have been accessible, and they are patching the security exploit and using a remote kill feature to remove the viral software. To me, this is too little, too late.
First off, this was a known security flaw. Google patched it in Android 2.2.2, but no patch was pushed out for devices running older versions, so the hole remains on them. Had those devices been patched, this would have been avoided. Secondly, removing these apps may not remove the virus. The virus is known to have the ability to fetch additional downloads, so in theory it could have secretly installed another app that works alongside the malicious app in question, one that would be harder to detect and would not be removed by the kill switch.

This all speaks to Google’s actual protection of their Market, which is to say it’s virtually nonexistent. These apps were found by a third party who detected the virus and reported it to Google; had it not been for them, the apps would still be floating around. Interestingly, some of the apps the malware was packaged in were former paid apps, ripped off and illegally repackaged with the malicious code added, yet still available in the Market like legitimate apps. There’s almost nothing less Google could be doing here. Their current approach lets attackers spread malware, and only once it’s detected does Google spring into action to remove the offending apps. Of course, that’s after your data has been transmitted. The logical thing to do is scan software before letting it into the Market, but then Google would be placing a limit on their ‘openness’, and they seem to prefer letting malicious code circulate over adding any restriction, however slight, to developer marketplace access. In weighing protection of the end user against open access to the Market, even a brief review period for an app is apparently viewed as too great a restriction to justify the benefit of protecting all users. I don’t see how you can weigh that trade-off and end up where things currently stand.
And to be clear, this may ultimately be a pretty minor virus scare, but it doesn’t appear to be serving as a wakeup call, and that’s the point. The market remains open to these types of vulnerabilities so long as Google’s sole solution for protecting consumers is a kill switch that tries to undo the harm after the fact; obviously, that doesn’t retrieve any sensitive data that may already have been transmitted from your phone. It’s time to wake up.