
Comparing the comparatives

An unprecedented number of malware variants, targeted DDoS malware aimed at Gmer's and Joe Stewart's sites, Web Attacker vulnerability-based malware distribution, mini downloaders, Brazilian malware mobs, botnet C&Cs completely out of control, an ever-increasing use of rootkit techniques in new malware samples… As Gartner puts it:
By the end of 2007, 75% of enterprises will be infected with undetected, financially motivated, targeted malware that evaded their traditional perimeter and host defenses. The threat environment is changing — financially motivated, targeted attacks are increasing, and automated malware-generation kits allow simple creation of thousands of variants quickly — but our security processes and technologies haven't kept up.

This situation does not seem to be acknowledged by many. It's business as usual at magazines and IT publications, which keep on comparing and reporting simple detection rates. Unfortunately malware has evolved, and these findings are now based on old, limited views of reality. Some magazines simply scan through gigabytes of badware they've collected from the Internet… fast and easy, but misleading at best. Others go through the trouble of outsourcing the tests to professional organizations, such as AV-Test.org. But most of these tests are still based on scanning gigabytes of (more reliable) samples, or simply on measuring how long it takes vendors to add detection to their respective signature files. Again, misleading at best. Some even go as far as creating malware specifically for the test, with the sole objective of fooling antivirus detection… this is the sensationalist, yellow journalism of the industry.

OK, let's focus on signature and heuristic detection for a moment, but let's take a somewhat more realistic view of it. Paul Laudanski's CastleCops MIRT (Malware Incident Reporting and Termination) and Tom Shaw are doing an excellent job of tracking newer malware. Since December 2nd, 2006, a total of 672 samples of what could very well be considered "newly/recently created malware" have been scanned using the VirusTotal service. The results are pretty amazing: the average detection rate across all antivirus engines is only 30%.

Click here to view Tom's hourly updated graph or here to see the detailed information per AV engine. The worst performer detects only 5% of the 672 samples submitted to date.
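The arithmetic behind these figures is easy to reproduce. Here is a minimal Python sketch, assuming a hypothetical CSV export of the scan results (one row per sample, a 'sample' ID column, and one column per engine holding 1 or 0 for detected/missed); the file name and column layout are illustrative assumptions, not an actual MIRT export:

```python
import csv

def detection_rates(csv_path):
    """Tally per-engine and average detection rates from scan results.

    Assumes a hypothetical CSV with one row per scanned sample, a
    'sample' ID column, and one column per engine holding "1"
    (detected) or "0" (missed).
    """
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        engines = [name for name in reader.fieldnames if name != "sample"]
        hits = {engine: 0 for engine in engines}
        total = 0
        for row in reader:
            total += 1
            for engine in engines:
                if row[engine] == "1":
                    hits[engine] += 1

    rates = {engine: count / total for engine, count in hits.items()}
    average = sum(rates.values()) / len(rates)
    return rates, average

rates, average = detection_rates("mirt_scan_results.csv")
for engine, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{engine:20} {rate:7.1%}")
print(f"average detection rate across all engines: {average:.1%}")
```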

Conclusion? The fact remains that traditional engines are insufficient against new malware. It's apparent that if you want protection nowadays you cannot rely on signatures and heuristics alone, regardless of how "leading edge" you're told they are. The use of behavioural analysis and other proactive techniques is an absolute must. Many leading vendors are finally starting to implement behavioural technologies in their solutions, and that is A-Good-Thing™.
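To make the distinction concrete, behavioural analysis judges what a program does rather than what its bytes look like, so a brand-new variant with no signature can still be stopped by the actions it performs. The following is a toy sketch in Python; the event names, weights and threshold are invented for illustration and are not taken from any real product:

```python
# Toy behaviour-based scoring: instead of matching file contents against
# signatures, score a running process by the actions it performs.
# All event names and weights below are invented for illustration.
SUSPICIOUS_WEIGHTS = {
    "write_autorun_registry_key": 3,   # persistence
    "inject_into_other_process": 4,    # stealth / rootkit-like behaviour
    "disable_security_service": 4,
    "connect_to_irc_server": 2,        # classic botnet C&C channel
    "mass_smtp_connections": 3,        # spam engine
}

BLOCK_THRESHOLD = 6  # combined score at which execution would be blocked

def behaviour_verdict(events):
    """Return (score, verdict) for a sequence of observed process events."""
    score = sum(SUSPICIOUS_WEIGHTS.get(event, 0) for event in events)
    return score, ("block" if score >= BLOCK_THRESHOLD else "allow")

# A never-before-seen sample still trips the same behaviours:
trace = ["write_autorun_registry_key", "inject_into_other_process"]
print(behaviour_verdict(trace))  # -> (7, 'block')
```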

The next question is how to measure the level of protection of a reactive-plus-proactive solution. And most importantly, how do you do it professionally? Has anybody gone through the trouble of actually executing real samples to see if the security solutions are able to block them? Because that's the only true indicator of whether a user is protected by xyz solution or not… The only answer I've been able to get from magazines and professional testers so far is that "it's too expensive".

It's about time anti-malware testing evolved as well.
One constructive proposal would be to report on a smaller testbed of real samples, but with more in-depth testing of both reactive and behavioural/proactive blocking technologies. Some questions that should be evaluated are, for example: does the product block a sample when it is actually executed, rather than merely scanned on demand? Does it stop the payload once the sample is running? Can it block a brand-new variant for which no signature exists yet? And can it clean up a machine after an infection has taken hold? A rough sketch of what such an execution-based test loop could look like follows.
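Below is a minimal sketch of such a loop, assuming an isolated VirtualBox guest with the product under test installed and a clean snapshot to restore between samples. The VM name, snapshot name, guest credentials and paths are all hypothetical, and exact VBoxManage flags vary between VirtualBox versions, so treat the invocations as assumptions to adapt rather than a turnkey harness:

```python
import subprocess
import time
from pathlib import Path

# Hypothetical VM configuration -- adjust to your own lab setup.
VM = "AV-Test-VM"
SNAPSHOT = "clean-with-product-installed"
USER, PASSWORD = "tester", "tester"

def vbox(*args):
    """Thin wrapper around the VBoxManage command-line tool."""
    return subprocess.run(["VBoxManage", *args],
                          capture_output=True, text=True)

def test_sample(sample: Path) -> str:
    vbox("snapshot", VM, "restore", SNAPSHOT)      # clean slate per sample
    vbox("startvm", VM, "--type", "headless")
    time.sleep(60)                                 # crude wait for guest boot

    # Drop the sample into the guest and try to execute it.
    vbox("guestcontrol", VM, "copyto", str(sample), "C:\\sample.exe",
         "--username", USER, "--password", PASSWORD)
    run = vbox("guestcontrol", VM, "run", "--exe", "C:\\sample.exe",
               "--username", USER, "--password", PASSWORD)

    # Crude verdict: if the product denied or killed the process, the
    # guest-side execution fails. A real harness would also inspect the
    # guest for dropped files, registry changes and network traffic.
    verdict = "blocked" if run.returncode != 0 else "NOT blocked"
    vbox("controlvm", VM, "poweroff")
    return verdict

if __name__ == "__main__":
    for sample in sorted(Path("samples").glob("*.exe")):
        print(f"{sample.name}: {test_sample(sample)}")
```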

These, and more, are important questions that are neither being asked nor answered. Reporting real-life results would be of much more value to end users than simple detection rates over mega-collections. It seems we are stuck where we were over 10 years ago, when AV companies used to run full-page ads with pretty graphs comparing how many signatures each vendor's database file had.
