The problem with making security predictions for 2007 is that, no matter how basic or extravagant a prediction might be, it may actually come true. Who would have said in 1999 that a virus was going to saturate the Internet and make the front page of every newspaper? Loveletter did just that in 2000. Who would not have been surprised to be told in 2002 that a worm would bring down hundreds of servers in less than a minute? SQLSlammer did it in January 2003.
We can try to imagine a catastrophe. Crashing the Internet is a complicated task, but it is the goal of many hackers, and it could conceivably be attempted using a silent worm that infiltrates many computers and, from there, launches an attack at any given moment.
Of course, the Internet is not so easy to bring to a halt. It would be more feasible to attempt a denial of service attack against a vital Internet service. For example, imagine a sufficiently widespread worm that, at a specific moment, tried to log on to the social security servers of a certain country. Simply by launching requests to a pension-queries page, it would leave the system unable to respond to the public, overwhelmed by the volume of queries.
Even if the servers were provisioned to withstand this type of attack, the attacker would only need to find enough computers to carry out the task. A couple of dozen botnets, with each machine launching several queries per second, would do the trick.
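The back-of-the-envelope arithmetic behind that claim can be sketched as follows. All of the figures here (botnet sizes, query rates, server capacity) are invented purely for illustration; they are not taken from any real incident or product:

```python
# Hypothetical figures for a rough capacity comparison.
botnets = 24            # "a couple of dozen" botnets
bots_per_botnet = 5000  # assumed size of each botnet
queries_per_bot = 3     # several queries per second per machine

total_qps = botnets * bots_per_botnet * queries_per_bot

# Assumed capacity of a public-facing pension-queries service.
server_capacity_qps = 10_000

print(f"Attack rate: {total_qps:,} queries/sec")
print(f"Server capacity: {server_capacity_qps:,} queries/sec")
print(f"Overload factor: {total_qps / server_capacity_qps:.0f}x")
```

Even with deliberately modest per-machine rates, the aggregate traffic dwarfs what an ordinary public service would be sized to handle, which is the whole point of distributing the attack.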
But obviously, this is not going to happen. Creators of malicious code, hackers and the like are only in it for the money these days. Why would they go to the trouble of doing something that gives nothing in return? Being realistic, the problem we will have next year will be the same as this year: theft of confidential user data to operate bank accounts using stolen identities. This will be fact, not fiction.
The techniques used by hackers to steal confidential information will become more refined. On the one hand, their design and programming techniques will have to improve, as automatic phishing-detection systems are increasingly effective, and some Internet browsers now even include options for detecting fraudulent web pages.
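To give a feel for what such automatic detection involves, here is a toy scoring function over a handful of URL features. These heuristics are only loosely inspired by the kinds of signals anti-phishing filters look at; they are not the actual algorithm of any browser or vendor:

```python
import re
from urllib.parse import urlparse

# Toy phishing heuristics: each suspicious URL trait adds to a score.
# The traits and weights are illustrative, not any real filter's rules.
def phishing_score(url: str) -> int:
    score = 0
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        score += 2   # raw IP address instead of a domain name
    if host.count(".") >= 4:
        score += 1   # deeply nested subdomains
    if "@" in parsed.netloc:
        score += 2   # user@host trick that hides the real target
    if any(word in url.lower() for word in ("login", "verify", "account")):
        score += 1   # bait words common in phishing URLs
    return score

print(phishing_score("http://192.168.0.1/login"))  # suspicious
print(phishing_score("https://www.example.com/"))  # benign
```

Real filters combine far more signals (blacklists, page content, reputation data), but the principle is the same: accumulate evidence and warn or block past a threshold.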
On the other hand, they will have to considerably improve their social engineering techniques. Emails announcing that you have won a lottery, or that the widow of an African ex-president needs your help in shipping money out of the country, are old hat (although sometimes still effective, given their sheer volume), so new techniques will need to be perfected. Which ones? If only we knew!
Companies should also be taking precautions against a new type of threat: unique Trojans. The developers of traditional antivirus solutions depend on samples of malicious code in order to generate the disinfection routine for each particular strain. But what if there is only ever one example of that malicious code? What if the hacker has sent this spy Trojan to, say, the director of a company and has not distributed it any further?
The information obtained can be worth its weight in gold, and the chances of this Trojan reaching the hands of anti-malware researchers are very slim, not to say non-existent. It will remain on the system until its creator tires of that computer and moves on to another to continue the malicious work.
Operating system and application vulnerabilities deserve special mention. Each new operating system, just like a new model of car, requires a certain period of real road-testing (regardless of all the beta phases), during which vulnerabilities will no doubt be found. 2007 will be the year of Windows Vista, and despite all the claims about it being a secure system, it is bound to have the odd problem. Windows NT was also claimed to offer a high level of security when it was presented in the early '90s.
Errors will gradually appear and be resolved over time. The real problem lies in the time between the discovery of an error and the appearance of an exploit for it. This window is critical, and in many cases (the so-called 'zero-day' exploits) it is practically zero. The reaction to these events is fundamental, and systems for mitigating vulnerabilities are a key element of any serious corporate security policy.
Spam, those annoying unsolicited emails, will also evolve. It will no longer be aimed at selling miracle pills, cheap loans or ridiculous fake designer watches, but will be focused on making other business activities more profitable. We saw in 2006 how effective spam can be at pushing up stock prices, so in 2007 this trend is likely to continue; in fact, there will probably be even more schemes for obtaining money fraudulently.
And looking at more 'mundane' matters: there will likely be yet more malicious code protected by rootkits; more attempts to create viruses for cell phones (presumably with the same success as those created so far, i.e. almost none); laptop users will have to be on their guard against intrusions when connecting to unfamiliar WiFi networks (in airports, for example); and dialers will finally disappear…
However, for all of these problems there are solutions. Just as viruses like "Friday the 13th" were consigned to the dustbin of history thanks to antivirus solutions, the new problems that arise in 2007 can be countered with preventive solutions: intelligent technologies for detecting malicious code. We are talking about systems based simply on analyzing what a program is doing, classifying it as dangerous, and blocking it accordingly.
In this way, a unique Trojan on the computer of, say, a company director, a WiFi intrusion using a zero-day exploit, or a new rootkit executing a dangerous task can all be blocked before they cause any real damage.
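The idea of classifying a program by its observed actions rather than by its signature can be sketched in a few lines. The action names, weights, and threshold below are invented for the example; a real behavioral engine would watch far richer system activity:

```python
# Minimal sketch of behavior-based blocking: each observed action
# carries a suspicion weight, and a process whose accumulated score
# crosses a threshold is blocked. All values are illustrative.
SUSPICIOUS_ACTIONS = {
    "hooks_keyboard": 4,         # typical of keyloggers
    "writes_autostart_key": 3,   # persists across reboots
    "opens_outbound_socket": 2,  # possible data exfiltration
    "reads_address_book": 2,     # mass-mailer behavior
    "opens_document": 0,         # normal activity
}
BLOCK_THRESHOLD = 6

def should_block(observed_actions):
    score = sum(SUSPICIOUS_ACTIONS.get(a, 0) for a in observed_actions)
    return score >= BLOCK_THRESHOLD

# A spy Trojan: logs keys, persists, and phones home.
print(should_block(["hooks_keyboard", "writes_autostart_key",
                    "opens_outbound_socket"]))  # True
# An ordinary word processor.
print(should_block(["opens_document"]))         # False
```

The key property is that the never-before-seen Trojan scores high on its behavior alone, so no prior sample of it is needed, which is exactly what signature-based detection lacks.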
2007 will see the arrival of 2007-era threats. So why not use 2007-era preventive technologies? To continue using the same protection systems as in 2000 will only protect systems from threats that use year 2000 technologies.