I recently attended a keynote address by the CTO of a leading anti-virus firm. His company is fighting the good fight. Having recognized that signature-based malware detection no longer suffices, they have turned to a combination of detection and prevention to find and weed out bad actors. Big Data is crunched in The Cloud to flag the malware, which is then manually investigated to determine what it does. Once identified, sites serving malware are blacklisted for the benefit of this firm’s customers. The CTO proceeded to show an example of a piece of malware that changed the Windows hosts file to point a list of banking URLs to a single IP address, where one presumes the unsuspecting user would find a rogue copy of their banking website intent on stealing the user’s credentials, or worse.
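The particular hijack described above leaves a telltale pattern: many unrelated hostnames all mapped to one remote IP address. A crude heuristic for spotting it might look like the sketch below. This is my own illustration, not anything the firm demonstrated; the threshold of three hostnames and the exclusion of loopback/blackhole addresses are assumptions.

```python
from collections import defaultdict

# Loopback and blackhole addresses are legitimately shared by many names.
LOCAL_IPS = {"127.0.0.1", "0.0.0.0", "::1"}

def suspicious_ips(hosts_text, threshold=3):
    """Return remote IPs that an unusually large number of hostnames
    in a hosts file resolve to -- the shape of the banking hijack."""
    ip_to_hosts = defaultdict(set)
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        if ip in LOCAL_IPS:
            continue
        ip_to_hosts[ip].update(names)
    return {ip: sorted(names)
            for ip, names in ip_to_hosts.items()
            if len(names) >= threshold}
```

Fed the infected hosts file, this would flag the single IP address that the list of banking URLs had been pointed at.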
Now, this is only one example of the forty-thousand-odd unique malware infestations spotted in a depressingly short time, but my question is this: why was a piece of malicious software running (inadvertently, one assumes) on behalf of a user allowed to change a system-wide file like hosts? Shouldn’t there be a sandbox for code downloaded from the network that, if it needs to be run at all, prevents it from damaging the underlying operating system?
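The kind of policy such a sandbox would enforce is simple to state. A real sandbox has to enforce it at the operating-system level, where untrusted code cannot route around it; the toy user-space check below only illustrates the shape of the rule, and the deny-list of protected paths is my own assumption.

```python
import os

# Hypothetical deny-list of system-wide files that network-downloaded
# code should never be allowed to modify. A real sandbox enforces this
# in the kernel, not here.
PROTECTED_FILES = {
    os.path.normcase(os.path.normpath(p)) for p in (
        "/etc/hosts",
        r"C:\Windows\System32\drivers\etc\hosts",
    )
}

def check_write(path):
    """Raise PermissionError if sandboxed code tries to write a protected file."""
    norm = os.path.normcase(os.path.normpath(path))
    if norm in PROTECTED_FILES:
        raise PermissionError("sandboxed code may not modify %s" % path)
    return path
```

The point is that the hosts-file hijack in the keynote would have been stopped by the most rudimentary version of this policy, had the operating system applied it to downloaded code.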
This situation paints for me the following picture: a tap is running, malware flowing like water into a sieve and onto the floor. The security industry is frantically mopping the floor, trying to stem the flow of malware. They are paid well for their trouble, but meanwhile the expensive rug that represents your business is getting awfully wet. It would be nice if someone could turn off the tap, or design an operating system that doesn’t leak like a sieve.
Hey, I’ve been thinking about this for a long time.
First part of my response:
Second part of my response: http://atagunov.blogspot.com/2012/02/challenge-of-back-door.html