Security Innovations
Poor Web Application Security Leads to Mass Infections Targeting End Users
The Internet is an ecosystem, and as with any such environment, its elements rely on one another in order to function. Browsers rely on web servers, which in turn rely on other web servers for additional content. It is this interdependence that makes the Internet the vibrant, content-rich environment that we’ve come to rely on. It is also this interdependence that lies at the heart of many of the security issues we face when leveraging Internet resources.
Attackers are indiscriminately targeting end users in an effort to infect as many machines as possible and build botnet armies. They’re after CPU cycles that, once controlled, can be used for virtually anything – sending spam, attacking other machines, performing DDoS attacks, etc. Alternately, the infected machines can simply be rented to others who require the computing power for their own malicious activities.
In order to infect end user machines, attackers need to control, or at least influence, web traffic by convincing potential victims to view malicious content. While that could be accomplished by setting up a new, malicious web site, doing so would come with the additional challenge of driving traffic to the site. Instead, attackers can simply infect already popular sites with their own content. It is this approach that we see resulting in the majority of attacks against end users today. Sadly, the state of Internet security remains so poor that it is trivial for an attacker to find vulnerable web sites that allow malicious content to be injected.
On Monday, June 7, 2010, we witnessed one such attack. Beginning at 3:56am PST, Zscaler’s NanoLog servers began recording requests for a malicious JavaScript file hosted on a single external domain. Over the next few hours, requests for this JavaScript file began to pick up. Why? Because 1,000+ websites had been infected with a simple <script> tag pointing to the file. When a user surfed to one of the infected pages, the malicious JavaScript would attack known web browser vulnerabilities to install malware. While the majority of the sites impacted were lesser known, a few major sites, including the Wall Street Journal and the Jerusalem Post, were affected. Although all infected sites were running on Microsoft IIS servers with .Net installed, the vulnerable code that made the attack possible was within custom application code, not a weakness in the server itself. The attack simply happened to target IIS/.Net based web applications, as SQL injection attacks require some customization based on the backend database targeted.
We know from analyzing infected web sites that SQL injection was used as the attack vector that led to the injected <script> tag [1]. This is not surprising, given that 9-10% of websites running ASP/ASPX code have SQL injection vulnerabilities [2], although all platforms suffer from this type of attack due to poor programming practices. In such ‘mass injection’ attacks, those responsible do not discriminate when identifying targets. Rather, they write scripts that comb the web, looking for potentially vulnerable web pages. When a target is identified, the same script attempts to inject code, typically a <script> or an <iframe> tag – just a single line of code that is likely to go unnoticed, as it doesn’t change the look and feel of the page. The goal is to infect as many pages as possible, expanding the population of potential victims.
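The underlying weakness can be sketched in a few lines. The following is a minimal illustration using Python and an in-memory SQLite database; the table, column, and script names are hypothetical, and the payload is representative of the technique rather than the exact code used in this attack. The point is that when user input is concatenated into a SQL string, a single crafted value can append a second statement that stamps a <script> tag into stored content, while a parameterized query treats the same input as inert data.

```python
import sqlite3

# Hypothetical content table; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, body TEXT)")

# Attacker-supplied input: closes the string literal, then smuggles in
# an UPDATE that appends a <script> tag to every stored row.
payload = ("hi'); UPDATE comments SET body = body || "
           "'<script src=evil.js></script>'; --")

def save_comment_vulnerable(text):
    # BAD: input is concatenated into the SQL string, so the
    # payload executes as SQL (executescript allows multiple statements).
    conn.executescript("INSERT INTO comments (body) VALUES ('%s')" % text)

def save_comment_safe(text):
    # GOOD: a parameterized query binds the input as data, never as SQL.
    conn.execute("INSERT INTO comments (body) VALUES (?)", (text,))

save_comment_vulnerable(payload)   # row 1 becomes 'hi' + injected tag
save_comment_safe(payload)         # row 2 stores the payload as plain text

for (body,) in conn.execute("SELECT body FROM comments ORDER BY id"):
    print(repr(body))
```

Run against a real site, the injected tag would be served to every subsequent visitor, which is exactly how a single automated script can poison pages across a thousand unrelated sites.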
Fortunately, following this attack, ShadowServer quickly intervened and worked with Neustar and GoDaddy to sinkhole the domain where the malicious JavaScript was hosted. Hosting the content at a single domain turned out to be the Achilles’ heel of the attack, as it left a single point of failure. Once domain traffic was redirected to ShadowServer, the attack was effectively shut down. This fact alone suggests that the attack was not sophisticated or well planned – a frightening thought, considering that it succeeded. While some of the sites impacted during the initial attack have been cleaned up, many remain vulnerable. In fact, only four days later, on Friday, June 11, 2010, we noticed a very similar mass SQL injection attack, which had impacted many of the same sites. This time the code was being delivered from a new domain and leveraged a recent Adobe Flash vulnerability (CVE-2010-1297) for which a patch had been released only the day before. Given similarities in the structure of the attack, the same group may have been responsible for both.
Security teams often caution users by suggesting that they ‘only surf to reputable sites’. As the scenario above illustrates, this advice is now of limited value, as attackers are leveraging reputable sites as catalysts for attack. In today’s web, there is no such thing as a ‘reputable site’ – all content should be treated as untrusted and inspected before it reaches an employee’s browser.
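To make the inspection idea concrete, here is a minimal sketch of auditing a fetched page for external <script> references that fall outside an expected set of hosts. The allowlist and page content are hypothetical; a production gateway would rely on dynamic reputation data and deeper content analysis rather than a static list, but even this crude check would flag the single injected line described above.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical allowlist of script hosts expected on this page.
TRUSTED_HOSTS = {"example.com", "cdn.example.com"}

class ScriptAuditor(HTMLParser):
    """Collect external <script src=...> URLs whose host is not trusted."""

    def __init__(self):
        super().__init__()
        self.suspect = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if not src:
            return  # inline scripts need separate analysis
        host = urlparse(src).hostname
        if host and host not in TRUSTED_HOSTS:
            self.suspect.append(src)

# A page carrying one injected tag (domain is a made-up placeholder).
page = ('<html><body><p>News story</p>'
        '<script src="http://evil.example/x.js"></script>'
        '</body></html>')

auditor = ScriptAuditor()
auditor.feed(page)
print(auditor.suspect)
```

The design choice here is to inspect the content actually delivered to the browser rather than trust the site serving it, which is precisely the shift in posture the paragraph above argues for.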
Copyright © 2009-2010 Zscaler