HP Security Products Blog
From applications to infrastructure, enterprises and governments alike face a constant barrage of digital attacks designed to steal data, cripple networks, damage brands, and carry out a host of other malicious aims. HP Enterprise Security Products offers products and services that help organizations meet the security demands of a rapidly changing and more dangerous world. HP ESP enables businesses and institutions to take a proactive approach to security that integrates information correlation, deep application analysis and network-level defense mechanisms—unifying the components of a complete security program and reducing risk across your enterprise. In this blog, we will announce the latest offerings from HP ESP, discuss current trends in vulnerability research and technology, reveal new HP ESP security initiatives and promote our upcoming appearances and speaking engagements.

5 reasons why security is harder today than a decade ago

After speaking with other security professionals, I have noticed that everyone seems to share a similar perspective: application security seems harder now than it was ten years ago. What do you think?


Continue reading to find out my thoughts and what I think has led to these sentiments.

China, Google and Web Security

Google recently announced that its China-based operations were the victim of an attack that targeted and compromised a critical internal system used to track the email accounts of those on China’s watch list. The system was designed to comply with government warrants for information concerning Chinese human rights activists. Some suspect China of targeting this specific system to circumvent the official warrant process and collect data on other Chinese citizens.



More alarmingly, this attack was not directed exclusively at Google. In all, at least 34 companies, including Yahoo, Symantec, Northrop Grumman, Dow Chemical, Washington-based think tanks, and assorted human rights advocacy groups, were compromised by the spear phishing attack.


At first rumored to be another Adobe flaw, the attack (code-named “Aurora”) was revealed on closer examination by McAfee Labs to be a sophisticated zero-day exploit against Microsoft’s Internet Explorer.


What should be most worrisome is not the zero-day in all versions of IE, but the new crop of “advanced persistent threats” that are siphoning money and intellectual property. These APTs are professionally organized, have extensive funding and employ smart people. The result: triple-encrypted shellcode that downloads multiple encrypted binaries, which drop an encrypted payload on a target machine, which then establishes an encrypted SSL channel to a command-and-control network. This is serious stuff.


Only a few years ago the majority of web-based attacks seemed to be launched by individuals or small groups to collect credit card information. These attacks had serious consequences, but the magnitude of the losses and the organization of the black market economy were still child’s play by today’s standards.


Current threats from the Eastern bloc are directed at massive monetary gain, probably in the tens of millions of dollars. China appears hell-bent on stealing state secrets and intellectual property from governments and private businesses alike. The stakes are much higher, and the bad guys are much more capable of pulling off the heist.




We have known for a long time that phishing scams are very effective at exploiting random samples of unsuspecting users. However, the focused targeting of private business is a newer, more sophisticated and more lucrative threat. These spear phishing attacks are intensely researched, aimed at top-level executives, and will only become more common as time passes.


In a directly related point, consider the curious appearance of a new website called iiScan. This service offers to scan your web application for vulnerabilities - for FREE. Just sign up and point their software at your website, and they will ‘figure out’ how vulnerable you might be to an attack. After the scan is done, they will email you a PDF report.


Placing trust in such services has been discussed before, especially concerning cloud security. It doesn’t take much to imagine all the things that could go wrong in this scenario, even if IE didn’t have multiple zero-day exploits and a proof-of-concept malicious PDF exploit had not just been released.


It may well turn out that NOSEC Technologies Co., Ltd. (the company behind iiScan) is legitimate, or at least started out that way. Even if they are not actively attacking websites, it shouldn’t take long for them to become a high-profile target for either private hackers or the Chinese government itself. What would be a better target than a database full of public websites and their known vulnerabilities? These sites, if not already compromised by iiScan, could be used as command-and-control drones, payload hosts, pieces of a distributed file system, or mere SPAM relay channels.


Education and Armament


Every day brings more proof that web application threats are being crafted by motivated, professional organizations with deep pockets. Security needs to be taken very seriously and practiced diligently, and all users need to be paranoid when surfing the web. This is especially important because the media is very cautious about reporting the gory details of the real impact of cybercrime.


Installing preventative software is a good idea, too. Some of the latest tools and devices may help to prevent drive-by malware, spear phishing payloads, etc. Install Firefox and use plug-ins that flag suspected malware host sites. Use a personal web proxy, and restrict evil IPs. You can get the most comprehensive list of Korean and Chinese blocks (including iptables, htaccess files, dns zones, etc) from this page. Above all, stop clicking on those emails from your least technical friends that include an attached PowerPoint or PDF file to deliver a punch line. The villains take the Internet very seriously, and so should you.
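For Apache-hosted sites, the same kind of blocking can be done server-side. A sketch of what such a block list looks like (the CIDR ranges below are purely illustrative, not a real or current block list; substitute ranges from a maintained source):

```apache
# httpd.conf (or an ".htaccess" file) -- deny requests from blocked ranges
# Apache 2.2-style syntax. The ranges below are hypothetical examples only;
# real Korean/Chinese block lists run to thousands of entries.
Order allow,deny
Allow from all
Deny from 58.14.0.0/15
Deny from 121.128.0.0/10
```

Keeping such a list current is the hard part, which is why pointing at a maintained list (or doing the filtering at the firewall with iptables) usually beats hand-editing.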


UPDATE (1/19/2010):


Thanks to the Full-disclosure list (Marc, Smasher, Dan) for pointing out that the exploit was not nearly as sophisticated as McAfee had led us to believe.


The exaggerated sophistication of the attack reinforces my point about media FUD - ironic in its own way, because the media is quick to exaggerate the sophistication of the attacks, yet minimize the damage associated with them. It’s like getting up off the floor after a sucker punch and taunting "That didn't hurt". The reality is that simple attacks are still very effective - our security education and implementation still have a long way to go.


However, the real point of this article was to encourage a little more critical thinking surrounding software security. Putting blind faith in any type of security device (airport scanners, webapp scanners, etc.) is not good security practice.







".htaccess" for the win! Stomp overaccessible folder vulns.

All too often, organizations expose themselves through lazy administration, and the web applications they install aren't doing them any favors.

I doubt anyone has the numbers, but websites that are just "installs" of some CMS, blog, or (insert open source app here) un(tar|zipped) into the default directory of an Apache web root are splattered across the Internet. That said, I remember appreciating this simplicity; it certainly allows any developer with any amount of tomfoolery/mad skillz in his/her bones (but mostly none) to create a webpage and have an Internet presence. The real reason these settings are the default is probably either laziness or feature parading (although it would be great if Apache were more secure out of the box). I'm off topic here, but it's not all Apache. The distributions need to take responsibility for the default configs they ship... why isn't there an "apache2-secure" overlay/package that would try to install a more secure set of defaults for the average user? (Answer: the maintainers are not the average user.)

Whatever, I don't want to get knee deep into Apache politics and the reasons for feature X being enabled by default. Nothing changes the fact that most default installations of Apache have things like directory listings enabled as well as allowing any installed web programming modules to execute file extensions they register for inside the web root and any subdirectory. Like I said, this is great for the beginning user that doesn't know how to massage httpd.conf files, but it's scary how common these defaults are left enabled.

One of the features Apache users could probably do without by default is distributed configuration files (".htaccess" files). These files make it all too easy for people to forget about the root directory settings, and they let a web application configure whatever specific settings it requires. The side effect is that you are now also relying on the web application to drop the privileges of the web root in order to function securely. Of course I could come up with a bulleted list of reasons why we need this functionality; I've used it and I'm about to advocate its use, but I question whether it's necessary for a default installation. If you expect the Apache user to edit the httpd.conf file during the installation of Apache (to set the ServerName/IP/VirtualHosts), why not just expect that user to edit httpd.conf for any applications that require specific configurations? I dunno... maybe that's a little too draconian of me. But hopefully Apache doesn't make it more complicated by changing its config files to Lua...
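If you want to go that draconian route on your own server, turning distributed configs off is a one-directive change in httpd.conf (a sketch; the directory path is just an example, adjust it to your own web root):

```apache
# httpd.conf -- example web root path; adjust for your installation
<Directory "/var/www/html">
    # Ignore ".htaccess" files entirely; all configuration lives here
    AllowOverride None
</Directory>
```

As a bonus, skipping the per-request ".htaccess" lookups also saves Apache a few filesystem stats on every hit.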

Since we do have this feature though, how can OSS projects take advantage of it...

There is no excuse for open source apps to be distributed without ".htaccess" files limiting anonymous access to files that shouldn't be publicly accessible. There have been literally hundreds of vulnerabilities because some web application's internal files were exposed in a subdirectory of the installed application.
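A minimal ".htaccess" along those lines (a sketch using Apache 2.2-style directives; Apache 2.4 replaces them with a single `Require all denied`):

```apache
# .htaccess -- drop into any directory that should never be served directly
# (libraries, includes, temp storage). Apache 2.2 syntax; on 2.4 use
# "Require all denied" instead.
Order deny,allow
Deny from all
```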

The above is all you need to do for PHP. Just drop that ".htaccess" file into any folders your OSS project has that shouldn't be publicly accessible and voila! You're done. No more information disclosure, variable overwriting, XSS, and other such vulnerabilities.

In fact, if you want to take it to the next step, you should change your 404 error responses to return that same response. That way attackers won't be able to fingerprint your web application by the existence of your app's library files (like wafp does). Here is a snippet to accomplish that:
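One way to do it (a sketch; it assumes the deny rule above returns a 403, and maps 404s to the same static page so "not found" and "forbidden" are indistinguishable):

```apache
# .htaccess -- serve the same page for "forbidden" and "not found"
# so probes can't tell which protected files actually exist.
# /error.html is a placeholder path; point it at any static page you like.
ErrorDocument 403 /error.html
ErrorDocument 404 /error.html
```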

Although the existence of your public files might still be enough to fingerprint the app, either way your OSS project will be better off.

Of course even if you are an administrator of a web application, this is something that you can do as well. If you know certain folders are just for temporary storage by a web app or are part of a library, just create the aforementioned ".htaccess" files and stop random vagabonds from squatting on (some of) your vulnerabilities.


Diamond heist holds infosec lessons, too

Wired is running the story “The Untold Story of the World's Biggest Diamond Heist” on their site and in the next issue. You may have already read it, since it’s pretty popular on the tubes right now. If you haven’t—while it’s pretty long—it’s an interesting read on physical security, criminals, capers and exploitation of weaknesses.


Below is a fairly spoiler-iffic view on the heist and how I think it relates to the information security world—so if you want all the gory details without my summary, head over to wired.com first. Here’s a high-level synopsis: in an Italian Job-style caper, a group of thieves does the seemingly impossible by robbing a massive vault in Antwerp’s diamond district. This is no ordinary vault—it holds most of the district’s diamonds and other valuables, and has a significant amount of security, including the following (there is also a diagram of this):


  A 3-ton steel door with:

  • Combination dial (0-99)
  • Keyed lock
  • Seismic sensor (built-in)
  • Locked steel grate
  • Magnetic sensor
  • External security camera

 And inside the vault:

  • Light sensor
  • Security camera
  • Heat/motion sensor

This vault is also located in the basement of a guarded and monitored building, and in the middle of a diamond district that is blanketed with security cameras and has its own specialized police force.   


 Seems impossible to break into, right? Of course not.   


The story, as told by Leonardo Notarbartolo (serving 10 years in jail for his part in the crime), tells of how a small group of men exploited minor weaknesses in the various layers of security, and walked away with an unknown amount of wealth (read the article as to why it’s unknown)… and very nearly got away with it.   


While this isn’t directly web/network/data security related, their tactics, from reconnaissance to exploitation, have a lot of parallels to the computer security realm. The methods they use are the same pen-testers and criminals are using against our networks and applications.   




Reconnaissance

Notarbartolo used a small “spy” camera to document the building and vault. They studied the monitoring systems, vault, building, surrounding buildings, entrances (conventional and otherwise) and habits of the guards. They sneaked in a small spy camera to capture the vault combination (key logger, anyone?), which fed a transmitter hidden inside a fully functional fire extinguisher.


Social Engineering

Notarbartolo set himself up as a dealer in the building, and rented a box in the vault. After a time, the guards came to recognize and almost ignore him, which gave him a critical opening to disable a heat detector inside the vault.  



The thieves set up a fake vault to practice in and to look for new weaknesses (pretty sure this was in one of the Ocean’s <insert number here> movies). Professional testers make no secret of the work they do against “fake vaults” (ok, lab systems) looking for 0day vulnerabilities to use in their products and future engagements.  


Exploiting Small Weaknesses

There was no single, major flaw in the security, and no brute-force attack (portions of the movie Heat come to mind). By first gaining access to a less-secure and less important “system” (an adjacent building with a courtyard), they had unmonitored access to the secure building’s exterior. They used a little stealth with a home-made polyester shield to defeat a heat and motion sensor, and a piece of aluminum and duct tape to defeat a magnetic sensor. And while they had a duplicate key crafted based on the video they’d made (which smacks of SNEAKY), they found the actual key in a nearby room (password under the keyboard, or console access?). Black bags covered video cameras that were expected to show darkened stairways. Hairspray defeated a motion and heat sensor long enough for them to disable it completely.


All of these tiny weaknesses and crafty exploits combined into a massive heist. The tiny chinks in the armor turned into a significant financial loss for the owners of the lockboxes and their insurers.   



So what lessons can be gleaned from the physical-security weaknesses, and translated into the digital world? Plenty.  


First off, no vulnerability or weakness can be completely ignored. Decisions based on risk and cost can be made, and fixes prioritized, but everything should be discovered, catalogued and tracked—at some point that “minor” flaw could be very important. Knowing every weakness and monitoring for exploit attempts could be critical in stopping a massive breach.  


Secondly, there is no such thing as an unimportant application or system on your perimeter (maybe even your internal network). Just because your Job Listing application doesn’t hold customer data doesn’t mean it isn’t critical to your perimeter security. Even if the app runs on a different web server, what if there is a flaw that gives someone system access? Is the root password the same one as on your banking systems? Does the system give the attacker access to the same DMZ that your critical data traverses? Is your security staff monitoring the IDS as closely for the “ATM Locator” system as for the bank login?


Thirdly, security is not a one-time effort. Your security efforts may be baked in from planning to implementation, but they don’t stop there. Threats, attacks and techniques are constantly being discovered and are always evolving. Since the criminals knew about it, it seems possible the manufacturer of the heat and motion sensor knew of the “hairspray exploit” but didn’t bother to warn their customers and issue a “patch”—or maybe they did, and the operators of the vault chose not to implement a fix.


And lastly, to the best of your ability, you have to think like a criminal. You may even consider hiring someone (permanently or on contract) that can do that pretty well (I won’t debate hiring actual criminals). Would someone used to breaking into vaults ask “Why the heck is the magnetic sensor mounted on the outside of the vault door?”  I’d hope so. Would they have scouted the nearby buildings looking for an attack vector? Perhaps. Would they have questioned the wisdom of having video cameras watching completely dark rooms? Any one of those questions, posed to site security or management, could have triggered a corrective action that might have stopped this entire heist in the planning phase.   


So, the next time you think you have better things to do than fix that minor cross-site scripting issue on that little loan calculator application—reconsider—and wonder if Leonardo Notarbartolo learned any computer hacking skills in prison.


And, I mentioned that Notarbartolo is in jail, which means they didn’t get away clean. Check out the full article as to why… that interesting bit, and a lot of other details and theories that I didn’t get into, make it a good read even after skimming this.


The security industry should hold itself to higher standards

At a previous job I worked on the application-testing side of web security—breaking in-house/contract-built applications, commercial off-the-shelf (COTS) applications, appliances, and partners’ sites (which were built with all of the above). While most of these weren’t security related, more than a few of them were.

Time after time, web applications or appliances built by “security companies” turned up a ridiculous amount of vulnerabilities. I won’t go so far as to make bold statements about how every developer at every security company should be an expert (though it would be nice), but certainly these places should have rigorous testing methodologies at all stages before product release… right? (see Rafal’s blog for lots more posts about that). Security products should have fewer vulnerabilities… right?

One of the under-used features over at OSVDB.org is the ability to search by products that are identified as “security software.” This search reveals fourteen pages—over 400 issues—of flaws in security products (though not all of them web related). Sadly, if my experience is any indication of how this works throughout the industry, there are tons more that were never publicly disclosed due to contract and political restrictions.

Take, for example, a certain network traffic collection, storage and security analysis appliance I tested three or four years ago. It had a “hidden” web directory of administration scripts, and about half of them lacked an authentication call. This let you do “minor” things like running shell commands or viewing all the captured network data. After I communicated with the vendor, they fixed the problem half a year later—but it was never publicly disclosed.

So, how can you trust the security products you may need to rely on in court when they themselves are not secure? Two years ago, in a decision upheld last month by an appeals court, a judge determined that the manufacturer’s failure to release the source code to a breathalyzer machine was a violation of due process that impacted the defendant’s right to a fair trial—the test results were not admissible as evidence. This was a bold defense strategy and, in my opinion, the right decision by the judges involved.

So how far off are we from a defendant questioning forensic evidence or logs when the software or tool that collected it had a critical security flaw? Or has this already happened and I missed it?

Maybe it doesn’t matter in the long-run. Paul Ohm writes in his blog post Being Acquitted Versus Being Searched (YANAL) that by the time the trial comes and you try to make a compelling argument about tainted evidence, they’ve likely gathered so much more via surveillance/warrants that it won’t matter if you are able to discredit some portion of it.

Despite what Ohm says, I still like the idea that a real-life Alan Shore will raise the bar for security software makers. Only if the bottom line is in jeopardy will most companies decide that investments in getting their own security house in order are worth it.

Sun releases Netscape Enterprise source--let the bug hunt begin?

Sun Microsystems announced today that Netscape Enterprise Server, one of the original grand-pappys of "modern" web servers (which excludes NCSA--sorry fanboys... I know you're out there), has been released under the BSD license. This isn't the really old one that (hopefully) no one uses, but the modern version, now called JES Web Server (part of the OpenSolaris Web Stack). Yep, it was also iPlanet for a bit. And SunONE. And possibly still Sun Java System Web Server. And probably a few other names people with too much time on their hands will find buried in the source.

No matter what you call it, it will be interesting to see over the next few days/weeks/months what the security research community does with the source. I suspect we'll at least see a short-term uptick in the number of vulnerabilities reported for the server as people start looking into it and running their analysis tools. Corporations running the Java System Web Server should keep a keen eye on the Sun Alerts.

Some links for you:



The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation