HP Security Products Blog
From applications to infrastructure, enterprises and governments alike face a constant barrage of digital attacks designed to steal data, cripple networks, damage brands, and achieve a host of other malicious ends. HP Enterprise Security Products offers products and services that help organizations meet the security demands of a rapidly changing and more dangerous world. HP ESP enables businesses and institutions to take a proactive approach to security that integrates information correlation, deep application analysis and network-level defense mechanisms—unifying the components of a complete security program and reducing risk across your enterprise. In this blog, we will announce the latest offerings from HP ESP, discuss current trends in vulnerability research and technology, reveal new HP ESP security initiatives and promote our upcoming appearances and speaking engagements.

".htaccess" for the win! Stomp overaccesible folder vulns.

All too often, organizations expose themselves through lazy administration, and the web applications they install aren't doing them any favors.


I doubt anyone has the numbers, but websites that are just "installs" of some CMS, blog, or (insert open source app here) un(tar|zipped) into the default directory of an Apache web root are splattered across the Internet. That said, I remember appreciating this simplicity; it certainly allows any developer with any amount of tomfoolery/mad skillz in his/her bones (but mostly none) to create a webpage and have an Internet presence. The real reason these settings are the default is probably either laziness or feature parading (although it would be great if Apache were more secure out of the box). I'm off topic here, but it's not all Apache. The distributions need to take responsibility for the default configs they ship... why isn't there an "apache2-secure" overlay/package that would try to install a more secure set of defaults for the average user? (Answer: the maintainers are not the average user)


Whatever, I don't want to get knee-deep into Apache politics and the reasons feature X is enabled by default. Nothing changes the fact that most default installations of Apache ship with directory listings enabled and allow any installed web programming module to execute the file extensions it registers for, inside the web root and any subdirectory. Like I said, this is great for the beginning user who doesn't know how to massage httpd.conf files, but it's scary how often these defaults are left enabled.
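For the record, tightening those two defaults takes only a few lines in httpd.conf. A sketch (the directory path is illustrative; note that AllowOverride None also switches off the ".htaccess" mechanism I'm about to complain about):

    # httpd.conf: disable directory listings and per-directory overrides
    <Directory "/var/www/html">
        Options -Indexes
        AllowOverride None
    </Directory>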


One of the features Apache users could probably do without by default is distributed configuration files (".htaccess" files). These files make it all too easy for people to forget about the web root's settings and let each application configure whatever specific settings it requires. The side effect is that you are now relying on the web application to drop the privileges the web root grants in order to function securely. Of course I could come up with a bulleted list of why we need this functionality; I've used it and I'm about to advocate its use, but I question whether it's necessary in a default installation. If you expect the Apache user to edit the httpd.conf file during the installation of Apache (to set the ServerName/IP/VirtualHosts), why not just expect that user to edit httpd.conf for any applications that require specific configurations? I dunno... maybe that's a little too draconian of me. But hopefully Apache doesn't make it more complicated by changing its config files to Lua...


Since we do have this feature though, how can OSS projects take advantage of it...


There is no excuse for open source apps to be distributed without ".htaccess" files limiting anonymous access to files that shouldn't be publicly accessible. There have been literally hundreds of vulnerabilities that boil down to a web application's internal files being exposed in some subdirectory of the installed application.
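A minimal sketch of such a file (Apache 2.2-era mod_authz syntax):

    # .htaccess -- deny all direct web access to this folder
    Order allow,deny
    Deny from all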



The above is all you need to do for PHP. Just drop that ".htaccess" file in any folder of your OSS project that shouldn't be publicly accessible and voilà! You're done. No more information disclosure, variable overwriting, XSS, and other vulnerabilities.


In fact, if you want to take it to the next step, you should change your 404 error handling to return the same response. That way attackers won't be able to fingerprint your web application by probing for the existence of your app's library files (like wafp does). Here is a snippet to accomplish that:
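(A sketch; "/error.html" is a hypothetical page of your choosing, and the trick only works if both codes return an identical response.)

    # make "forbidden" and "not found" indistinguishable to probes
    ErrorDocument 403 /error.html
    ErrorDocument 404 /error.html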



Granted, the existence of your public files might be enough to fingerprint the app anyway, but either way your OSS project will be better off.


Of course, even if you're just the administrator of a web application, this is something you can do yourself. If you know certain folders are only used for temporary storage by a web app, or are part of a library, just create the aforementioned ".htaccess" files and stop random vagabonds from squatting on (some of) your vulnerabilities.


 

SSLv3/TLS Renegotiation Stream Injection

Recently, on Thursday 11/5/09, a few folks over on the IETF mailing list went public with a limited man-in-the-middle attack on SSLv3 and TLS. There has been quite a bit of press coverage of this issue's severity. However, the way this attack can be used is proving more dangerous in specific contexts than first thought. This vulnerability affects almost every SSL/TLS implementation: IIS (5|6|7), Apache mod_ssl < 2.2.14, OpenSSL < 0.9.8l, GnuTLS < 2.8.5, Mozilla NSS < 3.12.4, and certainly more. Any product using these libraries as its underlying secure transport layer is also vulnerable to this content injection vulnerability. The vulnerability has been assigned CVE-2009-3555 by MITRE, and I'm sure the listing will continue to be updated as newly affected packages are found.


At a high level, this is not a true man-in-the-middle attack. An attacker leveraging this vulnerability can only inject data; he is unable to read client traffic or server responses. While that is not a complete compromise, there are novel attacks that can be performed in the context of HTTP over SSLv3/TLS. To support long-running sessions, SSLv3/TLS allows renegotiation at the protocol level so encryption keys can be changed to maintain cryptographic security. The real problem, however, manifests in the communication between an application and the SSLv3/TLS library the server is using. Since HTTP requests cannot be processed until they are complete, web servers keep partial requests in a buffer. When the SSLv3/TLS library decrypts new packets, it immediately forwards them to the web server. The issue lies in that buffering behavior: the attacker sends a partial HTTP request, then triggers a renegotiation, and the handshake he forwards is actually the unsuspecting client's own connection attempt, spliced into the existing session. Prior to this disclosure, SSLv3/TLS-enabled web servers did NOT flush their data buffers across the renegotiation, so the attacker's partial request ends up prepended to the beginning of the conversation between the unsuspecting client and the web server. This is especially malicious because the client never knows she has been man-in-the-middled; the communication between her and the server appears completely normal.


Given that an attacker can get traffic routed through him (easy in a WiFi environment or with ARP poisoning), he can effectively proxy a client's connection transparently. Using this vulnerability, any HTTPS connection passing through him can have arbitrary attacker-supplied data prepended to anything the client sends. This is especially bad for HTTP requests.
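To make that concrete, here is roughly what the classic prefix injection looks like at the HTTP layer (a sketch; the host, path, and header name are illustrative). The attacker opens the connection and sends a partial request of his choosing before splicing the victim in via renegotiation:

    GET /account/transfer?to=attacker HTTP/1.1
    Host: bank.example.com
    X-Ignore-This-Line: 

The last header line is deliberately left unterminated, so the victim's own request line is swallowed into it, and the victim's remaining headers (including any Cookie or Authorization values) complete the attacker's request with the victim's credentials.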


These theoretical attacks all require some type of renegotiation to occur. Luckily for an attacker, there are simple ways to force one. The scenarios that allow renegotiation to be forced are:



  • Any server that allows client renegotiation.

  • Servers with public access to a subset of folders, and a set of secure folders requiring a client certificate.

  • Servers that renegotiate for any reason.


Of course the library maintainers have quickly identified and fixed some of the issues behind this vulnerability. OpenSSL has disabled renegotiation for SSLv3 and TLS in a patch. GnuTLS has implemented some of the larger fixes drafted for TLS specifically. Apache has fixed some of the issues at the application layer associated with the prepended-data attacks, but a patched OpenSSL is still required to completely solve the certificate attacks (the application-layer fix doesn't prevent server-initiated renegotiation).


WebInspect will now also test any SSL web servers for this vulnerability with Check ID 10942. Simply SmartUpdate to receive the latest checks.

Labels: MitM| Research| SSLv3| TLS

Talking Headers: Part 3: The Fun

In Part 1 of the series on interesting headers, I talked about leaking hostnames. In Part 2, it was PHP errors. In Part 3 I bring you... the funny stuff. Not funny like how Mark McGwire's rookie card now goes for $5 on eBay compared to the hundreds it once fetched (and that I have 5 of them for some reason), but funny like watching a 1984 episode of Miami Vice and wondering how humanity survived the clothing and hair.


Note, a link to each site is on the header name:


I thought "x-hackers" typically got government jobs, or at least wrote books?



At least these guys admit they cobbled it together... but gave themselves credit nonetheless:



Nick, Mic, or Andy (or all three?) must also get the warm fuzzies from Ashley, because they've also got:



I'm not quite sure what to make of this next one, but I'm pretty sure it freaks me out just a little, like clowns. Actually, clowns freak me out a lot:



And finally... one that should need no explanation or almost-witty comment:



There are certainly tons more funny or odd headers out there, and I tried not to duplicate those in Andrew Wooster's post--what else have you seen?

Labels: Headers| HTTP| Research

Talking Headers: Part 2

While my rookie Mark McGwire cards aren't appreciating at all, my header collection is.  Check these actual headers out:



  • php warning: Unknown(): Unable to load dynamic library '/usr/local/lib/php/extensions/no-debug-non-zts-20020429/mysql.so' - Cannot open "/usr/local/lib/php/extensions/no-debug-non-zts-20020429/mysql.so" in Unknown on line 0

  • php: Error parsing /usr/www/users/bob/cgi-bin/php.ini on line 125

  • php warning: Function registration failed - duplicate name - pdf_new in Unknown on line 0, Function registration failed - duplicate name - pdf_delete in Unknown on line 0...


Yes, those are actual HTTP header names and values. That's some serious ugliness right there. Why PHP would be reporting errors through the headers I can only guess--but it is.


Finding any information on this via a search engine has proven impossible, as the results are polluted with PHP syntax error messages and irrelevant discussions. So, if you have any ideas as to the why/how of this, I'd be interested to hear them.


And of course, my shameless product plug: WebInspect will alert on these.

Labels: Headers| HTTP| PHP| Research

Talking Headers: Part 1

Some people collect coins, DVDs or comic books. Others collect cars or Star Wars toys. Among other things, I like to collect HTTP headers. They take up a lot less space than cars, and can have a much higher return value than Mark McGwire's rookie card--as long as you find something interesting.


From time to time I like to look through my collection for rare gems... like these, which caught my eye this week:



  • x-real-server

  • real-hostname


These are the two most popular of a few slight variations. The header name itself is generally useless (more on that some other day)--it is, of course, the value that matters. Unfortunately, the vast majority of these are boring as heck--the server's name with (or without) the www. In a few cases, however, they reveal something interesting--something other than the server's name.


At least one of them in my collection is likely the host's internal or "real" hostname (a cartoon character). Another is a completely different host/domain combination (perhaps the hosting company's machine name which the virtual host is running on?). And yet another reveals that it's actually "cgi01"--maybe a good indication there's a "cgi02" and that they'd be good places to look for... lots of CGI programs.


Earth shattering? No. Interesting, and with the potential to reveal a bit about your servers? Yes.


As always when building your web infrastructure, stop every bit of useless information that heads outbound--no matter how innocuous it may seem. You never know what an attacker may be able to leverage for attacks or social engineering, and you never know what the future holds for new attacks or exploits.
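If you'd like to see what your own servers volunteer, a few lines of Python will do it (a sketch using only the standard library; the header names are the variants mentioned above, and the URL is a placeholder):

    import urllib.request

    SUSPECT = ("x-real-server", "real-hostname")

    with urllib.request.urlopen("http://www.example.com/") as resp:
        for name, value in resp.getheaders():
            if name.lower() in SUSPECT:
                print("leaky header:", name, "=", value)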


And just for a bit of a product plug, WebInspect will now check for these variations.


For some fun headers, see Andrew Wooster's post from nearly 4 years ago.

Labels: Headers| HTTP| Research

Sun releases Netscape Enterprise source--let the bug hunt begin?

Sun Microsystems announced today that Netscape Enterprise Server, one of the original grand-pappies of "modern" web servers (which excludes NCSA--sorry fanboys... I know you're out there), has been released under the BSD license. This isn't the really old one that (hopefully) no one uses, but the modern version, now called JES Web Server (part of the OpenSolaris Web Stack). Yep, it was also iPlanet for a bit. And SunONE. And possibly still Sun Java System Web Server. And probably a few other names people with too much time on their hands will find buried in the source.


No matter what you call it, it will be interesting to watch over the next few days/weeks/months what the security research community does with the source. I suspect we'll at least see a short-term uptick in the number of vulnerabilities for the server as people start looking into it and running their analysis tools. Corporations running the Java System Web Server should keep a keen eye on the Sun Alerts.


Some links for you:


URL Authentication - IE Silliness

IE dropped support for URL authentication (e.g., http://user:pass@example.com/) around 2004. There are plenty of discussions out there about the merits and problems with URL authentication, so I won't comment on it yet again. However, it is still in the RFC.


If you try to load a URL with authentication in IE 6, you see the message "Invalid Syntax Error: Page Cannot Be Displayed" -- which at least points to the fact that there may be a problem with the link you followed. However, I happened to notice in IE 7 that they've dumbed it down a little further: "Windows cannot find 'http://user:pass@example.com/'. Check the spelling and try again"


If you don't put the "http://" in your browser (because for years browsers have been teaching people not to type the protocol), you get the completely different error "The webpage cannot be displayed."  


Way to go, IE team! Rather than providing a better user experience, you hint that the site name is incorrect and leave it at that. Good job helping to educate your users.


Incidentally, Firefox, Safari and Opera will ignore invalid syntaxes like http://@example.com/ so you could create links that exclude IE users, should you be into that sort of thing for fun or profit.

Subdomains With Hyphens

I’ve been running a lightweight web crawler for a while just to look for interesting things. Recently I’ve noticed several web sites with hyphens at the beginning or end (or both) of their subdomain names/labels. The first time I saw it, I chalked it up to a link error, but after noticing it a few times it warranted investigation.

 

Here’s an example with both: http://-brainfreeze-.blogspot.com/

 

Obviously Blogspot allows that, but I got to wondering if that’s technically legitimate, and if so, what software accepts or barfs on it? So let’s look at the mess of RFCs that I found… RFC-608, RFC-592, RFC-1033, RFC-1034, RFC-1132, RFC-1738 and RFC-1912 all throw their hands into the host/domain name cookie jar (did I miss any?).

 

RFC-1912, which seems to be the latest, says (emphasis mine):


Allowable characters in a label for a host name are only ASCII letters, digits, and the `-' character. Labels may not be all numbers, but may have a leading digit (e.g., 3com.com). Labels must end and begin only with a letter or digit.

 

So… my interpretation is that “-brainfreeze-” and others are in violation of this RFC. But we all know how this really works—it only matters what the browsers support—so what do they do?
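For what it's worth, the RFC-1912 rule is easy to encode. A quick sketch (the "no all-numeric labels" restriction is left out for brevity):

    import re

    # letters, digits, and '-', but a label must begin and end alphanumeric
    LABEL = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$")

    for label in ("3com", "brain-freeze", "-brainfreeze-"):
        print(label, "ok" if LABEL.match(label) else "violates RFC-1912")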

 

Not surprisingly, IE, Firefox (Mac & Windows), Safari (Mac) and Opera (Mac) all open the web site without a problem. Safari on the iPhone, however, fails with the error “Safari can't open the page because it can't find the server.”


 


So that leads us outside the browser…

 

Some quick mail tests show Outlook, Thunderbird and Gmail all will accept them in a “To” field. Apache seems to have no issues with them, either.

 

Plesk, a popular web server management package, disallows them, saying the subdomain field has an “improper value.”

 

For programming languages, Perl’s LWP module won’t load them (“bad hostname”), and Ruby’s Hpricot library won’t either (“the scheme http does not accept registry part”). PHP with include/require fails (“php_network_getaddresses: getaddrinfo failed: Name or service not known”). Python’s urllib2 also spits up on it (“IOError: (-2, 'Name or service not known')”).


 


However, these errors may not be in any particular language or program, but based on underlying name resolution issues. It’s important to note that nslookup on Windows, and nslookup/dig on OS X and CentOS 5.2 don’t have any problems with these names (on the same hosts those languages were tried on).  I’ll try to look at the resolution libraries soon for some of those and post a follow-up.
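In the meantime, here's a quick way to separate resolver behavior from library behavior (a sketch; results will vary by platform and resolver):

    import socket

    for host in ("-brainfreeze-.blogspot.com", "www.blogspot.com"):
        try:
            print(host, "->", socket.gethostbyname(host))
        except socket.gaierror as err:
            print(host, "failed:", err)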

For web sites, Archive.org's Wayback Machine gives a vague error if you try to look up our friend "-brainfreeze-" (the error differs from the one for a nonexistent host), but Google's cache of it works. TinyURL doesn't have any issues either.

So is this a security issue? Probably not directly—but it may be a problem when using tools against names that fit this pattern. What happens if your proxy or filter fails to parse the name? Which tools rely on a “broken” ping or name lookup before they do something? Which tools use urllib2 or LWP under the hood, and which hostname parsing routines will simply declare the names invalid and return an error? Any one of those “minor” issues could cause a compliance failure, if not a real security issue.

 

So to recap, what failed in testing?


  • Perl LWP

  • Ruby Hpricot

  • Python urllib2

  • PHP include/require

  • Plesk

  • Safari (iPhone)

  • The Wayback Machine

 

Find others? Post in the comments, and give http://-brainfreeze-.blogspot.com/ some love for being a good test site.

Labels: Firefox| IE| iPhone| Research

Finding SQL Injection with Scrawlr

Yes, we know that other blogs on this issue have included this comic, but it's just too perfect not to reference it.


You have likely been tracking the mass SQL Injections that are currently sweeping through the net. Just last night I was shopping on www.ihomeaudio.com when I noticed they had been injected (they have since fixed their site). HP started to observe these attacks in January. They spread to over 500,000 sites by April before calming down and then picking up again in May. Most of the sites hit were initially Microsoft IIS ASP applications, causing many security companies to mistake this for some sort of new vulnerability in IIS and leading Microsoft to research the possibility, but alas, it's just our old friend, SQL Injection. Indeed, we now see this attack hitting both ASP and PHP sites, and thanks to Google, it's easy to see just which sites out there have been hit.


While we were closely following the situation, the nice folks at Microsoft contacted us to see if we could work together to help people identify and cope with this issue. Together we quickly developed an action plan. The Microsoft Security Response Center (MSRC) was in a tough spot: hundreds of thousands of ASP sites were getting hacked, yet the vulnerability wasn't something Microsoft could release a patch for. SQL Injection occurs because of poorly written web code interfacing with the web site's backend database, and the solution is much more complicated than a simple patch. Developers were going to have to learn about security and patch their own code if they were going to solve this. Microsoft's Security Vulnerability Research & Defense team has a blog post about this problem as well, where they share Microsoft's recommendations.
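If the bug class is new to you, the core of it fits in a few lines. A generic sketch (using SQLite as a stand-in; the attacks in the wild hit SQL Server through ASP pages):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    cur.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "nobody' OR '1'='1"

    # vulnerable: attacker-controlled input concatenated into the SQL text
    rows = cur.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
    print("injected:", rows)        # returns every row in the table

    # safe: a parameterized query treats the input as data, never as SQL
    rows = cur.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print("parameterized:", rows)   # returns nothing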


Now if you are no stranger to web security, you might be saying "well duh" right about now. Unfortunately, to at least 500,000 sites on the Internet this concept is still pretty new, and if you are one of the folks just now learning what SQL Injection is, I highly recommend you read the HP Web Security Research Group's white papers on verbose and blind SQL injection, located in our HP application security resource library.


Introducing HP Scrawlr



 



When Microsoft contacted us, they asked us to equip their customers with the tools necessary to quickly find SQL Injection vulnerabilities in their sites. HP's application security products (DevInspect, QAInspect and WebInspect) all find SQL Injection and countless other security vulnerabilities. DevInspect can even inspect your source code for SQL Injection and guide developers through the process of fixing it. But what if you just need to quickly check for SQL Injection before you decide how you are going to handle the issue? We needed something quick, highly accurate, and easy to download and install.


Scrawlr, developed by the HP Web Security Research Group in coordination with the MSRC, is short for SQL Injector and Crawler. Scrawlr will crawl a website while simultaneously analyzing the parameters of each individual web page for SQL Injection vulnerabilities. Scrawlr is lightning fast and uses our intelligent engine technology to dynamically craft SQL Injection attacks on the fly. It can even provide proof positive results by displaying the type of backend database in use and a list of available table names. There is no denying you have SQL Injection when I can show you table names!


Technical details for Scrawlr



  • Identify Verbose SQL Injection vulnerabilities in URL parameters

  • Can be configured to use a Proxy to access the web site

  • Will identify the type of SQL server in use

  • Will extract table names (verbose only) to guarantee no false positives


Scrawlr does have some limitations compared to our professional solutions and our fully functional SQL Injector tool:



  • Will only crawl up to 1500 pages

  • Does not support sites requiring authentication

  • Does not perform Blind SQL injection

  • Cannot retrieve database contents

  • Does not support JavaScript or Flash parsing

  • Will not test forms for SQL Injection (POST Parameters)


Download Scrawlr


You can download Scrawlr by visiting the following link: https://h30406.www3.hp.com/campaigns/2008/wwcampaign/1-57C4K/index.php?mcc=DNXA&jumpid=in_r11374_us/en/large/tsg/w1_0908_scrawlr_redirect/mcc_DNXA.


Scrawlr is offered as-is and is not a supported product. Assistance may be available from other Scrawlr users in our online Scrawlr forum located at http://www.communities.hp.com/securitysoftware/forums/198.aspx.


You can learn more about the HP Web Application Security Group and the HP Application Security Center by visiting our Security Community site at www.communities.hp.com/securitysoftware/ or by visiting our product information page at www.hp.com/go/securitysoftware/
