HP Security Products Blog
From applications to infrastructure, enterprises and governments alike face a constant barrage of digital attacks designed to steal data, cripple networks, damage brands, and perform a host of other malicious intents. HP Enterprise Security Products offers products and services that help organizations meet the security demands of a rapidly changing and more dangerous world. HP ESP enables businesses and institutions to take a proactive approach to security that integrates information correlation, deep application analysis and network-level defense mechanisms—unifying the components of a complete security program and reducing risk across your enterprise. In this blog, we will announce the latest offerings from HP ESP, discuss current trends in vulnerability research and technology, reveal new HP ESP security initiatives and promote our upcoming appearances and speaking engagements.

Displaying articles for: November 2007

JavaScript strings immutable in Rhino???

Update: Hmmm. I think I'm looking at the wrong thing. This needs more testing/tracing to see exactly whats going on.

Just a quick update from yesterday's post. It appears that Mozilla Rhino (a JavaScript interpreter written in Java) uses Java's String object to represent JavaScript strings inside of the engine. Here is the constructor from /js/src/org/mozilla/javascript/NativeString.java:

    private NativeString(String s) {
        string = s;
    }

This could be bad, depending on what people are storing in JavaScript strings (which are represented as Java String objects). Strings are immutable in Java (and many other languages); as a developer, you cannot easily clear out the contents of a String object. As you manipulate a string, multiple copies of its contents are made. For example, consider this Java code:

String foo = "p@$$w0rd";

System.out.println("Your password in upper case is: " + foo.toUpperCase());

There are now two copies of the password string in memory, "p@$$w0rd" and "P@$$W0RD." Noted security expert John Viega has discussed the problem of disclosing sensitive data in memory at length.
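This is the standard reason Java APIs that handle secrets (JPasswordField.getPassword(), for example) hand back a char[] rather than a String: a mutable buffer can be scrubbed in place. A minimal sketch (the password value is obviously just an example):

```java
import java.util.Arrays;

public class ClearSecret {
    public static void main(String[] args) {
        // Keep the secret in a mutable char[] instead of an immutable String...
        char[] password = {'p', '@', '$', '$', 'w', '0', 'r', 'd'};

        // ... use the password here ...

        // ...so it can be clobbered in place once it is no longer needed.
        Arrays.fill(password, '\0');
        System.out.println(new String(password).trim().isEmpty()); // prints: true
    }
}
```

Once you call toUpperCase(), concat(), or substring() on a String copy of the secret, you lose this option: each call quietly mints another immutable copy that lingers until the garbage collector gets around to it.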

Why is all of this an issue? Because the JavaScript language spec doesn't provide any information, warnings, or guidance about what you should and should not store in JavaScript strings (and really it shouldn't). It's up to the designers and implementors of JavaScript interpreters to explain how their interpreter handles data. However, I don't know of a single JavaScript interpreter that does this. Rhino certainly contains no warnings that sensitive data should not be stored in JavaScript strings. To make matters more concerning, Rhino is not typically embedded in a web browser, where client-side JavaScript strings shouldn't contain sensitive information anyway (yes, I know that web security readers just let out a laugh). Rhino is embedded in other programs/projects where the types of data it could be processing are far more diverse than in a web browser, and the probability that sensitive data will be present in JavaScript (and thus Java) strings is higher.

All in all, the Mozilla folks should probably modify Rhino so that it uses a StringBuilder object instead of a String object to represent JavaScript strings. I haven't dug into SpiderMonkey, but hopefully it clobbers its character arrays with junk before freeing them. Interestingly, I just found this article describing situations where compilers will "optimize" away memset() calls intended to overwrite sensitive string data. It's possible SpiderMonkey leaves sensitive data lying around as well, even if it is trying not to!

[snarfs coffee]... Wait, what are you doing?

While reading through an article about Firefox 3 on Security Focus today I snarfed my drink when I read the following passage:

The group also rewrote the Password Manager in JavaScript from C++ to eliminate memory errors, Schroepfer said.

Digging a little deeper, I found an article talking about how OS keychain tools can interact with Firefox 3's JavaScript password manager. In the comments of the article is the following tidbit:

The JS portions mostly handle DOM interaction and file IO for signons2.txt. The two main reasons for switching to JS are simpler code and increased security (eg, no buffer overflows possible). Most of the Firefox frontend is already JS, so this isn't exactly a radical change. But, in any case, the actual encryption of logins continues to be done by a C++ component (using Triple-DES).

There are numerous things about this that concern me:

  1. JavaScript code is *not* simple. It is highly dynamic and loosely typed, with late binding. This means that short of a syntax error, all your errors are runtime errors. Not fun to debug, regardless of how awesome Firebug is. In fact, we have an entire chapter in our Ajax Security book about the nuances of JavaScript (variable scoping and declaration, oddly performing functions, deadlocks/race conditions/livelocks with no mutexes, etc.) that make it tricky to develop JavaScript. Dumping one language for another solely to improve the readability of your code is admitting you are a poor software architect and, frankly, rather lame.

  2. Moving to JavaScript because most of Firefox/chrome is JavaScript kind of makes sense. Moving to JavaScript from C++ to "fix" buffer overflows and memory problems is a horrible reason. You are admitting you are incapable of solving a very well-known problem and that the only solution was to move to a language/runtime that removed the problem for you.

  3. Is this JavaScript interface accessible from plugins or other chrome controls? Please don't tell me you just increased what can be done with Cross Zone Scripting attacks.

  4. As is pointed out in the comments of the second article, programs must be very careful when freeing memory that contains passwords. In C you can blast the buffer with junk before a free(). You have to be extremely careful with passwords in memory in managed languages like C# or Java, where strings are immutable. JavaScript? No control whatsoever over how string data is stored and destroyed. Does SpiderMonkey or Rhino handle JavaScript strings securely? Hmmmm...

All in all, this is a scary jolt first thing in the morning.

Digging into ASP.NET RegEx Validators

RegEx Validators are handy for implementing whitelist input validation (our DevInspect product has a library of a hundred or so), so it pays to see what they actually do under the covers. The following code is from the class System.Web.UI.WebControls.RegularExpressionValidator, which implements the RegEx Validator in ASP.NET. I'm looking at version 2.0 of the .NET framework. The EvaluateIsValid() function is what actually determines whether an input matches the allowed regex:

protected override bool EvaluateIsValid()
{
    string controlValidationValue =
        base.GetControlValidationValue(base.ControlToValidate);

    if ((controlValidationValue == null)
        || (controlValidationValue.Trim().Length == 0))
    {
        return true;
    }

    try
    {
        Match match = Regex.Match(controlValidationValue,
            this.ValidationExpression);
        return ((match.Success && (match.Index == 0))
            && (match.Length == controlValidationValue.Length));
    }
    catch
    {
        return true;
    }
}

Let's look at the first if statement closely. The conditional is a logical OR, so we can treat each part as an individual if statement. The code says "if the control value is null, the input passes this validator." The control value can only be null if the control doesn't exist, and chances are you would have had a compile error. Either way, I don't really care about this statement.

The second part of the if statement is interesting. If the trimmed value of the input has no length, the input passes validation. This is a little weird. Trim() removes all leading and trailing whitespace from a string: things like tab, carriage return, linefeed, etc. The full list is in the .NET String class documentation. So input consisting of nothing but whitespace passes any ASP.NET RegEx Validator, regardless of the regex. This is counter-intuitive: if I set a RegEx Validator to use a strict pattern like \w+, whitespace-only input should not get through. More interesting is that characters like vertical tab, CR, LF, and form feed are often interpreted as delimiters, so input with nothing but whitespace could in fact do some weird things in a backend app.
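The whitespace loophole is easy to reproduce. This sketch uses Java's String.trim(), which clips the same ASCII control characters, to perform the exact test the validator performs:

```java
public class WhitespaceDemo {
    public static void main(String[] args) {
        // Tab, CR, LF, space, vertical tab, form feed -- nothing but whitespace
        String input = "\t\r\n \u000B\f";

        // The same test the validator performs: trimmed-empty means "valid"
        boolean passesValidator = (input.trim().length() == 0);
        System.out.println(passesValidator); // prints: true
    }
}
```

No matter how restrictive the regex, that input never reaches Regex.Match() at all; the validator short-circuits and declares it valid.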

Moving along, we see that the input is matched against the supplied pattern. If a match is found, and that match begins with the first character of the input, and the length of the matched text is the same as the length of the input, then the input passes the validator. Essentially, this says that the entire input must match the defined regex. This is good and bad. It's good because I often see developers define a whitelist and forget to use the "start position" and "end position" characters (^ and $ respectively). Consider a simplified regex for a valid email address, \w+@\w+\.\w{3}. (This RegEx excludes valid email addresses and is just to illustrate the point.) This RegEx matches email addresses like billy@hp.com. It also matches input like <script>alert("XSS")</script>billy@hp.com, because the regex doesn't require that the entire input match the pattern, merely that some subset of the input matches. The RegEx should be ^\w+@\w+\.\w{3}$, where ^ and $ force the entire input to match.
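The anchoring pitfall is easy to demonstrate. Here's a quick sketch using java.util.regex (the ^/$ anchoring semantics are essentially the same as in .NET's Regex class):

```java
import java.util.regex.Pattern;

public class AnchorDemo {
    public static void main(String[] args) {
        String hostile = "<script>alert(\"XSS\")</script>billy@hp.com";

        // Unanchored: find() happily locates billy@hp.com inside hostile input
        boolean loose = Pattern.compile("\\w+@\\w+\\.\\w{3}")
                .matcher(hostile).find();
        System.out.println(loose);  // prints: true

        // Anchored: the entire input must conform, so the payload is rejected
        boolean strict = Pattern.compile("^\\w+@\\w+\\.\\w{3}$")
                .matcher(hostile).find();
        System.out.println(strict); // prints: false
    }
}
```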

I often see developers make this mistake, and in a way Microsoft is solving the problem by saying "Developers are stupid, so we will also match the entire string so they don't have to." OK, I like erring on the side of caution, but this prohibits advanced users from using ^ and $ appropriately. Microsoft completely usurps the developer and removes the choice from their hands. You have to use a CustomValidator class to implement RegExs which use just ^ or $. However, I can live with this because it undoubtedly saves the Internet from silly mistakes.

A final thing that caught my eye was the try ... catch ... block. If the Regex.Match() call throws an exception, the validator returns true, indicating the input is safe. This means that in the event of an error, the validator fails open instead of failing closed! Deciding when applications/appliances/software/hardware/structures should fail open or fail closed is way beyond the scope of this post, and the answer is almost always circumstantial. Quick: should firewalls fail open or closed? Fail open? Well, then an attacker knocks out your firewalls and it's open season on the FTP servers and Samba shares inside your organization. Fail closed? That's a nifty DoS you built into your network infrastructure, now isn't it? When should input validation fail open or fail closed? Again, it depends, but my gut tells me it should fail closed more often than it fails open.
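A stripped-down Java rendition of the validator's fail-open logic makes the behavior concrete (the method name is mine, not Microsoft's):

```java
import java.util.regex.Pattern;

public class FailOpenValidator {
    // Mirrors the .NET logic: null/whitespace passes, and any exception
    // during matching is swallowed and reported as "valid".
    static boolean evaluateIsValid(String input, String pattern) {
        if (input == null || input.trim().length() == 0) {
            return true;
        }
        try {
            return Pattern.compile(pattern).matcher(input).matches();
        } catch (RuntimeException e) {
            return true; // fail open: an error means the input is declared safe
        }
    }

    public static void main(String[] args) {
        // "[" is not valid regex syntax; compiling it throws, so we fail open
        System.out.println(evaluateIsValid("anything at all", "[")); // prints: true
        System.out.println(evaluateIsValid("abc123", "[a-z]+\\d+")); // prints: true
        System.out.println(evaluateIsValid("abc123!", "[a-z]+\\d+")); // prints: false
    }
}
```

Note how the broken pattern makes arbitrary input "valid" — the question is whether an attacker can trigger that exception path at runtime.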

Ignoring how it should fail is moot if you can't make it fail. How can we make Regex.Match() throw an exception? Null strings would do it, but a brief glance at the code paths in ASP.NET seems to prevent that from occurring. Invalid RegEx syntax would do it, but then you get compile-time problems or an immediate runtime error during the Page_Load event, long before input is validated. So RegEx Validators in ASP.NET fail open if there is an exception, but I can't seem to make them throw in an exploitable way to sidestep input validation. Perhaps someone else can take a swing at it :-)

Analysis of Larry Suto's comparative case study


In October 2007, Larry Suto released a case study entitled “Analyzing the Effectiveness and Coverage of Web Application Security Scanners,” available for reading at http://www.stratdat.com/webscan.pdf.  The study compared the results of three commercial web application security scanners, including WebInspect.  There has been much discussion in the industry about this study (for a good example, see the “Coverage and a recent paper by L. Suto” thread at http://lists.immunitysec.com/pipermail/dailydave/2007-October/thread.html).  Part of the discussion focuses on Suto’s questionable methodology and conclusions relating to application coverage, and on the vagueness of his results.

Since any solid science experiment should be repeatable, SPI Labs set out to re-create Suto’s study to reasonably verify his conclusions and methodology.  In doing so we discovered significant discrepancies between our results and the results reported by Suto.  Attached is our final report (Suto_review_FINAL.pdf), where we indicate the results we received when we tested the same applications.

Ajax Security more than Increased Attack Surface

I got an email from Christ1an the other day asking me what Ajax security was all about. I was just going to send him the table of contents of the book, but I thought it might be educational to see how the components of Ajax security relate, and where they come from. In Jeremiah's fascinating Web Application Professionals Survey, less than 3% of people think there is nothing new about Ajax security, which jibes with the idea that Ajax security is gaining acceptance. And while over 2/3 understand that Ajax applications have an increased attack surface, many of the comments showed that some people believe Ajax security is just about an increased attack surface.

Let me assure you, if Ajax security were only about an increased attack surface, two things would have happened:

  1. Addison-Wesley wouldn't have asked me to write a 500+ page book about it

  2. Bryan and I would have finished a long time ago :-)

There are many issues with Ajax security, and hopefully this piece will help people see the bigger Ajax security picture.


Ajax makes applications more responsive. It does this by allowing client-side JavaScript to asynchronously fetch data from the web server without blocking user interaction or refreshing the web page. This seems trivial, but it is not. Everything about Ajax security stems from this fundamental shift in application architecture.

This shift means there must be code running on the client to send these requests and process the results. So what is it talking to? Web services running on the server, which may or may not have been there before. The attack surface grows, so at the very least developers have more inputs that need to be properly secured. Anyone reading Full Disclosure or Secunia knows how well we are doing in that department as is. But you already knew that, so let's move on.

Ajax applications straddle the network, existing on both the server and the client. So what's in that client-side code? Variable names, return values, string and numeric constants, data types, value ranges, and the proper program flow based on which web services are called in what order and with which interdependent values. All good details about the inner workings of an application. That black box is looking a little gray. What's more, there's an interaction between the client and the server code, and the attacker controls the client. Variable x is used by web service 1 and web service 3, but its value can be modified by an attacker between the two uses. An attacker can call the web services out of order, manipulate the logic, etc. Reverse engineering of client code is a growing field, so don't count on protecting your logic.

The fundamental unit of a traditional web app is a web page written in HTML. The fundamental unit of an Ajax web application is arbitrary data. So how do developers move data back and forth between client and server? With a data transport layer! Sure, they can use name/value pairs in a query string when submitting data, but what about the format for what gets returned? JSON, SOAP, CSV, Base64, binary data, and custom formats are all fair game. And with them come the problems of implementing robust parsers and serializers while avoiding Denial of Service attacks.

And let's not forget all the programming errors of writing an enterprise web application in still more programming languages. Functions like String.replace() and String.substring(), and RegExs, don't work in JavaScript the way they work in most languages. It's interpreted, so everything except syntax errors is a runtime error. And it's asynchronous, so you can have race conditions, deadlocks, livelocks, etc. Have fun QAing that. There are also odd properties of JavaScript, like dynamic reassignment of functions. And you can do a lot more than just JSON hijacking. Function clobbering, logical MITM, and even real-time JavaScript environment monitoring to attack lazy loading are all possible and explored in the book.

If developers do silly things in client-side code now, what do you think they will do when that client-side code actually does something meaningful for the application? It was pretty hard to have a database connection string on the client in a traditional application because there was not much that could be done with it. Ajax applications are talking directly to data tiers, and more business logic is ending up on the client. After all, the client is a place where code can execute, so developers use it. They are doing more and more on the client and even adding new features to the client. Now we have client-side storage, which opens a giant can of worms. How is the data stored? Is it session- or globally-scoped? How much can I store? What can I store? Do I have to serialize everything to a string? More data transport layer issues! Can it auto-expire, or is the developer responsible for cleanup? Does it require user interaction to clear it? What are the access controls? Which sites can also access the data? Does exposing access to the data expose my API as well? I'm looking at you, crossdomain.xml! Cross-directory, cross-domain, and cross-port attacks, data integrity, and missing data are all concerns of the developer. And every storage method, from cookies to DOM storage to Flash LSOs and IE's userData, does things differently. Too bad Dojo.Storage abstracts away which one is being used, when your application must be written with certain assumptions. Cookie storage = built-in HTTP Response Splitting vulnerability.
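That last jab deserves a one-liner of explanation: cookie values ride inside HTTP headers, so a storage layer that writes unsanitized data into a cookie hands the attacker a CRLF injection point. A sketch (the cookie name and stored value are hypothetical):

```java
public class CookieSplitDemo {
    public static void main(String[] args) {
        // Attacker-controlled data destined for cookie-based client storage
        String stored = "widget=1\r\nSet-Cookie: session=attacker-chosen";

        // Naively serializing it into a header embeds a bare CRLF,
        // which terminates the header and starts a new, forged one
        String header = "Set-Cookie: appState=" + stored;
        System.out.println(header.contains("\r\n")); // prints: true
    }
}
```

Any storage abstraction that silently falls back to cookies inherits this problem unless it strictly encodes CR and LF out of the stored value.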

And speaking of enabling the client, look at offline Ajax frameworks like Google Gears, Dojo.Offline, and Microsoft Sync. Now we are writing so much of a web application in JavaScript that it can do meaningful things when it's not even connected to the Internet. There can be no isolation of business logic there; code transparency issues just got worse. Transparent local web servers serve cached resources that can be poisoned from other web apps (including poisoning the JavaScript logic of the offline Ajax app). Extended threading models allow CPU-intense JavaScript to run in a separate thread (think Jikto, JavaScript port scanning, or XSS-Proxy running all the time). JavaScript-accessible SQL databases that can auto-sync with the server are even cropping up. Yes, with offline Ajax applications you can have client-side SQL injection attacks!

The widget craze is largely possible because of Ajax and thus is an important subject for Ajax security. After all, could you imagine a web page with 9 widgets that all have to do postbacks with full page refreshes to exchange data with the server? The web application would always be blocking and unresponsive, with nothing but "Receiving data from PageFlakes" on the screen! IFrame jails, cross-widget attacks and hijacking, degrading trust of 3rd party data sources, cross-Flash communication, data leakage, and crazy/bad stuff like JavaScript implementations of RSA and SHA-256 are all covered. And let's not forget mashups. You want to take data from numerous untrustworthy sources and load it in your security context? Cascading exploits, misplaced trust, and the Confused Deputy all play out in that arena.

And what about all this rich content you are accepting? Web 2.0 is the world of user-generated content, after all. RSS feeds, sitemaps, Cascading Style Sheets, JavaScript code snippets. Forget validating that a telephone number is really a telephone number; how do we validate rich content? How do you deal with phishing attacks and polymorphic JavaScript which evade widget filter functions? What about attacks that exploit nothing but DOM styles? A magician succeeds by fooling your eyes and nothing else; the same is true for presentation layer attacks.
What native security features do frameworks like DWR, Dojo, Prototype, and ASP.NET Ajax provide? What do you have to implement yourself? What features are on by default that you don't know about? How does a developer choose an appropriate framework? How does QA test an Ajax application when a textbox might feed a SQL-injectable web service but the ODBC error message is suppressed by the framework?
We've only just started to talk about what's in the Ajax Security book. There is still JavaScript malware, control flow exploitation, request origin uncertainty, and more. Hopefully this little taste shows you that there is far, far more to Ajax security than some JavaScript eye candy and an increased attack surface. Developer, QA professional, and hacker alike will all find Ajax Security an enormously powerful resource to help design, build, test, and hack Ajax applications.
