HP LoadRunner and Performance Center Blog

Displaying articles for: May 2009

Learn: what can you really get out of a Performance CoE

I've been working in centralized performance testing organizations for more than six years, giving presentations and consulting on performance CoEs - to the point where I didn't think there was much more to learn. Then I read the preliminary research from Theresa Lanowitz, who digs deeper into the true value of centralized performance testing. There is compelling new evidence of the real value our customers are finding: the clear benefits, and some of the hidden ones.


Please consider attending this session by Theresa Lanowitz, founder of voke Inc., as she presents the findings of a recent Market Snapshot on Performance Centers of Excellence. In this presentation, she will discuss:



  • Building a performance CoE

  • Achieving performance CoE ROI – qualitative and quantitative

  • Gaining organizational maturity through a performance CoE

  • Realizing the benefits, results, and strategic value of a performance CoE

Attendees will receive a copy of the voke "Market Snapshot™ Report: Performance Center of Excellence."


Click here to Register

Labels: LoadRunner

ROI: You Get What You Pay For

We've all heard that saying. But how many times do we really follow it? We have bought (OK, I have bought) cheap drills, exercise machines, and furniture, only to be sorry when they break prematurely. Or you find a great deal on shoes, only to have them fall apart on you while you are in a meeting with a customer. I'm not saying that happened to me, but I know how that feels.

Cheaper always seems like a better deal. Of course, it's not always true. I can tell you that I now pay more for my shoes and I'm much happier for it :). No more embarrassing shoe problems in front of customers (not saying that it happened to me). In fact, when my latest pair of shoes had an issue, I sent them back to the dealer and they mailed me a new pair in less than a week! That's service. You get what you pay for.

The same holds true for cars, clothes, hardware, repairs, and of course software testing tools. You knew I was going to have to go there.

I hear from some people that Performance Center is too expensive, and I'm always amazed when I hear that. I'm not saying Performance Center is for everyone; if you don't need PC's features and functionality, it may appear pricey. If you are only looking for a simple cell phone, then a phone that connects to the internet and your email and also has a touch screen may seem a little pricey. But if you need those features, then you can see the value in them.

I could sit here, go through each unique feature in Performance Center, and explain its value to you (not saying that won't come in a future blog post :) ). But why would you listen to me? I'm the product manager; of course I'm going to tell you that there is a lot of value in PC. Well, IDC, a premier global provider of market intelligence and advisory services, has just released an ROI case study around HP Performance Center.

A global finance company specializing in real estate finance, automotive finance, commercial finance, insurance, and online banking was able to achieve total ROI in 5.6 months. Yes, only 5.6 months - not one or two years, but a total return on investment in 5.6 months. If anything, I think we should be charging more for Performance Center :). This company did not just begin using PC; they have been using it for the last four years, and over that time they have found a cumulative benefit of $24M. I'd say they got a lot more than what they paid for. Not only did they see a 5.6-month return on the investment, they are also seeing a 44% reduction in errors and a 33% reduction in production downtime.
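
To put those numbers in perspective, here is a rough back-of-envelope calculation. This is my own arithmetic, not a figure from the IDC study, and it assumes the benefit accrued evenly over the four years:

    $24M benefit / 48 months  ≈ $500K of benefit per month
    5.6 months × $500K/month  ≈ $2.8M implied investment

On those assumptions, an investment of roughly $2.8M returned $24M, so the deployment paid for itself about eight times over in four years.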

What gave them these fantastic numbers?

Increased Flexibility
By moving to Performance Center, they were able to schedule their tests. Before PC, some controllers sat idle while others were in high demand, based on which scenarios lived on them. Once they could schedule their tests, they began performing impromptu load tests and concurrent load tests, and they found they could run more tests with fewer controllers.

Increased Efficiency
While they were able to increase their testing output, they didn't increase their testing resources. Their testers and engineers were able to do more through PC than they could with any other tool.

Central Team
With the central team, they were able to grow their knowledge and best practices around performance testing. By doing this, along with performing more tests, they were able to reduce their error rate by 44% and their production downtime by 33%.

So you get what you pay for. Put in the time and money. Get a good enterprise performance testing product. Invest in a strong central testing team. You will get more out than you put in. In the case of this customer, that was $24M out over four years.

Also, invest in good shoes. Cheap shoes will only cause you headaches and embarrassment (not saying that it happened to me).

Managing Changes – Web 2.0, where's your shame

Oh, this strange fascination started in 2004, when this new generation of 'web development' was christened Web 2.0. I witnessed this evolution of technology from my seat in steerage at Microsoft, as customers switched from the old Active architecture (remember Windows DNA?) to the warm impermanence of the .NET and J2EE architectures for web applications. Out with the old and in with the new, but the performance problems were generally the same: memory management, caching, compression, heap fragmentation, connection pooling, and so on. It might have had a new name, but it was the same people making the same mistakes. Back then we dismissed some of these new architectures as unproven or non-standard, but that didn't last long. Now, almost five years later with Web 2.0, any major player in the software industry that hasn't adopted the latest web architectures is spat on as plainly outdated, or stuck with the label of being traditional.


When it comes to testing tools and Web 2.0, I think that "traditional" does not equate to obsolete, no matter what some of the "youngsters" in the testing tool market may like to imply. The software industry is certainly competitive, and I think new companies should simply evangelize the positive innovations they have and then let the facts speak for themselves. If the 'old guys' can't support the new Web 2.0 stuff, it will be obvious soon enough. For instance, a testing tool that doesn't fully support AJAX clients is simply unacceptable at this point.


However, I do believe it is fair game to evaluate existing software solutions (pre-Web 2.0) on how well they can be adapted to support newer innovations in technology. As for LoadRunner, I think we have a long history of adapting to and embracing every new technology that has come along. I started using LoadRunner with X-Motif systems running on Solaris. That era and generation of technology has long since died (no offense intended to Motif or Sun). Today, the same concepts of record, replay, execution, scripting, and analysis are still innovative and very relevant. As long as the idea behind the product is still valid, you can still deliver a valid product.


When adapting to changes here in LoadRunner, we usually start by overcoming the technical hurdles of creating a new virtual user, or updating an existing one. And as I stated above, we have a long and rich history of doing this - probably longer than any other testing tool. As an example, in versions 9.0, 9.1, and 9.5 we have continued to improve our support for AJAX, Flex, and asynchronous applications. We respond to change quite well, even if we take some extra time to evaluate every aspect of what this 'new web' change means to our customers. It's worth getting it right and not being swayed by the hype of the 'Web 2.0' label.


Let me finish by stating that these new web technologies are a challenge to testing tools, but they are even more of a challenge to testers. I've heard of many a tester being surprised by the next version of the AUT, which has quietly implemented a new Web 2.0 architecture or even started making web services calls to an SOA. Change is a surprise only if you're unaware or unconscious. Surely it would be a failure not to communicate to QA that significant technology changes were coming, right? To some, this will sound all too familiar, like an institutionalized version of "throw it over the wall" behavior. But honestly, these new technologies (like AJAX) have been around for nearly five years now.


As for most testers, here’s a thanks to Web 2.0 – “You've left us up to our necks in it!”

Video: Understanding Iteration Pacing

I recently got a question about LoadRunner's Iteration Pacing configuration, which is part of the Virtual User Run-time settings. The best way to illustrate the difference between think time and iteration pacing is to show you in a video whiteboard session.
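
If you'd rather see where each concept lives in a script, here is a minimal sketch (the URL and timing values are invented for illustration). Think time is a delay coded inside the iteration; pacing never appears in the script at all, because it is the interval configured under Run-time Settings > Pacing that governs when the next iteration starts:

    /* Action.c - one iteration of a hypothetical web Vuser.
       Think time happens INSIDE the iteration; pacing controls the
       interval BETWEEN iteration starts and is configured under
       Run-time Settings > Pacing, not in the code. */
    Action()
    {
        lr_start_transaction("search");

        /* Replay the recorded request (hypothetical URL). */
        web_url("search",
                "URL=http://www.example.com/search?q=loadrunner",
                LAST);

        lr_end_transaction("search", LR_AUTO);

        /* Think time: the simulated user pauses to "read" the page.
           This delay counts toward the iteration's duration. */
        lr_think_time(8);

        return 0;
    }

    /* With pacing set to "At fixed intervals, every 60 seconds",
       each Vuser starts a new iteration every 60 seconds no matter
       how long the requests and think time took, provided they
       finish within the window. So 100 Vusers at a 60-second pacing
       produce a steady ~100 iterations per minute. */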




(if your browser doesn't show the video in-line, just click here)

Labels: LoadRunner

Hey EMC – how’s it feel to run a 100,000 user benchmark?

Back in November 2008 we got a phone call from our long-time customers at EMC Documentum. They were going to attempt a very large load test, pushing their solution's scalability to 10 times their normal volume. This would be an absolutely huge test for the Documentum system, and they believed it would be the world's largest Enterprise Content Management (ECM) benchmark ever to succeed.


Of course they wanted LoadRunner to run 100,000 virtual users. And they needed lots and lots of HP hardware and storage.


So we got together with some good friends up at the Microsoft Enterprise Engineering Center to deliver this massive test. And recently we published all the results, along with a cool video that explains how the testing was accomplished.


Check out the EMC benchmark results here!


The surprising thing to us was the LoadRunner Controller's performance on the latest HP commodity hardware. It did really well running 50K Vusers, and could probably have run more if we'd had more memory. Also, when you consider the whole system they were testing, it didn't take much hardware at all to deliver sub-second performance under a 100K-user load. And don't forget: the database was SQL Server 2008, the latest release from Microsoft. You can see a picture of the 64 CPUs on the SQL database server on Craig's blog.


Congrats EMC and SQL Server…it’s nice to see super scalable systems in action!

Labels: LoadRunner

Diagnostics, Diagnostics and more Diagnostics

When I was first doing testing back at Microsoft Labs, we had two main roles on our performance test consulting team: test consultants and system consultants. As a test consultant, I was responsible for designing, planning, developing, and executing performance tests in the test labs. Before joining this team I had been working for about 10 years with LoadRunner and other performance testing tools. The system consultants were responsible for hardware configuration and setting up the system under test. At the time, I really thought I knew performance and testing better than most (heck, I'd been working with LoadRunner for an eternity!).


But I learned quickly the difference between being a performance tester and really working as a performance engineer.


As just a performance tester, I was conditioned to find the bugs and report them. What I learned at Microsoft is that to be really effective and efficient, I had to understand WHY the performance was bad, and then also suggest different options for fixing the performance defect. I learned that performance analysis and engineering is a deep and vast discipline that requires advanced technical skills and robust experience in root-cause analysis. At Microsoft, we relied heavily on developer consultants who used private profilers and the Windows debugger to dig deeply into the applications. Our performance gurus for SQL Server used the simple SQL Profiler tool and mostly relied on their advanced experience and knowledge of database performance tuning. We were truly spoiled by the abundance of performance knowledge on our team. But how can you learn to become a performance guru?


Use LoadRunner Monitoring together with Diagnostics.


With HP Diagnostics you'll actually see deep into the application code as it runs; Diagnostics opens the black box so you can learn about system performance and engineering. I think that's the most important investment you can make in your career as a performance tester. As you dig into the system under test, you'll be able to see exactly why the CPU utilization is so high, and why so many bytes per second are being pumped through the stack and flooding the network adapter. For the database, you can see the exact SQL statements and returned data sets, which lets you start learning how the database engine works to deliver that result set back to the client. Using the LoadRunner monitors and Diagnostics together gives you the best view of the inner workings of the application's architecture and how it's built, and it will accelerate your skills and experience.
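
To make that concrete, here is a minimal sketch of the script side of that workflow (the URLs and transaction names are hypothetical). Wrapping each business step in a named transaction gives the Controller's monitor graphs and the Diagnostics server-side call trees a common anchor to drill down from:

    /* Action.c - a hypothetical web Vuser instrumented with named
       transactions. The same names appear in the Controller's
       transaction graphs (alongside the %CPU and network monitors)
       and in HP Diagnostics, where you can drill from a transaction
       into the server-side methods and SQL it triggered. */
    Action()
    {
        /* Business step 1: log in. */
        lr_start_transaction("login");
        web_url("login",
                "URL=http://www.example.com/login",
                LAST);
        lr_end_transaction("login", LR_AUTO);

        /* Business step 2: run a report, with a sub-transaction
           isolating the database-heavy part for root-cause analysis. */
        lr_start_transaction("run_report");
        lr_start_sub_transaction("report_query", "run_report");
        web_url("report",
                "URL=http://www.example.com/report?id=42",
                LAST);
        lr_end_sub_transaction("report_query", LR_AUTO);
        lr_end_transaction("run_report", LR_AUTO);

        return 0;
    }

When "report_query" turns red in the Controller, the matching call tree in Diagnostics shows whether the time went to application code or to the database, which is exactly the WHY question a performance engineer has to answer.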


The more you learn, the more valuable you are.

About the Author(s)
  • I have been working in the computer software industry since 1989. I started out in customer support, then moved to software testing, where I was a very early adopter of automation - first functional test automation and then performance test automation. I worked in professional services for 8 years before returning to my roots in customer support, where I have been a Technical Account Manager for HP's Premier Support department for the past 4 years. I have been using HP LoadRunner since 1998 and HP Performance Center since 2004. I also have a strong technical understanding of HP Application Lifecycle Management (Quality Center) and HP SiteScope.
  • Malcolm is a functional architect, focusing on best practices and methodologies across the software development lifecycle.
  • Michael Deady is a Pr. Consultant & Solution Architect for HP Professional Service and HP's ALM Evangelist for IT Experts Community. He specializes in software development, testing, and security. He also loves science fiction movies and anything to do with Texas.
  • Mukulika is Product Manager for HP Performance Center, a core part of the HP Software Performance Validation Suite, addressing the Enterprise performance testing COE market. She has 14 years experience in IT Consulting, Software development, Architecture definition and SaaS. She is responsible for driving future strategy, roadmap, optimal solution usage and best practices and serves as primary liaison for customers and the worldwide field community.
  • HP IT Distinguished Technologist. Tooling HP's R&D and IT for product development processes and tools.
  • WW Sr. Product Marketing Manager for HP ITPS, VP of Apps & HP LoadRunner