Testing myths—debunking the top 5 myths of mobile application performance testing

 


The mobility wave continues to grow. As a result, mobile application development is expected to outpace native PC projects by at least 400 percent in the next several years. While development approaches and platforms for mobile are advancing rapidly, approaches to mobile application testing lag far behind.

 

This lack of mobile application testing experience can cause testers to stretch the limits of reality in order to understand accurately how an application will behave. Only recently have enterprises shifted from simply asking how to develop mobile apps to focusing on how to test them. These enterprises often turn to the advice of “experts” who are themselves still making the transition from development to testing expertise.

 

The myths of mobile performance testing are endemic to the early stage of the mobile performance testing market, and they contribute to the performance epidemic that has stricken mobile apps. Analysts, however, are quickly coming of age. They are now pushing the need, and the ability, to discover and emulate production network conditions in the test environment. By doing so, they have begun to debunk mobile application testing myths.

 

You can read the “Testing myths – Debunking the top 5 myths of mobile application performance testing” whitepaper for the full discussion.

 

Myth 1: Testing in the wild

 

One of the most common and risky approaches to testing performance is to deploy the mobile app and then let it be tested in production.  This “in the wild” testing means performance data will be based on real users and real network conditions.  However, this approach also poses a significant threat to the real end-user experience and to business value.

 

If end users have a disappointing experience that makes them abandon the application or switch to a competitor, the desired end-user engagement or action is at risk.  In this scenario, performance remediation and optimization can only occur if end users report the problem accurately. They must relay the exact steps they took to create the error and note the conditions under which the application was being used.

 

Crowdsourcing “in the wild” testing is just as risky, if not more so. By employing testers (sometimes globally) to test your in-production application simultaneously, teams expect to understand how load and network conditions affect application behavior. In practice, crowdsourcing degrades the real user experience because it creates unnatural congestion on the network.

 

The most significant disadvantages of this myth are:

  • The inability to cover all real-world usage scenarios
  • The inability to collect the necessary data for remediation

Thus, application behavior measured under crowdsourced testing “in the wild” is neither accurate nor predictive of actual application performance. It does not facilitate rapid issue resolution, and it poses risks for real customers or end users trying to access your systems at the same time as the “testing.”

 

Myth 2: War driving

 

Some development and QA teams may test an application by deploying it only to a few known testers. They might have these testers walk around a building or drive around a city in order to assess how an application will perform under varying conditions, including network handoffs and poor signals.

 

While this approach poses less risk to real users, it offers no consideration for the production network conditions experienced by the real distributed user base.  Rather, it provides a test that reveals performance data only for a particular location at a particular time of day.  It doesn’t adequately represent distributed user populations experiencing varied network conditions.  In addition, performance issues encountered while “war driving” are typically difficult to repeat and recreate, making remediation costly and imprecise.

 

To put it simply, testing in the wild and war driving are flawed approaches to ensuring performance. They make use of the production network and tactics employed after applications are deployed, and these options are not ideal.

 

Myth 3:  Freeware and partial emulation (a.k.a. compound inaccuracy)

 

Technologies exist to test performance earlier in the application lifecycle when testing is more cost-effective and efficient.  By thoroughly testing before deployment, potential issues can be identified and remediated before end users and businesses are affected.

 

Even if teams understand the requirement to recreate network conditions in the test lab, some solutions make a less than complete attempt at network virtualization. Some test automation solutions introduce the concept of bandwidth as a network constraint. This represents only a fractional first step for lab testing. Bandwidth limitations are often static settings rather than dynamic ones. A static bandwidth setting makes it impossible to recreate the experience of a mobile user who may encounter varying signal strength on a single network, switch between networks, or move from Wi-Fi to a 3G connection. Further, if the bandwidth metric, or any network condition, is virtualized in the test lab but not based on actual measured conditions from the production network, it represents an additional variable of inaccuracy and introduces additional risk and uncertainty into the reliability of test results.
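
To make the difference concrete, the short sketch below is a hypothetical illustration in Python (not any vendor's tool, and with made-up numbers rather than measured production conditions): it estimates how long the same download takes under a single fixed bandwidth cap versus a profile that shifts from Wi-Fi to a weak 3G cell and back.

```python
# Minimal sketch: compare a static bandwidth cap with a dynamic,
# measured-style profile when estimating how long a fixed download takes.
# All numbers below are made up for illustration; in practice the profile
# would come from conditions recorded on the production network.

STATIC_KBPS = 2000                       # a single, fixed lab setting

# (duration_seconds, kbps) segments, e.g. Wi-Fi -> weak 3G -> 3G
DYNAMIC_PROFILE = [(10, 8000), (15, 400), (20, 1500)]

PAYLOAD_KBITS = 100_000                  # roughly a 12.5 MB download


def time_with_static(kbps, payload_kbits):
    """Transfer time if bandwidth never changes."""
    return payload_kbits / kbps


def time_with_profile(profile, payload_kbits):
    """Walk the profile segment by segment until the payload is delivered."""
    elapsed, remaining = 0.0, float(payload_kbits)
    for duration, kbps in profile:
        capacity = kbps * duration       # kbits deliverable in this segment
        if capacity >= remaining:
            return elapsed + remaining / kbps
        remaining -= capacity
        elapsed += duration
    # Payload did not finish inside the recorded trace; extend the last rate.
    return elapsed + remaining / profile[-1][1]


if __name__ == "__main__":
    print(f"static cap:      {time_with_static(STATIC_KBPS, PAYLOAD_KBITS):.1f} s")
    print(f"dynamic profile: {time_with_profile(DYNAMIC_PROFILE, PAYLOAD_KBITS):.1f} s")
```

Even in this toy example, the fixed cap and the shifting profile disagree by a wide margin, which is exactly the kind of inaccuracy a static lab setting quietly introduces.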

 

Myth 4: Ignore jitter (streaming media and full emulation)

 

It is dangerous to ignore jitter, particularly in the case of streaming media. Streaming media is especially susceptible to jitter: data packets must arrive at the end-user device in the correct order and in a timely manner; otherwise playback will be choppy and inconsistent.
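
A rough sense of why jitter matters for streaming can be had from a short simulation. The sketch below is a deliberately simplified, hypothetical model in Python (not any particular player or test tool): it adds random jitter to a steadily paced packet stream and counts how many packets miss a fixed playout deadline.

```python
# Minimal sketch: how arrival-time jitter pushes streaming packets past
# their playout deadline. All numbers are illustrative, not measured.
import random

PACKET_INTERVAL_MS = 20    # sender emits one packet every 20 ms
BASE_LATENCY_MS = 80       # nominal one-way network delay
PLAYOUT_BUFFER_MS = 40     # client buffers 40 ms before starting playback
NUM_PACKETS = 500


def late_packets(jitter_ms, seed=7):
    """Count packets arriving after their scheduled playout time.

    Playback is assumed to start a fixed buffer after the nominal arrival
    of the first packet, a simplification made for illustration only.
    """
    rng = random.Random(seed)
    playback_start = BASE_LATENCY_MS + PLAYOUT_BUFFER_MS
    late = 0
    for i in range(NUM_PACKETS):
        deadline = playback_start + i * PACKET_INTERVAL_MS
        arrival = i * PACKET_INTERVAL_MS + BASE_LATENCY_MS + rng.uniform(0, jitter_ms)
        if arrival > deadline:
            late += 1
    return late


if __name__ == "__main__":
    for jitter in (0, 20, 60, 120):
        print(f"jitter {jitter:>3} ms -> {late_packets(jitter):>3} of {NUM_PACKETS} packets late")
```

With no jitter every packet is on time; as jitter grows beyond the playout buffer, an increasing share of packets arrive too late to be played smoothly, which is exactly the choppiness end users report.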

 

A customer-centric view of the experience, including streaming content performance, is paramount. Both network and application performance must be adequately tested and managed to deliver satisfactory performance to the end user. Most current toolsets and methodologies fail to bridge the divide between network performance and the performance of applications running over that same network. This gap means operators may not have sufficient visibility into a customer’s app experience.

 

Myth 5: Sterile functional testing

 

“Performance” is sometimes misused by testing vendors.  A limited use of network conditions in a single user or functional test does not adequately represent how distributed user groups will experience app behavior.

It is important to differentiate functional testing from performance testing: 

  • How an application responds to a command (user input) is a functional consideration
  • How quickly the application responds is a performance consideration 

Functional testing can be executed on real or emulated devices. Regardless, functional testing must be paired with performance considerations in order to deliver reliable results. Your development and test teams should be wary of misinformation and confusing marketing messages that promise an “end-to-end” solution; these can leave an application at risk if they do not take the network into account.
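
The distinction is easy to express in a test suite. The sketch below is a minimal illustration using Python’s standard unittest module against a hypothetical submit_order call (a stand-in for the application under test, not a real API): one test asserts the functional result, the other asserts a response-time budget, so a correct-but-slow build still fails.

```python
# Minimal sketch: the same request exercised as a functional check
# (is the response correct?) and a performance check (is it fast enough?).
# `submit_order` is a hypothetical stand-in for the application under test.
import time
import unittest


def submit_order(item, qty):
    """Pretend application call: simulates some server/network work."""
    time.sleep(0.05)
    return {"status": "accepted", "item": item, "qty": qty}


class OrderTests(unittest.TestCase):
    def test_functional_response(self):
        # Functional: the application responds correctly to the command.
        result = submit_order("widget", 3)
        self.assertEqual(result["status"], "accepted")
        self.assertEqual(result["qty"], 3)

    def test_response_time_budget(self):
        # Performance: the same command must come back within a budget.
        start = time.perf_counter()
        submit_order("widget", 3)
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, 0.5, "response exceeded the 500 ms budget")


if __name__ == "__main__":
    unittest.main()
```

In a realistic environment the response-time assertion would be run under emulated network conditions and multi-user load rather than against a local stub; the point here is only that correctness and speed are separate checks.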

 

Functional and load testing are important elements in an end-to-end solution.  However, both of these tests require the ability to emulate multiple network locations and users to deliver reliable insight into how an application will perform once deployed. 

 

While solutions exist that allow testers to account for mobile hardware and even distributed third-party services, the mobile network remains an elusive but necessary component of any test environment. The constantly changing conditions on the mobile network make it particularly challenging to account for the impact of network constraints, which is why it is necessary to distinguish fact from fiction among the mobile testing myths.

 

For more information, be sure to check out our business whitepaper: Testing Myths.  

 

To learn more about HP Network Virtualization, stay connected and find us at @HPloadrunner or visit our website: hp.com/go/loadrunner

 

Labels: mobile testing