Unified communications systems today are becoming increasingly complex. Emerging technologies are evolving at a remarkable pace, and the level of complexity varies greatly depending on the organization and industry.
UC environments are prone to regular change, such as ongoing software and system upgrades, additions and improvements. On top of that, remote working has added a whole other level of intricacy with virtual users, and given rise to a new industry term: 'performance engineering'.
The worldwide evolution of unified communications underscores the need for performance testing tools and performance monitoring that report when changes affect the overall experience for users and customers. Any organization's UC and contact center system depends on peak performance at all times. Performance testing evaluates a system against its performance acceptance criteria, benchmarks and standards while it is under stress.
What exactly is performance testing?
Performance testing broadly refers to the measurement and evaluation of the functional effectiveness of a software system or component. Important factors include reliability, scalability, efficiency and interoperability, as well as stability under load.
Performance testing provides developers and system managers with the diagnostic information they need to eliminate performance bottlenecks, and to ensure that any new system component conforms to the specified performance criteria.
Why should I conduct performance testing?
Testing your organization's unified communications and contact center ecosystems provides performance data that's critical to delivering quality, consistent end-user and customer experience.
Performance tests help ensure your software meets the expected levels of service and provides a positive user experience. With the right testing tools, they will highlight improvements you should make to your applications in terms of speed, stability, and scalability before they go into production.
The adoption, success, and productivity of any software application depends directly on properly implemented performance testing. But first, let's clarify the categories of performance testing and how they relate to this guide, which is mainly concerned with non-functional testing.
What types of things will performance testing reveal?
Organizations run performance tests for one or more of the following reasons:
- To find out where and when computing bottlenecks are occurring within an application.
- To determine whether the application satisfies performance requirements (for example, if the system can handle up to 1,000 concurrent users).
- To verify that the performance levels claimed by a software vendor hold up in practice.
- To compare two or more systems and identify the one that performs best.
- To measure stability during peak traffic events.
Functional testing and non-functional testing - what's the difference?
Functional tests verify that each function of the software application operates in conformance with the requirement specification; they are not concerned with the application's source code.
Every function of the system is tested by providing appropriate input, verifying the output and comparing the actual results with the expected results. This testing involves checking the user interface, APIs, database, security, client/server interactions and the overall functionality of the application under test. It can be done either manually or with automation tools.
Non-functional performance tests check aspects of a software application such as usability, reliability, flexibility and interoperability. They are explicitly designed to test the readiness of a system against non-functional parameters that functional testing never addresses.
A good example of non-functional performance tests would be to check how many people can simultaneously login into a software application.
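As a minimal sketch of that example, the snippet below simulates simultaneous logins with Python threads; the `login` function and its latency are stand-ins for a real authentication call, not an actual API:

```python
import threading
import time

def login(user_id, results, lock):
    """Simulated login; a real test would call your application's
    authentication endpoint (the latency here is a stand-in)."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated auth round-trip
    elapsed = time.perf_counter() - start
    with lock:
        results.append((user_id, elapsed))

def concurrent_login_test(num_users):
    """Start all logins at once and collect per-user latency."""
    results, lock = [], threading.Lock()
    threads = [threading.Thread(target=login, args=(i, results, lock))
               for i in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(f"{len(concurrent_login_test(50))} simultaneous logins completed")
```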
Non-functional testing is equally as important as functional testing and affects client and user satisfaction.
Let's look at some of the key non-functional testing parameters.
Security testing
This parameter defines how well a system is safeguarded against deliberate and sudden attacks from internal and external sources. The main goal of security testing is to identify potential threats and measure a system's vulnerabilities, so that threats can be countered and the system neither stops functioning nor can be exploited. Security testing also helps detect possible security risks in the system, allowing developers to fix them in code.
Reliability testing
This parameter is the extent to which a software system continuously performs its specified functions without failure. Reliability testing checks whether the software can run failure-free for a specified time period in a particular environment. Its purpose is to assure that the software product is sufficiently bug-free and reliable for its expected purpose.
Recovery testing
Recovery testing verifies software's ability to recover from failures such as software/hardware crashes and network outages. Its purpose is to determine whether software operations can continue after a disaster or loss of integrity. Recovery testing involves reverting the software to a point where integrity was known and reprocessing transactions up to the point of failure.
Stability testing
This parameter determines the degree to which users can depend on the system during its operation. Stability testing checks the efficiency of a developed product beyond a particular workload capacity, often to a breaking point. It is sometimes also referred to as load or endurance testing.
Usability testing
This parameter is the ease with which users can learn and operate the system, and prepare inputs and interpret outputs through interaction with it. Usability testing measures how end users work with software applications and exposes usability defects.
Scalability testing
Scalability testing measures the performance of a system or network when the number of user requests is scaled up or down. It helps ensure that the system can handle projected increases in user traffic, data volume, transaction frequency and so on, and tests the system's ability to meet evolving needs.
Interoperability testing
The purpose of interoperability testing is to ensure that the software product can communicate with other components or devices without compatibility issues.
In other words, interoperability testing aims to determine end-to-end functionality between two communicating systems as specified by the requirements. For example, interoperability testing is done between smartphones and tablets to check data transfer via Bluetooth.
Let's further break down some of the parameters in the performance testing process.
Types of performance testing
Stress testing
Stress testing is a type of performance test that checks the upper limits of your system by testing it under extreme loads. Stress tests monitor not only how the system behaves under intense load, but also how it recovers when returning to normal usage, checking that KPIs like throughput and response time return to their pre-spike levels. Stress testing tools also look for memory leaks, slowdowns, security issues, and data corruption.
What can you measure with a stress test?
Depending on the application, software, or technology in your environment, what's measured during a stress test can vary, but common metrics include overall performance issues, unexpected traffic spikes, memory leaks, bottlenecks and more:
- Response times. Stress testing can indicate the amount of time it takes to receive a response after a request is sent.
- Hardware constraints. This measures CPU usage, RAM, and disk I/O. If response times are delayed or slow, these hardware components could be to blame.
- Throughput. How much data is being sent or received during the stress test based on bandwidth levels.
- Database reads and writes. If your application utilizes multiple systems, stress tests can indicate which system, or unit, is causing a bottleneck.
- Open database connections. Large databases can severely impact performance, slowing response times.
- Third-party content. Web pages and applications rely on many third-party components. Stress testing will show you which ones may be impacting your page or application's performance.
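As a hedged sketch of how two of these metrics, response times and throughput, might be collected at one load level, the snippet below times a batch of concurrent requests; the request handler is simulated rather than a real service:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for the system under test; a real stress test would send
    requests to your own application (latency is simulated here)."""
    time.sleep(0.005)

def stress_run(num_requests, workers):
    """Drive one load level and report response times and throughput."""
    def timed_request(_):
        t0 = time.perf_counter()
        handle_request()
        return time.perf_counter() - t0

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(timed_request, range(num_requests)))
    wall = time.perf_counter() - start
    return {
        "mean_ms": statistics.mean(latencies) * 1000,
        "p95_ms": latencies[int(0.95 * len(latencies)) - 1] * 1000,
        "throughput_rps": num_requests / wall,
    }

print(stress_run(num_requests=100, workers=20))
```

Ramping `num_requests` and `workers` upward across successive calls is what turns this single measurement into a stress test.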
Load testing
Load testing ensures that a system can handle an expected volume of traffic, or load limit. In other words, load testing shows how a system behaves when hit with a specific level of simultaneous requests. Load tests are sometimes referred to as volume tests.
The goal of load testing is to prove that a system is capable of handling its load limit with minimal to acceptable performance degradation. Before carrying out a load test, testers need to identify the expected load levels and the performance criteria the system must meet.
The example in the graph below shows a load of 20 users, testing that the page time does not exceed 3.5 seconds.
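That example can be sketched as follows; the page load is simulated with a short sleep, and the 3.5-second limit comes from the example criterion above:

```python
import time
from concurrent.futures import ThreadPoolExecutor

PAGE_TIME_LIMIT_S = 3.5  # acceptance criterion from the example above

def load_page():
    """Simulated page request; in practice this would be a real HTTP
    call against your own test environment."""
    start = time.perf_counter()
    time.sleep(0.05)  # simulated network and render time
    return time.perf_counter() - start

def load_test(concurrent_users=20):
    """One page load per simulated user, checked against the criterion."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(lambda _: load_page(), range(concurrent_users)))
    slowest = max(times)
    return slowest, slowest <= PAGE_TIME_LIMIT_S

slowest, passed = load_test(20)
print(f"slowest page: {slowest:.2f}s, pass: {passed}")
```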
Spike testing
Just as a stress test is a type of performance test, there are types of load testing as well. If your test includes a sudden, steep ramp-up in the number of virtual users, it is called a spike test. The goal of spike testing is to see how your system performs through an unexpected rise and fall in the number of users; in performance engineering, it helps determine how much system performance deteriorates during a sudden high load.
Another goal of spike testing is to determine recovery time: between two successive spikes of user load, the system needs some time to stabilize, and this recovery time should be as short as possible.
How to do spike testing
1) Determine the load capacity of your software application
2) Prepare the test environment and configure it to record performance parameters based on acceptable performance criteria
3) Apply the expected baseline load to your software application using your performance testing tools
4) Rapidly increase the load for a set period
5) Set the load back to its original level
6) Analyze the results
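The steps above can be sketched as follows, using a toy in-memory system in place of a real service; the capacity, latency model and load figures are illustrative only:

```python
class SimulatedSystem:
    """Toy stand-in for the system under test: its response time degrades
    once active users exceed capacity. A real spike test would drive an
    actual service with a load-testing tool."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.load = 0

    def apply_load(self, users):
        self.load = users

    def response_ms(self):
        base = 50  # nominal response time in ms
        overload = max(0, self.load - self.capacity)
        return base + overload * 2  # latency grows past capacity

def spike_test(baseline=20, spike=200):
    """Steps 1-6 above: baseline load, rapid spike, return, analyze."""
    system = SimulatedSystem()
    system.apply_load(baseline)
    before = system.response_ms()   # steps 1-3: expected baseline load
    system.apply_load(spike)
    during = system.response_ms()   # step 4: rapid increase in load
    system.apply_load(baseline)
    after = system.response_ms()    # step 5: back to the original level
    return {"before_ms": before, "during_ms": during,
            "after_ms": after, "recovered": after == before}

print(spike_test())
```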
Soak testing
Stress testing over a long period of time to check the system's sustainability is called a soak test. Soak testing is sometimes referred to as endurance, capacity, or longevity testing, and involves testing the system to detect performance-related issues with stability and response time.
The system is then evaluated, and its resource usage checked, to see whether it can perform well under a significant load for an extended period. This type of performance testing measures the system's reaction and analyzes its behavior under sustained use.
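A minimal soak-test loop might look like the following sketch, which samples response time repeatedly over the soak period and flags degradation; both the request and the drift threshold are simulated/illustrative:

```python
import time

def soak_test(duration_s=1.0, interval_s=0.1, max_drift_ms=25):
    """Sample response time repeatedly over the soak period and flag
    degradation relative to the first sample. The request is simulated
    and the thresholds are illustrative, not recommendations."""
    def respond():
        t0 = time.perf_counter()
        time.sleep(0.005)  # stand-in for a real request to the system
        return (time.perf_counter() - t0) * 1000

    baseline = respond()
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        samples.append(respond())
        time.sleep(interval_s)
    drift_ms = max(samples) - baseline
    return {"samples": len(samples), "drift_ms": drift_ms,
            "stable": drift_ms <= max_drift_ms}

print(soak_test(duration_s=0.5))
```

In a real soak test the duration would be hours or days, and resource usage (memory, handles, connections) would be tracked alongside response time.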
Scalability testing
Scalability testing determines whether software handles increasing workloads effectively. This can be assessed by gradually adding to the user load or data volume while monitoring system performance and resource usage. Alternatively, the workload may stay at the same level while resources such as CPUs and memory are changed.
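One way to sketch the gradually-increasing-load approach is to measure throughput at several concurrency levels, as below; the unit of work is simulated in place of the real system:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def serve():
    """One simulated unit of work; a real scalability test would issue
    requests against the actual system."""
    time.sleep(0.005)

def throughput_at(users, requests_per_user=5):
    """Measure requests per second at a given concurrency level."""
    total = users * requests_per_user
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        list(pool.map(lambda _: serve(), range(total)))
    return total / (time.perf_counter() - start)

# Gradually scale the user load while watching how throughput responds.
for users in (1, 5, 10):
    print(f"{users:>2} users -> {throughput_at(users):.0f} req/s")
```

If throughput stops growing (or degrades) as users increase, that level marks the system's scalability limit under the tested configuration.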
Volume testing
Volume testing determines how efficiently software performs in a production environment with the large amounts of data projected for it. It is also known as flood testing because the test floods the system with data.
Configuration testing
Configuration testing runs multiple combinations of software and hardware to evaluate the functional requirements and determine the optimal configurations under which the application works without flaws or defects.
Likely problems encountered during performance testing
In a performance testing environment, developers are looking for a number of issues:
- Speed issues — slow responses and long load times, for example, are often found in a test environment and addressed.
- Bottlenecking — This occurs when data flow is interrupted or halted because there is not enough capacity to handle the workload.
- Poor scalability — If software cannot handle the desired number of concurrent tasks, results could be delayed, errors could increase, or other unexpected behavior could happen that affects:
- Disk usage
- CPU usage
- Memory leaks
- Operating system limitations
- Network configurations
- Software configuration issues — Often settings are not set at a sufficient level to handle the workload.
- Insufficient hardware resources — Performance testing may reveal physical memory constraints or low-performing CPUs.
Performance testing best practices
Perhaps the most important tip for performance testing is to test early and test often. A single test will not tell developers all they need to know; successful performance testing is a collection of repeated, smaller tests:
- Run performance tests as early as possible in development. Don't wait and rush performance testing as the project winds down.
- Performance testing isn’t just for completed projects. There is value in testing individual units or modules.
- Conduct multiple performance tests to ensure consistent findings and determine metrics averages.
- Applications often involve multiple systems such as databases, servers, and services. Performance test the individual units separately as well as together.
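The advice about repeating tests and averaging metrics can be sketched as a simple aggregation over runs; the workload here is simulated and stands in for a real test scenario:

```python
import statistics
import time

def timed_run():
    """One performance test run; the workload is simulated and would be
    replaced by your real test scenario."""
    t0 = time.perf_counter()
    time.sleep(0.01)  # simulated test workload
    return (time.perf_counter() - t0) * 1000

def repeated_runs(n=5):
    """Aggregate several runs so a single outlier doesn't skew results."""
    times = [timed_run() for _ in range(n)]
    return {"runs": n,
            "mean_ms": statistics.mean(times),
            "stdev_ms": statistics.stdev(times) if n > 1 else 0.0}

print(repeated_runs(5))
```

A high standard deviation across runs is itself a finding: it suggests the environment or the system under test is not yet stable enough for the averages to be trusted.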
What if I don't do performance tests?
When non-functional performance testing is overlooked, performance and UX defects can leave users with a bad experience and cause brand damage. Worse, without performance testing, applications could crash when user numbers unexpectedly increase. Accessibility defects can also result in compliance fines, and your security could be at risk.
IR Testing solutions
With IR Collaborate's customer experience testing, using advanced performance testing tools, you can have confidence in your voice, web and video. Using our software performance testing solutions, you can identify the gaps between your assumptions and actual system performance, and get the real-time insights you need to deliver a level of service that exceeds customer expectations.
For more information on cloud performance testing, read our blog 'Cloud Performance Testing Tips and Tricks'
Find out more about the performance testing process, and how our testing solutions reduce risks in your organization while improving customer and user experience.