
Performance by Intent
Himanshu Agrawal - Director | February 28, 2011

There have been many instances in the past where performance bottlenecks discovered late in testing, or after release, have led to costly rework, sometimes even to refactoring of the application. This has an adverse impact on the overall schedule and cost. To quote a recent example: we faced an issue with the system response time of a software product, which had deteriorated over releases. Initial analysis suggested that the problem lay with a couple of "query transactions", as these were taking the most time and were the most resource-intensive operations at that point. On deeper analysis, however, we found that the actual culprit was the "modify transaction" of a module, which was consuming more system resources than usual.

Early detection of performance problems is a widely discussed area; many practices have evolved over time, and many project teams apply them in their own way. Yet performance problems still surface. When we take a step back and analyze, we often find that the practice was left either to individual discretion or without a well-defined, formal approach for applying it holistically.

In our suggested approach, "Performance by Intent", we define ways to address performance throughout the lifecycle, right from product conceptualization. A few highlights of this approach are:

  • Definition of the performance objectives
  • Elicitation of the performance requirements at the product realization phase
  • Incorporation of the performance requirements in the design and implementation stages
  • Calibration of unit-level performance thresholds (see the sketch after this list)
  • Automated generation of Unit Level Performance Test Cases (ULPTC) and their execution
  • Incorporation of ULPTC execution as part of the build process
  • Performance analysis based on execution data from ULPTC runs
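
As an illustration, a calibrated unit-level threshold could be captured in a small structure along the following lines. This is a minimal Java sketch; the names (PerformanceThreshold, maxResponseMillis, and so on) are our own assumptions for illustration, not part of any specific tool:

    // Hypothetical sketch of a calibrated unit-level performance threshold.
    // All names here are illustrative assumptions, not tied to a specific tool.
    public final class PerformanceThreshold {
        private final String unitName;          // e.g. "OrderService.modifyOrder"
        private final long maxResponseMillis;   // calibrated upper bound per call
        private final long maxHeapGrowthBytes;  // allowed heap growth per call

        public PerformanceThreshold(String unitName, long maxResponseMillis,
                                    long maxHeapGrowthBytes) {
            this.unitName = unitName;
            this.maxResponseMillis = maxResponseMillis;
            this.maxHeapGrowthBytes = maxHeapGrowthBytes;
        }

        /** True when one observed execution stays within the calibrated bounds. */
        public boolean isWithinBounds(long observedMillis, long observedHeapGrowthBytes) {
            return observedMillis <= maxResponseMillis
                    && observedHeapGrowthBytes <= maxHeapGrowthBytes;
        }

        public String getUnitName() {
            return unitName;
        }
    }

A repository of such calibrated thresholds, one per unit, is what the auto-generated performance test cases check against.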

During the initial phase of product development, this approach leverages a repository of good practices and non-functional requirements to help define the applicable performance requirements for the product being developed. Reference is also made to historical data as well as competitor product analysis.

During the architecture and design phase, an "Architect Guide" helps architects and designers define the design and coding guidelines and put the system performance thresholds in place, thereby enabling the automatic generation of unit-level performance test cases. These thresholds not only guide the developers as they work but also give them the ability to perform on-the-spot compliance checks. The unit-level developer scorecard also helps them ascertain the quality of their code in an automated fashion, against performance attributes such as unit-level response time, number of parallel executions, memory leaks, call flow stack, heap memory utilization, and cyclomatic complexity. The comparison against the thresholds ensures that the basic functionality, along with the performance thresholds, is met at all times.
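
To make the auto-generated test cases concrete, here is a minimal sketch of what a ULPTC could look like using JUnit 4. OrderService, modifyOrder, and the 200 ms and 1 MB budgets are assumptions standing in for values calibrated during design:

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    // Minimal sketch of a generated unit-level performance test case (ULPTC).
    public class OrderServiceULPTC {

        // Hypothetical unit under test, stubbed so the sketch is self-contained.
        static class OrderService {
            void modifyOrder(int orderId, String status) {
                // a real implementation would update the order record
            }
        }

        // JUnit 4 fails this test if the unit exceeds its calibrated response time.
        @Test(timeout = 200)
        public void modifyOrderMeetsResponseThreshold() {
            new OrderService().modifyOrder(42, "shipped");
        }

        // A coarse heap-growth check: compare used heap before and after the call.
        @Test
        public void modifyOrderStaysWithinHeapBudget() {
            Runtime rt = Runtime.getRuntime();
            long before = rt.totalMemory() - rt.freeMemory();
            new OrderService().modifyOrder(42, "shipped");
            long after = rt.totalMemory() - rt.freeMemory();
            assertTrue("heap growth exceeds the 1 MB budget",
                    after - before <= 1024 * 1024);
        }
    }

A test of this shape fails the moment a unit drifts past its calibrated budget, which is exactly the on-the-spot compliance check described above.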

This is followed by poka-yoke via the Continuous Performance Integration (CPI) process. The automated build system ensures that tests are run on the nightly builds, so any non-compliant code is caught early and fixed regularly, as needed. The Unit Level Test Cases (ULTC) and the Unit Level Performance Test Cases (ULPTC) are run in an automated fashion as part of CPI, and the CPI Red-Amber-Green dashboard gives the management team real-time visibility into the health of product development. Notifications are sent as configured by the management team.
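
On the dashboard side, the nightly results could be folded into a Red-Amber-Green status along these lines. The classification rules here (any functional failure is Red, more than 5% threshold breaches is Amber) are assumptions made for this sketch, not a prescribed policy:

    // Illustrative mapping of nightly ULTC/ULPTC results onto a Red-Amber-Green
    // status; the rules and the 5% cut-off are assumed for the sake of the sketch.
    public final class CpiRagStatus {
        enum Status { GREEN, AMBER, RED }

        static Status classify(int totalTests, int functionalFailures,
                               int thresholdBreaches) {
            if (functionalFailures > 0) {
                return Status.RED;   // broken functionality: stop and fix
            }
            if ((double) thresholdBreaches / totalTests > 0.05) {
                return Status.AMBER; // more than 5% of units missed their thresholds
            }
            return Status.GREEN;     // functional and within performance budgets
        }

        public static void main(String[] args) {
            // 400 tests, no functional failures, 30 performance breaches -> AMBER
            System.out.println(classify(400, 0, 30));
        }
    }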

The final round of software product testing is closer to integration and system testing, where test efforts focus on surfacing issues in the system under test and its behavior under varied configurations and environments. These results, too, can be compared and correlated with the unit-level test results whenever needed, for faster diagnosis.
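
For example, when a system-level transaction turns out slow, its unit-level timings can be lined up against the calibrated thresholds to pick out the offender, much as in the "modify transaction" anecdote above. A minimal sketch with invented unit names and numbers:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: correlate a slow system-level transaction with unit-level timings
    // to find the unit that overshoots its threshold the most. Data is invented.
    public final class DiagnosisHelper {

        static String worstOffender(Map<String, Long> observedMillis,
                                    Map<String, Long> thresholdMillis) {
            String worst = null;
            long worstOverrun = 0;
            for (Map.Entry<String, Long> e : observedMillis.entrySet()) {
                long overrun = e.getValue() - thresholdMillis.get(e.getKey());
                if (overrun > worstOverrun) {
                    worstOverrun = overrun;
                    worst = e.getKey();
                }
            }
            return worst;
        }

        public static void main(String[] args) {
            Map<String, Long> observed = new HashMap<String, Long>();
            Map<String, Long> threshold = new HashMap<String, Long>();
            observed.put("queryOrders", 180L);  threshold.put("queryOrders", 150L);
            observed.put("modifyOrder", 900L);  threshold.put("modifyOrder", 200L);
            System.out.println(worstOffender(observed, threshold)); // prints modifyOrder
        }
    }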

The key advantages with this approach are:

  • Predictable performance, each time, any time
  • Enhanced application stability and scalability
  • Faster time to market
  • Lower overhead on sustenance
  • Reduced test cycles, with automated detection of performance deviations at each level
