Software testing is an indispensable phase of the software development and maintenance cycle. It has been estimated that testing accounts for 30-50 percent of software development effort. When the share of manual testing is higher than that of automated testing, the process tends to become cumbersome, costly, and less efficient. The challenges and complexity increase further with agile development and a high frequency of product releases.
Adopting more automation is a fitting answer, but it comes with its own set of challenges; the key ones being the availability of automation experts and the generation of automated test cases and scripts.
This blog touches upon the different classification frameworks available for generating automated test cases and scripts.
Classification of Approaches towards Automated Functional Test Script Generation
The need for automation is felt at different stages of the product life cycle and varies mainly with the organization’s culture, policies, and processes. Depending on these parameters, the approach and the algorithm applied for generating automated test cases and scripts will also vary.
Some of these parameters include:
- The source from which the test cases or scripts need to be generated
- The depth and breadth of information available for generating the cases/scripts
- Availability of up-to-date design models
- Development Methodology and frequency of releases
- Criticality of the system under test
In the following section, we introduce an all-around classification framework for automatic test case generation approaches in terms of test type and algorithm:
- Classification of Approach by Test Type
- Classification of Approach by Algorithm
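To give a flavor of what automatic test case generation looks like in its simplest form, here is a minimal sketch of a random (fuzz-style) generator in Python. The system under test (`clamp`), the generator, and the property-based oracle are all hypothetical illustrations, not part of any particular framework discussed here.

```python
import random

# Hypothetical system under test: a simple function we want test cases for.
def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    return max(low, min(high, value))

def generate_test_cases(n, seed=42):
    """Randomly generate n test inputs (a basic random-based approach)."""
    rng = random.Random(seed)  # fixed seed so generated cases are reproducible
    cases = []
    for _ in range(n):
        low = rng.randint(-100, 100)
        high = rng.randint(low, low + 200)   # ensure high >= low
        value = rng.randint(-300, 300)
        cases.append((value, low, high))
    return cases

def run_tests(cases):
    """Execute generated cases against an oracle (the clamping property)."""
    failures = []
    for value, low, high in cases:
        result = clamp(value, low, high)
        if not (low <= result <= high):
            failures.append((value, low, high, result))
    return failures

cases = generate_test_cases(25)
print(f"generated {len(cases)} cases, failures: {len(run_tests(cases))}")
```

More sophisticated approaches in the classification below replace the random generator with model-based, specification-based, or search-based techniques, and replace the simple property check with richer test oracles.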
Reference: Keyvanpour, M. R., Homayouni, H., and Shirazee, H. — Automatic Software Test Case Generation: An Analytical Classification Framework.