One of the most difficult questions to answer when testing software is deciding when to stop, because there is no way of knowing whether the error just found is the last remaining error. Indeed, even in small programs it is unrealistic to assume that all errors will eventually be found.
Nevertheless, given this dilemma, and bearing in mind that economics dictate that testing must eventually end, you might wonder whether the question has to be answered in a purely arbitrary way, or whether there are useful stopping criteria. Unfortunately, the completion criteria typically used in practice are meaningless and counterproductive. The two most common are:
- Stop when the allotted testing time has expired.
- Stop when all test cases execute without detecting errors; that is, stop when the test cases are unsuccessful.
Both of these criteria are worthless. The first can be satisfied by doing absolutely nothing, since it measures nothing about the quality or efficiency of the testing. The second is equally useless because it is independent of the quality of the test cases. Worse, the second criterion backfires, because it subconsciously encourages you to write test cases with a low probability of finding errors.
People are highly goal-oriented. If the programmer's goal is to finish testing once all test cases run without failure, he will subconsciously write test cases that steer him toward that goal, avoiding the useful test cases that have a high probability of exposing errors.
There are three categories of more or less acceptable criteria. The first category (far from the best) consists of completion criteria based on the use of specific test-case design methodologies. For example, you might define module testing as complete when the following conditions are satisfied:
Test cases are derived from:
1) satisfying the combinatorial coverage criterion and 2) boundary value analysis of the module interface specification, and all resulting test cases are eventually unsuccessful.
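To make the boundary value analysis part of this criterion concrete, here is a minimal sketch. The module under test, `classify_age`, and its specification (valid ages 0..120, partition at 18) are hypothetical, invented purely for illustration; the point is that test cases are chosen on and just beyond each boundary of the input domain rather than at arbitrary interior values.

```python
# Hypothetical module under test. Assumed specification: valid ages are
# 0..120 inclusive; anything else is rejected; ages below 18 are "minor".
def classify_age(age: int) -> str:
    if not 0 <= age <= 120:
        raise ValueError("age out of range")
    return "minor" if age < 18 else "adult"

# Boundary value analysis: probe each boundary of the input domain
# (0 and 120) and of the internal partition (18), plus the values
# immediately on either side of them.
def boundary_cases():
    return [-1, 0, 1, 17, 18, 19, 119, 120, 121]

for age in boundary_cases():
    try:
        print(age, "->", classify_age(age))
    except ValueError:
        print(age, "-> rejected")
```

Under this completion criterion, testing of such a module would not be considered finished until every one of these boundary cases (and the combinatorial coverage cases) runs unsuccessfully, i.e., without revealing an error.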
You might define the functional test as being complete when the following conditions are met:
Test cases are obtained by:
1) boundary value analysis;
2) cause-effect graphing;
3) error guessing, and all resulting tests are eventually unsuccessful.
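Of the three methods above, error guessing is the least formal, so a small sketch may help. The function under test, `parse_quantity`, and its assumed contract (accepts a decimal string, returns a non-negative integer) are hypothetical; the "guessed" inputs are the kind of troublemakers experience suggests trying: empty or blank strings, signs, leading zeros, very large values, and non-numeric text.

```python
# Hypothetical function under test. Assumed contract: accepts a decimal
# string and returns a non-negative int; anything else is rejected.
def parse_quantity(text: str) -> int:
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value

# Error guessing: inputs chosen not by any formal coverage rule but by
# experience of where parsers commonly go wrong.
guessed_inputs = ["", "  ", "-1", "+7", "007", "999999999999", "abc", "3.5"]

for text in guessed_inputs:
    try:
        print(repr(text), "->", parse_quantity(text))
    except ValueError:
        print(repr(text), "-> rejected")
```

As with the module-testing criterion, the functional test would be declared complete only when the test cases produced by all three methods run unsuccessfully, i.e., without revealing an error.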