Testing is a vast field. When it comes to testing an application, there are various methods and various types of testing to be carried out to ensure that the application build is reliable, bug-free, maintainable, and of a quality that meets all the requirements of the project and is acceptable to the client. This blog presents an overview of testing techniques and how to apply them.
The main aim of testing is to confirm the quality of software systems.
Based on the testing information flow, a testing technique specifies the strategy used to select input test cases and analyze test results. Different techniques reveal different quality aspects of a software system. Based on whether actual execution of the software under evaluation is needed, quality assurance activities fall into two major categories:
Static Analysis focuses on the range of methods that are used to determine or estimate software quality without reference to actual executions. Techniques in this area include code inspection, program analysis, symbolic analysis, and model checking.
Dynamic Analysis deals with specific methods for ascertaining and/or approximating software quality through actual executions, i.e., with real data and under real (or simulated) circumstances. Techniques in this area include synthesis of inputs, the use of structurally dictated testing procedures, and the automation of testing environment generation.
Static and dynamic methods are sometimes inseparable in practice, but they can almost always be discussed separately. In this blog, "testing" refers to dynamic analysis, since most testing activities (and thus all the techniques studied here) require the execution of the software.
D Y N A M I C T E S T I N G T E C H N I Q U E S
Testing under this strategy focuses entirely on the requirements and the functionality of the application. Its effectiveness depends on selecting appropriate input data for each function and testing it against the given requirements to check both the positive and negative behavior of the system. To carry out this testing, the tester does not need any knowledge of the internal design or code; test cases are based on the requirements and functionality of the application.
Testing that involves executing the program or a module comes in five main types.
1. Equivalence Partitioning Testing
2. Boundary Value Testing
3. Decision Table Testing
4. Pair Wise Testing
5. State Transition Testing
E Q U I V A L E N C E P A R T I T I O N I N G T E S T I N G
Concepts: Equivalence partitioning is a method for deriving test cases. In this method, classes of input conditions called equivalence classes are identified such that each member of a class causes the same kind of processing and output to occur. The tester identifies the various equivalence classes for partitioning. A class is a set of input conditions that is likely to be handled the same way by the system: if the system handles one case in the class erroneously, it is assumed to handle all cases in that class erroneously.
WHY LEARN EQUIVALENCE PARTITIONING? Equivalence partitioning drastically cuts down the number of test cases required to test a system reasonably. It is an attempt to get a good 'hit rate', to find the most errors with the smallest number of test cases.
DESIGNING TEST CASES USING EQUIVALENCE PARTITIONING To use equivalence partitioning, you will need to perform two steps:
1. Identify the equivalence classes
2. Design test cases that cover each class
For example, login page verification, where the Username field accepts a word length of minimum 7 and maximum 20 characters.
So applying equivalence partitioning we have three classes:
- Valid class: 7 <= word length <= 20
- Invalid class: word length < 7
- Invalid class: word length > 20
One more example, with two classes: in an application we have a question whose answer is True / False. Per equivalence partitioning we have two classes:
- Valid Class: True
- Invalid Class: False
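The username-length example above can be sketched in code. This is a minimal illustration; `validate_username` is a hypothetical validator written here only to show how one representative value per equivalence class exercises the rule:

```python
def validate_username(username: str) -> bool:
    """Hypothetical validator: accept usernames of 7 to 20 characters, inclusive."""
    return 7 <= len(username) <= 20

# One representative test value per equivalence class is enough:
test_cases = [
    ("a" * 10, True),   # valid class: 7 <= word length <= 20
    ("a" * 3,  False),  # invalid class: word length < 7
    ("a" * 25, False),  # invalid class: word length > 20
]

for value, expected in test_cases:
    assert validate_username(value) == expected
print("all three equivalence classes covered")
```

Three test cases cover the whole input space here; any other value from the same class would, by the equivalence assumption, behave the same way.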
B O U N D A R Y V A L U E A N A L Y S I S
Concepts: Boundary value analysis is a methodology for designing test cases that concentrates software-testing effort on cases near the limits of valid ranges. It refines equivalence partitioning and also generates test cases that highlight errors. The trick is to concentrate testing effort at the extreme ends of the equivalence classes: errors are most likely to occur at the points where input values change from valid to invalid. Unlike equivalence partitioning, it takes into account the output specifications when deriving test cases. The hope is that if a system works correctly for these special values, then it will work correctly for all values in between.
BVA focuses on the boundary of the input space to identify test cases. The rationale is that errors tend to occur near the extreme values of an input variable.
1. Robustness testing - boundary value analysis plus values that go just beyond the limits: Min - 1, Min, Min + 1, Nom, Max - 1, Max, Max + 1
2. It forces attention to exception handling
3. For strongly typed languages, robustness testing produces run-time errors that abort normal execution
Limitations: BVA works best when the program is a function of several independent variables that represent bounded physical quantities.
Applying BVA to the equivalence-partitioning example above, where the Username field accepts a word length of minimum 7 and maximum 20:
- Test values 6, 7, 8 around the minimum boundary (6 invalid; 7 and 8 valid)
- Test values 19, 20, 21 around the maximum boundary (19 and 20 valid; 21 invalid)
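The boundary values above can be checked in code. As in the earlier sketch, `validate_username` is a hypothetical validator for the 7-to-20 rule, shown only to illustrate the Min - 1, Min, Min + 1 / Max - 1, Max, Max + 1 pattern:

```python
def validate_username(username: str) -> bool:
    """Hypothetical validator: accept usernames of 7 to 20 characters, inclusive."""
    return 7 <= len(username) <= 20

# Boundary values around the minimum (7) and maximum (20):
boundary_cases = [
    (6,  False),  # Min - 1: just below the minimum, invalid
    (7,  True),   # Min: the minimum itself, valid
    (8,  True),   # Min + 1: just above the minimum, valid
    (19, True),   # Max - 1: just below the maximum, valid
    (20, True),   # Max: the maximum itself, valid
    (21, False),  # Max + 1: just above the maximum, invalid
]

for length, expected in boundary_cases:
    assert validate_username("a" * length) == expected
print("all boundary values covered")
```

Note that BVA deliberately tests the invalid values 6 and 21 as well; a common off-by-one bug (e.g. writing `7 < len(...)` instead of `7 <= len(...)`) would be caught only by these boundary cases, not by mid-range values.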
C O N C L U S I O N:
The objective of software testing is to gain confidence in the software. There are many testing techniques that aim to help achieve thorough testing. Debate continues as to whether correctness can be inferred when a set of test cases finds no errors. For the production of correct software, the wider the range of testing techniques used, the better the software is likely to be.