TESTING IN SOFTWARE ENGINEERING
TESTING
The aim of program testing is to help identify all the defects in a program.
Testing a program involves executing it with a set of test inputs and observing whether it behaves as expected. If the program fails to behave as expected, the input data and the conditions under which it fails are noted for later debugging and error correction.
Testing Terminologies
Mistake
A mistake is essentially any programmer action that later shows up as an incorrect result during program execution.
A programmer may commit a mistake in almost any development activity.
For example, during coding a programmer might commit the mistake of not initializing a certain variable, or might overlook the errors that might arise in some exceptional situations such as division by zero in an arithmetic operation.
Both these mistakes can lead to an incorrect result.
Error
An error is the result of a mistake committed by a developer in any of the development activities.
Among the extremely large variety of errors that can exist in a program, one example is a call made to the wrong function.
In the area of program testing, the terms error, fault, bug, and defect are considered synonyms and are used interchangeably by the program testing community.
Failure
A failure of a program essentially denotes an incorrect behaviour exhibited by the program during its execution.
An incorrect behaviour is observed either as an incorrect result produced or as an inappropriate activity carried out by the program.
Every failure is caused by some bugs present in the program.
Test Case
A test case is a triplet [I, S, R], where I is the data input to the program under test, S is the state of the program at which the data is to be input, and R is the result expected to be produced by the program.
The state of a program is also called its execution mode.
As an example, consider the different execution modes of a certain text editor software.
The text editor can at any time during its execution assume any of the following execution modes—edit, view, create, and display.
In simple words, we can say that a test case is a set of test inputs, the mode in which the input is to be applied, and the results that are expected during and after the execution of the test case.
An example of a test case is [input: “abc”, state: edit, result: abc is displayed], which essentially means that the input abc is to be applied in the edit mode, and the expected result is that the string abc will be displayed.
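To make the triplet notation concrete, the following is a minimal sketch in Python; the representation shown is an assumption of ours, not one prescribed by this text:

from dataclasses import dataclass

@dataclass
class TestCase:
    input: str   # I: the data input to the program under test
    state: str   # S: the execution mode at which the input is to be applied
    result: str  # R: the result expected to be produced

# The editor example above, expressed as an [I, S, R] triplet:
tc = TestCase(input="abc", state="edit", result="abc is displayed")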
Test Scenario
A test scenario is an abstract test case in the sense that it only identifies the aspects of the program that are to be tested without identifying the input, state, or output.
A test case can be said to be an implementation of a test scenario.
In the test case, the input, the output, and the state at which the input is to be applied are chosen such that the scenario can be executed.
An important automatic test case design strategy is to first design test scenarios through an analysis of some program abstraction (model) and then implement the test scenarios as test cases.
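As an illustrative sketch of this strategy (the scenario wording and the chosen input are assumptions made for illustration, not taken from this text), scenarios can first be derived from a model and one of them then implemented as a concrete test case:

# Program abstraction (model): the set of execution modes of the editor.
MODES = ["edit", "view", "create", "display"]

# Step 1: derive abstract test scenarios from the model. A scenario only
# identifies what is to be tested; it fixes no input, state, or output.
scenarios = ["text entry while in {} mode".format(mode) for mode in MODES]

# Step 2: implement a scenario as a concrete test case [I, S, R] by
# choosing the input, the state, and the expected result.
case = {"input": "abc", "state": "edit", "result": "abc is displayed"}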
Test Script
A test script is an encoding of a test case as a short program.
Test scripts are developed for automated execution of the test cases.
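For example, the editor test case above can be encoded as a short test script. The following minimal sketch uses Python's unittest module; the TextEditor class and its set_mode, type_text, and display methods are hypothetical stand-ins for the program under test:

import unittest

class TextEditor:
    """Hypothetical stand-in for the editor under test."""
    def __init__(self):
        self.mode, self.buffer = "view", ""
    def set_mode(self, mode):
        self.mode = mode
    def type_text(self, text):
        if self.mode == "edit":
            self.buffer += text
    def display(self):
        return self.buffer

class EditModeTest(unittest.TestCase):
    def test_typed_text_is_displayed(self):
        editor = TextEditor()
        editor.set_mode("edit")     # S: put the program in the required state
        editor.type_text("abc")     # I: apply the test input
        self.assertEqual(editor.display(), "abc")  # R: check the expected result

if __name__ == "__main__":
    unittest.main()

Once encoded this way, the test case can be re-executed automatically after every change to the program.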
A test case is said to be a positive test case if it is designed to test whether the software correctly performs a required functionality.
A test case is said to be a negative test case, if it is designed to test whether the software carries out something that is not required of the system.
A positive test case can be designed to check if a login system validates a user with the correct user name and password.
A negative test case in this context can be one that checks whether the login functionality validates and admits a user who supplies a wrong or bogus user name or password.
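The following minimal sketch expresses both kinds of test case with Python's unittest module; the authenticate function is a hypothetical stand-in for the login functionality:

import unittest

def authenticate(username, password):
    """Hypothetical login check, used only for this sketch."""
    return (username, password) == ("alice", "secret")

class LoginTest(unittest.TestCase):
    def test_correct_credentials_are_accepted(self):
        # Positive test case: the required functionality works.
        self.assertTrue(authenticate("alice", "secret"))

    def test_bogus_credentials_are_rejected(self):
        # Negative test case: the system must NOT admit a bogus user.
        self.assertFalse(authenticate("mallory", "guess"))

if __name__ == "__main__":
    unittest.main()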
Test Suite
A test suite is the set of all tests that have been designed by a tester to test a given program.
Testability
Testability of a requirement denotes the extent to which it is possible to determine whether an implementation of the requirement conforms to it in both functionality and performance.
In other words, the testability of a requirement is the degree to which an implementation of it can be adequately tested to determine its conformance to the requirement.
Verification Versus Validation
1. Verification is the process of determining whether the output of one phase of software development conforms to that of its previous phase, whereas validation is the process of determining whether a fully developed software conforms to its requirements specification.
Thus, the objective of verification is to check whether the work products produced after a phase conform to the work products that were input to that phase.
For example, a verification step can be to check if the design documents produced after the design step conform to the requirements specification.
On the other hand, validation is applied to the fully developed and integrated software to check if it satisfies the customer’s requirements.
2. The primary techniques used for verification include review, simulation, formal verification, and testing.
Review, simulation, and testing are usually considered informal verification techniques.
Formal verification usually involves use of theorem proving techniques or use of automated tools such as a model checker.
On the other hand, validation techniques are primarily based on product testing.
3. Verification does not require execution of the software, whereas validation requires it.
4. Verification is carried out during the development process to check whether the development activities are proceeding correctly, whereas validation is carried out to check whether the right product, as required by the customer, has been developed.
5. We can therefore say that the primary objective of the verification steps is to determine whether the product development steps are being carried out correctly, whereas validation is carried out towards the end of the development process to determine whether the right product has been developed.
6. Verification techniques can be viewed as an attempt to achieve phase containment of errors.
The principle of detecting errors as close to their points of commitment as possible is known as phase containment of errors.
Phase containment of errors can reduce the effort required for correcting bugs. For example, if a design problem is detected in the design phase itself, then the problem can be taken care of much more easily than if the error is identified, say, at the end of the testing phase.
In the latter case, it would be necessary not only to rework the design, but also to appropriately redo the relevant coding as well as the system testing activities, thereby incurring higher cost.
There are essentially two main approaches to systematically design test cases:
- Black-box approach
- White-box (or glass-box) approach
In the black-box approach, test cases are designed using only the functional specification of the software. That is, test cases are designed solely based on an analysis of the input/output behaviour (that is, the functional behaviour) of the program, and their design requires no knowledge of its internal structure.
For this reason, black-box testing is also known as functional testing.
On the other hand, designing white-box test cases requires a thorough knowledge of the internal structure of a program, and therefore white-box testing is also called structural testing.
Black-box test cases are designed solely based on the input-output behaviour of a program.
In contrast, white-box test cases are based on an analysis of the code.
These two approaches to test case design are complementary. That is, a program has to be tested using test cases designed by both approaches, and testing using one approach does not substitute for testing using the other.
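The following sketch illustrates the difference on a small hypothetical function (not taken from this text): the black-box test cases follow from the specification alone, whereas the white-box test cases target the branch boundary that only an inspection of the code reveals:

def total_price(quantity, unit_price):
    """Spec: total = quantity * unit_price, with 10% off for 100 or more items."""
    if quantity >= 100:                     # internal branch
        return quantity * unit_price * 0.9
    return quantity * unit_price

# Black-box test cases: derived from the functional specification alone.
assert total_price(2, 5.0) == 10.0
assert total_price(200, 1.0) == 180.0

# White-box test cases: reading the code reveals the branch boundary at
# quantity == 100, so structural tests exercise both sides of it.
assert total_price(100, 1.0) == 90.0   # discount branch taken
assert total_price(99, 1.0) == 99.0    # discount branch not taken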
Testing in the Large versus Testing in the Small
A software product is normally tested in three levels or stages:
- Unit testing
- Integration testing
- System testing
During unit testing, the individual functions (or units) of a program are tested.
Unit testing is referred to as testing in the small, whereas integration and system testing are referred to as testing in the large.
After testing all the units individually, the units are incrementally integrated and tested after each step of integration (integration testing). Finally, the fully integrated system is tested (system testing).
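The following minimal sketch illustrates the first two levels on two hypothetical units (the functions and the tax rate are assumptions for illustration); system testing, which exercises the fully integrated product against its requirements, does not lend itself to a short snippet:

import unittest

def parse_amount(text):
    """Unit 1: parse a whitespace-padded integer."""
    return int(text.strip())

def apply_tax(amount):
    """Unit 2: add a flat 8% tax (rate assumed for illustration)."""
    return round(amount * 1.08, 2)

class TestingLevels(unittest.TestCase):
    def test_unit_parse_amount(self):
        # Unit testing: each function is exercised in isolation.
        self.assertEqual(parse_amount(" 100 "), 100)

    def test_integration_parse_then_tax(self):
        # Integration testing: the units are combined and the interface
        # between them is exercised.
        self.assertEqual(apply_tax(parse_amount("100")), 108.0)

if __name__ == "__main__":
    unittest.main()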