One theoretical means of planning and estimating functional testing is to run gmStudio's GUI report, which presents an organized view of all the controls that allow user input. A typical enterprise application has hundreds of forms and thousands of active controls. The set of input controls is a crude measure of the number of function points in the application, and it correlates with the number of tests needed to fully exercise the app through its user interface. In theory, if we ran test cases that exercised all of these UI inputs in a "meaningful" way, we would invoke, and thus test, all the functions of the system. Bear in mind that most of these controls accept many possible valid and invalid inputs, so the effective number of use/test scenarios is much larger still: at least one or two orders of magnitude beyond the number of controls. Clearly, the number of possible permutations and combinations is far too large for a team to define, let alone run, them all.
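The scale of the problem can be made concrete with some back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not measurements from gmStudio's GUI report:

```python
# Hypothetical sizing figures for a "typical" enterprise application
# (assumed for illustration; substitute real counts from the GUI report).
forms = 300
controls_per_form = 12
inputs_per_control = 20  # distinct valid + invalid input classes per control

# Crude function-point proxy: total active input controls.
total_controls = forms * controls_per_form                 # 3,600

# Exercising each control with each input class once:
single_input_cases = total_controls * inputs_per_control   # 72,000

# Even restricting to pairs of controls on a single form, the count explodes:
pairs_on_one_form = controls_per_form * (controls_per_form - 1) // 2   # 66 pairs
pairwise_cases_per_form = pairs_on_one_form * inputs_per_control ** 2  # 26,400

print(total_controls, single_input_cases, pairwise_cases_per_form)
```

With these assumed numbers, pairwise coverage of one form alone already requires tens of thousands of cases, which is why exhaustive UI testing is not a realistic plan.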
When thinking about testing the results of a tool-assisted rewrite, there are a couple of important points to consider.
- The first point to consider is that the new code is generated from the legacy source code. Furthermore, that source code is assumed to be a complete, formal specification of the legacy functionality, one that has been verified through thousands of hours of use. In theory, the expected result for any and every conceivable use case performed in the new system is the same result produced for that use case in the legacy system.
- The second important point is that the new code is produced by a systematic transformation process. This transformation process is a collection of many different types of code translation and re-engineering operations, which can be referred to as migration features.
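The first point amounts to "golden master" (characterization) testing: the legacy system's output for a use case is the expected output of the migrated system. A minimal sketch of that idea, with toy stand-ins for the two systems (all names here are hypothetical):

```python
def check_equivalence(legacy, migrated, use_cases):
    """Run each use case against both systems and collect any mismatches.

    Returns a list of (case, legacy_result, migrated_result) tuples for
    the cases where the migrated system diverges from the legacy one.
    """
    failures = []
    for case in use_cases:
        expected = legacy(case)   # legacy output is the specification
        actual = migrated(case)
        if actual != expected:
            failures.append((case, expected, actual))
    return failures

# Toy example: the migrated system has one injected defect at input 3.
legacy = lambda x: x * 2
migrated = lambda x: x * 2 if x != 3 else 7

print(check_equivalence(legacy, migrated, [1, 2, 3, 4]))
# -> [(3, 6, 7)]
```

In practice the two "systems" are driven through their user interfaces or APIs rather than called as functions, but the pass/fail criterion is the same: identical results for identical use cases.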
In order to test the migration solution efficiently, the team must identify a set of test cases that exercise the code impacted by risky migration features. We can determine the locations of migration features through static analysis of the legacy code and of the migration solution files. We can then trace the impacted code up to specific UI elements. Once we know which UI elements depend on risky migration features, we can ask the client to develop new test cases, or select existing ones, that exercise that area of the UI and consequently the critical sections of code. The test results are used to improve the correctness quotient of the entire migration solution and to reduce the probability of functional defects. When the probability of functional defects is low enough, the migration can be considered complete.
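The tracing step described above can be sketched as two lookup tables produced by analysis, joined to rank UI elements by the riskiest migration feature that can reach them. All feature names, risk weights, and code/UI identifiers below are invented for illustration:

```python
# Assumed risk weights per migration feature (hypothetical).
feature_risk = {"OnErrorResumeNext": 0.9, "VariantCoercion": 0.7, "StringFormat": 0.2}

# Static analysis output: migration feature -> impacted code locations.
feature_to_code = {
    "OnErrorResumeNext": ["OrderEntry.SaveOrder", "Billing.PostInvoice"],
    "VariantCoercion": ["OrderEntry.Validate"],
    "StringFormat": ["Reports.FormatHeader"],
}

# Call-graph / event-handler analysis: code location -> UI elements that reach it.
code_to_ui = {
    "OrderEntry.SaveOrder": ["frmOrder.btnSave"],
    "OrderEntry.Validate": ["frmOrder.btnSave", "frmOrder.txtQty"],
    "Billing.PostInvoice": ["frmBilling.btnPost"],
    "Reports.FormatHeader": ["frmReports.btnPrint"],
}

def ui_risk():
    """Assign each UI element the maximum risk of any feature reaching it,
    sorted from riskiest to safest. These are the elements to ask the
    client to cover with test cases first."""
    risk = {}
    for feature, locations in feature_to_code.items():
        for loc in locations:
            for ui in code_to_ui[loc]:
                risk[ui] = max(risk.get(ui, 0.0), feature_risk[feature])
    return sorted(risk.items(), key=lambda kv: -kv[1])

print(ui_risk())
```

Test cases are then requested for the top of this list first, so testing effort concentrates on the UI paths most likely to expose migration defects.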
TODO: talk about conformance quotient: how well the code conforms to technical standards.
See also /wiki/spaces/GMI/pages/1742183