System Integration Testing Methodology

Any enterprise software, in order to achieve business goals, will eventually be integrated with other systems. Today, a typical service-oriented product uses a web server, an application server, and several databases. Each piece may work flawlessly on its own, yet fail once connected to outside infrastructure, and changes in one component may cause trouble in the others.

This is where System Integration Testing comes in. It verifies the interaction between different components in an integrated hardware and software environment, checking the performance and behavior of the entire system. It may also be conducted on subsystems, such as a web store passing orders to an inventory system, or batch job processing in an ERP. At this stage we examine interfaces and module combinations, ensuring the integrity of data transferred between modules. How do we tackle those challenges in a world of complex architectures, where software components interrelate and rely on each other? Let us consider the commonly applied approaches.

The “Big Bang” approach means assembling all modules in advance and integrating them in a single step, as soon as the individually tested components ‒ business logic, user interface, services, data access layers ‒ are ready. This is a very time-efficient approach, suited for express black-box testing on the target hardware when results are needed “at once”, and it seems a clear benefit when independent modules are developed concurrently. But it is prone to a number of errors. Because everything is integrated at once, it is difficult to locate the primary cause of an interface failure; data flow between the layers is a typical weak spot. The chances of critical bugs reaching production grow further if test cases and their results are not adequately recorded.
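
To make the idea concrete, here is a minimal sketch of a big-bang style test in Python. The OrderService and DataAccessLayer classes are purely hypothetical stand-ins for separately developed modules: all the real components are wired together in one step, so a failure anywhere in the chain surfaces only as a failed end-to-end assertion.

    import unittest

    # Hypothetical layers of a small order-processing product;
    # the names are illustrative, not taken from any real codebase.
    class DataAccessLayer:
        def __init__(self):
            self._orders = {}

        def save(self, order_id, payload):
            self._orders[order_id] = payload

        def load(self, order_id):
            return self._orders[order_id]

    class OrderService:
        def __init__(self, dal):
            self.dal = dal

        def place_order(self, order_id, items):
            self.dal.save(order_id, {"items": items, "status": "placed"})
            return self.dal.load(order_id)

    class BigBangIntegrationTest(unittest.TestCase):
        def test_full_stack_order_flow(self):
            # All real components are assembled at once: there are
            # no stubs or drivers to narrow down a failure.
            service = OrderService(DataAccessLayer())
            order = service.place_order("po-1", ["widget"])
            self.assertEqual(order["status"], "placed")

    if __name__ == "__main__":
        unittest.main()

If the assertion fails, the defect could sit in either layer or in the hand-off between them, which is exactly the diagnostic weakness described above.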

The incremental approach is the opposite of the “big bang”. The program is built and tested in small segments, where it is easier to isolate and fix bugs, and integration testing proceeds as each next module is appended, then another, and so forth. There are two incremental models:

  • “Top-down”

This approach assumes that all the parent modules are ready first, followed by step-by-step addition of child modules. Underlying functionality is simulated with temporary “stubs”. They may, for example, imitate responses from lower levels, return control to the superior module, or even pass parameters. Stubs may be created by the test engineer or may be part of a “testing harness” ‒ auxiliary software providing a testing environment. Elementary components, i.e. those that call no subordinates, need no stubs. As the real components become available, the stubs are replaced.
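
As a minimal sketch of how a stub works, assume a hypothetical CheckoutController (parent) that depends on a not-yet-ready PricingService (child); Python’s unittest.mock can imitate the lower level’s responses:

    import unittest
    from unittest import mock

    class PricingService:          # child module, not implemented yet
        def get_price(self, sku):
            raise NotImplementedError

    class CheckoutController:      # parent module, ready and under test
        def __init__(self, pricing):
            self.pricing = pricing

        def total(self, skus):
            return sum(self.pricing.get_price(s) for s in skus)

    class TopDownCheckoutTest(unittest.TestCase):
        def test_total_with_stubbed_pricing(self):
            # The stub imitates responses from the lower level and
            # returns control (with a canned value) to the parent.
            pricing_stub = mock.Mock(spec=PricingService)
            pricing_stub.get_price.return_value = 10
            controller = CheckoutController(pricing_stub)
            self.assertEqual(controller.total(["a", "b"]), 20)

    if __name__ == "__main__":
        unittest.main()

When the real PricingService becomes available, the stub is simply replaced with an actual instance and the same test is re-run as part of the regression suite.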

Sub-modules of the main control unit are typically tested in a depth-first manner, i.e. stubs are substituted one at a time, down a major control path in the hierarchy. As new modules are incorporated, regression test suites are run to make sure no new errors have emerged. This methodology is particularly useful for capturing and addressing critical flaws at the top levels early. It also enables prototype demonstration from the early stages, boosting the team’s morale. Shortcomings of the top-down approach include the need for stubs (which have their own limitations) and the difficulty of setting up test conditions: processing at low hierarchical levels may be required to sufficiently exercise the parent units. Besides, in the initial stages of top-down testing no significant data flows upward, only simulated values, which complicates observation of the output.

  • “Bottom-up”

In this scenario, as the term suggests, low-level modules are developed and tested first. They are then incorporated into clusters responsible for a specific software feature. Next, the software testing expert writes a temporary “driver” harmonizing test input and output. It may be in charge of invoking the unit under test, transferring data, or accepting and handling output data. As in the previous model, the driver may come with a testing harness or be written as a test script. After the tests pass, the drivers are withdrawn and the clusters are unified, moving upward through the software hierarchy. The higher integration moves, the less need remains for test drivers ‒ the topmost module needs none at all.
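
Here is a minimal sketch of a test driver, assuming a hypothetical low-level InventoryStore unit: the test class stands in for the caller that does not exist yet, invoking the unit, feeding it input, and accepting and checking its output.

    import unittest

    class InventoryStore:
        """Hypothetical low-level module, developed and tested first."""
        def __init__(self):
            self._stock = {"widget": 5}

        def reserve(self, sku, qty):
            if self._stock.get(sku, 0) < qty:
                raise ValueError("insufficient stock")
            self._stock[sku] -= qty
            return self._stock[sku]

    class InventoryDriverTest(unittest.TestCase):
        # The temporary "driver": it invokes the unit under test and
        # handles its output in place of the higher-level module.
        def test_reserve_reduces_stock(self):
            store = InventoryStore()
            self.assertEqual(store.reserve("widget", 2), 3)

        def test_overdraft_is_rejected(self):
            store = InventoryStore()
            with self.assertRaises(ValueError):
                store.reserve("widget", 99)

    if __name__ == "__main__":
        unittest.main()

Once the real parent module arrives and the cluster moves up the hierarchy, this driver is withdrawn.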

A big plus of this model is that the processing needed by units subordinate to the level under test is always in place, and there is no need for stubs. Test conditions are easier to create, and observing test results takes less effort thanks to “live” data input from the very start. However, driver programs are necessary, and the application does not exist as a whole until the last unit is connected.

  • Hybrid (or “Sandwich”) approach ‒ a combination of the previous two ‒ overcomes some of those limitations. Here we figuratively divide the system under test into three layers, with our target unit in the middle. The Top-Down model is applied to the topmost layer, while the bottom layers are tested using the Bottom-Up model, as in the sketch below.
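
A minimal sketch of the sandwich idea, again with hypothetical names: the middle-layer OrderFulfillment is driven from above by the test (bottom-up style), while the bottom-layer ShippingGateway is stubbed out (top-down style).

    import unittest
    from unittest import mock

    class ShippingGateway:          # bottom layer, stubbed here
        def quote(self, weight):
            raise NotImplementedError

    class OrderFulfillment:         # middle layer, the target unit
        def __init__(self, gateway):
            self.gateway = gateway

        def shipping_cost(self, weight):
            # Hypothetical rule: carrier quote plus a 20% handling fee.
            return round(self.gateway.quote(weight) * 1.2, 2)

    class SandwichFulfillmentTest(unittest.TestCase):
        def test_middle_layer_between_driver_and_stub(self):
            gateway_stub = mock.Mock(spec=ShippingGateway)
            gateway_stub.quote.return_value = 10.0
            unit = OrderFulfillment(gateway_stub)  # the test drives from above
            self.assertEqual(unit.shipping_cost(3), 12.0)

    if __name__ == "__main__":
        unittest.main()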

Good practices

The ultimate integration test solution will depend on the specifications as well as on target user expectations. For instance, a customer may wish to see a working version as quickly as possible, in which case the system integration testing methodology will aim to deliver basically working software in the earliest testing phases. Moreover, in many projects system integration tests are notoriously challenging in terms of time and cost. Not surprisingly, a few things are often unduly overlooked:

  • Ensuring interface integrity ‒ when modules call each other in various combinations, no data should be lost or corrupted. Such failures are typically caused by a mismatch in the sequence or number of parameters (a minimal sketch of such a check follows this list).
  • Building an accurate chart of dependencies ‒ we need a diagram that captures all system, process, and data dependencies. Many teams are reluctant to map them out, which results in seemingly unexpected bugs and communication problems within the team. Good software testers design scenarios with a view to limiting dependencies (i.e. reducing situations where the whole testing cycle stops because a single thing is missing), or provide an option to bypass any given module.
  • Having enough virtual users and data available ‒ this aspect is frequently forgotten. Multiple “users” in various roles who interact with each other in the system, as well as the data needed by dependencies, should be created in advance. For example, if we test the integration of an invoicing feature, our enterprise management tool should offer enough purchase orders to prepare invoices against.
  • Backward compatibility ‒ applications are constantly upgraded to enhance and extend their capabilities, and modifications to one component can impact prior implementations. For example, if units A and B both operate as expected, but a change to unit A breaks unit B, the change to unit A is the likely culprit. Likewise, once A has been successfully integrated with B, introducing C to interact with B means C must be checked as well, even if it apparently has nothing to do with our project.
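
To illustrate the interface-integrity point from the list above, here is a minimal Python sketch (all names hypothetical) that verifies a provider’s parameter list still matches what the consumer expects, catching sequence and count mismatches before data is silently lost or corrupted:

    import inspect

    def provider_create_invoice(order_id, amount, currency):
        """Provider-side entry point (illustrative only)."""
        return {"order_id": order_id, "amount": amount, "currency": currency}

    # The parameter order and count the consumer relies on.
    EXPECTED_PARAMS = ["order_id", "amount", "currency"]

    def check_interface(fn, expected_params):
        actual = list(inspect.signature(fn).parameters)
        if actual != expected_params:
            raise AssertionError(
                f"interface mismatch: expected {expected_params}, got {actual}")

    def test_invoice_interface_is_intact():
        # Fails fast if someone reorders, renames, or drops a parameter.
        check_interface(provider_create_invoice, EXPECTED_PARAMS)
        result = provider_create_invoice("po-7", 99.5, "EUR")
        assert result["amount"] == 99.5  # data survives the hand-off intact

    if __name__ == "__main__":
        test_invoice_interface_is_intact()
        print("interface contract OK")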