Development Testing

Development testing covers unit, function/thread, integration, and system testing. Its purpose is to ensure that the software performs all the functions specified in the design document before it is passed to the Quality Assurance group to be tested against the business requirements. This testing may be done with or without automated testing tools. Automated test tools are designed to reduce the manual effort required to perform the tests, provide metrics for test results and test coverage, and in some cases enforce coding standards. Automated test tools and their use are described in a separate article; this article describes the functions required for development testing and leaves the decision to use automated test tools, and the choice of those tools, to the reader.


The project manager should ensure that quality standards are set for the application or system before it is passed on to the Quality Assurance group for their testing. These standards will seldom be specified in the SOW, Scope Statement, or Project Charter, so they must be derived from the overall goals set for the final product. Standards for development testing should include targets for code coverage and the number of unresolved bugs. Quality standards for the software application or system under test must be enforced during integration or system testing, but developers should also be held accountable for the quality of their individual code. Accountability can be enforced by reporting test results for unit and function/thread testing, or by analyzing test results during integration/system testing to determine whether any of the errors should have been caught during unit testing.
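One way to make these standards enforceable is to build them into the test run itself. The following is a minimal sketch in Python, assuming the coverage.py package is installed and that the team's unit tests live in a tests/ directory; the 80% threshold and the directory name are illustrative assumptions, to be replaced with the standards set for your project.

```python
# Minimal sketch: run the unit test suite under coverage measurement and
# fail if the project's quality standards are not met. Assumes coverage.py
# is installed; the threshold and test directory are illustrative.
import sys
import unittest

import coverage

COVERAGE_THRESHOLD = 80.0  # illustrative project standard, in percent

cov = coverage.Coverage()
cov.start()

# Discover and run the development team's unit tests.
suite = unittest.defaultTestLoader.discover("tests")
result = unittest.TextTestRunner().run(suite)

cov.stop()
total = cov.report()  # prints a per-file report, returns overall percent

# Enforce the standards: no failing tests, coverage at or above target.
if not result.wasSuccessful() or total < COVERAGE_THRESHOLD:
    sys.exit(1)
```

A script like this can also double as a manual-reporting fallback, since it prints the coverage figures you would otherwise have to collect by hand.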


The project manager should take advantage of any reporting capabilities of the automated tools employed. In the absence of automated reports, they should rely on a manual reporting approach to ensure that all planned testing activities are being carried out and that all quality goals are achieved.

Unit Testing

Unit testing exercises the smallest chunks of code to ensure they work properly and satisfy the requirements described in the design document. Units will usually be functions, procedures, web pages, or any other pieces of code that perform a particular task. To perform unit testing, the developer will need to create stubs and drivers to replicate the system or application the unit will function in. Stubs replicate the functions that the unit outputs information to, and drivers replicate the functions that input data to the unit. Many software languages employ the concept of global variables, or variables that are available to the entire application rather than to a single function; these must be replicated by some form of wrapper. With stubs, drivers, and wrappers in place, the developer can compile the unit and test it with test data. Unit tests are sometimes referred to as "white box" tests, meaning that the tester has visibility into the code under test. That visibility enables the tester to determine the degree of code coverage, among other things. White box testing differs from black box testing in that, in black box testing, the tester has no visibility into the source code under test, and doesn't need it: they are simply testing a sub-program against a set of requirements to ensure the code satisfies them.
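To make the stub-and-driver idea concrete, here is a short sketch in Python using the standard library's unittest and unittest.mock. The unit under test, calculate_discount, and the downstream pricing_service it reports to are hypothetical names invented for the illustration; the Mock object plays the role of the stub, and the test case itself acts as the driver feeding input data to the unit.

```python
# Sketch of unit testing with a stub and a driver. All names are
# hypothetical; Mock stands in for the downstream function the unit
# outputs to, and the test methods drive input data into the unit.
import unittest
from unittest.mock import Mock


def calculate_discount(order_total, pricing_service):
    """The unit under test: computes a discount and reports it downstream."""
    discount = order_total * 0.10 if order_total > 100 else 0.0
    pricing_service.record_discount(discount)  # call into the stub
    return discount


class CalculateDiscountTest(unittest.TestCase):
    """The driver: feeds test data to the unit and checks the results."""

    def test_large_order_gets_discount(self):
        stub_service = Mock()  # stub replicating the downstream function
        self.assertEqual(calculate_discount(200.0, stub_service), 20.0)
        stub_service.record_discount.assert_called_once_with(20.0)

    def test_small_order_gets_no_discount(self):
        stub_service = Mock()
        self.assertEqual(calculate_discount(50.0, stub_service), 0.0)


if __name__ == "__main__":
    unittest.main()
```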


Unit tests should include such things as boundary tests and exception tests in addition to verifying that the unit meets the requirements of the design. Boundary tests verify that the unit handles all inputs within the specified range (e.g. if the input is a number between 1 and 10, the unit should handle 1 and 10 as well as the numbers in between). Exception tests verify that the unit handles inputs outside the expected range, usually by producing an error message. In the example just given, the unit should handle each number between 1 and 10 and display, or "throw", an error if presented with an input of 0 or 11.
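Here is a sketch of what boundary and exception tests might look like for the 1-to-10 example, again in Python's unittest; process_value is a hypothetical unit invented for the illustration.

```python
# Boundary and exception tests for the 1-to-10 example in the text.
# 'process_value' is a hypothetical unit that accepts integers 1..10.
import unittest


def process_value(n):
    """Hypothetical unit: accepts an integer between 1 and 10 inclusive."""
    if not 1 <= n <= 10:
        raise ValueError("input must be between 1 and 10")
    return n * 2  # stand-in for the unit's real work


class BoundaryAndExceptionTests(unittest.TestCase):
    def test_boundaries(self):
        # Boundary tests: the unit must handle both ends of the range.
        self.assertEqual(process_value(1), 2)
        self.assertEqual(process_value(10), 20)

    def test_exceptions(self):
        # Exception tests: inputs outside the range should raise ("throw").
        with self.assertRaises(ValueError):
            process_value(0)
        with self.assertRaises(ValueError):
            process_value(11)


if __name__ == "__main__":
    unittest.main()
```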


Ensure that the development team knows what unit test activities are expected of them and the quality standards their code must meet. Standards will include the number of permissible bugs and the percentage of code coverage their testing must provide. Ensure that the team is trained on any automated test tools provided for the project. Many automated test tools come with tool training provided by the vendor; you may want to consider combining project training with the vendor's tool training to ensure the tools are being used correctly.


Unit testing will add time to code writing; after all, it is far quicker to write a few hundred lines of code, chuck them into the source library, and let someone else worry about any bugs than it is to perform thorough testing. Ensure that you allow sufficient time in your effort and duration estimates for proper unit testing. The subject matter expert responsible for estimating effort and duration should be aware of the project's plans for testing and factor those into their estimates. Once the team has been given the tools and time they need to perform the planned unit testing, it's up to you to monitor results to ensure the plan is being followed. You may also wish to capture metrics on the number of tests, the number of bugs found and fixed, code coverage, and any other areas that you measure.

Function/Thread Testing

Function or thread testing differs from unit testing in that it combines previously tested units into a chunk of code that performs one or more complete functions. Developers can perform function tests by combining the previously tested units that make up the function under test. Frequently, unit tests and function tests are combined by choosing a chunk of code that encompasses the entire function for the unit test. There is nothing wrong with this approach except where the combination makes testing too complex. Both white box and black box testing must occur where unit and function testing are combined.
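As an illustration, the sketch below combines two hypothetical, previously unit-tested units (validate_order and price_order) into a single complete function, submit_order, and tests that function as a whole; all names are invented for the example.

```python
# Sketch of a function/thread test: previously unit-tested units are
# combined into one complete function, which is then tested as a whole.
# All function names are hypothetical.
import unittest


def validate_order(items):
    return len(items) > 0  # previously unit-tested


def price_order(items):
    return sum(qty * price for qty, price in items)  # previously unit-tested


def submit_order(items):
    """The complete function under test: validation plus pricing."""
    if not validate_order(items):
        raise ValueError("order has no items")
    return price_order(items)


class SubmitOrderFunctionTest(unittest.TestCase):
    def test_complete_function(self):
        # Black box: exercise the whole function against its requirement.
        self.assertEqual(submit_order([(2, 5.0), (1, 3.0)]), 13.0)

    def test_empty_order_rejected(self):
        with self.assertRaises(ValueError):
            submit_order([])


if __name__ == "__main__":
    unittest.main()
```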


Function tests call for test cases, written by the developer or tester, which are based on the requirements captured in the business requirements document. Here's a tip that could save a lot of development and QA time: make the test cases written by the QA group available to the developers. This requires the QA group to begin writing their test cases as soon as the business requirements documentation is complete, but it can save developers from wasting time coding a non-existent requirement or misreading a requirement. The test cases written by the QA group can be used by the development team either intact or with slight modifications. I'm not saying that QA testers are always right when it comes to interpreting business requirements, but they should be your arbiter until such time as the application is tested by the users.


Many software systems employ a database as a data repository. Software development projects that deliver a database as part of the system will require an instance of that database for the development phase, and each developer will also need their own instance of the database for their testing. Organizing the database instances in this fashion, as opposed to using a single shared database, will avoid much wasted effort and confusion around sharing test data. Each developer should be responsible for the creation and management of the test data unique to their area of development. Data that is common across all developers should be provided in the shared database; examples of common data would be customer ids, addresses, telephone numbers, or product ids. Although the developer must know the attributes of every data element handled by their code, making this common data available "off the shelf" will make the developers' lives much easier and can be done with very little effort. Ultimately, the data required to complete functional testing must be the developers' responsibility.
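Below is a minimal sketch of how the common data might be loaded into each developer's private instance, using Python's built-in sqlite3 as a stand-in for the project database; the table layout and sample rows are illustrative assumptions only.

```python
# Sketch: create one developer's private database instance and seed it
# with the data common to all developers. sqlite3, the table, and the
# sample rows are illustrative stand-ins for the real project database.
import sqlite3

COMMON_DATA = [  # customer ids, addresses, telephone numbers
    ("CUST-001", "123 Main St", "555-0100"),
    ("CUST-002", "456 Oak Ave", "555-0101"),
]


def create_dev_instance(path):
    """Create a private instance and load the shared test data."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customers "
        "(customer_id TEXT PRIMARY KEY, address TEXT, telephone TEXT)"
    )
    conn.executemany(
        "INSERT OR IGNORE INTO customers VALUES (?, ?, ?)", COMMON_DATA
    )
    conn.commit()
    return conn


# Each developer gets their own file-backed instance; test data unique
# to their area of development is loaded on top of the common data.
conn = create_dev_instance("dev_alice.db")
```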


Functional testing should also have goals and objectives. The goals and objectives you select will be unique to your project, but here are a few which are universal: 100% of written test cases executed, no severity one bugs open, no more than __ severity two bugs open, and so on. Once functional testing is complete, the code can be checked into the source library, ready for integration or system testing. You may want to track some metrics from this activity as well, such as the number of test cases written, the number passed, the number failed, etc.
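Exit criteria like these are easy to check automatically once the counts are available from your test and bug tracking tools. The sketch below is illustrative: the counts are hypothetical inputs, and the severity-two limit is a parameter you would set to your own project's standard.

```python
# Sketch: check functional-test exit criteria before code is checked in.
# The counts would come from your test and bug tracking tools; the
# severity-two limit is a project-specific standard, shown here as a
# parameter with an illustrative default.
def exit_criteria_met(cases_written, cases_executed, sev1_open, sev2_open,
                      max_sev2=5):
    """Return True when the functional-test goals have been achieved."""
    return (
        cases_executed == cases_written  # 100% of written cases executed
        and sev1_open == 0               # no severity one bugs open
        and sev2_open <= max_sev2        # severity two bugs within limit
    )


# Hypothetical counts for one round of functional testing.
print(exit_criteria_met(cases_written=120, cases_executed=120,
                        sev1_open=0, sev2_open=3))  # True
```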

Integration/System Testing

The principal difference between integration testing and system testing is that integration testing determines how well the various software pieces perform together when they are integrated (or compiled), whereas system testing verifies that the software application, hardware platform, and database perform properly together. System testing requires that the same types of hardware components be installed in the hardware platform as will be installed in the production environment. This doesn't necessarily mean that the system test environment is an exact duplicate of the production environment, but the classes and types of servers and databases should be duplicated. The configuration of the production environment will be unique to the system being developed, so the system test environment required will also be unique. The environments, hardware, and software licenses necessary for the various test environments should be identified in advance, and their implementation should be part of the quality management plan. For our purposes, it is sufficient to note that a test environment is required to perform this testing and that the testing is of the "black box" variety.


Integration or system testing must be performed before the system is turned over to the QA group, and that testing may be done manually or automated as part of a Continuous Integration tool. These tools were described in a previous article on automated test tools, so we won't go into too much detail here, other than to mention that some of these tools will store tests and execute them as part of the build process. Having a suite of function tests put together by the development team is a good start to the integration/system test library. The aim of this testing is to ensure the pieces work together as a whole, so executing every test case for every piece is not necessary.


Test cases must be written for functionality that spans the entire system. For example, a software system consisting of order capture, order processing, and order delivery sub-systems will have each of those functions tested individually, but integration/system testing should track one order through the system from capture to delivery. This verifies that the data each successor sub-system expects from its predecessor is what it actually receives, and that what the system delivers is what was ordered. Members of the team will be responsible for writing these test cases, and team members must also be responsible for assembling the data necessary to execute them. The QA team can be of help here: making their test cases available to the development team, or system test team, will ensure that the system they receive is as clean as possible, and test data will be available to the QA team from the development or system test environment.
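Here is what such a spanning test case might look like as a sketch, with the three sub-system calls reduced to hypothetical stand-ins for the real interfaces.

```python
# Sketch of an integration/system test that tracks one order through
# capture, processing, and delivery. The three sub-system functions are
# hypothetical stand-ins for the real interfaces.
import unittest


def capture_order(customer_id, product_id):
    return {"customer": customer_id, "product": product_id,
            "status": "captured"}


def process_order(order):
    order["status"] = "processed"
    return order


def deliver_order(order):
    order["status"] = "delivered"
    return order


class OrderLifecycleSystemTest(unittest.TestCase):
    def test_order_flows_from_capture_to_delivery(self):
        # Each successor sub-system must receive what it expects from its
        # predecessor, and what is delivered must match what was ordered.
        order = capture_order("CUST-001", "PROD-42")
        order = process_order(order)
        order = deliver_order(order)
        self.assertEqual(order["customer"], "CUST-001")
        self.assertEqual(order["product"], "PROD-42")
        self.assertEqual(order["status"], "delivered")


if __name__ == "__main__":
    unittest.main()
```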


A database administrator, or someone who can fill that role, should be responsible for the creation of the system/integration test environment. This person may also be responsible for creating the individual scripts that create the database, or each developer may be responsible for creating their own portion of it. Creation of the database will not be part of any continuous build process, so a database administrator should own the integration/system test database and be responsible for its creation and for refreshing it with test data for each new round of integration/system testing.
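A refresh script the database administrator might own could be as simple as the following sketch, which drops and rebuilds the test database from the project's schema and data scripts; sqlite3 and the directory names are illustrative assumptions.

```python
# Sketch of a refresh script for the integration/system test database:
# drop the old instance, recreate the schema from the project's DDL
# scripts, then reload the test data. sqlite3 and the paths are
# illustrative stand-ins for the real database and script locations.
import pathlib
import sqlite3

DB_PATH = pathlib.Path("system_test.db")
DDL_DIR = pathlib.Path("ddl")        # schema scripts, one per developer
DATA_DIR = pathlib.Path("testdata")  # test data load scripts


def refresh_test_database():
    DB_PATH.unlink(missing_ok=True)  # drop the old instance
    conn = sqlite3.connect(DB_PATH)
    scripts = sorted(DDL_DIR.glob("*.sql")) + sorted(DATA_DIR.glob("*.sql"))
    for script in scripts:
        conn.executescript(script.read_text())
    conn.commit()
    conn.close()


if __name__ == "__main__":
    refresh_test_database()  # run before each round of system testing
```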


Test goals and objectives should be set for the integration/system test phase of the project that support the overall quality objectives for the project. The goal of unit, function/thread, and integration/system testing should be to eliminate all development-type bugs from the system before hand-off to the QA group. Development-type bugs are those that arise from a failure of the system to perform the way it was designed to. Access to QA test cases can also eliminate bugs caused by the system not satisfying the business requirements (as interpreted by the QA group), and this is a desirable thing. Remember that any bugs the QA group finds in the system must be reported using the bug reporting/issue tracking tool implemented for the project: the QA tester must spend time reporting the bug, the developer must spend time on re-work and report the fix, and finally the QA tester must re-test the system and close the bug. Finding these bugs before the QA test phase eliminates all of that administrative work with the bug reporting tool.


Your system should now be ready for QA testing. You should keep track of progress toward the integration/system testing goals, and you may also want to report system testing metrics such as the number of system test cases, the number passed, etc. You should also analyze test results to determine whether any of the bugs discovered indicate insufficient function or unit testing. Reports of quality metrics at this stage of development should serve to assure the project stakeholders that the quality activities planned for the development phase have been executed and that the results support the quality goals and objectives set for the project.


Monitoring and controlling activities necessary to meet quality goals and objectives are described for you in the PMBOK Fourth Edition. Getting your Project Management Professional (PMP®) certification will not only provide you with the overview necessary to use the techniques in this article, it will also demonstrate to your project stakeholders that you are competent in the area of quality management. Look for a good PMP course, or other PMP exam preparation training, to get you ready to pass the PMP exam.