In a previous article I described 10 tips for rescuing projects. The tips are all fairly succinct and don’t offer many details on how to implement them, so I thought I’d follow that article up with a few more that delve into the tips in a little more detail, starting with the third tip: analyze the project artifacts to determine the problems.

Tip #1 advises you not to make any assumptions about the causes that de-railed the project, and tip #3 advises you to analyze the project artifacts to uncover the root causes of the problem, without much guidance on how to do that. This article will give you a little more detail to help you get to the root of the problem.

You know there is definitely a problem because you’ve been called in by the project sponsor. If the person who engaged you to rescue the project is someone other than the sponsor, make certain the sponsor is aware that you’ve been engaged and why. You’ll be in for a rough ride if the sponsor isn’t on board with your engagement.

The first place to search for the problems that triggered your engagement is the progress reports communicated to the sponsor or steering committee. These reports will point you to the symptoms that indicate the project is off track, provided they were compiled honestly and the information they contain is accurate. Look for the SPI (Schedule Performance Index) first. Is the index under 1.0? If the last index was under 1.0, how far under? Check the last few SPIs to spot trends. If the project’s SPI is trending downwards over the last 3 or more reports, or is stuck at 0.9 or lower, the project is running behind schedule and previous efforts have failed to recover it. The SPI should be reported once per week if not more frequently, so a run of 3 or more SPIs significantly below 1.0 indicates a problem, and a downward trend indicates that the problem is worsening.
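
To make that trend check concrete, here’s a minimal Python sketch using made-up weekly SPI readings; the thresholds are illustrative, not prescriptive:

```python
# Hypothetical weekly SPI readings from the last few progress reports,
# oldest first. Values below 1.0 mean the project is behind schedule.
spi_history = [0.97, 0.93, 0.88, 0.85]

recent = spi_history[-3:]

# A run of three or more readings significantly below 1.0 signals a problem.
behind = all(spi < 1.0 for spi in recent)

# A strictly decreasing run signals the problem is worsening.
worsening = all(a > b for a, b in zip(recent, recent[1:]))

if behind and worsening:
    print("Behind schedule and trending down: recovery efforts are failing.")
elif behind:
    print("Behind schedule but holding steady.")
else:
    print("No sustained schedule problem visible in the SPI.")
```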

Of course, not every project manager will measure the SPI of their project, so this information may not be available to you. Failure to measure the project’s performance to schedule in some way is an indication that the project was never under control and is likely to be behind schedule – how badly is something you’ll have to investigate. One possible indicator of whether the project is on, behind, or ahead of schedule is the schedule being reported to the stakeholders, sponsor, or steering committee in the performance reports. This report will sometimes contain a section that captures the key milestones and deliverables of the project with their associated planned and/or forecast dates, and actual dates for both starting and completing. Check this part of the report over the last few cycles to determine whether the deliverables are starting and finishing on time and whether the milestones are being met on time. Changing planned or forecast dates for either start or end indicates a slipping schedule. Churn in the deliverables and milestones being reported may indicate more complex problems, such as uncontrolled change or an attempt by the previous project manager to hide schedule slippages by altering the items being reported each cycle. This part of the report should be a rolling window, with completed items dropping off one cycle after being reported as completed and new items appearing as their planned dates come into the window.
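
If you want to automate the date comparison across report cycles, a sketch along these lines (with hypothetical milestones and dates) will surface the slips quickly:

```python
from datetime import date

# Hypothetical forecast finish dates for two milestones, as reported in three
# successive report cycles. A forecast that keeps moving out is a slip.
forecasts = {
    "Design sign-off": [date(2024, 3, 1), date(2024, 3, 8), date(2024, 3, 22)],
    "UAT complete":    [date(2024, 5, 10), date(2024, 5, 10), date(2024, 5, 10)],
}

for milestone, dates in forecasts.items():
    slips = sum(1 for a, b in zip(dates, dates[1:]) if b > a)
    if slips:
        total = (dates[-1] - dates[0]).days
        print(f"{milestone}: forecast moved {slips} time(s), {total} days later overall")
    else:
        print(f"{milestone}: forecast stable")
```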

The absence of any reports that analyze the project database for schedule data will mean you’ll have to adopt the “brute force” approach to determining performance to schedule. You do this by reviewing the schedule itself (the MS Project file or the file produced by any other project management tool used for the project). You can calculate the project’s SPI using the Earned Value report provided by MS Project. Keep in mind that the accuracy of this report depends on the care taken to keep the schedule up to date. Also keep in mind that the Earned Value formula for SPI is Budgeted Cost of Work Performed / Budgeted Cost of Work Scheduled. The catch here is that the formula uses dollars rather than hours to measure schedule. Let me illustrate the difference with an example. Let’s say you’ve got 100 hours of painting at $50 an hour planned but not done, and 50 hours of electrical work at $100 an hour completed ahead of schedule. The formula will tell you that your SPI is 1.0 (100 x $50 = 50 x $100) but the job is actually 50 hours in the hole.
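
Here’s that same example worked in a few lines of Python, showing how the dollar-based formula masks the hours-based slippage:

```python
# Planned but not done: 100 hours of painting at $50/hour.
# Done ahead of schedule: 50 hours of electrical work at $100/hour.
painting_hours, painting_rate = 100, 50
electrical_hours, electrical_rate = 50, 100

bcws = painting_hours * painting_rate      # Budgeted Cost of Work Scheduled: $5,000
bcwp = electrical_hours * electrical_rate  # Budgeted Cost of Work Performed: $5,000

print(f"SPI in dollars: {bcwp / bcws:.2f}")                       # 1.00 -- looks on schedule
print(f"Hours in the hole: {painting_hours - electrical_hours}")  # 50 hours behind
```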

Another approach is to produce a report identifying all the slipping tasks. The project will fall behind schedule if any of these slipping tasks fails to complete on time. This is a good time to talk about the project’s critical path. The critical path is composed of tasks that are connected by dependencies. When all the tasks in the project have their dependency relationships defined, they form paths, and the path of maximum length when the durations of its tasks are added together is called the critical path; theoretically, only tasks on the critical path can delay the final delivery date of the project. Of course, slippage in any of the project tasks will impact the budget due to the extra time it takes to complete the task. The SPI does not distinguish between tasks on the critical path and others; a slippage in any task will cause the SPI to dip below 1.0. One way to reconcile this discrepancy is to look at the SPI and the critical path at the same time. If the SPI is only slightly below 1.0 but no tasks on the critical path are slipping, the project schedule is not a major concern.
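
A minimal sketch of that reconciliation, using a hypothetical task list where the critical-path flag and slippage figure come from the scheduling tool:

```python
# Hypothetical task list: (name, on_critical_path, days_slipped).
tasks = [
    ("Wireframes",       False, 3),
    ("DB schema design", True,  0),
    ("API build",        True,  0),
    ("Logo rework",      False, 5),
]

critical_slips = [name for name, critical, slip in tasks if critical and slip > 0]
other_slips    = [name for name, critical, slip in tasks if not critical and slip > 0]

if critical_slips:
    print("Delivery date at risk; critical tasks slipping:", critical_slips)
elif other_slips:
    # Non-critical slippage drags the SPI below 1.0 and costs budget,
    # but it doesn't (yet) move the final delivery date.
    print("SPI will dip but the delivery date is safe. Slipping:", other_slips)
else:
    print("No slipping tasks.")
```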

One more tip for calculating the project SPI: determine how you’ll handle Work in Progress (WIP). Since the formula for calculating the SPI does not account for partially completed tasks, you can simply ignore them and report only on completed work. This will give you an accurate measurement of progress to schedule, but the measurement will be stale, especially if the project has just begun the build phase. Another approach is to include partially completed tasks in your calculations: the total hours spent on tasks in the WIP are added to the Budgeted Cost of Work Performed, while the total hours that should have been spent on scheduled tasks are added to the Budgeted Cost of Work Scheduled. Adding these totals into the formula will require a completely manual calculation, but the increased accuracy and currency may be worth the effort. To make the calculation with the partials, add up the hours actually worked, including work on tasks in progress, to get the performed side. Then calculate the hours that were planned: you’ll need to include any task with a start date in the past in the Budgeted Cost of Work Scheduled, and the planned hours for an in-progress task will equal the “age” of the task (i.e. how long it has been in progress) in weeks x the number of hours in the work week. To be completely accurate, you should evaluate the WIP to determine whether an 80-hour task that has one week of work completed is indeed 50% complete; without additional evidence, you’ll have to make that assumption.
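
Here’s a minimal, hours-based sketch of the WIP-inclusive calculation, using hypothetical tasks and assuming a 40-hour work week:

```python
HOURS_PER_WEEK = 40  # assumed work week

# Hypothetical in-progress tasks: (hours actually worked so far, age in weeks).
tasks = [
    (40, 1),  # one week in, on pace
    (20, 1),  # one week in, half the expected hours done
    (80, 3),  # three weeks in, a full week of work behind
]

bcwp_hours = sum(worked for worked, _ in tasks)             # hours performed
bcws_hours = sum(age * HOURS_PER_WEEK for _, age in tasks)  # hours planned by now

print(f"WIP-inclusive SPI (hours): {bcwp_hours / bcws_hours:.2f}")  # 140/200 = 0.70
```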

Performance to budget is another key indicator of project health. Your MS Project (or other PM tool) file is your best source for this. The formula for calculating the Cost Performance Index (CPI) is similar to the SPI: the Budgeted Cost of Work Performed divided by the Actual Cost of Work Performed. Any measure over 1.0 is good; it means the project is under budget. Anything under 1.0 indicates the project is over budget. The hours recorded in the MS Project (or other PM tool) file can be used to calculate the actual cost, and the hours of effort estimated for the tasks can be used to calculate the planned budget. The canned reports in MS Project include a CPI report, or you can do the calculations manually; just remember that if you include partial work on the actual side you need to include it on the planned side.
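
A minimal sketch of the manual CPI calculation over hypothetical completed tasks (the blended hourly rate is purely illustrative):

```python
RATE = 75  # blended hourly rate in dollars, purely illustrative

# Hypothetical completed tasks: (estimated_hours, actual_hours).
completed = [
    (40, 50),
    (80, 85),
    (24, 20),
]

bcwp = sum(est * RATE for est, _ in completed)  # what the work was budgeted at
acwp = sum(act * RATE for _, act in completed)  # what the work actually cost

print(f"CPI: {bcwp / acwp:.2f}")  # under 1.0 means the project is over budget
```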

Organizations frequently use time tracking systems to track employee time. If the organization you’re working in has such a system, your project should be registered with it so that hours spent on the project can be tracked. You may want to validate the time recorded in the project file against the time charged to your budget in the time tracking system. Discrepancies between the two may occur when resources are working overtime on the project but not reporting the overtime to the project manager, or when resources aren’t properly recording their time in the system, either under-reporting or over-reporting the time spent on the project. A large discrepancy between the hours in the time tracking system and those in the project file indicates an administrative problem that must be worked out: all the hours worked on the project should be captured and reported against it, and only hours actually worked on the project should be charged to it.
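
A simple cross-check over hypothetical per-resource hours will surface the discrepancies worth chasing; the 10% threshold below is an arbitrary cutoff, not a standard:

```python
# Hypothetical hours by resource: project file vs. corporate time tracking.
project_file = {"alice": 120, "bob": 80, "carol": 95}
time_system  = {"alice": 150, "bob": 80, "carol": 60}

THRESHOLD = 0.10  # flag discrepancies over 10%

for person in sorted(set(project_file) | set(time_system)):
    pf = project_file.get(person, 0)
    ts = time_system.get(person, 0)
    if pf and abs(pf - ts) / pf > THRESHOLD:
        print(f"{person}: {pf}h in the project file vs {ts}h in the time system")
```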

Performance to schedule and performance to budget are 2 of the 3 “hot corners” of any project. The 3rd is performance to scope; simply put, did the hours and dollars spent according to plan produce the results that were planned? The project file is a key source of this information – were all the deliverables produced that were scheduled? There are other, more tangible sources for this information. For example, were the products received by the quality group? Were units of code checked into the source library? Were the design documents, blueprints, and other plans posted to the appropriate directory? You should double-check deliverables in the project file against these sources, at least until you gain confidence in the accuracy of the project file. A project that has failed to deliver the products that were planned is behind schedule and over budget even if your SPI and CPI indicate otherwise.

There are 2 facets of scope: product and product quality. We’ve addressed the first of these possible causes of project disaster; now let’s look at the second. The project should have quality objectives set out in the project plans. They could be stated in the project charter, scope statement, business case, quality management plan, or even individual test plans, but one or more of these documents should clearly state the project’s objectives for quality – and by clearly state, I mean that the measurements that will be used to determine whether the objectives have been achieved are captured. Measuring the quality of the project’s deliverables against these benchmarks will tell you whether the project objectives are being met.

Failure to meet quality objectives will have 2 effects on the project: the project may fall behind schedule and over budget because of the excessive amount of rework necessary to fix defects, or the project scope will be reduced to accommodate the original schedule. These two effects are not mutually exclusive, by the way, although projects will normally tend toward one approach or the other. Failure to meet quality objectives may be the explanation for the project being behind schedule, over budget, or reduced in scope. This is not the root cause of the problem – there is lots more investigative work to be done yet – but it does take you one step closer.

Start your investigation at the top. Do the quality results being reported to the sponsor, the steering committee, or stakeholders reflect a failure to meet objectives? Where reported results don’t meet the stated goals, your project has a quality problem that must be addressed. Poor quality is not always reported to stakeholders as such; the objective is to produce a product that meets the stated quality goals, not necessarily to produce that product on the first attempt. Products that go through extensive rework can still meet quality objectives; they just consume a greater amount of project resources to get there. Most projects, especially software development projects, will employ a trouble or issue tracking system to record defects. Most of these systems have reporting tools, which the project should use to monitor product quality through the testing process. Check back through old reports to determine whether the team is producing the quality results you’d expect to see from the task being monitored. Quality results are rarely uniform across a team because of the varying degrees of experience on the team. A disproportionately high level of defects from one or two team members may indicate a problem with those members. If the defect rate across the entire team is high, lack of experience may be a team-wide issue; introducing new technology to the team is often the cause of this problem.
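
A quick sketch of that comparison, using hypothetical defect counts normalized by units delivered (the twice-the-team-average flag is an arbitrary rule of thumb):

```python
# Hypothetical defect counts per developer, normalized by units delivered
# (modules, components, or whatever the project counts).
defects   = {"dev_a": 4, "dev_b": 5, "dev_c": 21}
delivered = {"dev_a": 10, "dev_b": 12, "dev_c": 11}

team_rate = sum(defects.values()) / sum(delivered.values())

for dev in sorted(defects, key=lambda d: defects[d] / delivered[d], reverse=True):
    rate = defects[dev] / delivered[dev]
    flag = "  <-- disproportionately high" if rate > 2 * team_rate else ""
    print(f"{dev}: {rate:.2f} defects per unit{flag}")

print(f"team average: {team_rate:.2f} defects per unit")
```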

You may not have access to the old quality reports; they may have been produced and then binned rather than archived, or they may not have been produced in the first place. Don’t let the lack of historical information stop you; you can still produce reports based on old test results. The reports you produce for past periods will still show all the defects reported during each period, but with updated status (hopefully closed). The passage of time will actually yield extra information: the average time it takes to close a ticket. An unreasonable delay in closing tickets may indicate the team member responsible for the product had a difficult time correcting the defect, which could be attributable to lack of experience with the technology. It could also indicate sloppy use of the system: defects are being corrected in a timely fashion, but there is a delay between resolution and the corresponding update of the system.
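
Computing the closure times is straightforward once you have the open and close dates; here’s a sketch over hypothetical tickets, with an arbitrary 14-day staleness threshold:

```python
from datetime import date

# Hypothetical tickets reconstructed from old test results: (opened, closed).
tickets = [
    (date(2024, 1, 5),  date(2024, 1, 9)),
    (date(2024, 1, 8),  date(2024, 1, 10)),
    (date(2024, 1, 12), date(2024, 2, 20)),  # a long-lived defect worth a look
]

ages = [(closed - opened).days for opened, closed in tickets]
print(f"average days to close: {sum(ages) / len(ages):.1f}")

STALE = 14  # arbitrary threshold for an "unreasonable" delay
for (opened, _), age in zip(tickets, ages):
    if age > STALE:
        print(f"ticket opened {opened} took {age} days: tough fix, or late update?")
```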

Not all the evidence of poor quality is to be found in the defect tracking system reports; symptoms can be found in the project schedule as well. Check the project schedule for changes made to accommodate an excessive level of rework. The original schedule should have provided some time for rework, the amount depending on the skill and experience level of the team and the complexity of the work. Changes to the schedule to allow more time for rework are one symptom you can expect to see when the project is experiencing poor quality; shifting delivery dates are another. The project change register, where one exists, is a further indicator of quality problems: look for change requests that move delivery dates to accommodate an unanticipated amount of rework.

The project’s issue log, action log, or RAID log is another artifact that should be examined to identify problems. Check the issue log to identify issues that have prevented team members from delivering their work on time. Delays in getting the equipment or services they need to perform their work are a common cause of late delivery of products. Check the history of the issues captured in the log. Has the forecast resolution date for an issue been extended repeatedly? Has the person assigned to resolve the issue been changed repeatedly? Are most of the issues assigned to the project manager for resolution? All of these are indications of problems that may have had a negative impact on productivity.

The project risk register is another source of clues. The risk register should reflect an initial input of the risks identified during the project’s planning phase and frequent updates to reflect the current status of the risks the project faces. Look for signs that indicate a less than thorough analysis of project risk during the planning phase (the initial identification of risks). The departing project manager should have used some method that solicited information from all the stakeholders, or at least the subject matter experts amongst them. If the risks identified during the planning phase were copied from a previous risk register or captured by some other means that did not allow for input from the team, you may have to re-visit the risk identification process to make up for the deficiency.

It is just as important to keep risk information up to date, and if the previous PM did a good job of this, you won’t have to worry about a poor job of identifying risks during the planning phase; the register will have been updated with all the risks that ought to have been identified during the earlier phase, plus others not evident earlier. Check the register for periodic updates to the risk list. If all the risks in the register were identified at one time, you’ll have to make up for the lack of periodic updates. Agendas or outputs from the weekly status review meetings are another source of evidence of regular risk updates; the absence of an agenda item addressing risk updates is another indicator that risk information has not been updated regularly (or that it has, and the information has not been captured in the risk register). Risk scores should also have been updated.

There are 2 significant measures of risk: the degree of risk the event will pose without mitigation and the degree it will pose after a mitigation strategy has been implemented. The latter measure is really intended to gauge the effectiveness of the mitigation strategy, and it is the one to focus your attention on now because it will tell you how well the identified risks are being managed. If the previous project manager didn’t measure the residual risk after mitigation strategies were implemented, now is a good time to start.
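
A common way to score this is probability times impact, before and after mitigation; the 1-5 scales and figures below are purely illustrative, not a prescribed scheme:

```python
# Hypothetical risks: (name, prob_before, impact_before, prob_after, impact_after),
# each score on a 1-5 scale.
risks = [
    ("Key vendor slips delivery", 4, 5, 2, 5),
    ("Team new to the framework", 5, 3, 4, 3),
]

for name, pb, ib, pa, ia in risks:
    inherent = pb * ib   # risk score before mitigation
    residual = pa * ia   # risk score after mitigation
    reduction = 1 - residual / inherent
    print(f"{name}: inherent {inherent}, residual {residual} "
          f"({reduction:.0%} reduction from mitigation)")
```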

Risk events that have occurred, either because they weren’t identified or because mitigation strategies were not effective, will have a detrimental effect on the project’s plans. The effect might be to add work to the project to address the risk event, to add cost to fix what was broken and/or pay for the extra work, or to degrade quality. A gap in risk identification and management may thus lead you to slippages in schedule, overages in budget, missing deliverables, or poor quality. Your investigation may also run in the reverse order: you may have identified the problem during your investigation of the other sources of information, which leads you to examine the risk register for an explanation. Either way, you will be one step closer to the root cause of the problem you’re investigating.

Some projects will track the decisions made with a decision log or with their RAID log. The value in tracking decisions is to ensure that they are made in a timely fashion. Many decisions are time-sensitive: they must be made in time to have any value. They’re sort of like an airplane’s point of no return – if the decision to turn back isn’t made before that point, it becomes inconsequential because the choice of returning has been eliminated. Check the log to ensure that decisions are being made in a timely fashion. If the actual dates of decisions are significantly later than the forecast dates, or if the forecast dates keep slipping, you’ve identified a problem; its effects will be discovered in the other artifacts.

The project should have implemented a Gate Review, Phase Exit Review, Business Decision Point, or other means of deciding on the project’s readiness to advance from the current phase to the next one. Lack of such decision points is a major problem: it means the project lacks proper oversight and control from the sponsor, steering committee, or other stakeholders, and it could result in work being started that shouldn’t be, or work being stopped before it is completed. Examine the minutes of the decision points that have been conducted to identify problems. Seldom do projects advance from one phase to the next with all activities from the current phase complete and all resources required for the next phase in place, so don’t expect decisions to be cut and dried: pass or fail, go or no go. Most decisions to pass the gate will be made contingent on some non-critical work from the previous phase being completed or some resource for a future activity being made available. These loose ends should be recorded in an action log for the gate or in the project’s action register or RAID log. Check that any actions arising from the decision were completed in a timely fashion and that there are no outstanding items.

Gates that fail should trigger a follow-up gate; however, the follow-up may not be accompanied by a formal meeting like the one held to make the original decision. There is a fine line between failed gates and passed gates. Just as there are few projects where all deliverables have been produced in apple-pie order and all resources for the next phase are in place, there are very few projects that don’t have any deliverables completed or resources in place for the subsequent phase. The result is that gates that fail will usually be considered a pass upon some action being taken or some criteria being met. The investigation into the action register is also important when examining failed gates. Examine the actions identified for passing the gate: were they completed and closed? Were they completed on or before the forecast date?

There is one last source of documentation to check for problems: the meeting “minutes”. Some projects and project managers will record discussions at meetings and capture the records in meeting minutes. The minutes may give a blow-by-blow account of what was said at the meeting, a summary of the points the recorder thought were important to capture, or something in between. Comb through the minutes for information that would indicate problems with the project. Some areas to check are:

  • Disagreements between team members over technical issues, estimates, and so on.
  • Warnings of the consequences of decisions taken. These are risks in disguise.
  • Complaints about roadblocks. These could be in the form of late delivery of equipment or software applications, or lack of access to systems. They could also refer to decisions that are not forthcoming, outside interference that prevents the complainant from completing their work on time, or interference from another team member.

Any of the above that fails to produce an entry in the issue log, action register, or RAID log is a potential project problem. The fact that the complaints have stopped doesn’t necessarily indicate the problem has been addressed; in fact, unless there is evidence in a log or register that it has, you can assume it hasn’t and that the consequences are being felt by the project.