Risk Analysis

The purpose of identifying potential risks to the project is to analyze them and determine which are the highest-priority threats to mitigate. Analyzing the risks allows you to rank them against one another and determine which must be avoided or minimized and which can be accepted. Accepting some risks follows from the assumption that your project has a limited budget with which to address risks. That budget can be in monetary terms or time; since the vast majority of projects have a finite budget for addressing risks, this article assumes a limited risk management budget.

There are several terms used throughout this article that you should understand before reading further. These terms are:

  • Risk Tolerance: The degree to which an organization (or sponsor) is willing to accept a risk to get a reward.
  • Risk Averse: An organization (or sponsor) which is unwilling to accept risks even when the rewards are great.
  • Risk Neutral: A person, or organization, which is willing to accept some risk when it is balanced by reward, but which is not willing to accept risk indiscriminately.
  • Risk Seeker: A person, or organization, which is willing to accept risks that the Risk Neutral person would decline. Keep in mind that a risk seeker is not willing to accept any risk, however great, to achieve any reward, however small; they are simply willing to accept risks the Risk Neutral person would not.
  • Probability: The likelihood of the risk event happening.
  • Impact: The negative, or positive, effect of the risk event on the goals and objectives of the project, including scope, schedule, budget, and quality.
  • Proximity: The closeness, in terms of time, of the situation likely to spawn the risk event or expose the project to the risk. For example, the proximity of a risk involving a key User Acceptance Tester would be relatively low during the planning phase of the project and high the week before the start of User Acceptance Testing.
  • Cardinal: Refers to a number, in this case a number used to express the score for probability, impact, or proximity.
  • Ordinal: A non-numeric means of ranking like items (probability, impact, proximity), for example high, medium, or low.
  • PI Score: The Probability score multiplied by the Impact score (cardinal), or the Probability score combined with the Impact score (ordinal).
  • Risk Threshold: The PI score expressing the performing organization's risk tolerance. Risks that score above this threshold must be mitigated; those that score below it may be accepted (see the example following this list).
  • Mitigation: An action or strategy taken to avoid a risk, diminish its likelihood of happening, or diminish its impact should it happen. Strategies can include avoidance, transfer (e.g. insurance), contingency, and reduction.
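To make the PI Score and Risk Threshold concrete, suppose you use a cardinal scale of 1 to 10 for both factors and your organization sets its risk threshold at a PI score of 25; the scale and the threshold here are illustrative assumptions, not prescriptions. With the Probability score in cell B2 and the Impact score in cell C2 of an MS Excel risk register (also an assumed layout), the calculation might look like this:

    PI score:        =B2*C2
    Threshold test:  =IF(B2*C2>25,"Mitigate","Accept")

Any risk scoring above the threshold is flagged for mitigation; those at or below it are candidates for acceptance.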

Scoring the Risk

Choose a system, either cardinal or ordinal, to rank the risks against one another. The scores for both Impact and Probability must use the same system. If you choose a cardinal system, you can use numbers between 1 and 10 to score the risk, with 1 being the lowest probability or impact and 10 the highest. If you choose an ordinal system, restrict yourself to a 3-level scale (high, medium, and low); anything more complex will only cause confusion when you combine probability and impact scores.
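If you keep the register in a spreadsheet, you can also enforce whichever scale you choose so that stray scores don't creep in. As a sketch, assuming a cardinal score is entered in cell B2, a custom data validation rule like the one below rejects anything outside the 1 to 10 range; for the ordinal system, a drop-down list restricted to High, Medium, and Low does the same job:

    Validation rule:  =AND(B2>=1,B2<=10)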

Keep in mind that the purpose of scoring risks is to determine those that can be accepted without spending your risk management budget, and those that must be mitigated. Imagine what the project would look like if the risk event were to happen. Using our previous example, we are counting on a key contributor from the user community to lead User Acceptance Testing. What would our project look like if that person were not available when we needed them? You could hardly describe it as an unmitigated disaster, because in a worst-case scenario you could go live without UAT.

Let's use another scenario to illustrate the "what if" approach to scoring. The risk event we'll consider is the unavailability of our source library tool: we have decided to purchase and implement a new source library tool because we've exceeded the capacity of our old one, which can no longer be used to store our code. The results would be catastrophic if the new tool were unavailable. Without a tool to store and manage our code, it would be virtually impossible to continue with the project, and the resulting confusion if we decided to go ahead anyway would make the project several times more expensive.

Now let's choose a risk at the extreme opposite end of the spectrum. Let's say the risk event is that one of several programmers comes down with the flu and is unavailable to the project for 3 days. The most serious impact this risk event could have on the project, if it were to occur, would be a 3-day delay in the delivery of the project, and then only if that programmer's task was on the critical path.

Use these three cases as anchors: 5 for the unavailable UAT tester, 10 for the unavailable source library tool, and 1 for the programmer with the flu. This allows you to score risks whose impact falls somewhere between the 5 and the 10 with a score of 6 to 9, and those between the 1 and the 5 with a score of 2 to 4. The ordinal method simplifies scoring further: there are only 3 possible scores (low, medium, and high). You may find yourself re-adjusting existing scores as you go through this exercise. This is normal, as examining new risks will lead to further insight into ones you have already scored. Don't get carried away with exactness in scoring; it is possible to have more than one highest, lowest, and medium score.

Scoring the risk event probability is an exercise very similar to scoring the impact, except you won't be using "what if" scenarios to help you. Assessing the probability of the risk event happening requires knowledge of the way your organization operates – you are not interested in how likely the event is to occur in your competitor's project. This is where you can use your Subject Matter Experts. What's going on in that User Acceptance Tester's organization next May that might prevent them from joining your project? Do they have a back-up? These factors will weigh into your assessment of how likely that risk event is to happen.

The scoring exercise is simplified if you do the scoring yourself. The downside of this approach is that a lack of information may lead to inaccurate scores. Doing the exercise in a workshop (see my article on Risk Identification) allows you to apply all the Subject Matter Expert knowledge on your team to the scoring. The workshop environment will lead to disagreements over the score for a particular risk. You should only allow SMEs in the area of the risk event to participate in scoring it, and then use one of the following tools to break a deadlock:

  1. Use an average score if you use the cardinal scoring method: total the SMEs' scores, then divide by the number of SMEs (see the sketch after this list).
  2. Use majority rule if you use the ordinal method: the score is high if, for example, you have 2 highs and 1 medium. In case of a tie, the facilitator, moderator, or you should make the decision.
  3. Give the SMEs sticky notes, one per risk event, and provide a whiteboard divided into High, Medium, and Low sections. Have each SME post the sticky note in a section; when the SMEs are satisfied with the final disposition of a sticky note, that is your risk score.
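The first two tie-breakers are easy to automate in the risk register itself. Here is a minimal MS Excel sketch, assuming the three SME scores for a risk sit in cells D2 through F2 (the layout and the number of SMEs are assumptions for illustration):

    Cardinal (average):  =ROUND(AVERAGE(D2:F2),0)
    Ordinal (majority):  =IF(COUNTIF(D2:F2,"High")>=2,"High",IF(COUNTIF(D2:F2,"Medium")>=2,"Medium",IF(COUNTIF(D2:F2,"Low")>=2,"Low","Tie")))

A result of "Tie" means no two SMEs agreed, which is exactly the deadlock the facilitator, moderator, or you must resolve.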

Your risk register should be present at any scoring exercise so that scores can be entered directly into the spreadsheet rather than posted to some workshop artifact that might be lost or misplaced. If you use MS Excel to capture your risk register, you can use a formula to capture the PI scores: either multiply the Probability score by the Impact score for the cardinal scoring system, or append one to the other (e.g. HighHigh, MediumHigh, HighMedium, etc.) for the ordinal system.
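For instance, with the Probability score in cell B2 and the Impact score in cell C2 (the same assumed layout as earlier), the two variants of the formula for that row would be:

    Cardinal PI score:  =B2*C2
    Ordinal PI score:   =B2&C2

The & operator appends one cell to the other, producing combined scores such as HighHigh or MediumHigh; copying either formula down the register keeps every PI score current as individual scores are updated.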

Updating Scores

Scores should be kept up to date so that you and your stakeholders always have a current picture of which risks are most threatening. Your team meetings are an ideal opportunity to update them. On large projects you should break the lists down by team (e.g. analysis risks, development risks, QA test risks, etc.) so that the list to review does not become too large. These meetings are particularly useful for identifying new risks to the project; when a new risk is captured, you should also capture its probability and impact scores.

The probability and impact of risk events will change over time. For example, the impact of a resource falling ill with the flu for 3 days will be greater while they are working on a critical path task than while they are working on a task with 5 days of slack. The probability of a risk event will change over time as well. The probability of a hurricane having a negative impact on your construction project in Florida will be much lower in late November, as the season winds down, than at its peak in early September (the Atlantic hurricane season officially runs from June through November). Risks should be reviewed periodically to determine whether any changes in the project would alter their probability or impact scores. You can do this at team meetings, on your own, or at risk review meetings with SMEs. Update scores in the risk register as they change, and communicate changes in the top risks being monitored for the project to the stakeholders.

The mitigation strategies used to manage risks should be evaluated when monitoring project risks. You can do this by reviewing each strategy to determine whether it is still effective, or by weighing its effectiveness into your calculations of the probability and impact scores. For example, if you chose to mitigate the possibility of a hurricane delaying your construction project by purchasing insurance, how well does the payout protect you? Is the amount still appropriate given changes in your project? The answers to these questions may cause you to move your impact score from a 2 to a 5. Probability scores for mitigated risks may also change with the status of the strategy. Let's say you mitigated the risk of your programmers going down with the flu by having the team inoculated, which downgraded the probability from medium to low. If a new strain of flu impervious to that vaccine starts making the rounds, you may change the risk probability back to medium, or even to high.

Whether you rank the risk mitigation strategies separately or weigh their effectiveness into the probability and impact scores, you should review the strategies periodically to ensure that the protection you sought when you first deployed them is still available. You can do this at team meetings, at SME meetings, or on your own. If you choose to do it on your own, make sure you either have the SME knowledge yourself or access it before making decisions.

Proximity

Proximity is the 3rd element of risk scoring; it helps you determine whether a change is required in the strategy for mitigating a risk. Proximity is not normally a concern for risks whose scores fall below the project's risk tolerance threshold, unless the PI score changes to bring them above it.

Proximity simply means how close, in terms of days, weeks, or months, the project is to the events, tasks, or deliverables exposed to the risk under examination. The closer the event, the higher the proximity score and the more attention the risk deserves. For example, take the case of a construction project in Winnipeg, Manitoba, Canada. The risk is that outside work on the cladding and roofing of the building will have to stop during a snow storm. You have mitigated the risk by starting the outside work in July and allowing 3 months for completion. Winter storms in Winnipeg are non-existent in August, rare in September, and not expected until mid-October. The closer you get to October, the greater the proximity of the risk. On the other hand, if your outside work is 95% complete by the end of August, the impact of the risk event becomes much smaller.

Proximity is an indicator that should be factored into every risk, allowing its PI score to adjust to changes in the project environment.
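One simple way to do this, offered as an illustration rather than a standard, is to derive an ordinal proximity score from the start date of the exposed task and let it drive how often the risk is reviewed. Assuming that date sits in cell G2 of the register, a formula like the following classifies proximity as High inside 2 weeks, Medium inside 2 months, and Low beyond that (the cut-offs are assumptions to tune to your own project's rhythm):

    Proximity:  =IF(G2-TODAY()<=14,"High",IF(G2-TODAY()<=60,"Medium","Low"))

A High proximity score on a risk at or above your threshold is the cue to re-examine the mitigation strategy before the exposure window opens.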

This is part of a series of articles on the topic of Risk Management for software projects. Other articles in the series can be viewed on this web site: Risk Identification, Risk Strategies, and Risk Maintenance.

The tips and tricks described in this article implement some of the best practices promoted by the PMI (Project Management Institute). These are taught in most PMP® courses and other PMP® exam preparation training products. If you haven't been certified as a PMP® (Project Management Professional) by the PMI and would like to learn more about certification, visit the three O Project Solutions website at: http://threeo.ca/pmpcertifications29.php. three O Project Solutions also offers a downloadable, software-based training tool that has prepared project managers around the world to pass their certification exams. For more information about this product, AceIt, visit the three O website at: http://threeo.ca/aceit-features-c1288.php.