Bias: The Hidden Danger to Your Risk Assessment

Quality risk management (QRM) is a systematic process for the assessment, control, communication and review of risks to the quality of a product or process over the entire product lifecycle. An effective QRM program can help ensure the high quality of a given process or product by providing a proactive means to identify and control potential quality issues during development and production. QRM can also improve the quality of decision making (ICH Q9 2005).

At the core of QRM is the risk assessment. Risk is defined as “the combination of the probability of occurrence of harm and the severity of that harm”, and risk assessment as the identification of hazards and the analysis and evaluation of the risks associated with those hazards (ICH Q9 2005). Risk assessment is generally considered a useful tool for continuous improvement. Risk assessment documents should therefore be periodically reviewed to incorporate events (planned or unplanned) that might impact the original quality risk management decisions; this review can often result in reconsideration of previously accepted risk-based decisions. Risk assessment generally consists of the following components:

Identification: assessing what might possibly go wrong with a product or process;

Analysis: assessing each identified risk against three major parameters: likelihood of occurrence, ease of detection, and potential severity of impact;

Evaluation: either a qualitative or quantitative determination of the degree or level of the identified and analyzed risk (ICH Q9 2005); an illustrative scoring sketch follows this list.
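To make the analysis and evaluation components concrete, the sketch below scores a set of identified risks in an FMEA-like fashion: each risk is rated for severity, likelihood of occurrence and ease of detection, and the product of the ratings drives a qualitative evaluation. This is a minimal illustration only; the 1-5 scales, the example hazards and the evaluation thresholds are assumptions chosen for demonstration, not values prescribed by ICH Q9.

    # Illustrative FMEA-style scoring of identified risks.
    # The 1-5 scales and the evaluation thresholds are assumptions
    # for demonstration; they are not prescribed by ICH Q9.
    from dataclasses import dataclass

    @dataclass
    class Risk:
        hazard: str
        severity: int    # 1 (negligible) .. 5 (critical)
        occurrence: int  # 1 (rare) .. 5 (frequent)
        detection: int   # 1 (easily detected) .. 5 (hard to detect)

        def score(self) -> int:
            # Product of the three ratings (a "risk priority number").
            return self.severity * self.occurrence * self.detection

    def evaluate(risk: Risk) -> str:
        # Map the quantitative score onto a qualitative level.
        s = risk.score()
        return "high" if s >= 60 else "moderate" if s >= 20 else "low"

    # Hypothetical hazards for illustration only.
    risks = [
        Risk("filter integrity failure", severity=5, occurrence=2, detection=3),
        Risk("label misprint", severity=2, occurrence=3, detection=1),
    ]
    for r in risks:
        print(f"{r.hazard}: score={r.score()} -> {evaluate(r)}")

Separating the scoring from the evaluation, as above, allows the rating scale to be changed without touching the evaluation logic, which is one way to keep the method itself from skewing the results.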

The Impact of Bias

There are many factors that can undermine the quality and efficacy of even the most carefully designed and executed risk assessment, but perhaps the greatest threat is the presence of bias. Bias, for the purpose of this discussion, is defined as a preconceived preference or inclination that has the potential to affect the impartiality of evaluations or decisions. As noted previously, risk assessment is considered an integral part of a program for continuous improvement; it is not intended to be used to justify existing or inferior practices. The danger of bias lies in the fact that it can take many forms and is not always readily apparent. Bias can result in risks being overlooked, inappropriately assessed or prematurely dismissed. Areas where bias commonly occurs include, but are not limited to, the following.

Assembling the Risk Assessment Team

To ensure the best outcome, a strong cross-functional risk assessment team is required. It is tempting to include only the “experts” on the team, that is, the individuals who thoroughly understand the intricacies of the process and the systems supporting it. There is also the temptation to appoint individuals with a similar mindset to the team, or to omit a representative from a particular functional area in order to reach alignment more rapidly. All are forms of bias.

Experience can work both for and against a team. The longer an individual has been with a single firm or working with a particular process, the greater the potential for bias. Individuals may be more resistant to changing a process they have worked with successfully for years, or perhaps developed themselves. Having less experienced members on the team can help to mitigate this issue. These are the individuals who will raise questions during the assessment such as “Why are we doing this?” or “Is this likely to be a problem?” Perhaps they have seen a similar process executed, or a similar problem solved more effectively, at another firm. Less experienced team members can, in this manner, raise issues that more experienced members may have ignored, overlooked or simply forgotten.

It is also beneficial to appoint an individual who is not directly impacted by the outcome of the assessment to act as a facilitator. This individual can help the team to avoid a number of other team-related biases, such as the steering of the team in a pre-determined direction, the premature dismissal of ideas or suggestions, or domination of the team discussions by individuals with more dynamic personalities or more senior positions or titles. The facilitator can ensure that all members of the team have an equal opportunity to be heard and that all suggestions are considered and vetted, resulting in a more productive team exercise.

Selecting and Utilizing the Correct Tools to Execute the Risk Assessment

The selection of the assessment tool, and of the method by which identified risks will be analyzed and evaluated, is critical. There are a number of excellent tools for performing risk assessments, such as Failure Mode and Effect Analysis (FMEA) (Stamatis 2003) and Hazard Analysis and Critical Control Points (HACCP) (WHO 2003), among others. However, the tool must be appropriate for the type of process under consideration, and the level of formality of the risk assessment must be appropriate for the complexity of the process being assessed.

Some tools utilize a qualitative or quantitative rating method to analyze and evaluate identified risks. If such a method is used, it should employ a scale that can realistically demonstrate differences. A rating scale of 1-10 may be practical if there is a discernible difference between a rating of “5” and a rating of “6”; if not, a scale of 1-5, or even “low”, “moderate” and “high”, may provide more meaningful results (Brindle et al. 2012). Selecting a tool or evaluation method that is inappropriate, or that skews the results of the assessment in a particular direction, is a form of bias.
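As a sketch of this point, the snippet below collapses a 1-10 rating onto a coarser three-level scale; the band boundaries are illustrative assumptions, not values taken from Brindle et al. (2012). If assessors cannot reliably distinguish a “5” from a “6”, the coarser scale loses no real information and avoids a false sense of precision.

    # Illustrative collapse of a fine-grained 1-10 rating into a
    # three-level qualitative scale. The band boundaries (3 and 7)
    # are assumptions chosen for demonstration only.
    def to_qualitative(rating: int) -> str:
        if not 1 <= rating <= 10:
            raise ValueError(f"rating {rating} is outside the 1-10 scale")
        if rating <= 3:
            return "low"
        if rating <= 7:
            return "moderate"
        return "high"

    # Ratings of 5 and 6 land in the same band: if assessors cannot
    # reliably tell them apart, nothing meaningful is lost.
    assert to_qualitative(5) == to_qualitative(6) == "moderate"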

Failure to Evaluate the Impact of Supporting Systems in the Risk Assessment

Many risk assessments focus primarily on direct risks to the product or process under evaluation, but it is equally important to consider the quality and integrity of the systems supporting that product or process. Failure to do so is another form of bias. Supporting systems can have an enormous indirect impact on the efficacy of the risk assessment and subsequent quality decisions. The following are examples where a failure to effectively evaluate these systems can reduce or nullify the effectiveness of the most carefully executed risk assessment. Each illustrates a different form of bias and the possible impacts.

Overconfidence in Any Processing or Support System

Overconfidence is a bias in that it can result in potential risk factors being dismissed without the impacts being fully investigated. For example, consider a sterilization process that has been validated with an overkill cycle. Such a validation, in general, provides a high degree of confidence that the sterilization process is robust and effective. However, the failure of an individual mechanical part can render even the most robust study utterly useless. The risk increases if individual parts are not uniquely identified, or dedicated to a single unit. In this case, how quickly and easily could a failing part be identified? Are there sporadic contamination events for which no root cause can be conclusively identified?

Consider as well the methods by which the validation is executed. How often are re-validations performed? Is the firm modifying and upgrading the protocol for each re-validation based on lessons learned and current industry trends, or is the same protocol executed for years? If so, is the protocol still robust enough to ensure quality as defined by today’s industry standards? If not, the validated cycle may not be nearly as robust as it is assumed to be. This type of bias is unfortunately very common.

Robustness and Accuracy of Procedures and Testing Methods

Assuming that a supporting system is entirely sound (as opposed to a validation study, as discussed above) is another form of bias. For example, how clear and robust are the procedures and testing methods that operators utilize? Are the procedures detailed enough to prevent mistakes to the degree possible? Are they kept current with industry and regulatory expectations? If not, how can accurate execution of those procedures be assured? How easily would an error in execution be detected? If errors cannot be easily detected, or the procedures are flawed, there is a risk that data accuracy and integrity will be impacted. This can then affect the accurate evaluation of risk, or even conceal risk.

Appropriateness and Robustness of the Training Program

The following examples illustrate how the quality of a firm’s training program can have an enormous impact on the quality of any risk assessment. Who performs the training, and is the content exactly the same for each trainee? If operators train each other, the risk increases that a bad practice could be inadvertently transferred to a trainee. How is consistency of execution assured? Is proficiency testing in place for critical procedures, or do operators simply read the procedures and document that they understand the contents? Complete comprehension of critical procedures, and the ability to execute them accurately and consistently, must be assured. When performance errors occur due to insufficient, improper or inconsistent training, they can be difficult to detect, as the operator will likely assume that the tasks are being performed correctly. This can result in inconsistent or inaccurate data on which risk evaluations and quality decisions may be based.

Consider also that we work in an increasingly global environment where language barriers exist. How does one train an employee or contractor who is not fluent in the language in which the procedures are written? Some firms have addressed this issue by making procedures available in more than one language. If this is done, it is critical to ensure that the translation is correct and conveys the proper information. Whenever possible, it is wise to have someone who is fluent in both languages review and compare the instructions for consistency. This ensures that procedures are not executed incorrectly due to errors in translation. Again, operators executing the procedure are less likely to recognize and report this type of error, since they will likely assume that the procedural document is correct. Both of the above scenarios increase the likelihood of concealed or underestimated risk, and therefore of unintended bias in the risk assessment.

Robustness of Investigational Root Cause Analyses

Another potential pitfall is the quality of tools utilized in performing root cause analyses during failure investigations. Do the supporting systems provide enough data of the appropriate type to aid in effective root cause analysis? Are there controls in place to ensure that data integrity is sufficiently robust? If not, root cause(s) identified through the investigation process may be incorrect.

Are the resident subject matter experts (SMEs) part of the investigational team, or do they only review the completed report? Some firms employ designated teams to perform the investigations. The SMEs should be part of these teams, or at least actively involved in the execution of the investigation. If the SMEs only review the final report, the risk increases that a significant piece of information will be overlooked or dismissed. Again, a root cause may be incorrectly identified, or missed altogether. Both of the above scenarios can result in unintended bias, as the identification of a risk, or the perception of its severity or detectability, may be based on inaccurate information and therefore flawed.

Conclusions

These are just some examples of how bias can have an indirect yet significant impact on even the most carefully executed risk assessment. It is the flawed perceptions created by unintended bias that result in risks being underestimated or hidden during the execution of the assessment. This may in turn result in quality decisions that are based on inaccurate information. Giving due consideration to these factors will reduce the likelihood that unintended or unseen bias will occur, which will improve the quality of both the risk assessment itself and the decisions resulting from it.

References

  1. Brindle, A., Davy, S., Tiffany, D., & Watts, C. (2012). Risk Analysis and Mitigation Matrix (RAMM): A risk tool for quality management. Pharmaceutical Engineering, 32(1), 26-35.
  2. ICH Q9. (2005). Quality Risk Management. ICH Expert Working Group.
  3. IEC 60812 Technical Committee. (2006). IEC 60812, Analysis Techniques for System Reliability: Procedure for Failure Mode and Effects Analysis (FMEA).
  4. Stamatis, D. H. (2003). Failure Mode and Effect Analysis: FMEA from Theory to Execution. ASQ Quality Press.
  5. World Health Organization. (2003). Application of Hazard Analysis and Critical Control Point (HACCP) Methodology to Pharmaceuticals (Annex 7). WHO Expert Committee on Specifications for Pharmaceutical Preparations.