Considerations for Risk-Based Sampling During Development, Stability, and Release

Scott Surrette - Manager, Combination Product Development, Regeneron Pharmaceuticals, Inc.

Introduction

When pharmaceutical companies begin using medical devices such as pre-filled syringes, safety systems, or auto-injectors, a common challenge is integrating new medical device procedures, required by 21 CFR Part 4, into existing processes.1 One of these areas is release and stability testing. Many companies default to testing 30 samples for functional tests such as dose accuracy, break loose force, and gliding force. Acceptance criteria often require that all samples be within specification in order to release a lot for clinical or commercial use. Inevitably, at some point a single unit tests out of specification for one of these functional tests. The lot release is put on hold, an investigation is initiated, and the combination product subject matter experts may be asked whether this single out-of-specification sample should result in the entire lot being rejected. This article will discuss how to think about that question and how to build a process that addresses the answer by directly tying testing to risk.

Background

Risk is a combination of the probability of occurrence of harm and the severity of that harm.2 Sampling is the process of taking a representative portion of a lot or batch for statistical analysis. So, risk-based sampling is the process of using a combination of the probability of occurrence of harm and the severity of that harm to determine a representative quantity or appropriate statistical analysis. Determining how to make this connection can be challenging.

Incorporating risk into an established sampling process can require input from several functional areas within a company and significant time to determine how to best integrate and prepare for the new approach. With limited time and resources, the effort needs to add significant value. The first benefit worth mentioning is compliance. 21 CFR 820.250 requires that manufacturers use valid statistical techniques for sampling plans and that sampling methods be adequate for their intended use.3 Beyond compliance, risk-based sampling gives a structured approach for determining how to best allocate resources, particularly lab resources and storage space. It also gives confidence throughout the various stages of a project that the product has been evaluated with an appropriate level of scrutiny.

Standards

A straightforward approach to sampling is to utilize recognized standards that establish sampling plans.4-7 These standards adjust the sampling plan and acceptance criteria based upon the Acceptable Quality Limit (AQL), level of inspection, and lot size.4-7 While a manufacturer can tie risk levels to AQL and adjust the level of inspection as confidence in the process is established, it is important to understand that AQL is only one aspect of a sampling plan. At a given AQL and level of inspection, the adjustments the standards make for lot size change the level of protection that the sampling plan provides.4-7

The level of protection a sampling plan provides is shown by its Operating Characteristic (OC) curve, as seen in Figure 1. Four key factors are used to summarize this level of protection: α, β, AQL, and RQL. Alpha (α) represents the probability of incorrectly rejecting a lot, or the producer's risk. Beta (β) represents the probability of incorrectly accepting a lot, or the consumer's risk. The Acceptable Quality Limit (AQL) is the highest nonconforming rate that is acceptable over a long-term series of lots and is where (1-α) on the y-axis meets the OC curve, as shown in Figure 1. The Rejectable Quality Limit (RQL) is the lowest nonconforming rate that is unacceptable over a long-term series of lots and is where β on the y-axis meets the OC curve, as shown in Figure 1. RQL may also be referred to as the Lot Tolerance Percent Defective (LTPD) in the literature.

Figure 1. Operating Characteristic Curve for an attribute sampling plan

The example in Figure 1 shows that there is a 5% (α) chance of rejecting a lot with 0.09% (AQL) defects and a 5% (β) chance of accepting a lot with 5% (RQL) defects. Whether this level of protection would be acceptable depends on the risk associated with the test being performed. Figure 2 shows an example from ISO 2859-1 and how the sampling plans proposed by the standards adjust the level of protection based upon lot size. In this example, α is adjusted to maintain an AQL of 1% and, dependent on the lot size, the RQL varies from 2.4% to 9.1% when using a reference β of 5%. This represents drastically different levels of protection within a single AQL and inspection level.
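
As a concrete illustration of how these OC curve values are calculated, the minimal Python/SciPy sketch below computes the probability of acceptance for an attribute plan from the binomial distribution. The plan parameters (59 samples, accept on zero nonconforming units) are an assumption chosen to reproduce the Figure 1 values; the 59-sample size is the one cited later in the Attribute Versus Variable section.

```python
# Minimal sketch: OC curve for an attribute plan, assuming n = 59 samples
# and acceptance number c = 0 (an assumption chosen to match the Figure 1
# values quoted above: AQL ~0.09%, RQL ~5%, alpha = beta = 5%).
from scipy.stats import binom

n, c = 59, 0  # assumed sample size and acceptance number

def p_accept(p_defective: float) -> float:
    """Probability of accepting a lot with the given true nonconforming rate."""
    return binom.cdf(c, n, p_defective)

aql, rql = 0.0009, 0.05
print(f"P(reject) at AQL {aql:.2%}: {1 - p_accept(aql):.3f}")  # ~0.05 (alpha)
print(f"P(accept) at RQL {rql:.2%}: {p_accept(rql):.3f}")      # ~0.05 (beta)

# A few additional points along the OC curve
for p in (0.001, 0.01, 0.02, 0.05):
    print(f"true nonconforming rate {p:.1%}: P(accept) = {p_accept(p):.3f}")
```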

Figure 2. Lot size effect on level of protection from ISO/ANSI standards.

For pharmaceutical companies, lot sizes can be in the hundreds of thousands, and typical sample sizes are less than 1% of the lot. When lot sizes are that much larger than what is sampled, the OC curve remains consistent regardless of lot size.8 Additionally, risk itself is lot-size independent: it is made up of severity, the harm from an individual device, and the probability of harm, which as a percentage holds regardless of lot size. This means that a truly risk-based approach for pharmaceutical companies is lot-size independent.
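
A brief sketch of that point, using the same assumed plan as above: the exact acceptance probability (hypergeometric, which depends on lot size) converges to the lot-size-independent binomial value once the lot is far larger than the sample. The defect rate and lot sizes below are illustrative assumptions.

```python
# Illustrative comparison: exact (hypergeometric) acceptance probability,
# which depends on lot size, versus the lot-size-independent binomial value.
# Plan parameters and the 1% defect rate are assumptions for illustration.
from scipy.stats import binom, hypergeom

n, c = 59, 0                 # assumed sample size and acceptance number
true_defect_rate = 0.01      # assumed true lot nonconforming rate

print(f"Binomial P(accept): {binom.cdf(c, n, true_defect_rate):.4f}")
for lot_size in (1_000, 10_000, 100_000, 500_000):
    defectives_in_lot = int(lot_size * true_defect_rate)
    # hypergeom parameters: M = lot size, n = defectives in lot, N = sample size
    p_acc = hypergeom.cdf(c, lot_size, defectives_in_lot, n)
    print(f"Lot size {lot_size:>7,}: hypergeometric P(accept) = {p_acc:.4f}")
```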

Attribute Versus Variable

There are two types of sampling plans: attribute and variable. An attribute plan uses qualitative, pass/fail criteria. A variable plan is quantitative and evaluates the distribution of samples against numeric specification limits. Attribute criteria are beneficial in that they are simple and easy to incorporate into validated data systems. A variable plan is beneficial in that it requires fewer samples to obtain the same level of protection, as shown when comparing Figures 1 and 3. The attribute version in Figure 1 requires 59 samples to achieve the same α, β, AQL, and RQL combination as the 20 samples for the variable analysis shown in Figure 3. A variable plan also provides more information about a lot, such as its distribution relative to the specification limits. Due to the sample size advantage, a variable sampling plan is typically preferred, when possible, over an attribute sampling plan.
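
To see where a sample size near 20 comes from, the sketch below applies the common normal-approximation formulas for a single-specification-limit, unknown-sigma variables plan, under which a lot is accepted if (USL - x̄)/s ≥ k. The formulas and the acceptance rule are textbook approximations, not a reproduction of the calculation behind Figure 3; the AQL and RQL values are taken from the Figure 1 example.

```python
# Sketch: derive an unknown-sigma variables plan (sample size n and
# acceptability constant k) from (AQL, RQL, alpha, beta) using the common
# normal-approximation formulas. Acceptance rule: (USL - xbar) / s >= k.
import math
from scipy.stats import norm

alpha, beta = 0.05, 0.05
aql, rql = 0.0009, 0.05      # fractions nonconforming from the Figure 1 example

z_aql, z_rql = norm.ppf(1 - aql), norm.ppf(1 - rql)
z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - beta)

k = (z_aql * z_b + z_rql * z_a) / (z_a + z_b)                # acceptability constant
n = ((z_a + z_b) / (z_aql - z_rql)) ** 2 * (1 + k ** 2 / 2)  # required sample size

print(f"k ~ {k:.2f}, n ~ {math.ceil(n)} samples")
```

Under these assumptions the approximation returns roughly k = 2.4 and 20 samples, which lines up with the sample size cited for the variable plan in Figure 3.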

Figure 3. Operating Characteristic Curve for a variable sampling plan

Application

Sampling plans help a project team make decisions, and the nature of that decision changes depending on the activity and the stage of the project. Formal development testing, such as Design Verification, helps the team determine whether the device meets the engineering specifications or Design Input Requirements. Stability testing helps the team determine how long the device can maintain its end-of-shelf-life criteria. Process Validation helps the team determine whether the process can reliably produce product within specification. The purpose of release testing can shift after process validation. Pre-process validation release testing helps determine whether a lot meets specification. Post-process validation, release testing can shift to determining whether the process is still operating within its validated parameters. Taken together with control charts, this allows a reduction in the amount of testing required for post-process validation release testing.

There are many acceptable ways to approach risk-based sampling, and there is no one way to adjust α, β, AQL, and RQL for each of the situations mentioned above. The proposal given in this article is a relatively simple and straightforward way to build risk-based sampling into an existing process while meeting regulations and consistently controlling the level of protection provided.

One approach is to hold α and β constant at 5% for all testing situations. This keeps both the producer's risk and the consumer's risk at a consistent 5%, providing 95% confidence and power. For formal development testing, stability, and pre-process validation release, relate the risk level directly to a maximum RQL value. The focus on RQL, as opposed to the AQL focus provided in the standards, ties the risk to the patient (who is the consumer) directly to the consumer's risk summarized by β and RQL. This causes our risk assessments to be tied to what we expect to reject as opposed to what we expect to accept. With this method, α, β, and RQL are set either through procedure or the risk assessment. AQL remains flexible and adjusts based upon what sample size is selected. Sample size and, as a result, AQL are selected at the team's discretion.

The team decision on sample size and AQL is a balance between the resources required to store and test the additional samples and the reduction in producer's risk. As sample size increases, so does the AQL. This effectively means that as you test more samples with the RQL (consumer/patient risk) held constant, the chance of rejecting a “good” lot decreases. This is shown in Table 1 below for an attribute sampling plan, with 'N' being the sample size, 'a' the number of defects at or below which the lot is accepted, and 'r' the number of defects at or above which the lot is rejected. Factors the team may consider include testing costs, the risk associated with a lot rejection, the cost of the final product, etc.

Table 1. Sample size variation with RQL = 5%
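
The logic behind a table of this kind can be reproduced with a short script: for a fixed RQL and β, find the smallest sample size that supports each acceptance number, then report the AQL each plan delivers at (1 - α). The sketch below is an illustration of that calculation under the stated assumptions, not a reproduction of Table 1's exact entries.

```python
# Sketch: for RQL = 5% and beta = 5%, find the smallest attribute plan
# (sample size N, accept number a, reject number r = a + 1) for each accept
# number, then report the AQL that plan delivers at alpha = 5%.
from scipy.stats import binom
from scipy.optimize import brentq

rql, beta, alpha = 0.05, 0.05, 0.05

for a in range(4):
    # Smallest N such that a lot at the RQL is accepted no more than beta of the time
    N = a + 1
    while binom.cdf(a, N, rql) > beta:
        N += 1
    # AQL: the nonconforming rate at which P(accept) equals 1 - alpha
    aql = brentq(lambda p: binom.cdf(a, N, p) - (1 - alpha), 1e-9, rql)
    print(f"N = {N:>3}, a = {a}, r = {a + 1}, AQL ~ {aql:.2%}")
```

For the zero-acceptance row this reproduces the 59-sample, roughly 0.09% AQL combination discussed earlier; larger acceptance numbers trade a larger sample size for a more forgiving AQL.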

A successful process validation demonstrates that the process is in control and reliably produces in-specification product. This enables post-process validation release testing to adjust to a level where the goal is detection of a change in the process. It is important to note that, to do this effectively, control charts are a vital tool. There are, again, myriad ways to approach this reduction in testing. Some options are to increase the α and/or β values, increase the RQL percentage target dependent on the risk, or make the maximum RQL target the maximum AQL target. There is no one right approach, so the best way to decide as a company is to evaluate the OC curves and make an informed decision about the protection provided.
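
One way to make that evaluation concrete is to print a few OC curve points for each candidate plan side by side. The two plans below are illustrative placeholders, not recommendations; the reduced plan simply trades a smaller sample size for a higher RQL at the same β.

```python
# Illustrative comparison of a baseline release plan versus a reduced
# post-process-validation plan (both plans are placeholder assumptions):
# the reduced plan roughly halves the sample size and accepts a higher RQL.
from scipy.stats import binom

plans = {
    "baseline (N=59, a=0)": (59, 0),   # RQL ~5% at beta = 5%
    "reduced  (N=29, a=0)": (29, 0),   # RQL ~10% at beta = 5%
}
for label, (n, a) in plans.items():
    print(label)
    for p in (0.01, 0.05, 0.10):
        print(f"  true nonconforming rate {p:.0%}: P(accept) = {binom.cdf(a, n, p):.2f}")
```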

As these processes are implemented, questions may arise about how to handle various uncommon, but possible, result scenarios. The typical scenarios are that all results are in specification and the acceptance criteria are met, or that results are out of specification and the acceptance criteria are not met. Procedures likely already exist for these situations.

Less common, but possible for both attribute and variable sampling plans, is a single out-of-specification sample while the acceptance criteria of the risk-based sampling plan are still met. In this scenario, the lot can be released since the acceptance criteria are met. However, it is a company decision whether to document the out-of-specification result in a formal event system. The benefits of doing so are the ability to track any trend of out-of-specification results across lots and to capture any potential lab errors. However, barring a troublesome trend of out-of-specification results, such events will likely close as acceptable due to the risk-based sampling process.

Another, rarer possibility, which can only occur with a variable sampling plan, is that there are no individual out-of-specification results yet the acceptance criteria are not met. This can happen if the data have a very wide spread or if all the data sit close to a specification limit. In this situation the lot cannot be released and would be treated as any other failure to meet acceptance criteria.
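
A small numeric illustration of this scenario, using the (USL - x̄)/s ≥ k acceptance rule from the earlier variables-plan sketch: the data, specification limit, and k below are all made up, every individual value is within specification, and the lot still fails because the sample mean sits too close to the limit relative to its spread.

```python
# Illustration: every individual result is within specification, yet the
# variables acceptance criterion fails. Data, USL, and k are hypothetical.
import statistics

usl = 20.0            # hypothetical upper specification limit
k = 2.38              # acceptability constant from the earlier sketch (assumed)
data = [17.8, 18.9, 16.5, 19.2, 18.1, 17.4, 19.5, 18.8, 17.0, 19.0,
        18.3, 16.9, 19.4, 18.6, 17.7, 19.1, 18.0, 17.2, 18.7, 19.3]

xbar = statistics.mean(data)
s = statistics.stdev(data)
margin = (usl - xbar) / s

print(f"all individuals within spec: {all(x < usl for x in data)}")   # True
print(f"(USL - xbar)/s = {margin:.2f} vs k = {k} -> "
      f"{'accept' if margin >= k else 'reject'}")                     # reject
```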

Summary/Conclusions

For the question posed in the introduction, one solution would be to change the attribute analysis to a variable analysis and set the acceptance criteria such that a risk-appropriate RQL is achieved. However, the best approach is to preemptively have a process that determines the appropriate analysis and assigns an α, β, AQL, and RQL based upon risk to determine whether the lot can be accepted.

Utilizing risk-based sampling gives a structured approach to answering these questions and helps determine how a company should best allocate resources. While using standards from ISO and ANSI seems like a reasonable approach, there are downsides to the way the level of protection the sampling plan provides shifts as lot size changes. A truly patient-centered, risk-based approach begins with tying the risk level to the RQL and β combination for formal development testing, release, stability, and process validation. Once the data collected during process validation demonstrate that the process reliably produces product within specification, release testing requirements can be loosened.

There is no single correct way to approach sampling plans. What is important is fully understanding what a given sampling plan is designed to detect through an understanding of the OC curve. This article provides a starting point for a company working to implement such a process. This foundation can be built upon and adjusted once implemented to best accommodate a specific company’s needs.

References

  1. Food and Drug Administration Code of Federal Regulations Title 21 – Food and Drugs, Chapter I – Food and Drug Administration Department of Health and Human Services, Subchapter A – General, Part 4 - Regulation of Combination Products
  2. ISO 14971:2019 Medical devices – Application of risk management to medical devices
  3. Food and Drug Administration Code of Federal Regulations Title 21 – Food and Drugs – Chapter I – Food and Drug Administration Department of Health and Human Services, Subchapter H – Medical Devices, Part 820 – Quality System Regulation, Subpart O – Statistical Techniques, Sec. 820.250 Statistical techniques
  4. ISO 2859 Sampling procedures for inspection by attributes
  5. ISO 3951 Sampling procedures for inspection by variables
  6. ANSI/ASQ Z1.4-2003 (R2018): Sampling Procedures and Tables for Inspection by Attributes
  7. ANSI/ASQ Z1.9-2003 (R2018): Sampling Procedures and Tables for Inspection by Variables for Percent Nonconforming
  8. Taylor WA. Guide to acceptance sampling. Taylor Enterprises, Inc. 1992
