Biopharmaceutical Process Model Evolution: Enabling Process Knowledge Continuum from an Advanced Process Control Perspective

As the pharmaceutical industry moves into the cyber-physical era, sometimes referred to as Industry 4.0, it will be imperative to harmonize terminology and expectations when developing advanced manufacturing technologies in order to accelerate their successful adoption and maturity. One of the main pillars of Industry 4.0 is automation intelligence through advanced and cognitive controls.1 Intelligent process control strategies are also known as Advanced Process Control (APC); see the glossary of control terminology in Table 1. APC strategies enable process state/condition visibility and process self-optimization, which translates into better variation management and higher process capability.2,3 Most APC strategies employ process models at their core.4

Even though many industries are already realizing the benefits of APC, intelligent control concepts are not yet widely adopted in pharma, with very few published applications.5,6 Most pharmaceutical plants still operate in a largely manual mode, without the data infrastructure required for real-time process modeling (from a data synchronization and aggregation standpoint).7 This makes model development and maintenance for APC applications very challenging. Even when the data infrastructure is sufficient, questions regarding regulatory impact, global acceptance, intended use, risk versus benefit, cost and model maintenance frequency may create strong headwinds to adoption of the technology, and many hours of interesting debate. This debate is exacerbated by three constraints. The first is that the final intended use of most process models cannot be determined until their performance has been assessed. The second is finding resources with the right combination of domain knowledge, modeling and control competencies; this interdisciplinary combination is key when developing models for APC applications.8 The third is the general lack of deep process understanding to support model optimization and justification at the time of filing. For many processes, it is difficult to identify or develop robust models that describe the process dynamic states and outputs with the precision and accuracy of an analytical technique early in the commercial process lifecycle. This is particularly true when dealing with complex dynamic systems such as a bioreactor. This does not mean, however, that modeling activities have no place, or should be avoided, in the early stages of the commercial lifecycle of complex processes.
In fact, it should be the opposite: process modeling should be started early in development to begin building the necessary data and knowledge base. Complex processes are prone to a higher number of potential failure modes because they have more degrees of freedom.9 New commercial processes could also be more prone to failures, given that manufacturing plants do not yet have much experience with the overarching control strategy. Thus, for both new and complex processes, process models that can help detect changes from the validated state and pinpoint potential variation causalities early in the commercial lifecycle can add robustness to complex operations and strengthen the overall control strategy.10,11


For companies to start deriving value from modeling activities as early as possible in the product lifecycle, the connection between process knowledge and model capability/intended use needs to be understood and delineated. This will allow one to develop a model evolution approach that enables knowledge-based model scoping and intended-use maturity at both the technical and regulatory levels.

The purpose of this article is to propose an organizational model maturity philosophy and approach. This philosophy treats model building and intended use in an evolutionary manner, with the goal of deriving value from APC strategies at various stages of the product lifecycle. Depending on the level of available process knowledge and the characterized knowledge space, various levels of model trust and associated control strategies can be employed. This modeling maturity philosophy can also support regulatory post-approval plans in a methodical manner, following knowledge management continuum precepts, and aligns with the mission of the FDA’s guidance “Advancement of Emerging Technology Applications for Pharmaceutical Innovation and Modernization.”12

Glossary of Some Control Terms

Modeling as Part of Your Control Culture

In order to adopt the proposed modeling maturity approach, the organization should commit to incorporating process modeling elements and competencies during the process development stages. Incorporating and adopting model-based control strategies requires a different way of thinking, different skill sets and a different data infrastructure/management approach. Process modeling and model-based process control require moving away from decisions that have traditionally been made on limited and independent data toward decisions based on multivariable relationships (Figure 1).

Relevant process variable relationships tend to exist in higher-dimensional spaces (more than three dimensions), which is more than the human brain can process. This creates the need for mathematical and statistical mining strategies to derive values and scorecards that represent multivariable process states and output probabilities.

Modeling with the goal of APC also requires process developers, engineers and operators to start visualizing the manufacturing process as a system that can operate within multiple states leading to a range of outputs, and that, within process boundaries, can exhibit infinite continuous possibilities13 (Figure 2). Thus, particularly for complex processes such as a bioreactor, a reductionist approach that seeks one-to-one causal relationships is not realistically possible from an intelligent control point of view.9

Modeling as part of your culture schematic

When embarking on this journey, the organization needs to be aware that data modeling is not a single task but a discipline, and that in most cases models will keep maturing with the process, especially empirical models. Quick statistical analysis and data set description are relatively easy, but building models for process control and quality decisions is complex; it requires time, effort and a cross-disciplinary team that speaks (roughly) the same language. With regard to language, modeling experts should be mindful that process developers are accustomed to working with a limited number of data points, which they evaluate against predetermined ranges rather than against each other. In particular, process developers are not used to evaluating new processes with statistical process control tools such as control charts and process capability indices, due to insufficient representative data. Thus, the concepts of process variation and control strategy capability may mean different things to stakeholders in the organization depending on their backgrounds and job functions. This highlights the imperative need for early alignment on language and concepts among team members regarding data distributions, statistical trends and the relative nature of variation, which are key concepts for empirical model identification.

Processes, particularly complex processes, can have an infinite number of states within the boundaries of a control strategy, leading to infinite output values (within range).

From a modeling perspective, particularly when working with quantitative machine learning approaches, the main goal of a process model is to characterize plausible variable distributions and complex tendencies (i.e., tendencies that are multivariable, covariant, non-linear and emergent). This means that a single case of poor model performance due to atypical process conditions (an atypical process state) does not invalidate a model’s ability to describe a significant proportion of past and/or future populations. Thus, from an organizational expectation point of view, understanding that models may not describe all special causes well is important to avoid the natural urge to add complexity to a model to explain improbable states. Incorporating too much complexity in a quantitative model is cumbersome and expensive, and it can degrade performance when the model is used to describe processes with common cause variability.

Advanced Process Control

Advanced Process Control (APC), or supervisory control, has been defined as Fault Detection and Classification plus Model Based Process Control (MBPC).14 Some definitions of APC also include soft sensing.8 As mentioned above, most APC strategies are designed to provide real-time visibility into atypical process behavior, with the end goals of systematic process recovery and of ensuring that the process runs at its highest capability (via optimization) within process constraints when facing typical disturbances (Figure 3).

APC main strategies: fault detection and process optimization.

Fault detection, or abnormal event identification, is an important element of APC as it allows detecting when a process is performing in an unhealthy manner or outside of process experience. Fault detection can be achieved using qualitative and quantitative approaches.10,15,16 Process fault detection can be done through Process Condition Monitoring (PCM). PCM has been defined by Mendonca et al. “as a system that undertakes a watch dog protective function through real-time monitoring of relevant output/state variables.”17

Soft sensors, also known as virtual sensors, have been defined as fundamental or empirical models that can estimate unmeasured and/or unmeasurable process or quality variables from other available variables.8,18 The output of soft sensors can be used as part of PCM or to feed an open- or closed-loop control strategy.

Model Based Process Control (MBPC) comprises closed-loop control strategies that use dynamic models to forecast the evolution of the process state over a time period (horizon) as part of an optimization strategy. These control strategies use a vector of manipulated variables to optimize the process over a dynamic, receding horizon. In other words, MBPC strategies recalculate new optimal process set-points based on the process state and historical knowledge to improve the process outcome. These new set-points are fed into the controller to ensure the output of the process is the best possible within overall process and time constraints.4 MBPC has been described as a chess match between the controller and the process state, in which the controller adapts its set-points (within process boundaries) to cope with process variability and disturbances.19 These control strategies focus on the process output, and a model derived from the knowledge space is used to forecast the output at different time intervals. Adaptive control strategies fall under the umbrella of performance-based control (ICH Q12), in which the focus is not controlling the process to a set of static set-points but controlling the outcome of the process via set-point adaptation. Figure 4 shows the different elements of APC strategies as well as where they reside within an automation architecture.
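The receding-horizon mechanics described above can be illustrated with a deliberately small sketch. The first-order process model, tuning values and grid search below are hypothetical choices made only for illustration; real MBPC implementations use richer dynamic models and dedicated optimizers.

```python
import numpy as np

# Hypothetical first-order process: x[k+1] = a*x[k] + b*u[k].
# At each step the controller searches a grid of candidate manipulated-
# variable values, simulates the model over the horizon, applies only the
# first move, and repeats -- the "receding horizon" idea.
a, b = 0.9, 0.5
target = 1.0                               # desired process output
horizon = 5                                # forecast length (steps)
candidates = np.linspace(-1.0, 1.0, 41)    # allowed input values

def cost(x0, u):
    """Sum of squared tracking errors if input u is held over the horizon."""
    x, c = x0, 0.0
    for _ in range(horizon):
        x = a * x + b * u
        c += (x - target) ** 2
    return c

x = 0.0
for _ in range(30):                        # closed-loop simulation
    u = min(candidates, key=lambda u: cost(x, u))   # optimize over horizon
    x = a * x + b * u                      # process accepts the first move only

print(round(x, 2))   # state settles near the target
```

In this toy loop the optimizer is re-run at every step against the latest measured state, which is what allows an MBPC strategy to absorb disturbances instead of tracking a static set-point.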

Some Important Modeling Regulatory Expectations

Automated control architecture

When evaluating modeling regulatory expectations during submission, the most important factor to consider is how the model contributes to assuring the quality of the product through the control strategy. As stated in the ICH guidelines, the level of oversight and model documentation should be commensurate with the level of risk associated with the use of the model; this is similar to other types of process or quality controls. Figure 5 depicts the three main categories of model impact as described by ICH standards.20 The first category is low impact models, which do not affect the process and do not play a role in the product quality assurance strategy; these models can be categorized as For Information Only (FIO). The second category is medium impact models, which can affect product quality, but whose effect can be detected through product quality testing; thus there is no risk that the patient will be exposed to potentially impacted product. The last category described by the ICH guidelines is high impact models, which can be used as surrogates of product quality and serve as the sole control/assurance of product quality. High impact models that are the only assurance of product quality will be scrutinized, validated and maintained with the same rigor as any other analytical method used for quality control.

Impact Level based on the model’s role in contributing to the product quality

With regard to empirical models (also called data-driven or machine learning models), the PAT guidance, ICH Q11 and USP 40 <1039> state that the outcome of empirical models alone should not be used as a surrogate of quality measurements or fed to closed-loop controls without “further support or justification, such as a mechanistic explanation of causal links among the process, material measurements, and target quality specifications”. It is important to keep these risk-based expectations in mind as pharma starts adopting machine learning approaches and incorporating them into the overall control strategy.

Modeling and the Process Analytical Technology Framework

Proposed Biopharmaceutical Modeling Maturity Approach for APC.

A desired goal of the PAT framework is to develop well-understood (i.e., highly predictable) and well-controlled processes.21 Process predictability can only be achieved through process characterization/observability, via identification of the process response surface within, and in some instances outside, the validated controlled space. Since its inception, the PAT initiative has been mostly associated with measurements of product/process quality via spectroscopic techniques (which is only a subset of the PAT toolkit’s intended uses).

According to the FDA PAT guidance, when using the PAT toolkit, gains in quality, safety and/or efficiency are likely to come from:

  • Reducing production cycle times by using on-, in-, and/or at-line measurements and controls
  • Preventing rejects, scrap, and re-processing
  • Real Time Release Testing (RTRT)
  • Increasing automation to improve operator safety and reduce human errors
  • Improving energy and material use and increasing capacity
  • Facilitating continuous processing to improve efficiency and manage variability

Even though the guideline was written not only from a quality perspective but also from safety, reliability and efficiency perspectives, much of the PAT effort has focused on reducing quality measurement lead time. Advanced technologies for quality lead time reduction tend to incorporate high impact models, such as chemometric models for multivariable analyzer calibration and parametric surrogate models for RTRT; much less effort has gone into APC (fault detection and classification and model-based controls).

Because the industry has focused on developing models with the goal of quality measurement lead time reduction, there is a tendency to scrutinize all modeling activities through the lens of high impact models, even though many low impact models can still be included as part of a lower-risk APC strategy. Low to medium impact models within an APC strategy could add tremendous value to supply chain reliability, particularly when used for process state estimation, supervision and scheduling decisions. The output of these models can then evolve into more actionable information once they embed or describe causal or mechanistic knowledge. Unfortunately, to the best of the authors’ knowledge, there are no standards or roadmaps for model applicability/intended-use maturity in pharma to enable process intelligence.

Interdependency of variables; different causes and variable relationships can lead to the same symptoms.

Biopharmaceutical Model Maturity Approach for APC

Figure 6 shows our proposed approach to modeling evolution/maturity based on factors such as modeling difficulty, level of enabled process control/intelligence, required level of data variation and needed process knowledge. This maturity model has been developed as a strategic roadmap that should allow us to:

  1. Connect different modeling activities in a facility in a logical way that leads to future and attainable process intelligence
  2. Gain value from modeling activities during the initial stages of the commercial product life cycle while more process understanding is gained to support higher impact model development
  3. Support the creation of a systematic pathway for post-approval changes that incorporate modeling as part of APC implementation

To develop this biopharmaceutical modeling maturity approach, we have drawn inspiration from big data analytics maturity models22 as well as control strategies used in other highly automated industries.23

Unsupervised Fault Detection

In our modeling maturity approach, we start with an unsupervised fault detection strategy using high-frequency data on the process condition. Our current unsupervised fault detection strategy is based on variable-wise batch analysis by means of bilinear modeling.24,25 This fault detection strategy is the foundation of our APC control strategy, as it acts as a filter to determine when the process might be operating in an unknown or undesirable state.26 Depending on what information is used to train the variable-wise batch model, it could also allow us to determine when data-driven (empirical) soft sensors should not be trusted. Most data-driven soft sensing approaches cannot cope with substantial novel variation outside the model’s experience. From a modeling difficulty and process expertise standpoint, an unsupervised fault detection strategy requires the least amount of scientific process knowledge and data. These models usually represent a very small subset of the process operational space (usually just data from the Proven Acceptable Range, PAR).
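To make the idea concrete, the sketch below shows the gist of bilinear (PCA-style) fault detection on simulated data. The two "process variables," the training set and the 99th-percentile control limit are all hypothetical, chosen only for illustration; the principle is that a model fit to normal-operation data flags new observations whose residual (squared prediction error) falls outside model experience.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "normal operation" training set: 200 time points of two
# correlated process variables (e.g. a temperature and a dissolved-oxygen
# reading that normally move together).
t = rng.normal(size=(200, 1))
X = np.hstack([t, 0.8 * t]) + 0.05 * rng.normal(size=(200, 2))

# Center and scale, then fit a one-component bilinear (PCA) model via SVD.
mu, sd = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sd
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:1].T                      # loadings of the retained component

def spe(x):
    """Squared prediction error (Q statistic) of one observation."""
    z = (x - mu) / sd
    resid = z - P @ (P.T @ z)     # part of z the model cannot explain
    return float(resid @ resid)

# Empirical control limit: 99th percentile of the training SPE values.
limit = np.percentile([spe(x) for x in X], 99)

normal_sample = np.array([0.5, 0.4])    # follows the learned correlation
faulty_sample = np.array([0.5, -1.5])   # breaks the correlation structure

print(spe(normal_sample) <= limit)   # within model experience
print(spe(faulty_sample) > limit)    # flagged as a potential fault
```

Note that the faulty sample is flagged even though each variable individually is within its historical range; it is the broken relationship between the variables that the bilinear model detects.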

Soft Sensing

The soft sensing layer comes after the unsupervised fault detection layer (from bottom to top), and it usually employs quantitative models, meaning that the model can estimate an unmeasured variable from measured ones. To identify data-driven soft sensors, data sets with higher variability are needed to characterize an appropriate range of the dependent variable in question. For example, if we are using process data to estimate the level of an impurity generated in the bioreactor, using data from the PAR alone may not allow us to identify an empirical model capable of estimating impurity levels when the process drifts away from typical manufacturing conditions. Thus, this modeling layer requires a more variable data set with an appropriate distribution of the dependent variable (even or normal distributions). Appropriate distributions for soft sensor identification are only found in larger historical and perturbed data sets that contain a combination of intentional and unintentional disturbances, and even faults. As mentioned above, the impact level of a soft sensor will depend on its contribution to assuring product quality. Thus, the level of mechanistic understanding and causal justification needed to implement a soft sensor will vary depending on its application and on the available analytical “gatekeeping” controls used to characterize the final product.
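A minimal sketch of the bioreactor-impurity example above, assuming entirely synthetic data: three measured process variables are regressed against an offline impurity assay by ordinary least squares, and the identified model then estimates the impurity for a new set of measurements. The variable names and numbers are hypothetical; production soft sensors typically use richer identification methods (e.g. PLS) and much more careful data curation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical perturbed data set: 150 batches with three measured process
# variables (say, temperature, pH and feed rate, scaled) and an offline
# impurity assay as the dependent variable.
X = rng.uniform(-1, 1, size=(150, 3))
true_coef = np.array([0.9, -0.4, 0.2])                # "true" process behavior
y = X @ true_coef + 1.5 + 0.02 * rng.normal(size=150) # assay with noise

# Identify a linear soft sensor by ordinary least squares.
A = np.hstack([X, np.ones((150, 1))])                 # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def estimate_impurity(x):
    """Soft-sensor estimate of the impurity from the measured variables."""
    return float(np.append(x, 1.0) @ coef)

x_new = np.array([0.2, -0.1, 0.5])
print(round(estimate_impurity(x_new), 2))
```

The point of the perturbed training set is visible here: if `X` only covered a narrow PAR slice, the regression would be poorly conditioned and the estimate untrustworthy outside that slice.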

Knowledge Continuum (adapted from Dr. Ajaz S. Hussain, 5th EGA Symposium on Biosimilars, London, 2007) to Modeling Continuum and Automation Correspondences.

Forecaster models can also reside in the soft sensing layer. Forecasters are models that estimate future process outputs from earlier process states based on historical or mechanistic understanding. These models are not required to have, and in most cases will not have, the accuracy of a quality measurement or of models built using an entire process time series. Nevertheless, forecasters could add tremendous value, since they can alert plant personnel to the possibility of final quality issues earlier in the process. Forecasters can be used as part of a low impact/high value control strategy when used to trigger confirmatory measurements. Confirmatory measurements triggered by a forecaster can help us to:

  1. Recover the process, if process knowledge and procedures allow
  2. Abort the process early if recovery is not feasible
  3. Help make planning and scheduling decisions to mitigate disruptions in the supply chain
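The trigger logic above can be sketched in a few lines, assuming a hypothetical impurity proxy signal and an invented action limit: fit the early-phase trend and extrapolate it to harvest, and request a confirmatory offline measurement only if the forecast crosses the limit.

```python
import numpy as np

# Hypothetical early-phase observations of an impurity proxy signal.
hours = np.array([0.0, 12.0, 24.0, 36.0])        # early time points (h)
signal = np.array([0.10, 0.16, 0.23, 0.31])      # observed proxy values

# Simple forecaster: linear trend fitted to the early data, extrapolated
# to a hypothetical harvest time of 120 h.  A real forecaster would embed
# historical or mechanistic knowledge rather than a straight line.
slope, intercept = np.polyfit(hours, signal, 1)
forecast_at_harvest = slope * 120.0 + intercept

ACTION_LIMIT = 0.6                               # hypothetical action limit
if forecast_at_harvest > ACTION_LIMIT:
    print("Forecast exceeds limit: trigger confirmatory measurement")
```

Because the forecaster only triggers a confirmatory measurement, and the measurement (not the model) drives any quality decision, this usage pattern keeps the model in the low impact category.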

Fault Classification

The third layer of our modeling maturity approach is Fault Classification (FC). The main goal of an FC strategy is to classify the cause of a fault, to enable systematic process recovery (if possible) or to guide continuous improvement efforts. A fault classification strategy requires a very structured data management approach, particularly for complex processes that can exhibit a wide range of possible failure modes. In complex processes, multiple causes can trigger the same symptoms (a.k.a. process faults), which could be picked up by the PCM layer (Figure 7). Thus, a very thorough system to record faults and link them to potential root causes and corrective actions is needed to generate a fault classifier. This fault recording system should collect the information in a machine-readable format and connect it to causation probabilities derived from an expert-based assessment or a historical fault database.
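The connection between a machine-readable fault record and causation probabilities can be sketched as follows. The symptom names, root causes and record contents are entirely hypothetical; the point is only that a structured history of symptom-pattern-to-confirmed-cause links can be aggregated into empirical probabilities for the classifier.

```python
from collections import Counter, defaultdict

# Hypothetical machine-readable fault history: each entry links an observed
# symptom pattern (from the PCM layer) to the root cause confirmed during
# the subsequent investigation.
fault_history = [
    ({"low_DO", "high_pCO2"}, "sparger_fouling"),
    ({"low_DO", "high_pCO2"}, "sparger_fouling"),
    ({"low_DO"}, "sensor_drift"),
    ({"pH_drift"}, "base_pump_failure"),
    ({"low_DO", "high_pCO2"}, "gas_supply_issue"),
]

# Aggregate: for each symptom pattern, count the confirmed root causes.
by_pattern = defaultdict(Counter)
for symptoms, cause in fault_history:
    by_pattern[frozenset(symptoms)][cause] += 1

def cause_probabilities(symptoms):
    """Empirical root-cause probabilities for an observed symptom pattern."""
    counts = by_pattern[frozenset(symptoms)]
    total = sum(counts.values())
    if total == 0:
        return {}                     # pattern outside recorded experience
    return {c: n / total for c, n in counts.items()}

probs = cause_probabilities({"low_DO", "high_pCO2"})
print(probs)   # sparger_fouling dominates this pattern (2 of 3 records)
```

This is exactly why the same symptoms appearing under multiple causes (Figure 7) demands a thorough recording system: without enough confirmed investigations per pattern, the empirical probabilities are too sparse to guide recovery.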

Model Based Process Control/Set-Point Optimizer

The fourth layer of our proposed approach to modeling evolution is MBPC for set-point optimization. Models used to enable this layer require even more data variation and process knowledge (a control design space) than soft sensors. This layer closes the manufacturing process intelligence loop, enabling self-optimization within the control design space boundaries.

Knowledge and Modeling Continuum

Modeling activities are closely tied to the process knowledge continuum. Hence, just like knowledge, process models and their applicability within and outside the automation architecture will evolve with time and experience. Figure 8 depicts our efforts to connect the knowledge continuum to the modeling continuum. In the knowledge continuum, a deeper level of process knowledge allows for a more flexible process and less regulatory oversight, given a strong quality system and an agreed-upon regulatory pathway. In the modeling continuum, intricate knowledge allows for set-point prescription (MBPC) and systematic process recovery, which in turn allows for automated performance-based control. The level of process knowledge also dictates where the model and its associated control strategy should reside in the automation architecture. In Figure 8 we have used the International Society of Automation Standard 95 (ISA-95) levels (ISA-95 schematic shown in Figure 9) to map where we envision parallels between these models and control strategies depending on the available process knowledge.27 On the lower layer of the modeling continuum are models and modeling activities used to describe variation in data. Descriptive modeling activities are mostly carried out as part of investigations. Even though knowledge derived from investigations can be used as part of continuous improvement activities, descriptive models are not built to be incorporated into the automation architecture. Once consistent correlative and causal knowledge is incorporated into the models, we can embed them into the automation control architecture, either for monitoring or for control (L1 or L2). The level of decision making and trust bestowed on these models will depend on a model’s predictability as well as its explained causality and/or fundamentality.
The control execution can be automated (L2) or carried out by the operator following a set of instructions specified in the batch record, or event-based work instructions in the case of out-of-control (OC) events (L0, L3). These instructions could reside in a physical document, a Manufacturing Execution System (MES) or another supportive and auditable system.

ICH Q12 as The Enabler of APC

The expedited development of life-changing medicines increasingly results in a compression of product development timelines. The combination of expedited product development with the exponential growth in manufacturing technologies results in a clear call to action for the biopharmaceutical industry and global regulators: hasten access to therapies while also enabling technologies that increase a product’s reliability, quality and supply chain robustness. International Council for Harmonisation (ICH) Q12, Technical and Regulatory Considerations for Pharmaceutical Product Lifecycle Management, is an answer to this call. The current ICH Q12 draft guideline states that: “A harmonized approach regarding technical and regulatory considerations for lifecycle management will benefit patients, industry, and regulatory authorities by promoting innovation and continual improvement in the biopharmaceutical sector, strengthening quality assurance and improving supply of medicinal products.”28 This statement demonstrates that ICH Q12 promises to be an enabler of modern manufacturing techniques, including APC, with the goal of increasing patient supply robustness and assurance of consistent product quality. ICH Q12 contains the following tools and enablers to support harmonized lifecycle management:

  1. Categorization of Post-Approval Changes
  2. Established Conditions (EC)
  3. Post-Approval Change Management Protocols (PACMP)
  4. Product Lifecycle Management (PLCM).

In addition, the guideline includes information related to the Pharmaceutical Quality Systems (PQS), prior-knowledge and ICH Q10 as an enabler of Q12, the relationship between industry and regulators and post-approval changes for marketed products.

ISA 95 Automation Standard Functional Hierarchy

The tools that will be most effective in supporting future innovation are ECs and PACMPs. An effective PQS continues to be the necessary foundation to support changes now and in the future. As stated above, APC strategies enable enhanced process visibility and process self-optimization, which translates into better variation management and higher process capability. To support the adoption of these control strategies, an agile regulatory framework will be required that takes into account the evolving nature of process and product knowledge. This will increase supply chain robustness and enrich the biomanufacturing culture of continuous improvement. The partnerships between regulators, biopharmaceutical manufacturers and patients will be strengthened by increased transparency and by changes supported by the accumulation of derived knowledge (e.g., knowledge derived through experience, experimentation and modeling activities).

The global adoption of ICH Q12 will allow for increased predictability in the acceptance of such post-approval changes, which will enable updates to established processes. Without the tools allowed by ICH Q12, companies may be hesitant to implement such technology due to the timelines associated with global approval. This is particularly true for APC strategies that incorporate high impact models, as these may require significant amounts of data and knowledge before they can become part of the overall control strategy; some high impact models may not have sufficient supporting data until well past initial approval. ICH Q12 thus becomes a key enabler for supporting model implementation in the post-approval space, commensurate with the model impact and an overall robust model lifecycle process. Upfront identification of established conditions, post-approval changes and the mechanisms to introduce changes to support APC strategies will ensure that innovative changes are implemented expeditiously. Early and frequent communication with global regulators is recommended to build trust and to share the scientific knowledge and assumptions prior to the introduction of the change or submission of a protocol. Innovations are not limited to manufacturing technologies; novel regulatory approaches are not only possible but will be the future.

Conclusions

The production of biopharmaceutical products needs to adapt to the trends and drivers of manufacturing modernization to improve process capability, leading to increased supply chain reliability. A key component of manufacturing modernization is the use of models to better predict and control the process. The data packages and knowledge used to identify and generate models are in constant evolution. Models can have very different intended uses depending on the process control need, available data package, knowledge level, modeling strategy and process complexity. The intended use will define what actions, based on the model, can be taken by the controller or floor personnel, as well as the needed level of causes-to-relationships-to-symptoms understanding. Understanding the difference between complex and simple systems, and the need to treat them differently (aligning expectations and design to the level of system complexity), is key to defining the intended use and supporting business case. We should strive to leverage models for much more than quality analysis lead time reduction. Mitigating supply chain risk through visibility and disruption management brings tremendous value and could employ models categorized as low to medium impact. A plan for model evolution and evolutionary intended use is critical to achieve control and process intelligence while deriving value from modeling activities. We foresee ICH Q12 as an instrumental opportunity to evolve process intelligence and control in a transparent, logical and attainable manner.

Author Biographies

Dr. Romero-Torres is a Senior Manager of Advanced Data Analytics at Biogen, where she leads a team of mathematicians, statisticians and Advanced Process Control engineers. She has over 15 years of experience in the field of Process Analytical Technologies (PAT) and advanced manufacturing of biopharmaceuticals, with a focus on the use of advanced sensors, statistical process control, multivariate data analytics and operational excellence tools. She obtained a doctorate in pharmaceutical PAT from Purdue University in 2006 and has worked for companies such as Schering Plough, Wyeth, and Pfizer. In 2014, Dr. Romero-Torres founded Bio-Hyperplane LLC, a data analytics and consultation company. At Bio-Hyperplane LLC, Romero-Torres worked with a broad portfolio of industries including biopharma, automation software (semiconductors), data analytics software and instruments. Her personal mission is advancing pharmaceutical manufacturing processes to enhance plant operations and, more importantly, improve patients’ access to critical therapies.

Kim Wolfram currently leads the protein team and is responsible for clinical, license, and post-marketing regulatory applications for biological and combination products. Her previous regulatory experience includes acting as the global regulatory lead for programs at various stages of development. She is committed to advancing novel manufacturing technologies and defining the future for regulatory science. Kim is a member of the PhRMA GQM Workgroup and is an active participant in regulatory policy development. Prior to Biogen, she was in Quality Assurance at Abbott Bioresearch Center (now AbbVie), where she supported contract manufacturing for Seattle Genetics and Zymogenetics. Kim received her Master of Science degree in Regulatory Affairs and Health Policy from the Massachusetts College of Pharmacy and Health Sciences and an undergraduate degree in Natural Sciences from Saint Anselm College.

John Armando, MS, is a Manager of Global Regulatory Affairs CMC and has spent over five years at Biogen: three years as a process development engineer in the Process Biochemistry group, supporting small-scale purification process development and technology transfer activities for early-stage biologics programs, and more recently supporting clinical neurology assets and emerging technologies programs. John had previously been with Novartis Vaccines and Diagnostics as a process development engineer, and with Axcella Health (formerly ProNutria) as an analytical scientist. John completed his Bachelor's and Master's degrees in Chemical and Biological Engineering at Tufts University under the direction of Prof. Kyongbum Lee and Prof. Blaine Pfeifer, focusing on metabolomics and mass-spectrometry analysis of E. coli production systems, and additionally received a Master's degree in Regulatory Affairs from Northeastern University.

Dr. Syed Kaschif Ahmed has a Ph.D. in Chemical Engineering specializing in Advanced Process Control (APC) from the Illinois Institute of Technology. Kaschif has academic experience in modeling, optimizing, and controlling hybrid fuel cell electric vehicles. As a postdoctoral fellow for the Pacific Institute for Climate Solutions, he prepared a techno-economic tool for the City of Surrey and Simon Fraser University to evaluate alternative vehicles. He gained industrial experience at Corning in New York, where he designed Advanced Process Control solutions for new production lines in North America, Europe and Asia while optimizing and improving existing lines. Kaschif is currently developing hybrid models for cell cultures with the goal of enabling model-based controls.

Jun Ren, PhD, is a Data Scientist at Biogen, where he develops new applications for process modeling and Advanced Process Controls (APC). Jun obtained his doctorate in Applied Mathematics from Texas A&M in 2015. His thesis focused on machine learning: predicting permeability distribution in a large oil field using generalized linear models and estimating the best drilling locations using support vector machines. Jun is an expert in Python and MATLAB.

Chao Shi is a Data Scientist at Biogen. His work focuses on statistical modeling and analysis of pharmaceutical manufacturing processes. Chao also provides statistical consulting to internal clients. His prior positions include Advanced Analytics Scientist at Biogen, and Instructor and Statistical Consultant at the University of Texas at San Antonio (UTSA). Chao holds a Ph.D. in Applied Statistics from UTSA.

Dan Hill has extensive experience and expertise in biomanufacturing and pharmaceutical manufacturing process analytical technology (PAT). He works at Biogen, where he leads PAT strategy and commercialization. Dan earned a B.S. in Biochemistry at Ball State University and is currently working toward a Master of Business Administration in Innovation Management at North Carolina State University.

Rob Guenard, PhD, leads the Global Process Analytics Group (GPA) at Biogen, where he has been for 2.5 years. The GPA team's mission is to deliver analytics and PAT solutions for development and manufacturing across all product modalities, including biologics, antisense oligonucleotides, pharmaceuticals and gene therapies. He is currently the sponsor of an Advanced Process Control initiative being implemented at a Next Generation Manufacturing facility under construction. Rob has more than 21 years of experience in the field of process analysis and control, with prior experience at Merck and Dow Chemical.

References

    1. Schläpfer, R.; Koch, M., Industry 4.0: Challenges and Solutions for the Digital Transformation and Use of Exponential Technologies. Deloitte, 2015.
    2. Boudreau, M.A.; McMillan, G.K., New Directions in Bioprocess Modeling and Control: Maximizing Process Analytical Technology Benefits. ISA, 2017.
    3. Ray, W.H., Advanced Process Control. McGraw-Hill, 1981.
    4. Borrelli, F.; Bemporad, A.; Morari, M., Predictive Control for Linear and Hybrid Systems. Cambridge University Press, 2017.
    5. Undey, C.; Wang, T.; Looze, B.; Zheng, Y.; Coufal, M., Predictive monitoring and control approaches in biopharmaceutical manufacturing. European Pharmaceutical Review, 2015(4).
    6. Huang, J., et al., GMP Implementation of Advanced Process Control in Tablet Manufacturing. American Pharmaceutical Review, 2017.
    7. Romero-Torres, S.; Moyne, J.; Kidambi, M., Towards Pharma 4.0: Leveraging Lessons and Innovation from Silicon Valley. American Pharmaceutical Review, 2017. 20(1): p. 34-41.
    8. Fortuna, L.; Graziani, S.; Rizzo, A.; Xibilia, M.G., Soft Sensors for Monitoring and Control of Industrial Processes. Springer (Advances in Industrial Control), 2007.
    9. Venkatasubramanian, V., Systemic Failures: Challenges and Opportunities in Risk Management in Complex Systems. AIChE Journal, 2011. 57(1): p. 2-9.
    10. Venkatasubramanian, V.; Rengaswamy, R.; Yin, K.; Kavuri, S.N., A review of process fault detection and diagnosis Part I: Quantitative model-based methods. Computers and Chemical Engineering, 2003. 27: p. 293-311.
    11. MacGregor, J.; Bruwer, M.-J., Optimization of Processes & Products using Historical Data, in FOCAPO/CPC. 2017: Tucson, Arizona.
    12. FDA, Advancement of Emerging Technology Applications for Pharmaceutical Innovation and Modernization: Guidance for Industry. 2017: Silver Spring, MD.
    13. Ray, W.H., State Estimation and Stochastic Control, in Advanced Process Control. McGraw-Hill, 1981: p. 245-318.
    14. Nishi, Y.; Doering, R. (eds.), Handbook of Semiconductor Manufacturing Technology. CRC Press.
    15. Venkatasubramanian, V.; Rengaswamy, R.; Yin, K.; Kavuri, S.N., A review of process fault detection and diagnosis Part II: Qualitative models and search strategies. Computers and Chemical Engineering, 2003. 27: p. 313-326.
    16. Venkatasubramanian, V.; Rengaswamy, R.; Yin, K.; Kavuri, S.N., A review of process fault detection and diagnosis Part III: Process history based methods. Computers and Chemical Engineering, 2003. 27: p. 327-346.
    17. Mendonca, J.M., et al., Process Condition Monitoring: a Novel Concept for Manufacturing Management Tool Integration, in Balanced Automation Systems II, L.M. Camarinha-Matos (ed.). Springer Science+Business Media Dordrecht, 1996.
    18. Paulsson, D.; Gustavsson, R.; Mandenius, C.-F., A Soft Sensor for Bioprocess Control Based on Sequential Filtering of Metabolic Heat Signals. Sensors, 2014. 14: p. 17864-17882.
    19. Lee, J.H., A Lecture on Model Predictive Control. 2005, Georgia Institute of Technology: http://cepac.cheme.cmu.edu/.
    20. FDA, Guidance for Industry: Q8, Q9, & Q10 Questions and Answers Appendix. 2012.
    21. FDA, Guidance for Industry: PAT — A Framework for Innovative Pharmaceutical Development, Manufacturing, and Quality Assurance. 2004.
    22. Gartner, Data and Analytics Leadership Vision for 2017. https://www.gartner.com/binaries/content/assets/events/keywords/business-intelligence/bie18i/gartner_data-analytics_research-note_da-leadership-vision_2016.pdf.
    23. Moyne, J.; Samantaray, J.; Armacost, M., Big Data Capabilities Applied to Semiconductor Manufacturing Advanced Process Control. IEEE Transactions on Semiconductor Manufacturing, 2016. 29(4): p. 283-291.
    24. Huang, J., et al., Intelligent Process Condition Monitoring for Industrial Tablet Coating Operation. European Pharmaceutical Review, 2012(6).
    25. Eriksson, L.; Byrne, T.; Johansson, E.; Trygg, J.; Vikström, C., Multi- and Megavariate Data Analysis: Basic Principles and Applications. Umetrics, 2013.
    26. Lennox, B.; Zhang, H.; Lovett, D.; Sandoz, D., An Integrated Approach to Advanced Process Control and Condition Monitoring. IEE Computing and Control Engineering, 2004 (February/March): p. 32-37.
    27. Enterprise-Control System Integration — Part 1: Models and Terminology. ANSI/ISA-95.00.01-2010 (IEC 62264-1 Mod), 2010.
    28. European Medicines Agency, ICH guideline Q12 on technical and regulatory considerations for pharmaceutical product lifecycle management. 2017.