Lessons In Quality From Sanofi's Plai.qa
A conversation with Vanessa Fernandes, MD, Sanofi

Artificial intelligence will always have humans beat on two fronts: sorting data and spotting patterns. Add humans to the mix to check the work, and the combination adds rocket fuel to quality functions in drug manufacturing, including compliance and risk assessment.
Human-AI cooperation is at the heart of Sanofi's Plai.qa platform, which sits within the broader Plai infrastructure. The platform orchestrates data to evaluate site maturity and performance and to provide recommendations for improvement.
Vanessa Fernandes, MD, heads up the Plai.qa initiative and will talk about the latest developments at the International Society for Pharmaceutical Engineering's (ISPE) 2025 Pharma 4.0 Conference. She offered a preview of her talk and agreed to answer some questions, with a focus on validation and human involvement. Here's what she said.
How do you balance the need for deterministic data and AI's probabilistic outputs?
Fernandes: We've designed a two-tier architecture that clearly separates deterministic calculations from AI-powered insights. Our foundation is a centralized data lake that consolidates quality data from validated source systems, ensuring that all KPIs and metrics are calculated deterministically with complete reliability and reproducibility.
The AI layer operates as an intelligent assistant that identifies patterns, correlations, and potential risks that might not be evident through traditional analysis. Importantly, Plai.qa never makes autonomous decisions — it prioritizes attention and provides recommendations that users must validate by consulting source systems before taking action.
This "trust but verify" approach maintains compliance while leveraging AI's pattern recognition capabilities.
Every AI recommendation is accompanied by clear references to the source data that generated it. This enables users to directly consult these source systems to deepen their analysis and verify information before making any decision. This transparency ensures that while we benefit from AI's probabilistic insights for risk prioritization, all compliance-critical decisions remain firmly grounded in human verification of validated deterministic data. Users retain final authority by relying on official source systems for their decision-making, rather than on the interpretation proposed by the AI.
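The pattern Fernandes describes, deterministic metrics below, advisory AI above, and humans deciding, can be sketched in a few lines of Python. Everything here is illustrative: the KPI, the field names, and the validation gate are hypothetical stand-ins, not Sanofi's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SourceRef:
    """Pointer back to a record in a validated source system (hypothetical fields)."""
    system: str     # e.g., the quality system that owns the record
    record_id: str  # identifier a reviewer can look up directly

def deviation_closure_rate(closed: int, total: int) -> float:
    """Deterministic tier: the same validated inputs always yield the same KPI."""
    return closed / total if total else 1.0

@dataclass
class Recommendation:
    """AI tier output: advisory only, with provenance, gated on human sign-off."""
    message: str
    risk_score: float                 # probabilistic estimate from the AI layer
    sources: list[SourceRef] = field(default_factory=list)
    validated_by_human: bool = False  # flipped only after source-system review

def act_on(rec: Recommendation) -> None:
    """The 'trust but verify' gate: unverified recommendations are inert."""
    if not rec.validated_by_human:
        raise PermissionError("Consult the referenced source systems first.")
    print(f"Actioning: {rec.message}")
```

The point of the sketch is the shape, not the details: KPIs live in code paths with no randomness, while anything probabilistic can only rank, annotate, and point back to its evidence.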
Can you describe the validation journey of this application? What steps and tools did you use to ensure compliance with data integrity and GxP expectations?
Fernandes: As a decision-support tool providing recommendations rather than automated decisions, Plai.qa follows a risk-based validation approach aligned with our AI governance framework. While not requiring traditional system validation, we've implemented comprehensive controls to ensure reliability and trustworthiness.
Our validation journey stands on three pillars: full alignment with Sanofi's security and data privacy standards, adherence to internal AI guidelines, and data integrity assurance through sourcing from validated systems via our data lake.
We adopted a continuous improvement approach centered on real-world testing and capitalizing on user feedback. This included structured pilot programs across multiple sites and regular feedback sessions to evaluate recommendation coherence and relevance. We systematically compared AI recommendations against subject matter expert assessments to calibrate the system's outputs and ensure constant evolution.
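For flavor, here is one minimal way such an AI-versus-SME comparison could be scored. The item IDs and the precision/recall framing are illustrative assumptions on our part, not Sanofi's stated calibration method.

```python
# Hypothetical calibration check: which items did the AI flag vs. the SMEs?
ai_flagged = {"DEV-101", "DEV-204", "DEV-310"}
sme_flagged = {"DEV-101", "DEV-310", "DEV-412"}

overlap = ai_flagged & sme_flagged
precision = len(overlap) / len(ai_flagged)  # share of AI flags SMEs agreed with
recall = len(overlap) / len(sme_flagged)    # share of SME concerns the AI caught
print(f"precision={precision:.2f}, recall={recall:.2f}")  # precision=0.67, recall=0.67
```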
Our governance framework relies on continuous evaluation of recommendation relevance through user feedback, provision of guides and documentation to support tool usage, and dedicated training sessions to explain how to interpret and leverage recommendations. This approach ensures transparency and builds trust throughout the decision-support process.
Rather than a one-time validation exercise, we've implemented a continuous improvement approach through user pilots, systematic feedback loops, and alignment with AI governance — ensuring relevance, trust, and compliance while acknowledging the advisory nature of the tool's outputs in quality decision-making.
Your talk emphasizes the Quality Maturity Index (QMI) 2.0. What metrics or dimensions define maturity in this model? How does the system synthesize them for near-real-time decision support?
Fernandes: QMI 2.0 has transformed quality maturity assessment from a twice-yearly, manual exercise into a daily, data-driven evaluation used by over 400 professionals across our network.
The index comprises 27 KPIs spanning all quality processes, including audit and risk management, quality systems, and other critical quality domains. These KPIs are weighted consistently across all sites, enabling objective comparison and benchmarking within a common framework.
This digitalization represents a paradigm shift in quality maturity assessment. What was once a manual, time-consuming, and inherently subjective evaluation performed only twice yearly is now a real-time, data-driven measurement that objectively illustrates each site's quality maturity through fundamental process evaluation.
The system employs fuzzy logic technology to handle the inherent uncertainty and nuance in quality assessment — translating multiple KPIs with different scales into a single comparable maturity score. This sophisticated aggregation method allows for meaningful comparison between sites with different contexts and production environments.
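To make the fuzzy-logic aggregation concrete, here is a minimal Python sketch. The KPI names, membership breakpoints, weights, and the three-level scale are all hypothetical; the real QMI 2.0 spans 27 KPIs with its own calibrated parameters.

```python
# Sketch: heterogeneous KPIs -> fuzzy maturity levels -> one comparable score.
# All names, breakpoints, and weights below are illustrative assumptions.

LEVELS = {"developing": 1.0, "managed": 2.0, "mature": 3.0}  # level -> numeric anchor

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function: peaks at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def memberships(pct: float) -> dict[str, float]:
    """Degree to which a 0-100% KPI belongs to each maturity level."""
    return {
        "developing": tri(pct, -1.0, 0.0, 60.0),
        "managed":    tri(pct, 40.0, 70.0, 95.0),
        "mature":     tri(pct, 80.0, 100.0, 101.0),
    }

def qmi(kpis: dict[str, float], weights: dict[str, float]) -> float:
    """Centroid defuzzification per KPI, then a weighted mean across KPIs."""
    total = wsum = 0.0
    for name, pct in kpis.items():
        m = memberships(pct)
        mass = sum(m.values())
        centroid = sum(LEVELS[lvl] * deg for lvl, deg in m.items()) / mass if mass else 1.0
        total += weights[name] * centroid
        wsum += weights[name]
    return total / wsum  # 1.0 (developing) .. 3.0 (mature), comparable across sites

print(round(qmi({"capa_on_time": 88.0, "deviation_closure": 96.0},
                {"capa_on_time": 0.6, "deviation_closure": 0.4}), 2))  # -> 2.75
```

Fuzzy membership is what lets KPIs measured on different scales land on the same maturity axis before weighting; the deterministic tier still owns the raw numbers feeding in.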
The near-real-time nature of QMI 2.0 enables continuous improvement monitoring and provides objective data for strategic decision-making, allowing quality leaders to identify trends, anticipate challenges, and allocate resources more effectively across the manufacturing network.
What were the biggest challenges in harmonizing those data sets?
Fernandes: Our module exclusively uses quality-related data including deviations, CAPAs, quality KPIs covering quality processes, complaints, and audit/inspection findings — all originating from quality assurance operations.
Harmonizing these data sets presented significant challenges across our global network. First, we faced standardization issues: while quality data was digitized, it wasn't standardized across sites and regions, with varying business rules and inconsistent data quality standards. We addressed this by implementing a unified quality management system and establishing a common data foundation with standardized definitions and governance (a schematic sketch follows this answer).
Second, we encountered collection methodology disparities: certain quality KPIs relied on manual collection with non-standardized processes across different sites and quality teams. Our solution was to harmonize data collection in a single platform for all quality processes, reducing manual intervention, minimizing errors, and creating a consistent quality data ecosystem.
Third, we struggled with limited historical quality data depth needed for AI model training and prediction improvement. We adopted a progressive approach, continuously enriching our quality data lake while initially focusing on quality use cases with sufficient historical data.
These challenges required extensive collaboration within quality teams globally to build trust in a unified quality data ecosystem, ensuring data consistency while maintaining the focus on quality excellence.
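A schematic view of that harmonization, under loudly hypothetical names: each site keeps its local system, but an adapter maps local codes into one canonical schema with shared definitions, so every downstream KPI is computed identically. Nothing below is Sanofi's actual data model.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class Severity(Enum):
    """One shared severity vocabulary instead of per-site business rules (illustrative)."""
    MINOR = "minor"
    MAJOR = "major"
    CRITICAL = "critical"

@dataclass(frozen=True)
class Deviation:
    """Canonical record every site's local data is mapped into."""
    site: str
    opened: date
    closed: Optional[date]  # None while the deviation is still open
    severity: Severity

def from_site_a(row: dict) -> Deviation:
    """Hypothetical adapter: translates one site's local codes into the canonical schema."""
    sev = {"1": Severity.MINOR, "2": Severity.MAJOR, "3": Severity.CRITICAL}[row["sev_code"]]
    return Deviation(
        site="A",
        opened=date.fromisoformat(row["open_dt"]),
        closed=date.fromisoformat(row["close_dt"]) if row["close_dt"] else None,
        severity=sev,
    )

# Downstream KPIs (closure rates, aging, etc.) now read one schema, not N site formats.
print(from_site_a({"sev_code": "2", "open_dt": "2025-01-10", "close_dt": ""}))
```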
Could you share an example of how the module helped achieve new efficiencies?
Fernandes: The QMI exemplifies the transformative efficiency gains we've achieved through Plai.qa. Previously, quality maturity assessments relied on time-consuming interviews conducted only twice yearly, creating a subjective and resource-intensive process that provided limited actionable insights.
The transformation has been remarkable: what once required extensive manual interviews across our network is now delivered through real-time, data-driven evaluation. This shift eliminated hundreds of hours of interview preparation, execution, and analysis while providing continuous visibility into quality performance rather than periodic snapshots.
More importantly, we've moved from subjective assessments to actionable insights. The previous interview-based approach often resulted in general observations that were difficult to act upon. Now, with over 400 monthly users accessing real-time QMI data, quality teams can immediately identify specific areas requiring attention and track improvement progress continuously.
A significant efficiency gain comes from quality review preparation, where QMI has become integral to our SMS 2.0 methodology. Quality leaders can now prepare for reviews with objective, current data rather than spending weeks gathering and consolidating information from multiple sources. This enables more focused, productive discussions centered on data-driven priorities rather than subjective impressions.
The result is not just time savings but a fundamental shift toward proactive quality management — enabling teams to identify and address quality risks before they impact operations, ultimately supporting better patient outcomes through more efficient quality processes.
About The Expert:
Vanessa Fernandes is the quality project lead for the Plai.qa application. In this role, she drives the Plai.qa project and is accountable for ensuring that the relevant business process owners design appropriately simplified and standardized processes. She joined Sanofi in 2015 as a serialization business system owner.