Guest Column | April 26, 2021

Are You Approaching LIMS Validation Correctly?

By Tim Sandle, Ph.D.

Laboratory information management systems (LIMS) are an established part of larger laboratories. While all LIMS are based on the application of information to improve laboratory productivity, efficiency, and compliance, different systems vary in their scale, functionality, and quality. It is important that systems are appropriately validated: however good a system is, its benefits are undermined if validation is not conducted to an applicable standard. For example, errors can occur in tracking the data associated with samples, or acceptance criteria may be attributed to the wrong sample.

This article considers the essential validation criteria for LIMS to ensure the system functions as a compliant component in the digital backbone of the modern laboratory.

Validation Essentials

LIMS validation refers to the documented process of assuring that the computerized system does exactly what it is designed to do in a consistent and reproducible manner. The validation process begins with the system proposal/requirements definition and continues until the system is retired and the e-records are retained based on regulatory rules.1 In terms of validation aspects, LIMS features, such as automated reporting, reproducibility, throughput, and accuracy, are important to capture and must be quantifiable and verifiable. In undertaking validation, any individual application modules that are customized parts of the LIMS become the most critical elements within the process. Both configuration and customization require modification of the “out-of-the-box” product, which brings greater complexity and longer project implementation times, especially when coding is required.

The minimum time for a LIMS implementation is typically around 18 months for a basic system, whereas a typical multi-phase, multi-site project may take 18 to 36 months to complete the first phase. Phases are normally established by site or functionality. To properly determine the timeline, the scope of the project must be understood and agreed to by everyone involved. As well as controlling time, attention also needs to be paid to the budget.

To drive an efficient validation, assign a project team made up of key stakeholders. While it is necessary to have a thorough understanding of the process and laboratory workflows, existing workflows will also need to be modified. This requires mapping the current process as is and the desired process to be achieved. This is a task that becomes more complex when dealing with multiple departments. To simplify the process of implementation, it is good practice to harmonize workflows between departments (this also helps to reduce costs, as well as being easier to convey to an auditor).

An important aspect of validation is the data; data should have undergone cleansing to ensure they are both correct and useful. For this, the user needs to be able to check the correctness, meaningfulness, and security of data input to the LIMS. As part of the validation exercise, this may involve:

  • Verifying that the individual characters provided through user input are consistent with the expected characters of data types: integers; decimal places; plus, minus, and parentheses
  • Assessing the minimum/maximum range
  • Checking consistency with a test for evaluating a sequence of characters, such as one or more tests against regular expressions
  • Assessing any formulae
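These input checks can be sketched in code. The following sketch is illustrative only: the field names, specification limits, and sample ID pattern are hypothetical assumptions, not part of any particular LIMS.

```python
import re

# Hypothetical sample ID format, e.g. "QC-000123"; a real LIMS would apply
# its own configured rules rather than a hard-coded pattern.
SAMPLE_ID_PATTERN = re.compile(r"^[A-Z]{2}-\d{6}$")

def validate_result(sample_id: str, value: str, low: float, high: float) -> list[str]:
    """Return a list of validation errors for one result entry."""
    errors = []
    # Character/type check: the value must parse as a decimal number.
    try:
        numeric = float(value)
    except ValueError:
        errors.append(f"'{value}' is not a valid number")
        return errors
    # Minimum/maximum range check against the specification limits.
    if not (low <= numeric <= high):
        errors.append(f"{numeric} outside range [{low}, {high}]")
    # Regular-expression check on the sample identifier format.
    if not SAMPLE_ID_PATTERN.match(sample_id):
        errors.append(f"sample ID '{sample_id}' has unexpected format")
    return errors

print(validate_result("QC-000123", "7.2", 6.5, 7.5))  # [] (no errors)
```

In practice, such rules would be driven by the system's configured data dictionary; the point of the validation exercise is to verify that each configured rule actually rejects the invalid inputs it is supposed to reject.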

A major challenge that arises in LIMS validation relates to the assembly of the information necessary for model building, model validation, and the application of the models in screening sets. This is because data often exist in files or databases and may be stored in a variety of different formats. In addition, data may reside on a variety of different hardware platforms.2

The User Requirement Specification

An important precursor is the user requirement specification (URS). When utilizing a computerized system within a regulated environment, it is appropriate to establish system control documentation or a system description, giving a detailed written description of the system and also covering development and maintenance. This URS should include a definitive statement of what the system must or must not do. This document is also important for legacy systems and those systems under development. The URS should contain functional and non-functional requirements: functionality, effectiveness, maintainability, usability, etc. Such requirements should be objectively verifiable.

From the URS, the supplier (or in-house developer) of the software should be able to develop the functional specifications (in the case of bespoke programs) or clearly identify the functional specifications for selection and purchase of off-the-shelf systems. The functional specification should define a system to meet the URS (that is, the customer's needs). It is intended to provide a precise and detailed description of each of the essential requirements for the computer system and external interfaces. This, in turn, will lead to a design specification, a document that provides information about a designed product or process. For example, the design specification must include all necessary drawings, dimensions, environmental factors, ergonomic factors, aesthetic factors, and maintenance that will be needed. Taken together, functional and design specifications define what the system will do to meet the requirements and how the system will function at a technical level. These specifications should also be written to enable performance of objective testing. LIMS validation begins with a URS, which will contain:

  • All applicable regulatory requirements
  • System administration requirements
  • Multiple site requirements if applicable
  • Laboratories that will be utilizing the new LIMS application
  • User roles that will need to be implemented
  • All sample types that will need to be implemented
  • Workflows to include lots, samples, projects, batch related samples, and stability samples as applicable
  • Test method functionality (since the number of test methods to be documented will vary, an appendix to the URS or a reference to a data migration plan may be used)
  • Product specification functionality (similarly, if there are many product specifications to be implemented, an appendix or data migration plan reference may be used)
  • Reports, including certificates of analysis, standards and reagents, stability results, and any other applicable reports
  • Any interface requirements such as instruments, procurement software, quality management software, etc.
  • Standard operating procedures applicable to the LIMS implementation

The Risk Assessment

A risk assessment should be created to determine the risk level of the requirements documented for the LIMS application. Risk levels will help determine the level of testing that will need to be performed, the SOPs that should be created, and the level of training that should be conducted. For example, if there is an identified requirement that would have a high impact on data integrity if it were to fail, plus a high probability that it will fail (particularly if it is a configured or customized part of the application), testing activities would include unit testing, operational testing, and performance testing. All LIMS should have been subjected to documented prospective validation or qualification.

Under Good Automated Manufacturing Practice (GAMP) validation, there are three core elements:

  • Installation qualification (IQ) – This confirms complete documentation, which includes checking purchase orders, proper hardware installation, and software verification according to the manufacturer’s specifications; both user and supplier share primary testing responsibility. An effective IQ establishes that the instrument is delivered as designed and specified, that it is properly installed in the selected environment, and that this environment is suitable for the operation and use of the instrument.

Between IQ and operational qualification (OQ), the functionality to be tested must be defined and scripted. It is a mistake for users to still be working on scripts as the project enters OQ, as this wastes time and money.

  • Operational qualification – This confirms system operation by testing the design requirements, which are traced back to the functional specifications, covering software and hardware functions under normal load and under realistic stress conditions to assess whether equipment and systems are working correctly; both user and supplier share primary testing responsibility. With LIMS in particular, the OQ provides documented verification that the system or subsystem operates as specified in the LIMS specifications throughout representative or anticipated operating ranges.
  • Performance qualification (PQ) – This confirms that a system is capable of performing or controlling the activities of the process while operating in a specific environment – namely, a series of checks by the user against the original requirement specifications of the system; responsibility falls solely on the user. For LIMS, the PQ is the documented verification that the integrated, computer-related system performs as intended in its normal operating environment. PQ involves load testing, performance testing, or a combination of the two. Load testing is designed to ensure the system can handle the data and users expected during normal usage. Load testing takes significant planning and resources; if it is not planned for in the early stages of the project, it will cause delays and cost overruns.

Scripts for PQ should be produced by end users, ideally working alongside a trained script writer to ensure the scripts meet validation requirements. For multiple sites and departments, it is important, should harmonization allow it, for scripts to be identical for each group or site.

Testing For Software

There are four levels of testing for software:3

  1. Unit (or code) testing is performed by the software developer on individual units/components of software as part of the software development process. It is important that testing be performed during the development of a system in order to eliminate errors early in the process.
  2. Integration/module testing is one of the most critical aspects of the software development process, as it involves individual elements of software code (and hardware, where applicable) being combined and tested until the entire system has been integrated. Errors found at the integration testing phase are less expensive to correct than errors found at a later stage of testing.
  3. System testing tests complete and integrated software. The purpose of this level is to evaluate the system’s compliance with the specified requirements.
  4. User (or customer) acceptance testing, or performance qualification, is then performed by the users as the last phase of the software testing process. During such testing, actual users make sure it can handle required tasks in real-world scenarios, according to the requirements and the business process and associated procedures.
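As a minimal illustration of the first level, a unit test exercises a single component in isolation, including its boundary conditions. The helper function and specification limits below are hypothetical:

```python
import unittest

def within_spec(result: float, low: float, high: float) -> bool:
    """Hypothetical component under test: is a result within specification limits?"""
    return low <= result <= high

class TestWithinSpec(unittest.TestCase):
    def test_boundaries(self):
        # Boundary values: the limits themselves should pass.
        self.assertTrue(within_spec(6.5, 6.5, 7.5))
        self.assertTrue(within_spec(7.5, 6.5, 7.5))

    def test_out_of_range(self):
        # Values just outside the limits should fail.
        self.assertFalse(within_spec(6.49, 6.5, 7.5))
        self.assertFalse(within_spec(7.51, 6.5, 7.5))

# Run the unit tests for this single component.
suite = unittest.TestLoader().loadTestsFromTestCase(TestWithinSpec)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Integration, system, and acceptance testing then build outward from components like this one, with each level catching the classes of error the previous level cannot see.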

In terms of responsibilities for executing the validation stages, the general validation of the program is performed by the supplier. Nevertheless, the user is always required to cover all phases of a validation.4

For testing, test scripts should be developed, formally documented, and used to demonstrate that the system has been installed and is operating and performing satisfactorily. These test scripts should be related to the URS and the functional specifications for the system. As well as GAMP (ISPE, 2008), guidance for testing software can be found in IEEE 1298 (1992) and ISO/IEC/IEEE 29119-3:2013 (formerly IEEE 829). For example, the ISO 29119 standard provides an outline of useful test documentation:

  1. Organizational Test Process Documentation:
  • Test Policy
  • Organizational Test Strategy
  2. Test Management Process Documentation:
  • Test Plan (including a Test Strategy)
  • Test Status
  • Test Completion
  3. Dynamic Test Process Documentation:
  • Test Design Specification
  • Test Case Specification
  • Test Procedure Specification
  • Test Data Requirements
  • Test Data Readiness Report
  • Test Environment Requirements
  • Test Environment Readiness Report
  • Actual Results
  • Test Result
  • Test Execution Log
  • Test Incident Report

When assessing the above, the user should review sample reports from testing and ask:

  • Has testing covered the boundaries of limits and also the input of invalid data?
  • Have all tests been documented?
  • Have all errors/failures been followed up?
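One part of this review, confirming that every requirement is covered by a documented test script, can be sketched as a simple traceability comparison. The requirement and script IDs below are hypothetical:

```python
# Illustrative traceability check: every URS requirement should be exercised
# by at least one documented test script.
urs_requirements = {"URS-001", "URS-002", "URS-003"}
test_scripts = {
    "TS-01": {"URS-001"},
    "TS-02": {"URS-002", "URS-003"},
}

# Union of all requirements referenced by the test scripts.
covered = set().union(*test_scripts.values())
uncovered = sorted(urs_requirements - covered)
print(uncovered)  # an empty list means every requirement is traced to a test
```

In practice, this is the role of a requirements traceability matrix, which also records the test outcome and any incident reports against each requirement.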


This article outlines the validation essentials for a LIMS. By observing the guidance presented here, errors can be avoided and a regulatory audit can be successfully passed, provided the elements outlined are captured and data are presented to demonstrate that the validation exercise was executed to plan. Validation should also be reviewed periodically and the system’s ongoing performance assessed, in keeping with the current life cycle approach to validation.

This article has been adapted from chapter 2 of the book Digital Transformation and Regulatory Considerations for Biopharmaceutical and Healthcare Manufacturers, Volume 2, written by Tim Sandle and co-published by PDA and DHI. Copyright 2021. All rights reserved.


  1. FDA (2011) Guidance for Industry, Process Validation: General Principles and Practices, U.S. Department of Health and Human Services, Food and Drug Administration, Washington, D.C., January 2011
  2. Machina, H.K. and Wild, D.J. (2012) Laboratory Informatics Tools Integration Strategies for Drug Discovery: Integration of LIMS, ELN, CDS, and SDMS, Journal of Laboratory Automation 18(2): 126-136
  3. Friedli, D., Kappeler, W., Zimmermann, S. (1998) Validation of computer systems: Practical testing of a standard LIMS, Pharmaceutica Acta Helvetiae, 72(6): 343-348
  4. Schmitt, S. (2018) Computerized Systems Validation, Pharmaceutical Technology, 42(3): 70

About The Author:

Tim Sandle, Ph.D., is a pharmaceutical professional with wide experience in microbiology and quality assurance. He is the author of more than 30 books relating to pharmaceuticals, healthcare, and life sciences, as well as over 170 peer-reviewed papers and some 500 technical articles. Sandle has presented at over 200 events. He currently works at Bio Products Laboratory Ltd. (BPL) and is a visiting professor at the University of Manchester and University College London, as well as a consultant to the pharmaceutical industry. Visit his microbiology website at