
Design Verification & Validation for medical devices

Inadequate distinction between Design Verification and Design Validation frequently leads to regulatory rejection. This guide addresses that gap by defining the engineering rigour required for each. Verification proves the device meets specifications through statistical sampling and limit testing, while Validation confirms user needs using production units. We analyse how to construct a defensible testing strategy that prevents design iterations late in the development cycle.

1. Verification or Validation?

Verification is where your team proves that the design you created actually meets the requirements you set out at the beginning.

Did you design it right? Or did you design the right thing?

These two questions might sound similar, but they define two entirely different phases:

  • Design Verification confirms via objective evidence that specified requirements are met. These activities happen at various stages and use tests or inspections to confirm that design outputs match design inputs. Any method proving conformance is acceptable if technically sound; complex designs often need mixed approaches, such as fault tree analysis, rather than finished-device testing alone.

  • Design Validation establishes via objective evidence that specifications conform to user needs and intended uses. This confirms the final design works for the user under real conditions. 

Verification looks at technical specs while validation looks at the medical purpose. Validation happens after verification using initial production units.

2. Design Verification

Think of Design Verification as the engineering "reality check"; it essentially answers the question: Did we build the product right?

It is the process of confirming, through objective evidence like tests, inspections, or analyses, that your Design Outputs (the actual device, drawings, and specifications) fully satisfy your specific Design Inputs (technical requirements).

To document this effectively, you shouldn't rely on “informal” or “test-lab” testing; you need to establish a Verification Protocol early in the process that clearly defines your test methods and acceptance criteria. Once executed, you must capture the results in a Verification Report that details pass/fail outcomes, the specific design configuration tested, the date, and the person who performed the test, ensuring everything is traceable within your Design History File.
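As a rough illustration, the report fields named above could be captured in a simple structured record; the field names below are hypothetical examples, not a prescribed schema:

```python
# Hypothetical sketch of a verification record holding the fields named
# above (pass/fail outcome, tested configuration, date, tester) so they
# stay traceable in the Design History File.
from dataclasses import dataclass
from datetime import date

@dataclass
class VerificationRecord:
    requirement_id: str        # design input being verified, e.g. "DI-042"
    test_method: str           # method defined in the verification protocol
    acceptance_criteria: str   # pass/fail threshold agreed before execution
    design_configuration: str  # exact revision of the design output tested
    result: str                # "PASS" or "FAIL"
    performed_by: str          # person who executed the test
    performed_on: date         # execution date

record = VerificationRecord(
    requirement_id="DI-042",
    test_method="Tensile test per protocol VP-007",
    acceptance_criteria=">= 20 N pull force",
    design_configuration="Rev C, drawing DWG-1180",
    result="PASS",
    performed_by="J. Smith",
    performed_on=date(2024, 5, 14),
)
```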

A sound verification strategy moves past arbitrary testing to statistically justified methods. The process defines sample sizes, tests design limits, and manages anomalies.

Here are a few points to keep in mind during this phase:

  • Statistical Rationale and Sample Size: Selecting sample sizes needs a statistical rationale or risk-based approach instead of arbitrary numbers; a sample size of 30 is insufficient without justification. Manufacturers determine sizes using confidence levels (e.g. 95% confidence) for representative data, reliability goals fitting the device risk, and design inputs, so that reliability requirements defined early are met with statistical significance (see the sketch after this list).

  • Testing at Limits: Verification proves design stability by testing at the limits of the operating specification rather than only at nominal conditions. Testing covers the full range of inputs and environmental conditions expected during the device life cycle, including temperature extremes, maximum voltage, and high mechanical stress. Relying on mean values misses failure modes at the edges of performance.

  • Handling Failures and Test Method Validation: If a test fails, a simple retest is not allowed; the manufacturer must investigate the cause. A design flaw means correcting the design and repeating verification. A test method error requires documenting, with objective evidence, the reason for discounting the result, which often calls for test method validation. Test fixtures need validation to prove they do not affect results more than the device variance itself, typically through equipment qualification and gauge repeatability and reproducibility (Gauge R&R) studies.

  • Managing Design Changes: If a design change happens during verification, an impact analysis determines which tests remain valid. The manufacturer either documents the rationale for regression testing, checking whether the change affects other subsystems, or justifies skipping re-testing with technical evidence that the change introduces no new variables that would invalidate past results.
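To make the statistical rationale above concrete, here is a minimal sketch of the success-run theorem, one common zero-failure approach to attribute (pass/fail) sample sizing; the 95%/90% figures are illustrative choices, not regulatory requirements:

```python
# Minimal sketch of a statistical rationale for attribute (pass/fail)
# testing using the success-run theorem: with n consecutive passes and
# zero failures, reliability R is demonstrated at confidence C when
# n >= ln(1 - C) / ln(R).
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Smallest n demonstrating `reliability` at `confidence` with zero failures."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(success_run_sample_size(0.95, 0.90))  # 29 units, not an arbitrary 30
print(success_run_sample_size(0.95, 0.99))  # 299 units for a higher-risk device
```

The point is that the sample size falls out of a stated confidence level and reliability goal rather than being picked arbitrarily.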

3. Design Validation

Design Validation is the ultimate reality check, where you move past engineering specs to answer the question: Did we build the right product?

Unlike verification, which tests against technical inputs, validation confirms that the device actually satisfies the User Needs and Intended Uses. 

To do this effectively, you generally must use initial production units (or their equivalents) rather than R&D prototypes, because you need to capture any variability introduced by the manufacturing process. 

This involves testing under actual or simulated use environments to ensure the device works for the patient or clinician in the real world. Your scope must include validating the software, packaging, and labeling, while simultaneously updating your Risk Analysis to address any new hazards found during testing.

Design validation confirms the device fulfills its medical purpose. This phase demands production equivalence, user testing, and traceability.

Here are a few points to keep in mind during this phase:

  • Production Equivalence: Validation uses devices representing the final product as distributed to the market. Acceptable units come from the actual manufacturing process or equivalent methods; “engineering prototypes” or “golden units” are unacceptable because they miss manufacturing variability.

  • Human Factors and Use Errors: Validation trials simulate actual use to analyse the human-machine interface and identify use errors. Unintended use observed during validation becomes a design input for risk management. Usability engineering confirms that instructions and interface designs prevent unsafe usage.

  • Traceability and Benefit-Risk Analysis: Validation traceability links initial user needs to results; a validation matrix shows that every user need is covered. Validation also confirms the benefit-risk profile: it gives evidence that the clinical benefit exists in the use environment, confirms that medical benefits outweigh hazards, and supports accepting residual risks.

  • Feedback Loop to Risk Management: Validation checks theoretical assumptions and often finds new hazards or failure modes not seen during earlier analyses. New hazards found during validation go back into the risk management file for assessment.

4. Conclusion

Verification and validation act as separate steps to confirm safety and performance:

  • Verification uses technical exams to prove design outputs meet inputs.

  • Validation uses objective evidence to prove the device meets user needs.

Both processes need planning, documentation, and statistical rigour; validation additionally demands production-equivalent units.

Here are some tips that apply to both phases: 

  • Use a traceability matrix to link design inputs and outputs, especially when both are documents

  • List input requirements and reference the corresponding output sections or software modules

  • Optionally, build the matrix in reverse to trace outputs back to input requirements

  • To improve traceability, add the associated risks and risk control measures

  • Keep all verification AND validation evidence in one place

A good traceability matrix helps you track every input, output, and piece of V&V evidence, making your design control airtight and your audit readiness bulletproof.
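As a minimal sketch, such a matrix can live as plain structured rows long before any dedicated tool is involved; all IDs and column names below are hypothetical examples:

```python
# Illustrative sketch of a traceability matrix as plain data: each row
# links a user need and design input to outputs, risk controls, and
# V&V evidence, then writes the matrix out as a CSV file.
import csv

MATRIX_COLUMNS = [
    "user_need", "design_input", "design_output",
    "risk_control", "verification_evidence", "validation_evidence",
]

rows = [
    {
        "user_need": "UN-01 Clinician can read dose at arm's length",
        "design_input": "DI-042 Display digit height >= 8 mm",
        "design_output": "DWG-1180 Rev C front panel",
        "risk_control": "RC-07 High-contrast display",
        "verification_evidence": "VR-015 Dimensional inspection (PASS)",
        "validation_evidence": "VAL-003 Simulated-use study (PASS)",
    },
]

with open("traceability_matrix.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=MATRIX_COLUMNS)
    writer.writeheader()
    writer.writerows(rows)

# Reverse trace: confirm every row points back to a design input.
assert all(row["design_input"] for row in rows)
```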

That’s where Matrix Req comes in.

We help MedTech teams stay traceable from Day 1:

→ Capture and link user needs, requirements, outputs, tests

→ Keep traceability alive across your full product lifecycle

→ Access, review, and update everything from one place

About the Author
Eva Kautenburger
Deputy CCO