How to Choose a Health IT Vendor

Below are notes previously reserved for customers that we are now making available to all.

Vendor selection basics for the informed customer

Introduction

First and foremost, understand and document your requirements.

  • Compare each vendor’s features to your requirements
  • Grade how each vendor meets each requirement (a simple scoring sketch follows this list)
  • If a requirement is not addressed in the demo, ask a question
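
To make the grading step concrete, here is a minimal weighted-scoring sketch in Python. The requirement names, weights, and scores are hypothetical placeholders; your own documented requirements should drive the real criteria.

    # Minimal weighted-scoring sketch for vendor comparison.
    # Requirement names, weights, and scores are hypothetical placeholders.

    requirements = {
        # requirement: weight (higher = more important)
        "Data export in open formats": 5,
        "Documented interfaces": 4,
        "Signs a Business Associate Agreement": 5,
        "Multiple hosting options": 3,
    }

    # Scores from demos, graded 0 (absent) to 5 (fully meets requirement).
    vendor_scores = {
        "Vendor A": {"Data export in open formats": 4,
                     "Documented interfaces": 3,
                     "Signs a Business Associate Agreement": 5,
                     "Multiple hosting options": 2},
        "Vendor B": {"Data export in open formats": 5,
                     "Documented interfaces": 5,
                     "Signs a Business Associate Agreement": 5,
                     "Multiple hosting options": 4},
    }

    for vendor, scores in vendor_scores.items():
        total = sum(weight * scores.get(req, 0)  # unscored requirements count as 0
                    for req, weight in requirements.items())
        best = sum(weight * 5 for weight in requirements.values())
        print(f"{vendor}: {total}/{best} ({100 * total / best:.0f}%)")

A simple matrix like this also documents why a vendor was chosen, which is useful when the decision is later questioned.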

Key IT Requirements

  • Your data is your data
  • Your data is portable
  • Interfaces are simple, documented and platform agnostic
  • Access to your data while in service is unlimited and not reliant on proprietary formats
  • Multiple infrastructure platform choices are available

Key Business requirements

  • Willing to sign a Business Associate Agreement (if needed)
  • Service Level Agreement/Support contracts
  • Licensing details

No System is Perfect

  • Sometimes you may have to compromise on some of the aforementioned requirements
  • Be sure to mitigate the risks of not adhering to the requirements

5 Reasons to Use Open Source Software in Your Small Lab

Open source software is not widely used in the U.S. clinical laboratory setting. This is unfortunate, because in many cases the benefits of open source greatly outweigh the drawbacks. Here, we briefly describe the top 5 reasons your laboratory should consider open source software.

Cost

Most open source software is completely free of charge, which is a great benefit to small companies concerned about their bottom line. Many companies, including Lab Insights, LLC, offer paid support for their free software. This can ensure that your lab gets a quality product with the support and customer service you expect. Proprietary vendors, like Microsoft, have argued in the past that the total cost of ownership for free software is higher than for their paid solutions, but we believe this is an antiquated belief. This is evidenced by the emergence of many successful open source companies, like Canonical and Red Hat, and many other successful companies that rely on open source software for their business, such as Facebook and Google.

Reliability

With open source software, you can count on the fact that the software you use will be around for years to come. Good open source software is built to last and is not subject to the whims of a company that may make breaking changes to its product while requiring you to upgrade. There is no license key that will expire on you just when you need it most. Your software will just continue to work.

Auditability

Since the code is available, and version control is generally done in public, savvy open source software consumers can audit the code, or pay to have it audited by professionals. This means that you can be confident in the security and functionality promised by the software developers. You can also check for unwanted features, such as security backdoors and private-data siphoning. Open source software can be trusted because you can look for yourself.

Freedom

Richard Stallman, one of the pioneers of the free software movement, is credited with developing the four essential freedoms of software. In a nutshell, they state that users should be free to run, examine, distribute, and modify software as they see fit. In a laboratory setting, this can be extended to the data that is generated by the software. When it comes to free and open source software, your data is your data.

Interoperability

Open source also means open interfaces. The popular open source tools have documented interfaces and are designed for interoperability. Since you are free to modify the code, open source software can be extended for even more interoperability as your needs change.
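
As a toy illustration of what a simple, documented, platform-agnostic interface looks like in the lab, the sketch below reads one pipe-delimited, HL7 v2-style result (OBX) segment using nothing but the published delimiter conventions. It is not a full HL7 parser, and the sample segment is fabricated for illustration.

    # Simplified sketch: reading one HL7 v2-style result (OBX) segment
    # using only the published delimiter conventions. Not a full HL7
    # parser; the sample segment below is made up for illustration.

    segment = "OBX|1|NM|2345-7^Glucose^LN||95|mg/dL|70-99|N|||F"

    fields = segment.split("|")          # '|' separates fields in HL7 v2
    observation = fields[3].split("^")   # '^' separates components

    print("Segment type :", fields[0])
    print("Test code    :", observation[0])   # e.g., a LOINC code
    print("Test name    :", observation[1])
    print("Value        :", fields[5], fields[6])
    print("Ref. range   :", fields[7])

Because the format is openly specified, any platform that can split strings can consume it; no vendor library is required.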

Challenges to Consider

While there are many advantages to using open source software in your laboratory, it does come with some risks and challenges. Most notably, technical support for many open source projects is either not available or hard to come by. While in many cases this is not an issue, having professional support can be a key part of an IT strategy. Be careful to choose software that meets not only your functional requirements but your IT support needs as well. For more information regarding some of the challenges, read this article.


Other Resources

  1. opensource.com
  2. Open Source Initiative
  3. 10 Reasons Open Source is Good for Business
  4. GNU.org
  5. Free Software Foundation

Quality Costs in Laboratory Information Systems

Can you put a price tag on developing, deploying, and maintaining high-quality computer systems in your organization? How about estimating the cost of poor quality in your systems? In this post, we will explore the cost of quality of Laboratory Information Management Systems and related software for scientific data management.

Introduction

Quality costs have been studied since at least the 1950s. Before the concept was first formally characterized, people in business anecdotally understood that higher-quality products and services meant higher costs, but also had the potential to lead to better sales and higher profits. Naturally, trade-offs must be made to ensure that an organization can control costs while delivering products and services that meet the demands of its customers. The problem was that there were no concrete ways to measure the cost differences between good and poor quality. Academics and accountants soon began developing models for measuring these costs based on empirical evidence from different industries.

Quality Costs in Software

Cost of Quality accounting has proven to be a useful practice for measuring the effectiveness of quality management systems in the manufacturing industry, and it has been adapted and modeled for software development as well. However, it has not been widely incorporated into most software quality assurance groups in the informatics industry. In this article, we will explore some of the common sources of quality costs in software and how proactive measures can be taken to reduce them.

When examining Quality Costs, there are two general categories:

  • Conformance costs
  • Non-conformance costs

Conformance Costs

Conformance Costs, sometimes referred to as Achievement Costs or the Costs of Good Quality, are costs associated with maintaining good quality. Conformance Costs can be further categorized into appraisal and prevention costs. Appraisal costs are the costs associated with measuring, evaluating, or auditing products or services to assure conformance to quality standards and performance requirements. Prevention costs are the costs of all activities specifically designed to prevent poor quality in products and services.

The table below shows examples of common appraisal and prevention costs in software quality.

Prevention Costs         Appraisal Costs
Project Management       Unit Testing
Requirements Management  Integration Testing
Continuous Integration   External Audits
Functional Testing       Quality Assurance

Non-conformance Costs

Non-conformance costs, sometimes described as the costs of poor quality, are costs associated with remediating the effects of poor quality. These costs have two general sources:

  • Internal failure costs
  • External failure costs

The table below shows examples of internal and external failure costs.

Internal Failure Costs   External Failure Costs
Design Change Rework     Customer Support
Defect Management        Warranty Rework/Repayment
Retesting                Reputation Management
Requirements Rework      Market Loss

Examples of internal and external failure costs are obvious for any software development organization or department, but determining the best way to measure these non-conformance costs is not as apparent. Even if the costs can be meaningfully quantified, the more challenging problem is determining how to balance conformance costs with non-conformance costs. Anecdotal evidence tells us that it is usually easier to justify non-conformance costs to senior management, especially in young companies. It is human nature not to consider how to avoid an issue until after it has already affected you, and no business person wants to spend money on items that do not add value to the business.
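
To make this concrete, here is a minimal sketch of how categorized quality costs might be tallied and compared. All dollar figures are invented placeholders; a real program would pull them from time tracking, defect tracking, and accounting systems.

    # Minimal cost-of-quality tally. All dollar figures are invented
    # placeholders; a real program would pull these from time tracking,
    # defect tracking, and accounting systems.

    costs = {
        "prevention":       {"project management": 40_000, "requirements management": 25_000},
        "appraisal":        {"unit testing": 30_000, "external audits": 15_000},
        "internal failure": {"rework": 50_000, "retesting": 20_000},
        "external failure": {"customer support": 60_000, "warranty repayment": 35_000},
    }

    totals = {category: sum(items.values()) for category, items in costs.items()}
    conformance = totals["prevention"] + totals["appraisal"]
    non_conformance = totals["internal failure"] + totals["external failure"]

    print(f"Conformance (good quality): ${conformance:,}")
    print(f"Non-conformance (poor quality): ${non_conformance:,}")
    print(f"Poor quality share of total CoQ: "
          f"{100 * non_conformance / (conformance + non_conformance):.0f}%")

Even a crude tally like this gives the quality department a number to put in front of management, rather than an anecdote.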

It is the responsibility of the quality department in these organizations to clearly and effectively communicate the value of good quality, and how it is cheaper in the long run than bad quality.

Cost of Quality Program

Developing a quality cost program is the best way to account for all costs of quality, and the best way to show the value of good quality to the organization. For the program to be effective, it is important to choose the most appropriate metrics to gather, and to present the results to senior management and other stakeholders in a thoughtful and engaging way.

If your organization manages quality software, consider how the costs of quality are accounted for and managed.

References

  1. Knox, Stephen T. Modeling the Cost of Software Quality.
  2. Elbireer, Ali; Gable, Alicia R.; Jackson, J. Brooks. Lab Medicine.com.
  3. Wood, Douglas C. Principles of Quality Costs, Fourth Edition. 2012.
  4. ASQ Quality Costs.

Challenges Facing a LIMS Implementation Using an Agile Method

Introduction

Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. A time-boxed, iterative approach, Agile software development methodology promotes adaptive planning and encourages rapid and flexible response to change. It also provides a vehicle for evolutionary development and delivery of the product. Agile methodology is a conceptual framework that promotes tight interactions between functional groups throughout the development cycle.

The Agile approach to software development, configuration, and project planning has recently become very popular for implementing and developing complex systems, and for good reasons:

  1. It builds confidence in the product
  2. It puts something in the hands of users sooner rather than later
  3. Constant contact with stakeholders allows for speedy design changes and requirement refinement

Purpose

To determine and analyze some of the obstacles and issues that are likely to be encountered during a LIMS implementation using Agile methods.

Scope

The scope of this article is limited to the Scrum Agile method used in a LIMS implementation for a CLIA (or otherwise regulated) laboratory. It is not intended for GMP or IVD software solutions, though some of the information can be leveraged for those contexts.

Analysis

Below is a list of issues to be considered and mitigated when using an Agile method on a LIMS implementation.

Data Management and Continuity

Clinical laboratories collect an array of different data sets to conduct their business. Data sets include client and patient demographics, laboratory workflow data, inventory management transactions, QC data, intermediate and final patient results, and billing information. Data from the different data sets are often sourced from separate systems and interfaces, and must be integrated to ensure positive patient identification and result traceability.

Using an Agile method means that not all of these data sets will be implemented at the same time. There needs to be stakeholder buy-in for the planned data-flow gaps in early releases. It may also require interim methods for transferring data from the current system to the new one.

A traditional waterfall project management approach can require a data migration task as part of the cutover activities, the scope of which has been previously determined and approved. After the migration, all of the data agreed upon in the plan resides in the new system.

How do you address data continuity in an Agile method?
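
One possible pattern, sketched below under invented data set names and counts, is an explicit per-sprint cutover map: each release names the data sets it takes ownership of, and a reconciliation check must pass before a legacy source is retired for that data set.

    # Hedged sketch of per-sprint data cutover planning. Data set names,
    # sprint numbers, and record counts are hypothetical placeholders.

    cutover_plan = {
        1: ["client demographics", "patient demographics"],
        2: ["workflow data", "QC data"],
        3: ["results", "billing"],
    }

    def verify_migration(data_set, legacy_count, new_count):
        """Simple reconciliation check: counts must match before the
        legacy source is retired for this data set."""
        if legacy_count != new_count:
            raise RuntimeError(
                f"{data_set}: legacy has {legacy_count} records, "
                f"new system has {new_count}; keep legacy access open.")
        print(f"{data_set}: {new_count} records reconciled; gap closed.")

    # Example usage for sprint 1 (counts would come from both systems):
    for ds in cutover_plan[1]:
        verify_migration(ds, legacy_count=12_480, new_count=12_480)

The plan itself, not the code, is the point: each release's data gap is named, approved, and verifiably closed.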

Regulatory requirements affecting user stories and priorities

There are regulatory requirements and quality system guidelines that the laboratory follows in conducting its business. Certain requirements and guidelines are specific to electronic/computer-based systems. These items must be identified as such in the user stories to ensure that the system requirements derived for regulatory compliance are appropriately represented and complete in early sprints.
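
One lightweight way to keep such requirements visible is to tag backlog items with the regulation they trace to and pull tagged items forward. The sketch below assumes a hypothetical backlog structure; the example citations are illustrative, not a compliance determination for any particular lab.

    # Hypothetical backlog-tagging sketch: stories carry an optional
    # regulatory citation, and tagged stories are pulled into early sprints.

    backlog = [
        {"story": "Audit trail on result edits", "reg": "21 CFR Part 11 (if applicable)"},
        {"story": "Configurable report header", "reg": None},
        {"story": "Role-based access control", "reg": "CLIA 42 CFR 493"},
    ]

    # Regulatory stories first, so compliance-derived requirements land
    # in early sprints rather than being discovered late.
    prioritized = sorted(backlog, key=lambda s: s["reg"] is None)

    for item in prioritized:
        tag = item["reg"] or "no regulatory trace"
        print(f"- {item['story']} [{tag}]")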

Resource availability

One of the most effective features of agile methods is the rhythmic, continuous involvement of business stakeholders. With increasing costs and shrinking reimbursement rates, clinical laboratories are running increasingly lean when it comes to staffing. Finding availability for the subject matter experts operating in production can be daunting.

The timing of sprint cycles and meetings should be approved by the business prior to finalization of the project plan. If possible, the business should provide evidence that it has adequate resources to accommodate the proposed schedule.

Validation

Although CLIA does not require validation of computer systems in the way that the FDA does, CAP and other voluntary accrediting agencies require documentation of system testing before initial implementation and during change events. Therefore, many laboratories may choose to validate their CLIA-regulated systems.

A waterfall project management approach can require validation and/or user acceptance testing of the computer-based system, either in whole or in part, prior to release to production. Upon approval of a given sprint implementation, a validation plan must be created, approved, and executed prior to release to production. The project timeline should account for validation effort.

One suggested approach to validating a system developed using an Agile method is to write an “Agile” validation plan at the beginning of the project that calls for a risk assessment at the planning phase of each sprint cycle. The results of the risk assessment are used to determine which testing activities (unit testing, integration testing, and regression testing) to perform for the upcoming sprint. As part of the project plan, the validation plan can also describe specific milestones of “doneness” at which to perform full or partial system validations.
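
As a hedged illustration of that risk-to-testing mapping, the sketch below scores each sprint item and selects testing activities by threshold. The scoring scale and thresholds are invented; a real validation plan would define and approve its own.

    # Illustrative risk-to-testing mapping for sprint planning. The scoring
    # scale and thresholds are invented; a real validation plan would
    # define and approve its own.

    def risk_score(severity, likelihood):
        """Severity and likelihood each rated 1 (low) to 3 (high)."""
        return severity * likelihood

    def testing_activities(score):
        if score >= 6:
            return ["unit testing", "integration testing", "regression testing"]
        if score >= 3:
            return ["unit testing", "integration testing"]
        return ["unit testing"]

    sprint_items = [
        ("result calculation change", 3, 2),   # patient-impacting: high severity
        ("report cosmetic update", 1, 1),
    ]

    for name, severity, likelihood in sprint_items:
        score = risk_score(severity, likelihood)
        print(f"{name}: risk {score} -> {', '.join(testing_activities(score))}")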

When planning the project, it may be useful to discuss the level of effort in validating the proposed requirements in a sprint to determine the proper scope of the sprint.

Conclusion

Traditionally approached LIMS implementation projects are prone to pitfalls that can leave them delayed, over budget, and ultimately failed. Agile methodology attempts to diminish the likelihood of these hazards by maintaining close engagement of users and project members and by using iterative, adaptive project planning. However, Agile methods can introduce new challenges when implementing a system in a regulated, resource-lean environment such as a clinical laboratory.

The challenges described in this article may deter many project managers from attempting to use such an approach, especially the first time. As in life, identifying the challenges, planning for them, and facing them head on can lead to great success.

IN NO EVENT SHALL LAB INSIGHTS, LLC BE LIABLE, WHETHER IN CONTRACT, TORT, WARRANTY, OR UNDER ANY STATUTE OR ON ANY OTHER BASIS FOR SPECIAL, INCIDENTAL, INDIRECT, PUNITIVE, MULTIPLE OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING FROM LAB INSIGHTS, LLC SERVICES OR USE OF THIS DOCUMENT.

A Perspective on the FDA’s IVDMIA Draft Guidance

Introduction

The completion of the Human Genome Project marks a defining moment in the history of medical science. New technologies in molecular diagnostics and computational biology have furthered our understanding of the transcriptome and proteome at a rate previously unimaginable. Since then, a flood of gene expression profile tests and other biomarker panels has been introduced to the clinical diagnostics market. These tests were all developed in a similar manner: by measuring and comparing the abundance of a group of specific gene transcripts, proteins, or other biomarkers in populations of patients or tissues with a known pathology or prognosis. This comparative analysis results in a predictive algorithm that can determine the probability of disease or prognosis in unknown samples.

On July 26, 2007, the Food and Drug Administration (FDA) published a draft guidance to address this emerging field of In Vitro Diagnostic Multivariate Index Assays (IVDMIAs). An IVDMIA is defined by the FDA as a device that:

  1. Combines the values of multiple variables using an interpretation function to yield a single, patient-specific result (e.g., a “classification,” “score,” “index,” etc.), that is intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment or prevention of disease, and
  2. Provides a result whose derivation is non-transparent and cannot be independently derived or verified by the end user.

In response to the draft, several organizations filed their products for de novo 510(k) clearance, while most waited for a final guidance. The topic has since been shelved by the FDA as it considers broadening the guidance to govern complex Laboratory Developed Tests (LDTs) currently overseen under CLIA. Although the final guidance has been postponed, the final version will most likely still contain most of the items identified in the draft.
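
To illustrate the kind of “interpretation function” this definition describes, here is a toy multivariate index that combines several biomarker values through a logistic function. The markers, weights, and cutoff are entirely fabricated and carry no clinical meaning.

    # Toy multivariate index sketch: combines several biomarker measurements
    # into a single score via a logistic function. Marker names, weights,
    # and the cutoff are fabricated for illustration only.

    import math

    weights = {"marker_a": 0.8, "marker_b": -1.2, "marker_c": 0.5}
    intercept = -0.3

    def index_score(measurements):
        linear = intercept + sum(weights[m] * v for m, v in measurements.items())
        return 1 / (1 + math.exp(-linear))   # logistic: maps to 0..1

    patient = {"marker_a": 1.4, "marker_b": 0.6, "marker_c": 2.1}
    score = index_score(patient)
    print(f"Index: {score:.2f} -> {'elevated' if score > 0.5 else 'not elevated'}")

In a real IVDMIA the weights are proprietary and the derivation is opaque to the end user, which is precisely what triggers the second prong of the definition.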

Purpose

The purpose of this document is to summarize the draft guidance as it pertains to Laboratory Developed Tests and discuss its implications at an independent clinical laboratory.

Scope

The scope of this document is limited to studying the draft guidance as it was written in 2007. It does not contain any commentary regarding any potential changes to the guidance as a response to industry comments or a changing regulatory climate.

Analysis

Approach

The FDA continues its effort to administer a “least burdensome approach” for IVDMIAs. The guidance states that, although there is potential for complicated and emerging technologies to be included in IVDMIA submissions, the classification of devices will be based on the risks associated with intended use. (The guidance includes an example: a test that indicates the likely prognosis of a cancer would most likely be considered Class II, whereas a test that indicates the therapy regimen would be considered Class III.) The FDA acknowledges that the nature of the disease states analyzed by these new tests will most likely require a classification of II or III.

The guidance states that safety and effectiveness determination should include “review of the performance of the entire system, including the accurate measurement of the input variables, directions for use, and expected analytical or clinical performance, rather than a review of only certain subcomponents of the test” (i.e. just the algorithm). This approach is consistent with classification determination of other devices, like clinical chemistry and clinical toxicology test systems.

Premarket Regulation

In the spirit of using a “least burdensome approach”, the FDA proposes a flexible approach to safety and effectiveness determination. Although a prospective study is preferred, the administration will also consider archived samples and/or retrospective studies, as long as the study design, sample composition, sample selection and sample storage processes reflect the intended use and intended population for use of the device.

In lieu of properly designed retrospective studies, IVDMIA manufacturers may also file for “Investigational Use Only” labeling as defined by 21 CFR 809.10 and apply for an investigational device exemption (IDE). Other labeling requirements are to be consistent with those regarding other devices.

Postmarket Regulation

Consistent with other devices, the FDA states that IVDMIAs are subject to the Quality System regulation described in 21 CFR Part 820; however, the FDA intends to continue to exercise enforcement discretion for CLIA-regulated laboratories until a final guidance is approved for laboratory developed tests.

The Administration states that all IVDMIAs must comply with the Medical Device Reporting standards of 21 CFR Part 803. Also, laboratories that run IVDMIA test systems are considered “user facilities” and must submit serious injury and device malfunction reports as such.

Enforcement

In an effort to reduce the effect on innovation costs in the market, the FDA intends to exercise enforcement discretion with respect to all regulatory requirements for currently marketed, laboratory-developed IVDMIAs for 12 months following publication of the final guidance document. To encourage early adoption of the new standards, the FDA intends to exercise enforcement discretion for an additional 6 months for any currently marketed, laboratory-developed IVDMIA whose manufacturer submits a 510(k) or PMA within the initial 12-month period following publication of the final guidance.

Conclusion

While the debate on whether Laboratory Developed Tests are medical services (and should be regulated by CMS under CLIA) or medical devices (and should be regulated by the FDA as IVDs) seems to be coming to a head, LDT developers and IVD manufacturers brace for a dramatic shift in the regulatory environment. The FDA has used “enforcement discretion” in exerting what it sees as its legal authority to regulate medical devices, but many analysts believe that the time will soon come when enforcement will be applied to some areas currently left up to CLIA. Multivariate index assays in particular have drawn attention in recent years, enough to spur publication of a draft guidance by the FDA.

This emerging development may provide some interesting opportunities for device manufacturers. Since they have many systems already in place to govern compliance with FDA device manufacturing requirements for new and existing products, these systems can and should be leveraged to bring their own CLIA laboratories into compliance for a potential 510(k) or PMA submission when and if the time comes. Other leaders in IVDMIA laboratory testing have already begun to retrofit their current processes and build their new products with the FDA in mind, but without the institutional regulatory architecture, they will be at a disadvantage. Life Technologies has the opportunity to create IVDMIA products with compliance that surpasses the expectations of regulators and customers while controlling costs.

References

  1. 2007. Food and Drug Administration. Draft Guidance for Industry, Clinical Laboratories, and FDA Staff: In Vitro Diagnostic Multivariate Index Assays. http://www.fda.gov
  2. 2011. Smith, Katie M. Exploring FDA-Approved IVDMIAs. http://www.ivdtechnology.com/article/exploring-fda-approved-ivdmias
  3. 2012. Weiss, Ronald L. The Long and Winding Regulatory Road for Laboratory-Developed Tests. American Journal of Clinical Pathology, 138, 20-26.

Electronic Document Management: Common Pitfalls

Originally written March 21, 2014

Introduction

The documentation of an organization’s Quality Management System (QMS) is required by many regulatory bodies and standards, including the US FDA and ISO 9001. Properly implemented, this documentation allows for clear, traceable communication of an organization’s quality standards, processes, and other critical information. Because processes and their related documents need to change over time, document control is used to ensure:

  1. Traceability of changes in documentation
  2. Appropriate and adequate approval of changes
  3. Old versions of the documentation are no longer available for use
  4. New versions of the documentation are readily available for use

A document management system (DMS) is any tactic used to organize and administer access to documents. It is a tool required to execute document control in most Quality Management Systems. Since computer-generated documents have become the norm for most organizations, there has been a push to move away from traditional paper-based DMSs and toward electronic systems (eDMS).

Purpose

The purpose of this article is to explore some of the common mistakes made when implementing an electronic document management system that is to be used for document control.

Scope

The scope of this document is limited to electronic document management systems used for document control. It will not cover other content management systems or other uses for eDMSs.

Analysis

Requirements of a Document Management System

Workflow

At its simplest, most document control procedures follow the same basic workflow:

  1. Document Change Order (DCO) Request — Initiation of the change, describing what change is needed and why.
  2. Change Execution — The changes are made to the document. This is often a collaborative effort, where the changes of one or more contributors are collected and reconciled until a version is ready for approval.
  3. DCO Review — Acceptance of the change request by the manager or process principal results in the new version being released for use. Rejection of the request results in the current version persisting in production use. The decision point must be recorded.
  4. Historical Archiving — Once a new version is approved, the previous version is stored and taken out of use. Obsolete documents are archived in a similar manner.
  5. Change Notification — When the controlled document describes a process, policy, or procedure, the staff and equipment following the instructions must be identified and notified that a new version is in effect, then retrained or reconfigured accordingly.

Statuses and Access Rights

Following the aforementioned workflow, there are usually three statuses in a document’s lifecycle, each with different access rights associated with it (a minimal state-machine sketch follows the list):

  • Draft — Open for both viewing and editing by few or many, depending on the organization.
  • Released/Approved — Open for viewing by many, editing by almost no one. A system administrator or process lead may have access to edit the document for cosmetic changes. Easy and appropriate access is vital for adherence to written policies and procedures.
  • Archived/Obsolete — Not viewable by most and not editable by anyone. A system administrator or process lead may retain view access.
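
The lifecycle above maps naturally onto a small state machine. The sketch below is a minimal, hypothetical model of the statuses and allowed transitions; a real eDMS would add approvers, electronic signatures, and an audit-trail entry for every transition.

    # Minimal state-machine sketch of a controlled-document lifecycle.
    # A real eDMS would add approvers, electronic signatures, and an
    # audit-trail entry for every transition.

    ALLOWED = {
        "draft":    {"released"},    # DCO review accepted
        "released": {"archived"},    # superseded or obsoleted
        "archived": set(),           # terminal: no edits, limited viewing
    }

    class ControlledDocument:
        def __init__(self, doc_id):
            self.doc_id = doc_id
            self.status = "draft"

        def transition(self, new_status):
            if new_status not in ALLOWED[self.status]:
                raise ValueError(
                    f"{self.doc_id}: cannot go from {self.status} to {new_status}")
            self.status = new_status

    sop = ControlledDocument("SOP-001")   # hypothetical document ID
    sop.transition("released")            # DCO review accepted
    sop.transition("archived")            # a newer version was approved

Encoding the transitions explicitly is what prevents, for example, an archived document from quietly re-entering circulation.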

Identification of Controlled Documents

With the proliferation of typewriting and word processing, one could no longer rely on recognizing the original author’s handwriting to identify the authentic, original controlled document. It is vitally important to be able to distinguish the current, released version of a controlled document from its predecessors, copies, and altered iterations.

Record Keeping

The final aspect of a document management system is establishing traceability of document identification, change requests and approvals, and document history logs. The importance of such a log becomes apparent during troubleshooting of an issue. During a historical review, it may be critical to identify exactly which version of one or more procedures was in effect at the time of the issue.
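
As a small illustration of that kind of lookup, the sketch below logs each released version with its effective date and a content fingerprint, then answers “which version was in effect on this date?” Document IDs, dates, and contents are hypothetical.

    # Hypothetical version-history sketch: each release is logged with its
    # effective date and a content hash, so an investigator can ask which
    # version was in effect on a given date and verify its content.

    import hashlib
    from datetime import date

    history = [  # (version, effective date, document content)
        ("1.0", date(2013, 1, 15), "Centrifuge samples for 10 minutes..."),
        ("2.0", date(2013, 9, 2),  "Centrifuge samples for 12 minutes..."),
    ]

    def version_in_effect(on_date):
        effective = [h for h in history if h[1] <= on_date]
        if not effective:
            return None
        version, _, content = max(effective, key=lambda h: h[1])
        fingerprint = hashlib.sha256(content.encode()).hexdigest()[:12]
        return version, fingerprint

    print(version_in_effect(date(2013, 6, 1)))   # -> ('1.0', '<fingerprint>')

The fingerprint also gives auditors a way to confirm that a retrieved copy matches the version that was actually in effect.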

Paper-based Systems

For the majority of the last century, the only way to document information was by writing, typing, or printing on paper. Access was controlled physically by locking binders in cabinets or rooms. Change requests were processed in folders or binders with a change request form as a coversheet and the proposed new version behind it. The signed copies of the DCOs were catalogued for record-keeping, and a blue wet-ink signature often served as controlled-copy authentication (copies made with a black-and-white copier would show the signature in black). Stamps were often used for documenting the status of the document, while meetings (later, emails) were the only way to notify others of the change. If you wanted a copy of the controlled document, you would have to ask the “Doc Control Guy” to get it out and make you a copy or sign it out to you.

This manual and tedious process is only as effective as the staff who are asked to follow it, much to the dismay of quality managers and document control analysts. Audits and inspections often show that uncontrolled copies are in circulation because the teams did not want to be slowed down by document control or limited access to their documents. Copies of old versions of a document are often found in drawers or, worse, in the hands of the technician.

Large, older companies can end up with giant rooms (even buildings) of document archives. Finding an old copy for a particular date range could take an entire day. Thankfully, computers came around, so all of these problems are automatically solved, right?

Switching to Electronic Systems

One of the first major changes in document control came as the price of electronic data storage on locally hosted file servers started to fall. Now an entire building of historical data can fit in one server room, but many of the other issues mentioned above continue.

Why?

The biggest mistake organizations make when switching to eDMSs is not changing their approach to document control. Many times, organizations simply electronify their paper-based system. Instead of a hard copy locked in a cabinet, they now have a soft copy locked on a hard drive or server. If you are still relying on one or a few people to give you access to the controlled document, you have not alleviated the issue of limited access. If anyone can take a copy of a controlled document and alter it without being able to distinguish it from the controlled version, you have lost control over your documents. If you print documents, then scan them and put them into document control, you did not save any trees; you have also lost the ability to search the content of the document and to change the actual controlled document for the next revision.

The second biggest mistake organizations make is not effectively implementing eDMS solutions. Companies can spend thousands of dollars on a solution that does not meet all of the criteria listed above because some other feature wowed them. Sure, you can make pretty fonts or collaborate with 50 people at a time, but if there is no approval workflow, you still have to use your manual method. The same can be true for companies who spend a great deal of money on licenses for a perfectly suited solution but don’t spend what it takes to implement the functionality that makes the software so great.

Conclusion

Like any DMS, an electronic document management system is only as effective as the document control process it supports. A document control process is only as good as the team that designs and enforces it. One should not expect to change the quality and/or ease of their organization’s document control process simply by changing the way the data is stored.

If you want a more efficient and effective process, change the process, and find tools that allow you to implement the change. You do not have to spend a fortune to do so, either. There are open-source solutions available, as well as simple tools your organization already uses, that can greatly improve your system (if you know how to use them).

An organization’s documents should be valued as an asset and a liability. Learn from them. Use them. Protect them.

References

  1. Guidance on the Documentation Requirements of ISO 9001:2008

Lab-Developed Tests and the FDA: A road forward

Introduction

Advancements in personalized medicine have the potential to revolutionize health care. Genomics and molecular biology technologies are vital in the development of theranostics and other predictive and preventative areas of personalized medicine. However, few of these laboratory-developed tests (LDTs) are currently regulated by the Food and Drug Administration (FDA) for analytical and clinical efficacy. Currently, the only oversight requirements governing LDTs are the laboratory requirements prescribed in the Clinical Laboratory Improvement Amendments of 1988 (CLIA).

While the FDA assumes authority for regulating LDTs, and is currently exercising enforcement discretion, it has indicated it will be increasing oversight of LDTs in the near future. FDA Commissioner Margaret A. Hamburg is now renewing FDA’s call for more active FDA regulation of LDTs and touting the Agency’s risk-based framework for regulating LDTs that is “under development.”

On July 9, 2012, President Obama signed into law the bipartisan FDA user-fee bill, the Food and Drug Administration Safety and Innovation Act (FDASIA). For the next five years, the Act prohibits the FDA from issuing guidance on LDT regulation unless the Agency provides a 60-day advance notice to the House Energy and Commerce Committee and the Senate Health, Education, Labor, and Pensions Committee of its intent to take such action. This law all but puts a clock on the impending shift in the regulation of these tests.

Several stakeholders have submitted proposals to the FDA on risk-based strategies to facilitate tighter regulation of LDTs; among these are the College of American Pathologists (CAP) and the Advanced Medical Technology Association (AdvaMed). We have selected these proposals as the ones most likely to be adopted.

Purpose

This article is a comparative study of the proposals by CAP and AdvaMed. It will analyze the proposed risk management strategies and comment on their likely impact on the clinical diagnostics industry should the FDA choose to accept their proposals. In addition, this article will propose a best-of-breed strategy to use as a template for any high-complexity CLIA laboratory in their assessment of the pending regulatory changes.

Scope

Only the proposals presented by the College of American Pathologists and AdvaMed as referenced below were considered when designing the best-of-breed strategy.

Analysis

The CAP Proposal Summary

The CAP proposes a risk-based model employing a public-private partnership to address oversight of LDTs. In their proposal, third-party accreditors and inspectors would oversee and monitor standards for low- and moderate-risk LDTs; high-risk LDTs would be reviewed directly by the FDA. We recognize that the CAP has a vested interest in these recommendations, because it is the accrediting agency best equipped to thrive from a public/private partnership; however, it is a risk-balanced approach that also addresses the FDA’s current lack of resources to regulate the LDT industry. The regulatory flexibility proposed by the CAP would encourage innovation of new diagnostic and predictive tests to promote and protect public health. Each laboratory would self-assess its LDT classification based on the FDA’s criteria for low-, moderate-, and high-risk tests. The determination would be verified by the laboratory’s certifier and/or accreditor (e.g., CAP, COLA, AABB). Appendix A is a summary of the proposed tiers.

The proposal also states that there should be a harmonization between CMS’s CLIA standards and the FDA’s when it comes to quality systems management. Since the FDA’s quality systems management standards are more robust than those of CLIA, our opinion is that CLIA would most likely adopt the FDA standards, where applicable. In addition, the CAP proposes that certain direct-to-consumer tests that are currently unregulated would fall under CLIA and have to follow the same guidance as other CLIA laboratories.

The AdvaMed Proposal Summary

Like the CAP proposal, AdvaMed suggests a harmonization of CLIA and FDA requirements, and that all clinical laboratories should be subject to CLIA regulations. But unlike the CAP, AdvaMed proposes that the FDA should oversee the safety and effectiveness of all diagnostic tests, whether they are made in a laboratory or by a manufacturer, because they all have the same risk/benefit profile for patients. Similar to the CAP proposal, AdvaMed suggests a risk-based, tiered approach for oversight focus. While well-standardized, low-risk tests could be exempted from FDA premarket review, novel biomarkers using new technology could face the scrutiny of a Tier III FDA review. In between these two extremes, AdvaMed proposes using the existing three-tiered FDA definitions to categorize tests, where risk assessment and mitigation ability are used to further stratify classification. Appendix B is a summary of their approach.

The AdvaMed proposal also contains a risk decision tree to aid in determining tier classification, along with risk assessment points of concentration and possible mitigating factors for those risk points.
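
As a hedged paraphrase of the triage logic summarized in the appendix, the sketch below encodes the two questions the matrix turns on: the consequence of a wrong result and whether the methodology is independently verifiable. This is our reading of the published summary, not AdvaMed’s actual decision tree.

    # Hedged paraphrase of the triage logic summarized in the appendix:
    # classification turns on the consequence of a wrong result and whether
    # the methodology is independently verifiable. This is our reading of
    # the published summary, not AdvaMed's actual decision tree.

    def triage(serious_harm_possible, methodology_verifiable):
        """serious_harm_possible: could an incorrect result or interpretation
        lead to serious morbidity/mortality?
        methodology_verifiable: is the method well understood and
        independently verifiable (e.g., via inter-laboratory comparison)?"""
        if not serious_harm_possible:
            return "low risk: lab validates; accreditor verifies at inspection"
        if methodology_verifiable:
            return "moderate risk: accreditor reviews validation before launch"
        return "high risk: FDA review required before clinical use"

    print(triage(serious_harm_possible=True, methodology_verifiable=False))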

Conclusion

While no one can accurately predict the political climate over the next 5 years, we can say that there is, and will continue to be, significant pushback from the nation’s largest clinical diagnostics companies against dramatic reforms in the CLIA/FDA regulatory areas such as those proposed by AdvaMed. It would cost these companies millions, maybe hundreds of millions, of dollars to validate their systems to these new standards. It has the potential to stifle innovation and patient access to cutting-edge technology. Also, from a logistics point of view, the FDA does not currently have the resources to regulate the entire LDT industry.

The risk-based private/public partnership proposed by the CAP addresses both the logistics and the economic concerns of this issue while keeping patient safety in mind. Once an LDT is considered high risk (as defined by the CAP proposal), it would then seem appropriate to classify it as described in the AdvaMed proposal.

At Lab Insights, LLC, our current focus in the clinical diagnostics area is the development of high-complexity, moderate- to high-risk tests run on established and new technologies. Many tests in our focus utilize multivariate algorithms, which disqualifies them from any FDA tier but Tiers II and III. It is most likely that when/if the FDA utilizes a risk-based approach to examine Laboratory-Developed Tests, the tests developed here will undergo a rigorous level of review.

To assess the level of regulation anticipated for an FDA submission, we propose the adoption of the modified AdvaMed decision tree shown in Appendix C.

Under CAP, the following characteristics must be determined and approved by a certified laboratory director for high-complexity testing as defined by CLIA (42 CFR 493.1443):

  • Analytic Accuracy and Precision
  • Analytic Sensitivity
  • Analytic Specificity
  • Analytic Interfering Substances
  • Reportable Range

CAP Guidelines should be followed to comply with this regulation.

Also, under CAP, computer systems must be evaluated for:

  • Computer Facility
  • LIS/Computer Manual documentation
  • Hardware and Software testing and documentation
  • Training
  • System Maintenance
  • System Security
  • Patient Result verification

References

  1. 2012. College of American Pathologists. 2012 CAP Checklists. http://www.cap.org
  2. 2010. College of American Pathologists. Proposed Approach to Oversight of Laboratory Developed Tests. http://www.cap.org
  3. 2012. Advanced Medical Technology Association. Risk-based Regulation of Diagnostics. www.advamed.org
  4. 2013. 42 CFR 493 – Laboratory Requirements. http://www.gpo.gov/fdsys/pkg/CFR-2003-title42-vol3/xml/CFR-2003-title42-vol3-part493.xml
  5. 2007. FDA Draft Guidance for Industry, Clinical Laboratories, and FDA Staff – In Vitro Diagnostic Multivariate Index Assays

Appendix

Summary table of the AdvaMed Proposal Triage decision matrix

Low Risk — the consequence of an incorrect result or incorrect interpretation is unlikely to lead to serious morbidity/mortality.
  Determining factors: The test result is typically used in conjunction with other clinical findings to establish or confirm a diagnosis. No claim is made that the test result alone determines prognosis or direction of therapy.
  Oversight: The laboratory internally performs analytical validation and determines the adequacy of clinical validation prior to offering the test clinically. During normally scheduled inspections, the accreditor verifies that the laboratory performed appropriate validation studies.

Moderate Risk — the consequence of an incorrect result or incorrect interpretation may lead to serious morbidity/mortality, AND the test methodology is well understood and independently verifiable.
  Determining factors: The test result is often used for predicting disease progression or identifying whether a patient is eligible for a specific therapy. The laboratory may make claims about clinical accuracy.
  Oversight: The laboratory must submit validation studies to the CMS-deemed accreditor for review, and the accreditor must determine that there is adequate evidence of analytical and clinical validity before the laboratory may offer the test clinically.

High Risk — the consequence of an incorrect result or incorrect interpretation could lead to serious morbidity/mortality, AND the test methodology is not well understood or is not independently verifiable.
  Determining factors: The test is used to predict risk of, progression of, or patient eligibility for a specific therapy to treat a disease associated with significant morbidity or mortality, AND the test methodology uses proprietary algorithms or computations such that the test result cannot be tied to the methods used or inter-laboratory comparisons cannot be performed.
  Oversight: The laboratory must submit the test to the FDA for review prior to offering it clinically. CMS and the accreditor determine compliance.

Summary table of the AdvaMed Proposal Tiered decision matrix

New Technology, New (use of) Biomarker:
  No predicate devices (i.e., novel or high-risk); little or no clinical literature; requires analytical and clinical validation; manufacturers and laboratories subject to premarket review.
  Tier III: PMA or de novo 510(k)

New Technology, Established (use of) Biomarker:
  Sufficient clinical evidence to assess safety and effectiveness of the biomarker; requires analytical validation of the new method on clinical specimens; review level separated by FDA experience with the technology.
  Tier II: traditional or de novo 510(k)
  Tier I: traditional or streamlined 510(k), possible labeling review

Established Technology, New (use of) Biomarker:
  Could have a predicate device; little or no literature on the biomarker, but literature and/or FDA experience with the technology platform; moderate-risk products; manufacturers and laboratories subject to premarket review.
  Tier III: PMA or de novo 510(k)
  Tier II: traditional or de novo 510(k)

Established Technology, Established (use of) Biomarker:
  Sufficient clinical evidence to assess safety and effectiveness of the biomarker; submission of labeling or data summarizing performance characteristics; self-certification/declaration of conformity with standards.
  Tier II: if moderate risk associated with use (traditional 510(k))
  Tier I: if low risk associated with use (labeling review or streamlined 510(k))
  Tier 0: if risk is low and managed, labeling review and/or consider exempt

Proposed “Triage-then-Tier” decision tree by Lab Insights, LLC

[Decision tree figure]