
Thursday, July 1, 2010

How can eClinical deliver on the vision?

Although EDC has tightened its grip over the last five years, replacing paper as the primary medium for capturing clinical trial data, the value of EDC is often not being realized.

I have some theories as to the root causes - several of which I have witnessed first hand. The question is: what can be done to tackle these barriers to progress?

PROCESSES

Such a simple issue on the surface, yet so often the primary reason for failing to achieve ROI. Here are my six reasons why poor processes stand in the way of efficient clinical trials:

  1. Processes are not re-written to take advantage of the end-to-end impact of EDC.
  2. Processes are left generic, and therefore fail to leverage the capabilities that specific EDC systems offer over others.
  3. Process Silos - rather than having a set of processes that work efficiently across Clinical Research, processes are developed within the individual departments - Protocol Authoring, Study Build, Data Management and Programming / BioStats. This means that work carried out in Study Build can be detrimental to what happens later in Data Management and BioStats.
  4. Process Bloat - Processes are developed to such a level of detail that they become a hindrance to getting the job done. Processes should not instruct the reader on the use of a system's user interface. Processes should guide, but not replace, the skills and common sense of trained staff.
  5. Process Training - Writing the processes is the easy part; communicating them in a way that staff understand and follow is where it becomes tough. eLearning, wikis and other such tools can turn endless tedious reading into 'knowledge on demand'.
  6. Processes are good for the business first and foremost, and secondly they help a company comply with rules and regulations. Too often, processes are seen only as a necessary evil in a regulated environment.

CONSERVATISM

When implementing EDC, organizations often work on the basis of: better to keep doing what we did before, in addition to what we now do with EDC.

So it may be the case that EDC is programmed by Data Management to check virtually all the data, but, just in case, multiple subsequent reviews are carried out before the datapoints - and therefore the database - are locked. Studies have shown that a negligible amount of data is modified after it has first been entered. Is it really efficient to check, re-check and check data yet again, if the results have such a minimal effect on the quality of the resulting data?

CRA monitoring duties related to the clinical data are reduced by around 75% in comparison to paper studies. No more manual logging of enrollment, patient visits, SDV and such things - all of this should be captured automatically by the EDC system and communicated through reporting or a status portal. Repeating it by hand is simply a waste of effort.

STANDARDS

The use of standards is a popular topic. CDISC has made great leaps. However, the savings actually realized from applying standards have often yet to be seen. Poor tools should take some of the blame, but so should poor work practices. If 50% of a study is equivalent to a previous study, then the Design, Development, Testing and Programming effort should show savings proportional to that degree of re-use. In reality, 50% re-use often yields only a small percentage saving.

PROFIT/POSITION PROTECTION

This applies to sponsor companies where big departments have built up, as well as to CRO companies that maintain the perception that the old, time-consuming work methods are still needed. Put simply, if it takes longer or costs more to implement and execute a clinical trial using EDC, then something is wrong. I appreciate that exceptions exist - for example, very small studies. However, even in these circumstances, standards can create savings and improve quality.

Within Sponsor companies, departments can be keen to protect their existence, even if this is at the cost of the company and efficient R&D as a whole.


SOLUTIONS?

Increasingly, I feel we will see value- and risk-based assessments. For example, if the BioStats / Programming group has traditionally re-checked the data delivered from Data Management, then unless they can demonstrate that this effort has a sufficiently significant impact on the final data, they will not be performing that task in the future. Similar situations will occur across other areas.


I would be interested in hearing about others' experiences of how eClinical systems fail to deliver either quality improvements or cost savings.

Thursday, April 1, 2010

Linking SDTM / ODM for better FDA Submissions

If this is not already underway, I think it is time we examined how we can do a better job of combining SDTM data submissions with ODM data and metadata. The following commentary will hopefully prompt some comments, and ideas as to next steps.

The Challenge

CDISC SDTM – the standard for the Submission of Clinical Data for New Drug Applications - is challenging to work with for a number of reasons.  Data is structured per Domain, rather than per CRF – quite rightly in my view – however, this re-modeling does create a number of issues that deserve to be addressed.

First of all, there is getting the data into this format in the first place. In a typical EDC system, you have data captured across many pages. Often these pages contain one or more domains' worth of data. Data is presented in friendly CRF-page-like formats, with quick links to audit trails, queries, comments and so on. In the SDTM world, things are not quite as friendly. You have long lists of records structured by domain. You cannot see the audit trail. You struggle to relate the comments. The queries may not even exist.
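To make the re-modeling concrete, here is a tiny Python sketch of how one friendly Vital Signs eCRF page ends up as several rows in the SDTM VS domain. The field names on the page are illustrative, not taken from any particular system.

    # Sketch only: one Vital Signs eCRF page (wide, per visit) pivoted into SDTM VS
    # domain records (long, one row per test). Field names are illustrative.

    crf_page = {
        "subject": "1001",
        "visit": "WEEK 2",
        "date": "2010-03-15",
        "systolic_bp": 120,
        "diastolic_bp": 80,
        "pulse": 64,
    }

    # Map each captured field to an SDTM vital signs test code and name.
    test_map = {
        "systolic_bp": ("SYSBP", "Systolic Blood Pressure"),
        "diastolic_bp": ("DIABP", "Diastolic Blood Pressure"),
        "pulse": ("PULSE", "Pulse Rate"),
    }

    vs_domain = []
    for seq, (field, (testcd, test)) in enumerate(test_map.items(), start=1):
        vs_domain.append({
            "USUBJID": crf_page["subject"],
            "VSSEQ": seq,
            "VSTESTCD": testcd,
            "VSTEST": test,
            "VSORRES": crf_page[field],
            "VISIT": crf_page["visit"],
            "VSDTC": crf_page["date"],
        })

    # One friendly CRF page has become three context-free domain rows.

Going the other way, from those rows back to the page with its audit trail and queries, is exactly what is lost today.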

Now, ok, maybe the audit trail and query log should be of no significant relevance to the medical reviewers... They are supposedly just further evidence that the data cleaning process has taken place, and maybe wanting them is a carry-over from working with paper CRFs for decades. However, I would argue that data is never 100% clean. It is merely sufficiently clean to merit safe statistical analysis. Reviewers may feel that, when in doubt, they would like to see the context behind the data being recorded, and therefore see the query and audit logs.

So, after that short ramble, how could we make life better for reviewers?

Combining ODM with SDTM

Well, first of all, why not leverage the standards we already have for clinical data – ODM and SDTM – but combine them in a more effective way to offer  SDTM directly linked to ODM?

What I mean by this – and I am sure XML4Pharma will point out that this has been suggested previously – is that we extend the ODM specification to accommodate SDTM Domains within the ODM spec, AND that we provide the means to link the SDTM domain content with the associated eCRF data & metadata in the present ODM.

To the end user, this would provide a mechanism to switch between a tabular SDTM view of the data and an eCRF view, with ready access to the audit trail and queries, as well as a better sense of the context in which the data was captured.

Easy to achieve?

Sponsors struggle to create SDTM today.  However, I am not certain that this is due to underlying faults in SDTM itself.  I think the tools will mature, the standards will mature, and sponsor companies will simply get better at it.

Creating the related ODM is also not too difficult. Any EDC vendor that wishes to be credible in the marketplace needs to be able to offer data and metadata in the CDISC ODM format.

Programmatically linking the SDTM and ODM is probably the hardest part.

In theory, you could have a situation where every field on an SDTM record belongs to a separate ODM page instance. The problem can be solved, but it is not going to be easy.
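As a rough illustration of what the linkage could look like, here is a Python sketch. The "source_refs" element is a hypothetical extension - nothing like it exists in ODM or define.xml today - and the OIDs are invented. The point is simply that each SDTM record would need to carry keys that resolve back to the ODM item instances it was derived from.

    # Sketch only: a toy index that resolves one SDTM VS record back to the ODM
    # form instances it came from, so a reviewer could jump from the tabular row
    # to the eCRF view, audit trail and queries.

    # ODM-side data, keyed the way ODM identifies an item instance:
    # (SubjectKey, StudyEventOID, FormOID, FormRepeatKey, ItemOID)
    odm_items = {
        ("1001", "SE.WEEK2", "F.VS", "1", "IT.SYSBP"): {
            "value": "120", "audit": ["entered 2010-03-15 by site101"], "queries": [],
        },
        ("1001", "SE.WEEK2", "F.VS", "1", "IT.VSDAT"): {
            "value": "2010-03-15", "audit": ["entered 2010-03-15 by site101"], "queries": [],
        },
    }

    # An SDTM record extended with explicit references back to its ODM sources
    # ("source_refs" is the hypothetical extension).
    sdtm_record = {
        "DOMAIN": "VS", "USUBJID": "1001", "VSTESTCD": "SYSBP", "VSORRES": "120",
        "source_refs": [
            ("1001", "SE.WEEK2", "F.VS", "1", "IT.SYSBP"),
            ("1001", "SE.WEEK2", "F.VS", "1", "IT.VSDAT"),
        ],
    }

    def ecrf_context(record, items):
        """Return the ODM item instances (with audit trail and queries) behind a record."""
        return [items[ref] for ref in record["source_refs"] if ref in items]

    for item in ecrf_context(sdtm_record, odm_items):
        print(item["value"], item["audit"], item["queries"])

Multiply that by every variable in every domain, across repeating forms and derived records, and the scale of the mapping exercise becomes clear.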

Conclusion

Delivering SDTM data in an ODM-style format, and creating a means to link that SDTM/ODM to the existing ODM data and metadata, would make the data considerably more useful for analysis, submission and archiving.

Tuesday, March 23, 2010

Apple iPad and eSource

The Apple iPhone broke new ground in offering an all-in-one device for music, phone, games, organizer and general applications. Other companies had introduced almost all of the concepts; it was Apple that brought them together so effectively.

I have considered the suitability of the iPhone as a device for capturing clinical trial data. The obvious application is as an eDiary device. People familiar with the iPhone will be aware of the phenomenal success of the App Store. It may be possible to write an eDiary app; however, there are challenges with metadata deployment and application patching that might prove insurmountable given the restrictions Apple places on deployment. As for using the native browser on the iPhone, the form factor is just not that suitable. Yes, you can fill in a browser-based form, and you can do the two-fingered pan and zoom, but it just doesn't quite fly when it comes to regular data entry operations.

Shortly, Apple will release the iPad. In many ways it is like an iPhone or iTouch, only larger. Form-factor-wise it is similar to many Tablet PCs, but it has the advantage of running the iPhone/iTouch OS. It will also be offered with both Wi-Fi and 3G connectivity.

One of the real barriers to eSource in clinical trials is the portability and availability of the device at the appropriate times. With the larger touch screen of the iPad and the option of either 3G or Wi-Fi connectivity, it becomes increasingly possible to capture data efficiently at the point where it becomes available.

So, could the iPad break the capture-on-paper / transpose-to-EDC bottleneck at the sites? I think so. The solution is likely to be browser based, though - app deployment is still too restrictive. It needs to be fully touch-screen friendly. It needs to make it beautifully easy for an investigator to 'interview' a patient and key the data during the interview where appropriate. And it needs to provide a means for the investigator to indicate, through a simple highlighter-pen-style UI metaphor, whether data is being entered as source or transposed from source.

I appreciate that other devices exist today that perform a similar function, but I believe the connectivity, general ease of use and low price point will make the iPad stand out.

One critical feature may be the link to Electronic Health Records. I think which data gets copied needs to be at the discretion of site personnel, and I am not yet convinced that data privacy, combined with a sponsor controlling the study build and data propagation, is viable right now. A really simple copy/paste mechanism might be better than nothing for the time being. We will also need to see how well the Safari browser performs.

Initially, I see the iPad making inroads within Phase I units. Hardware device interfacing is less of an issue here now - most devices should be looking at centralizing the data interchange rather than sending it directly to the data entry device. Web 2.0 interactive technologies will allow developers to create some of the real-time functionality that dedicated Phase I solutions have enjoyed in the past.

I am looking forward to seeing the first iPad EDC demonstrations at the DIA in June!

Sunday, February 14, 2010

Value of Batch validation?

One of the questions often asked of EDC systems is 'Where is the batch validation?'. The question I would like to ask is: what is the value of batch validation versus online validation?

I should start by saying that I have a personal dislike of technology that works in a particular way – because that is the way it has always worked – rather than because a pressing requirement exists to make it work the way it does today.

Performance – Batch validation generally dates back to the good old days of batch data processing. With Clinical Data Management systems, where the act of entering data and the triggering of queries were not time-critical, batch processing made sense. The centralized clinical data coordinators would double-enter the data rapidly and, at an appropriate point in time, the batch processing would be triggered and the appropriate DCFs lined up for review and distribution.

For EDC, things are different. It is all about Cleaner Data Faster, so not checking data immediately after entry creates an inherent delay. No site personnel want to be hit with a query/DCF hours or even days after data was keyed if it could have been highlighted to them when they originally entered the data - and presumably still had the source data at hand.

A couple of CDM-based tools provide both online edit checking and offline batch validation. The batch validation elements come from the legacy days of paper CDM, as described above. The online checking is a subsequent add-on, created because of the difficulty of efficiently parameterizing and executing batch validation checks per subject eCRF.

Let's have a look at some other differentiators.

1). Online edit checking tends to run within the same transaction scope as the page - so when a user sees the submitted page, they can immediately see the results of the edit check execution. This means the data submission and the execution of all checks must complete within a couple of seconds to be sufficiently responsive. With batch validation, running across the data can be more efficient, and the user experience is not impacted by waiting for a page refresh.

I believe most leading EDC products have the performance aspects of real-time edit check execution cracked. Networks are faster, and computers are maybe ten times faster than four years ago. I do not believe performance is an issue in a modern EDC system with properly designed edit checks.

2). Scope – Batch validation is able to read all data within a subject regardless of visit, and some systems are also capable of checking across subjects. EDC systems with online validation also generally manage to read all data for a subject, but do not permit reading across subjects.

3). Capabilities – Most EDC systems' edit-checking mechanisms are application-intelligent, rather than based on SQL, or on a syntax that is interpreted down to SQL as with batch validation. As a result, the syntaxes tend to be more business-aware. If you have to write code - SQL or another syntax - then you need to validate that code in much the same way the vendor validates the system itself. Avoiding coding in favor of a configuration / point-and-click tool makes the testing considerably easier, with automation possible.

4). Architectural Simplicity – If you were a software designer and you saw a requirement to check data entered into a database, would you create one syntax or multiple syntaxes? Even if you saw a need for offline batch validation, I think you would go with a single syntax. If you had a means to balance where and when the rules run - at the client side, the application tier or the database layer - that might be ideal. Using two or more syntaxes is something you would avoid.

5). Integration implications – Data that is imported into an EDC or CDM system should go through exactly the same rules regardless of the medium used to capture it - browser, PDA, lab, ECG and so on. This even applies if you are importing ODM data. If this is not the case, then downstream data analysis needs to confirm that the validity of the data against the protocol was assured across all devices. Achieving this when you have separate batch and online edit checking is difficult. A single rule definition serving both paths is sketched below.
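To make points 4 and 5 concrete, here is a minimal Python sketch. The rule format and function names are invented for illustration rather than drawn from any vendor's product: one rule definition that runs both when a page is submitted through the browser and when records arrive through an import feed, so every record is validated identically.

    # Sketch only: a single rule definition serving both the online entry path and
    # the batch/import path. Rule and function names are invented for illustration.

    def check_systolic_range(record):
        """Raise a query text if the systolic BP value is outside a plausible range."""
        value = record.get("SYSBP")
        if value is not None and not (60 <= value <= 250):
            return f"Systolic BP {value} outside expected range 60-250"
        return None

    RULES = [check_systolic_range]

    def run_rules(record):
        """Apply every rule to a record and return the list of query texts raised."""
        queries = []
        for rule in RULES:
            message = rule(record)
            if message:
                queries.append(message)
        return queries

    def save(record, queries):
        """Stand-in for persistence; a real system would write to its database here."""
        pass

    # Online path: the checks run in the same request that saves the page,
    # so the site sees any query immediately on submission.
    def submit_page(record):
        queries = run_rules(record)
        save(record, queries)
        return queries

    # Import path: the very same rules run over each incoming record
    # (lab, ECG, ODM import...), so loaded data is validated identically to keyed data.
    def import_records(records):
        return {index: run_rules(record) for index, record in enumerate(records)}

    print(submit_page({"SYSBP": 300}))                      # one query raised
    print(import_records([{"SYSBP": 120}, {"SYSBP": 40}]))  # second record raises a query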

 

On re-reading the details above, it sounds a bit like I am bashing systems that do batch validation. That is probably slightly unfair. I have worked with both EDC and CDM systems and written checks for both. In the paper CDM world, the user interface for the batch execution of rules makes sense: you choose the appropriate point in time, and you can determine the scheduling and scope of the DCFs. So, for a pure paper environment, this meets requirements.

However, in an increasingly EDC-driven world, I am not sure this has value. It could be argued that it gives you the best of both worlds, but I think it is an unsatisfactory compromise that adds complexity when migrating to an EDC focus. It simply does not make for a good, scalable solution, and users will be left wondering why things are so complex.

Thursday, February 11, 2010

CDASH, SDTM and the FDA

Hurrah!  The FDA have made an announcement on their preference towards SDTM!!  Well.   Sort of.   They met up with representatives from CDISC. The CDISC organization wrote down some notes on the discussion, and posted them to their Blog.

Ok – maybe I am being overly flippant. However, why does this message need to come out by proxy from CDISC?  Why can the FDA CDER / CBER not step off the fence and make a firm statement on what they want, and when they want it?

One point made was that applying CDASH is the key to attaining SDTM datasets. Well. Sort of. It is a good starting point, but it is only a starting point.

The CDASH forms are very closely modeled on the structure of SDTM domains. Do I always want to capture one domain on one eCRF form? Not always. Do I sometimes want to capture information on the same eCRF that is logically grouped according to the source documents but belongs to multiple domains? Often I do. We should not compromise user-friendliness, and therefore compliance at the sites, because of a need to capture data according to the structure of the data extracts.

CDASH was developed around the principle that the EDC or CDM system models eCRFs to match SDTM domains. If your EDC or CDM system does not do that, then compliance with CDASH is not as valuable.

However – or rather HOWEVER – if you fail to apply naming conventions equivalent to CDASH/SDTM, fail to use matching controlled terminology, and still expect to achieve SDTM, you will be severely disappointed. Achieving SDTM will not merely be hard - it will be virtually impossible.

With regard to the statement that applying CDASH can create 70-90% savings: that is not the whole story. Apply CDASH, standardize all of the other elements such as rules and visits, and automate testing and documentation - yes, then you can achieve 70-90% savings.
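As a tiny illustration of why the naming and terminology matter, here is a Python sketch. The CDASH-style fields use Vital Signs variable names that align with SDTM; the ad-hoc names and the mapping dictionary are made up, and a real study would of course involve far more than this.

    # Sketch only: CDASH-style capture versus ad-hoc capture of the same data.
    # The ad-hoc names and the mapping dictionary are invented for illustration.

    # Fields captured with CDASH-style Vital Signs names carry their SDTM identity
    # with them, so building the VS domain row is close to mechanical.
    cdash_capture = {"VSTESTCD": "SYSBP", "VSORRES": "120", "VSORRESU": "mmHg", "VSDAT": "2010-03-15"}
    vs_row = {k: v for k, v in cdash_capture.items() if k != "VSDAT"}
    vs_row["VSDTC"] = cdash_capture["VSDAT"]  # collection date maps onto the SDTM --DTC variable

    # Fields captured with ad-hoc names need a hand-built mapping for every study...
    adhoc_capture = {"SYS_BP": "120", "BP_UNITS": "mm Hg", "EXAM_DATE": "15-MAR-2010"}
    adhoc_to_sdtm = {"SYS_BP": "VSORRES", "BP_UNITS": "VSORRESU", "EXAM_DATE": "VSDTC"}
    vs_row_adhoc = {adhoc_to_sdtm[k]: v for k, v in adhoc_capture.items()}
    # ...and the values still need controlled terminology and ISO 8601 date clean-up.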

Sunday, January 24, 2010

CDISC Rules 2

In my last posting, I discussed potentially using ARDEN as a syntax for expanding CDISC ODM with rules.

After a couple of months of on and off investigation, I have decided that ARDEN is dead as an option. Actually, ARDEN is largely dead as a potential syntax in general.

The value of a rules syntax lies primarily in the potential ability to put context around data once it reaches a repository or data warehouse.

In theory, the transfer of rules would be of value in transferring a study definition between systems. However, I cannot think of a really valuable situation where this might happen. If data is captured into an IVR System and then transferred across to EDC - does it really matter if they both have access to the rules? Instead, the rules could be applied by one of the systems.

That last point takes me to the other reason why rules are less critical. If the last decade was about standards development, this new decade must be about standards application - and, in particular, the real-time exchange of data between systems. The need to validate data in System A before transferring it to System B is only really necessary if System A cannot check directly with System B. With increasingly prevalent web services combined with standards, it will be possible to carry out these checks online.
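To illustrate that last point, here is a Python sketch of what such an online check might look like. The endpoint URL and the payload/response shapes are entirely hypothetical, and it assumes the widely used requests library; the idea is simply that System A asks System B to evaluate the record at the moment of transfer, rather than both systems carrying copies of the rule definitions.

    # Sketch only: System A (e.g. an IVR system) asks System B (the EDC system) to
    # validate a record over a web service at the moment of transfer. The endpoint
    # URL and the payload/response shapes are hypothetical.
    import requests

    VALIDATION_ENDPOINT = "https://edc.example.com/api/validate"  # hypothetical

    def transfer_record(record):
        """Send a captured record to the receiving system and return any queries it raises."""
        response = requests.post(VALIDATION_ENDPOINT, json=record, timeout=10)
        response.raise_for_status()
        result = response.json()  # assumed shape: {"accepted": bool, "queries": [...]}
        if not result.get("accepted", False):
            return result.get("queries", [])
        return []

    # Example call (no live endpoint exists, so this is illustrative only):
    # transfer_record({"subject": "1001", "visit": "WEEK 2", "SYSBP": 300})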