I have certain theories as to the root cause, some of which I have witnessed first-hand. The question is: what can be done to tackle these barriers to progress?
Such a simple issue on the surface, yet so often the primary reason for failing to achieve ROI. Here are my six reasons why poor processes hold back efficient clinical trials:
- Processes are not rewritten to take advantage of the end-to-end impact of EDC.
- Processes are left generic, failing to leverage the capabilities that distinguish one EDC system from another.
- Process Silos - rather than a set of processes that works efficiently across Clinical Research, processes are developed separately within departments - Protocol Authoring, Study Build, Data Management and Programming / BioStats. As a result, work carried out in Study Build can be detrimental to what follows in Data Management and BioStats.
- Process Bloat - processes are written to such a level of detail that they become a hindrance to getting the job done. A process should not instruct the reader in the use of a system's user interface; it should guide, not replace, the skills and common sense of trained staff.
- Process Training - writing the processes is the easy part; communicating them in a way that staff understand and follow is where it becomes tough. eLearning, wikis and similar tools can turn endless tedious reading into 'knowledge on demand'.
- Processes should serve the business first and foremost, and only secondly help a company comply with rules and regulations. Too often, processes are seen as a necessary evil of a regulated environment.
Often when implementing EDC, organizations work on the basis of: better keep doing what we did before, in addition to what we now do with EDC.
So it may be that EDC is programmed by Data Management to check virtually all the data, yet, just in case, multiple subsequent reviews are still carried out before the datapoints, and therefore the database, are locked. Studies have shown that a negligible amount of data is modified after it has first been entered. Is it really efficient to check, re-check and check again, if doing so has such a minimal effect on the quality of the resulting data?
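The point can be made concrete with a small sketch. The audit-trail records and figures below are entirely hypothetical, purely to illustrate the kind of measurement that would tell you whether repeated review cycles are worth their cost:

```python
# Illustrative sketch with hypothetical data: estimate what fraction of
# datapoints actually change after first entry, as evidence for (or
# against) running further review cycles.

# Hypothetical audit-trail records: (datapoint_id, edits_after_first_entry)
audit_trail = [
    ("dp1", 0), ("dp2", 0), ("dp3", 1), ("dp4", 0), ("dp5", 0),
]

changed = sum(1 for _, edits in audit_trail if edits > 0)
rate = changed / len(audit_trail)
print(f"{rate:.0%} of datapoints modified after first entry")  # 20% here
```

In a real EDC system the equivalent numbers would come from the system's own audit trail; if the measured rate is consistently tiny, the extra review passes are adding cost without adding quality.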
CRA monitoring duties related to clinical data reduce by around 75% compared with paper studies. No more manual logging of enrollment, patient visits, SDV and the like - this should all be captured automatically by the EDC system and communicated through reporting or a status portal. Repeating it by hand is simply a waste of effort.
The use of standards is a popular topic, and CDISC have made great leaps. However, the savings from applying standards have often yet to materialize. Poor tools deserve some of the blame, but so do poor work practices. If 50% of a study is equivalent to a previous study, then the Design, Development, Testing and Programming effort should show savings proportional to that degree of re-use. In reality, 50% re-use often yields only a small percentage saving.
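One plausible explanation is fixed per-study overhead that re-use does not touch. The effort figures below are assumptions chosen purely for illustration, not measurements:

```python
# Illustrative arithmetic (assumed figures): why 50% re-use can yield a
# much smaller overall saving when per-study fixed overhead dominates.

total_effort = 100.0    # baseline effort units for a study build
fixed_overhead = 60.0   # setup, review, validation done regardless of re-use
variable_effort = total_effort - fixed_overhead
reuse_fraction = 0.5    # half the study matches a previous one

# Only the variable portion shrinks with re-use.
actual_effort = fixed_overhead + variable_effort * (1 - reuse_fraction)
saving = 1 - actual_effort / total_effort
print(f"Effort saved: {saving:.0%}")  # 20%, not the 50% re-use might suggest
```

The lesson of the sketch: until the fixed overhead itself is attacked by better tools and work practices, re-use alone cannot deliver savings in proportion to the amount of content reused.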
This applies to sponsor companies, where big departments have built up, as well as to CRO companies that maintain the perception that old, time-consuming work methods are still needed. Put simply, if it takes longer or costs more to implement and execute a clinical trial using EDC, then something is wrong. I appreciate that exceptions exist - very small studies, for instance - but even in these circumstances, standards can create savings and improve quality.
Within Sponsor companies, departments can be keen to protect their existence, even if this is at the cost of the company and efficient R&D as a whole.
Increasingly, I feel we will see value- and risk-based assessments. For example, if the BioStats / Programming group have traditionally re-checked the data delivered by Data Management, then unless they can prove their efforts have a sufficiently significant impact on the final data, they will not be performing that task in the future. Similar situations will arise across other areas.
I would be interested in hearing of other experiences in how eClinical systems fail to achieve either quality or cost savings.