
Monday, March 23, 2009

EDC Study Startup Time

EDC Guru left a comment on the 'Top 10 mistakes made when implementing EDC' posting. In this comment, it is suggested that one of the potential failure areas is not leaving sufficient startup time when switching from Paper to EDC. I fully agree, although, from a sales perspective, the alternative is that the sponsor goes to another CRO or vendor that will offer the required timeline.

If you speak to an EDC company, one of the metrics they often pride themselves on is the time to First Patient In. 12 weeks, 10 weeks, 8 weeks or even 6 weeks are bandied about. This is clearly a metric worth considering, but what actually is included in these weeks? Is that how long it takes the vendor or CRO to *do* the study build, or is that how long it takes for a sponsor to be ready for EDC?

I would put forward that any sponsor company that goes from no experience to executing EDC studies in 12 weeks will not demonstrate the full benefits of EDC. In fact, there is a danger that the experience will be sufficiently poor as to impact the ongoing acceptance of EDC in an organization. Unwisely implementing EDC can result in similar metrics to paper studies, but at considerably extra cost.

So - what is the solution? Do I think that drug programs should be delayed for the benefit of EDC technologies? Well - no. But should an EDC based study be delayed until the organization is sufficiently in tune with the experience - yes.

So, taking the above as accepted, what are the barriers to achieving a better prepared company?

1). Senior Management within sponsor organizations will measure the success of a drug development program by the compounds that go into the various phases of study. Placing a delay in startup will appear like a failure.

2). It is very difficult to develop effective EDC processes :-

  • Sponsor companies new to EDC do not have the experience to determine where the eClinical tools bring benefits and where other methods are better.
  • Vendor companies tackle the process problems from the perspective of technology rather than from the actual business requirements.
  • Consulting companies deliver unadventurous, palatable process solutions based on regurgitated generic concepts.

3). Silo structured organizations may not offer broad department support for process changes that will make a company wide eClinical approach successful.

 

OK, so I have managed to point out three potential problem areas. So, what is the solution... or rather, what is my take on a potential solution?

1) Senior Management -

The measurement of success is as important as the success itself. I have seen this attempted, but I have not seen a lot of success. If everything changes, how do you compare 'apples with apples'? If nothing changes, then what is the point? You need to look for the lowest common denominator. Faster development and lower costs are a start...

2) Processes -

How many readers go along to DIA or similar conferences and either avoid, or sleep through, the Process topics? That is a shame, as it is still the Process, Workflow and Change elements of eClinical that are causing bottlenecks.

You get Technology Gurus that create brilliant eClinical systems. Why not empower a Change Guru that 'gets' the technology and 'understands' the potential for changing an organization?

3). I have just answered 3) with 2).

So - back to EDC Guru's comment. There is a place for rapid start EDC where the vendor or CRO acts just as they did for an old Paper study, and takes on the bulk of the work to make an effective deployment. Some benefits will be realized, but certainly not all. I don't think the term 'Start Up' should be defined as study start-up. It should be implementation program start-up. A study will appear down the road, but not in the same sentence.

Sunday, March 8, 2009

Open Source eClinical - Myths & Facts

I came across a blog / article posted here that provides a Q&A with Ben Baumann of Akaza Research. On reading the article, I feel obliged to respond to some of the comments made.

First of all, I believe that Open Source based systems do provide a valid alternative to closed source products. I have both developed and used Open Source systems in the past. Both Open and Closed Source based systems are designed to create a financial return for the companies that develop and support them. With the Open Source model, the revenue comes from support and consulting, with the reduced cost of development shared across a number of organizations. With Commercial software, revenue comes from licenses, support and consulting. The cost of the development is higher, but this cost is typically covered by the corresponding license revenue.

Back to the article. There are a few points that jump out at me as arguments for OpenClinica:

In addition to a full set of EDC and CDM features one might expect in such a system, OpenClinica has  built-in features that give users the ability to set-up their own studies.

Any EDC or CDM system worth its salt will provide functions that give users the ability to set up their own studies. Good systems will also ensure that an absolute and clear separation is maintained between the configuration of the study and the tool itself. If the tool has to be modified in any way, then you have a validation nightmare each time you run a study.

In short, an organization can make a rapid and highly informed decision whether or not to use OpenClinica without having to go through lengthy vendor-biased demonstrations and negotiations, and rely on a vendor in order to get their studies configured appropriately.

Well... instead of going through a demonstration that presents the features of the product, the user really needs to start digging into the source code to gain a good understanding of what goes on. So, as a user, you need to have a good understanding of fairly complex 3rd generation language programming. To fully understand how a tool works takes some time. For large EDC or CDM systems I would put that estimate, for an experienced programmer, at weeks if not months of detailed analysis.

Enhanced validation. Validation can be much more thorough with open source software. Buying proprietary software is like buying a car with the hood welded shut - you don't know what's really going on behind the scenes. Open source provides the highest level of transparency making it possible to truly validate a system from end-to-end.

"Oh! so I need to validate the system myself?" Validation is complicated and expensive.  All systems in the eClinical world require that the development runs within a structured process. You generally need to maintain a trace-ability matrix that takes the software through a lifecycle from specification through to coding and on to completed and recorded testing.   As with InHouse systems, maintaining this will can be challenging

Typical eClinical systems split the code product, that is developed and validated, from the configuration of the product. That part is typically just 'tested'. In this situation, you don't need to re-validate the software product following a configuration of the product for a particular deployment (i.e. a Study). The validation of a full EDC product might take 200 man days. The testing of a configuration might only take 10 days. If you mix the configuration with changes to the core software product, you need to be very careful that you don't compromise the core product validation.
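To put rough numbers on why that split matters, here is a back-of-the-envelope sketch using the figures above; the ten-study programme size is purely my own assumption for illustration.

```python
# Rough effort comparison; the 10-study programme size is assumed.
FULL_VALIDATION_DAYS = 200   # validate the core EDC product once
CONFIG_TEST_DAYS = 10        # test each study configuration
studies = 10

# Core product validated once, each study configuration only tested:
split_model = FULL_VALIDATION_DAYS + studies * CONFIG_TEST_DAYS            # 300 days

# Core product touched for every study, re-triggering full validation:
modified_core_model = studies * (FULL_VALIDATION_DAYS + CONFIG_TEST_DAYS)  # 2100 days

print(split_model, modified_core_model)
```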

As I said at the start, open source solutions have a place in the eClinical business area.  They can be the right solution in certain circumstances, but, they are not always suitable.

Thursday, February 26, 2009

Modelling ODM Metadata - a response

Today, I would like to respond to a recent comment from XML4Pharma regarding the use, or otherwise, of ODM in modelling CDISC based studies. XML4Pharma makes some valid points, but some of the concerns are based more on a misunderstanding of the proposals.

First of all, thank you XML4Pharma for your input – all input is good input as far as I am concerned.

Hopefully, with this posting, I can clarify that I do know the differences between ODM and SDTM!... and yes, I do believe the ODM and SDTM complement each other.

I think you have misunderstood how I am suggesting SDTM be used versus ODM. If you look at my recent post, what I am suggesting is precisely what you have been evangelizing about - to think about the Outputs - SDTM - in order to create the inputs. The principle defined was for the modelling of metadata – how you potentially get to an ODM based definition of a study - not how the metadata is used and processed in an EDC product. 

The difference in my proposal from yours is that I am suggesting a 3 tier model in order to achieve the underlying definition of forms and rules in a study. The end result may well be ODM, but it is how the ODM is prepared that I am suggesting. To understand where I believe the challenge is, we need to think of the definition of a whole study.

Hypothetically, a typical EDC study build includes, let's say, 8 days of Forms development and 20 days of rules development. Looking at just the forms, the re-use can be effective from study to study... a second study can be 4 days, a third study 2 days, etc. But what about all the associated rules? How will they work when the visit structures change, new forms are included, and fields are taken away? Will we see 20 days going down to 10 days and then down to 5 days? That depends on whether the use and content of forms for a study are impacted... but not if the logic is hanging off the forms.
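To spell that concern out with the hypothetical day counts above - the 'rules tied to forms' figures are my own assumption, everything else is taken from the numbers in the paragraph:

```python
# Worked example with the hypothetical day counts above. Forms re-use well;
# whether the rules do depends on whether they hang off the forms.
forms_effort = [8, 4, 2]               # study 1, 2, 3 (from the text)
rules_defined_once = [20, 10, 5]       # the re-use we would hope for
rules_tied_to_forms = [20, 18, 16]     # assumed: little carried over per study

for i in range(3):
    print(f"Study {i + 1}: {forms_effort[i] + rules_defined_once[i]} days "
          f"if rules are re-used, vs {forms_effort[i] + rules_tied_to_forms[i]} "
          f"days if the logic hangs off the forms")
```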

Let's take an example problem. I have 5 similar forms that all need to populate the same SDTM domain. With the proposed model we start with the definition of the SDTM Domain (Tier 1). This contains the definition of what we are aiming for. It contains all fields, not just the ones we might use on a form. Next comes the definition of a superset Logical structure that contains all of the appropriate fields that might be used by a sponsor, together with logic that is applied regardless of the capture method (Tier 2). Finally, at Tier 3, we have the 5 different forms. These all subset the Logical structure defined at Tier 2 and inherit the rules. As we have a consistent thread from Tier 3 through Tier 2 to Tier 1, it is possible to create a definition that can be used by an eventual target EDC product to populate the same Tier 1 (SDTM Domain) regardless of the form structure or logic.

At Tier 1, you define as much information as you can that will be consistent across Tiers 2 and 3 - field name, data type, etc. Tier 2 inherits information from Tier 1 and adds relationships and rules. You might have many logical structures to capture the same information, but the information in Tier 1 is only defined once. Tier 3 applies the same idea. You might have many instances of a form that in turn apply the same rules, but one form might be visualized differently from another.
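To make the three tiers a little more concrete, here is a minimal Python sketch. The class names and the handful of AE fields are my own illustrative choices, not part of any CDISC specification; the point is only that field definitions live once at Tier 1, rules attach once at Tier 2, and each Tier 3 form simply selects and presents a subset.

```python
# Illustrative 3-tier metadata model; names and rules are hypothetical.

class SDTMDomain:                          # Tier 1: the target data definition
    def __init__(self, name, fields):
        self.name = name
        self.fields = fields               # {field_name: data_type}

class LogicalStructure:                    # Tier 2: superset of fields plus rules
    def __init__(self, domain, field_names, rules):
        self.domain = domain
        self.fields = {f: domain.fields[f] for f in field_names}   # inherited from Tier 1
        self.rules = rules                 # applied regardless of capture method

class FormInstance:                        # Tier 3: one presentation of Tier 2
    def __init__(self, logical, field_names, layout):
        self.fields = {f: logical.fields[f] for f in field_names}  # subset of Tier 2
        self.rules = logical.rules         # inherited, never redefined per form
        self.layout = layout               # the only thing that varies

# One domain, one logical structure, two of the '5 similar forms':
ae = SDTMDomain("AE", {"AETERM": "text", "AESTDTC": "date", "AESEV": "text"})
ae_logic = LogicalStructure(ae, ["AETERM", "AESTDTC", "AESEV"],
                            rules=["AESTDTC must not be in the future"])
full_page = FormInstance(ae_logic, ["AETERM", "AESTDTC", "AESEV"], layout="desktop")
summary_page = FormInstance(ae_logic, ["AETERM", "AESEV"], layout="pda")
```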

That was a simple example. Let's imagine I wanted to capture data for 3 domains on the same form. We could simply say to the Investigator - "sorry, we need to split this into separate pages, because that is what SDTM demands..." - when the true answer is that 'this is what our modelling structure demands'. I don't think the model should demand it.

I could have an AE form that is captured on a single page in one study, but captured across 3 pages in another study. I could even reach the point where I want to capture AE information on a PDA (though unlikely!). The eventual SDTM data domain is the same, the rules are the same (I only want to define them once). The only thing that is changing is the presentation.

As you say, ODM does contain cross references back to SDTM - Fields and Domains. This will allow you to map information back to SDTM.  With the 3 tier model suggested, the way in which the Field and Domain (SDSVarName/Domain) are defined is through inheritance. 
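As a sketch of how that inheritance might be stamped into ODM metadata, the snippet below derives an ItemGroupDef and its ItemDefs from a Tier 1 domain definition. It assumes the ODM 1.3 attribute names Domain (on ItemGroupDef) and SDSVarName (on ItemDef); namespaces, OID conventions and the rest of the Study/MetaDataVersion wrapper are simplified, so treat it as illustration rather than valid ODM.

```python
# Minimal sketch: derive ODM-style metadata for one domain from a Tier 1
# definition. Assumes ODM 1.3 attribute names; not schema-complete.
import xml.etree.ElementTree as ET

def metadata_for_domain(domain_name, fields):
    """fields: {field_name: odm_data_type}, inherited from the Tier 1 definition."""
    mdv = ET.Element("MetaDataVersion", OID=f"MDV.{domain_name}", Name=domain_name)
    group = ET.SubElement(mdv, "ItemGroupDef", OID=f"IG.{domain_name}",
                          Name=domain_name, Repeating="Yes", Domain=domain_name)
    for name, data_type in fields.items():
        ET.SubElement(group, "ItemRef",
                      ItemOID=f"IT.{domain_name}.{name}", Mandatory="No")
        ET.SubElement(mdv, "ItemDef", OID=f"IT.{domain_name}.{name}",
                      Name=name, DataType=data_type, SDSVarName=name)
    return mdv

print(ET.tostring(metadata_for_domain("AE", {"AETERM": "text",
                                             "AESTDTC": "date"}), "unicode"))
```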

I won't respond to the Audit trail and 21 CFR Part 11 comments that XML4Pharma made. Hopefully, by this point my explanations have corrected this misunderstanding.

XML4Pharma's last point regarding HL7-v3...

we do already have a format for exchange of clinical data (or is submission not "exchange"?). So formatting SDTM as ODM is a very nice, simple and efficient way.

If I were part of CDISC and wanted to work closely with HL7, or if I wanted to be considered a bridge builder to Electronic Health Record systems, I would probably embrace HL7-v3 without even opening the specification. It may or may not be the best solution... but on paper it is probably the most marketable. In response, I would recommend that you constructively and diplomatically present the benefits of an ODM/SDTM based standard over HL7-v3 with further real-world examples. Without this sort of approach, XML4Pharma - as a leading proponent of ODM based systems - may come across as simply showing bias due to home grown interests. Personally, my gut feel is that you are correct, but I do not yet know enough about HL7-v3 to form a fully considered opinion. However, I believe there are sufficient intelligent individuals in the CDISC organization to take a considered technical argument onboard provided it is delivered in the right way.

Tuesday, February 10, 2009

Applying XForms to CDISC

In a previous posting, I made reference to a technology called XForms.  I mentioned that it might prove of some value in addressing the challenges that eClinical systems face in meeting business demands.

XForms is an interesting technology in a number of ways. To understand what it brings, you first need to understand what the traditional method of browser data presentation lacks. HTML remains the standard means of presenting form information on a screen. In its present state, it fundamentally dates back to 1995, when the Internet really took off. As far as broad standards for forms based data capture go, things have not really moved forward greatly since then.

HTML forms are built in hundreds of different ways by software residing on the server. Typically, the developers will take lots of strings containing things like <Head>, <TR> or <BR> etc. and glue them together to make up the necessary codes required to present a form on a browser. Admittedly, there are lots of 'helper' technologies to make things easier, such as XSL Transformations (XSLT), Dynamic HTML etc., but ultimately there is no one way to effectively create and deploy forms to a browser. In particular, there is no way to easily separate what a Form Does from how a Form Looks.

The W3C recognized this challenge and created the XForms standard, released in October 2003.

'So, yet another standard that will disappear into obscurity!'  I hear you say.  Well, maybe, but, even if it does, it has some attributes that are worth understanding, when considering how things should work in the ideal world.

Benefits of XForms

  1. User Interfaces built on XForms require fewer round-trips from client to server - they are more self-contained than pure HTML implementations. This leads to a better, faster user experience.
  2. Mobile device capabilities vary significantly. Creating a form interface that works on an iPhone as well as on a 1400x1200 resolution desktop is very difficult. XForms provides the separation to allow this to occur with common code.
  3. JavaScript is the tool many developers use, on the client browser side, to work around the limitations of HTML. However, JavaScript is implemented differently by different browsers, and can even be disabled for security reasons. XForms provides a means to avoid this limitation.

More about XForms

All sounds cool... so, let's find out more about XForms...

XForms implements the concept of Model/View/Controller. Sounds like mumbo jumbo, yes, so let's translate that into something meaningful.

When you capture information from forms, you can define 3 distinct layers to efficiently achieve a design.  

Presentation - 'the View'

Starting at the user side, you have the Presentation layer. On a regular desktop browser, you will maybe have a form presented in a free-form layout with 20 questions, nice shaded graphics etc. On a PDA, you want something simple that scrolls well - so you would have a smaller set of questions, presented with no fancy graphics, and in a simple list layout. The Data you are after is the same. The Logical rules you want to apply to the data are the same - it is only the presentation medium that is changing.

Logic - 'the Controller'

Next, we have the brain - the part where all the rules exist. This is not 'eClinical' specific, but, to use an EDC requirement, this is where all the edit check rules exist. The logic is attached here, and separated from the Presentation layer, in order to ensure that regardless of the layout of the form, or the device used, the same rules can be applied and (most importantly) re-used. This controller can be split into two areas - handling requests from the presentation for dynamic activities, as well as processing the results in a common way.

Data - 'the Model'

Finally, the 'what' in the structure. The data resides in the Model - or rather, the definition of what data is needed. Aligning with the eClinical analogy, SDTM would be an appropriate specification for the definition of what might exist here.
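Outside of any particular XForms engine, the separation can be illustrated with a small Python sketch; the field names, the edit checks and the two render functions are all invented for the example.

```python
# Hypothetical illustration of the Model / Controller / View split.
from datetime import date

# Model: the definition of what data is wanted, independent of capture device.
MODEL = {"AETERM": str, "AESTDTC": date, "AESEV": str}

# Controller: the rules, written once and applied whatever the device.
def edit_checks(record):
    issues = []
    if not record.get("AETERM"):
        issues.append("AETERM is required")
    if record.get("AESTDTC") and record["AESTDTC"] > date.today():
        issues.append("AESTDTC must not be in the future")
    return issues

# Views: only the presentation changes between devices.
def render_desktop(fields):
    return "\n".join(f"[{name}] ____________________" for name in fields)

def render_pda(fields):
    # A PDA shows fewer questions per screen; the model and rules are unchanged.
    return "\n".join(f"{name}:" for name in list(fields)[:2])

print(render_desktop(MODEL))
print(render_pda(MODEL))
print(edit_checks({"AETERM": "", "AESTDTC": date(2030, 1, 1)}))
```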

XForms and CDISC - applying the Model

So, how might XForms be applied to CDISC in general?

To some degree, CDASH defines a set of fields that will be captured without consideration of the device that might capture them. You could have a situation where a PDA might capture 4 questions at a time whereas a desktop browser would capture 20. The definition of the form itself will differ due to the medium. So - CDASH from this perspective doesn't fully fit. However, as a means to standardize the 90% rule - most forms are completed on a full-size browser - it is fine. CDASH provides a recommendation of what would be captured together in the presentation layer for one type of device.

So, where would CDASH fit in the XForms Model / View / Controller paradigm?  

Let's look at a possible model for metadata that could be implemented with XForms:

[Diagram: the Data (Model), Logic (Controller) and Presentation (View) tiers - SDTM on the left, CDASH in the middle, Forms on the right.]

Starting from the left - the obvious model for the Data (Model) is SDTM. This would include a list of all of the standard Domains. It could be extended as required, and include sponsor-specific extensions structured as per the SDTM standards specification.

Next in the middle, we have the Logic (Controller). CDASH could potentially work in this middle tier although you would need to add the logic elements.  A Module could equal a CDASH Domain with all of the potential (required and optional) fields included. 

Finally, at the Presentation (View) level, we have the Form. This would be a representation of the CDASH domain with specific controls (drop-downs, radios etc.) added and fields not required for a specific instance removed. Multiple instances of the form might exist, with the same rules and relationship to the Data Model carried forward.

 

Conclusion

XForms in the context of CDISC and eClinical is not so much a means of capturing the data. Rather, it is a means for modelling the metadata to better support the separation of the data that is needed from the logic that must be applied, and the presentation of how this is achieved.

With a model similar to the above applied, SDTM data sets could be produced quickly, while at the same time offering flexibility in the medium used to capture the data, together with consistency in the rules enforced.

I have purposely left out references to ODM in the above article.  It may have a place here, but I will expand on this in a future posting.

Monday, January 26, 2009

Green gone

Following on from the announcement that Datatrak had resolved the contractual disagreement regarding the ownership of the Clickfind technology, Jeff Green has stepped down as Chairman and CEO.

So, who will buy the remaining assets of DataTrak? At 15 cents a share and a market cap of 2.6M USD, some company might see value. I suspect a mid-tier CRO might come in and see the technology as a sort of 'House Wine' implementation of EDC for sponsors that do not show a preference for one of the big players.

Wednesday, January 7, 2009

Datatrak back on Track?

After a few months of uncertainty, DataTrak announced at the end of December that they have resolved their legal dispute with the former owners of the ClickFind EDC product they procured in 2006. I am sure that Jeff Green and his team will be somewhat more positive about the prospects for 2009 than they were 2 months ago. The share price, though, remains very low, with a potential de-listing due to occur, pending a hearing with Nasdaq.

Hopefully, the company will pull through.  It would be sad to lose one of the early providers of EDC systems.

Monday, December 22, 2008

eClinical in 2009?

So what will the eClinical landscape be like in 2009? Here is a projection of what I believe will transpire in the eClinical business area over the course of the next year. As ever, these are purely my opinions, and do not reflect the opinion of the company I work for...

We have seen significant changes over the preceding 2 years. The number of vendors has shrunk, with Medidata and PhaseForward apparently swallowing up the vast majority of new business.

Oracle have been suggesting the availability of a new, improved product. If they want to stand a chance against the big 2, then they are going to have to get the EDC portion right this time. Previous attempts were so far from the mark with regard to basic EDC functionality and ease of use that they bombed in the market.

Medidata's announcement of Developer Central is potentially more significant.  ClinPage describes this as an API. However, I think that is a simplification that underplays the significance of the release. Back in September, I wrote a blog on Web Services, and how they might impact eClinical.

A number of EDC/CDM vendors have claimed to offer APIs - FW-IMPACT and Oracle for example - but many were simply an exposure of a set of Stored Procedures. Techies developing eClinical systems have talked about a full web service solution for many years. Medidata appear to have taken their experiences with CDISC and actually delivered a fully operational solution - a full CDISC-compliant Web Service for eClinical data importing and exporting.

Google were the first company to really generate buzz around the principle of an API over the Internet with the release of the Google Toolkit. With this, any developer with an Internet connection could send a program request - using web service calls - to Google, and receive search responses that they could use for their own purposes. If the Web Service eClinical API that Medidata has announced is all it's potentially cracked up to be, then we should see a long queue of eClinical providers - eDiary, CTMS etc. - all lining up to be in a position to offer Medidata Rave real-time connectivity out of the box.

On a technical note, it will be interesting to see whether the CDASH standards will be utilized as a means to standardize the metadata used to capture the data. If this does occur, we could see a simple handshake occurring at the start-point between the two inter-connected systems to confirm 100% compliance with CDASH. If this is the case, then no manual metadata synchronization need occur. In reality, I suspect ALL CDASH based implementations will be customized, but it's a good start.
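As a rough sketch of what such a handshake could look like - the field names and the exact-match policy below are my own assumptions, not anything Medidata or CDISC have published - each system would publish the field list it implements for a form, and manual mapping is only skipped when the lists agree:

```python
# Hypothetical metadata handshake between two connected systems.
# Field names are illustrative, not an official CDASH list.
VITAL_SIGNS_SENDER   = {"VSDAT", "VSORRES", "VSORRESU", "VSPOS"}
VITAL_SIGNS_RECEIVER = {"VSDAT", "VSORRES", "VSORRESU"}

def cdash_handshake(sender_fields, receiver_fields):
    if sender_fields == receiver_fields:
        return "100% compliant - no manual metadata synchronization needed"
    return ("manual mapping required for: "
            + ", ".join(sorted(sender_fields ^ receiver_fields)))

print(cdash_handshake(VITAL_SIGNS_SENDER, VITAL_SIGNS_RECEIVER))
```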

Back to 2009 forecasting...

The financial crisis will clearly impact the industry. The smaller companies - BioTechs specifically - will struggle to find cash. If the development programs they are running rely on a constant cash flow, then they could go under. This might provide rich pickings for the big companies looking to pick up a bargain, but potentially at the cost of new innovations. For the lower end of the eClinical marketplace, things will be tough. eClinical vendors that can demonstrate lowering costs - especially over paper - will see increasing business.

eDiary vendors will continue to see business grow provided they can show interoperability and can manage to keep infrastructure costs down. We may see an increase in the prevalence of eDiary solutions based around off-the-shelf or pre-existing hardware - the iPhone, for example, appears to have the ease of use, connectivity and synchronization capabilities to make it more widely usable than any other OTS product today.

CTMS vendors will increasingly struggle. The value of such solutions is being marginalized by the enhancements offered by leading EDC system providers. Similar to the eDiary space, if they can 'play nicely' with the other systems, they stand a chance. Otherwise, they will be considered unnecessary in the overall eClinical Life Cycle.

Overall, I think we will see the more marginal areas of the eClinical vendor solution business impacted the most.  When finances are tight, the argument for these types of systems will be difficult.  Life Science companies will focus on cost saving and direct efficiency gain solutions.  This will be no time for long term technology prospecting.

 

For those following these blogs, I wish you well for the holiday period, and Best Wishes for 2009.