
Friday, October 30, 2009

CDISC Rules!

Ok, so a play on words. CDISC may rule in the field of Clinical Data standards, but it does not rule in the standardisation of rules associated with data.


Let me expand here for those not familiar with the issue.


CDISC ODM provides a syntax for the definition of metadata (and data) used in the interchange of information between (and sometimes within) systems. CDISC ODM does not scope the definition of the edit check rules that are applied to data as it is captured. I feel that is a significant omission, as the rules associated with the data (a) take considerable time to develop and (b) provide context for the data.
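
To make the omission concrete, here is a minimal sketch of the kind of study metadata ODM 1.3 does define - a form, an item group and an item (the OIDs and names are invented for illustration). Beyond a simple single-field RangeCheck, there is no standard place to express a cross-field edit check, or the action it should trigger:

<MetaDataVersion OID="MDV.1" Name="Draft 1">
  <FormDef OID="FRM.VITALS" Name="Vital Signs" Repeating="No">
    <ItemGroupRef ItemGroupOID="IG.VS" Mandatory="Yes"/>
  </FormDef>
  <ItemGroupDef OID="IG.VS" Name="Vital Signs" Repeating="No">
    <ItemRef ItemOID="IT.SYSBP" Mandatory="Yes"/>
  </ItemGroupDef>
  <ItemDef OID="IT.SYSBP" Name="Systolic Blood Pressure" DataType="integer" Length="3">
    <!-- The closest ODM gets to an edit check: a single-field range -->
    <RangeCheck Comparator="LT" SoftHard="Soft">
      <CheckValue>300</CheckValue>
    </RangeCheck>
  </ItemDef>
</MetaDataVersion>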


Question - So, why do we not already have rules built into the standards?


Answer - rules are often technology or vendor specific. There are almost as many methods of implementing rules as there are EDC products.


Question - Why not define a standard mechanism for creating rules that vendors could either comply with or support as part of interfacing?


Answer - Well, it all depends on what you want the rules to do. In their simplest form, rules are boolean expressions that result in the production of a Query or Discrepancy. However, many systems go well beyond simply raising queries. The boolean element of the rule may be consistent across systems, but the action performed when the boolean returns true is often very vendor specific.


Lowest Common Denominator


So - let's assume that we are looking at implementing a lowest common denominator of rules and actions that the majority of systems support and require.
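
As a thought experiment, such a lowest common denominator rule might look something like the sketch below. To be clear, this vocabulary is entirely hypothetical - it is not ODM, and not any vendor's syntax - it simply separates the portable boolean expression from the (less portable) action:

<!-- Hypothetical rule vocabulary; not part of ODM or any vendor product -->
<RuleDef OID="RUL.BP_HIGH">
  <!-- The portable part: a boolean expression over captured fields -->
  <Expression>SYSBP &gt; 200 or DIABP &gt; 120</Expression>
  <!-- The vendor-specific part: what to do when the expression is true -->
  <Action Type="RaiseQuery">
    <QueryText>Blood pressure outside the expected range - please confirm.</QueryText>
  </Action>
</RuleDef>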


What can we do to standardize a syntax? Three options, I think:


1). Choose a syntax from one of the leading vendors

2). Develop a new syntax building on existing ODM conventions

3). Bring in another standard syntax, potentially one already in use in the Health or Life Sciences field


Let's look at them in order.


No. 1 - Choosing a leading vendor's syntax is probably great for the chosen vendor, but bad for most other vendors. A benefit, though, is that it would already be proven as a means to represent rules and actions in a clinical study. Some syntaxes are based around standard tools such as Visual Basic for Applications, JavaScript or even SQL. This approach may create almost insurmountable barriers for other vendor systems that do not, or cannot, implement the technology - for example, it is not easy to interpret VBA on a non-Microsoft platform. So option 1 has some potential but, depending on the chosen vendor, may result in closing the door to the standard for others.


No. 2 - Creating a new syntax would result in something most vendors would be happy with, but would require considerable effort from the contributors to develop a complete specification for the standard, as well as a reference implementation. The advantage of such an approach would of course be that it would be seen as a common standard open to all, and not specifically biased towards any one vendor. In practice, the technology approach chosen would favor some more than others.


No. 3 - Leveraging an existing syntax may well bring the benefits of No. 2 without all the costs of designing something from scratch.


Ok, so let's say we go ahead with option 3 - what are the candidate standards for rules in the Health and/or Life Sciences field?


As far as I can tell, not many. In fact, I was only able to find one candidate that had any level of success - a syntax called ARDEN.


ARDEN has existed since 1989 as a syntax for describing medical logic. Similar to typical rules in EDC, the logic is defined in modules - Medical Logic Modules - that are called based on the triggering of an event.


[For an accurate definition of ARDEN and its roots, check Google Books - search for ARDEN Syntax and examine Clinical Knowledge Management: Opportunities and Challenges by Rajeev K. Bali, pages 209-211]


As a syntax, it is mostly general purpose. Here is a snippet from an Arden Syntax module:




logic:
    if last_creat is null and last_BUN is null then
        alert_text := "No recent serum creatinine available. Consider patient's kidney function before ordering contrast studies.";
        conclude true;
    elseif last_creat > 1.5 or last_BUN > 30 then
        alert_text := "Consider impaired kidney function when ordering contrast studies for this patient.";
        conclude true;
    else
        conclude false;
    endif;
;;




In the example, you can see that the syntax uses standard if/then/elseif/endif constructs, and that assignments use the := operator.


HL7 have a section dedicated to ARDEN here. The activity appears to be limited, with no postings or documentation since 2004. Walking through some of the presentations, some of the consumer companies such as Eclipsys were proposing extensions to the syntax - for example, to add object definition support. It would appear that take-up of the standard has been limited to those organizations that had a problem to solve in the EHR area and wanted to re-use a syntax instead of inventing their own.


The fact that HL7 has lent support to ARDEN may be sufficient in itself. However, we would need to dig considerably deeper to understand how the ARDEN syntax would fit with a syntax such as ODM. The first challenge is the conversion to an XML form. There are plenty of articles on ARDEN XML for further reading.


RuleML is another standard that may address the need to create rules, as well as meeting the perceived need to be XML based.


More about ARDEN and RuleML in a later posting I think. This one is quite long enough for today.



Friday, October 9, 2009

Source to eSource with EDC

One of the areas that I have felt for some time compromises the effectiveness of EDC is the subject of source data - or rather, the fact that source data is often not entered directly into an EDC system.


I appreciate that we have situations where this is impractical for logistical reasons - location of computers, circumstances of source data capture etc.


However, it is often mandated by the sponsor that source data is not logged in the EDC system, but is instead recorded elsewhere first. Some advocates will point to the need to comply with the regulations that state data must remain 'at the site'. Personally, I don't concur with this assessment. The data is 'at the site'. The cable connecting the screen to the computer might be very, very long (the internet), but the data is constantly available at the site. I probably shouldn't be flippant on this point, but the conservatism in conflict with progress strikes a nerve.


Transposing information from paper to an EDC screen introduces the potential for error. In the old world days of paper CDM, we had Double Data Entry as a method to confirm that transcription errors did not occur - two staff entered the same data from the paper CRF, and the differences were flagged and corrected. With onsite EDC we don't have Double Data Entry, but we do have Source Data Verification. Instead of two staff sitting next to each other double keying data, we have a monitor fly in to perform the second check. Yes, I know, they carry out other duties, but still. This seems like an enormous effort to check for transcription issues. It also has a massive effect in slowing down time to database lock.


So – where are the solutions:-


1). Data at Site


We could put on our 'common sense hats' and make a statement that allows the entry of source data directly into the online EDC system. I know of a number of EDC companies that have simply placed this in their procedures. No audit findings or concerns have been raised as a result that I am aware of. Come on, Regulators - why the delay in making a statement on this subject? The confusion caused is creating a measurable delay in bringing drugs to market!


2). Physical availability


This one is harder to tackle. When you have data to log, do you always have access to a system/device to log the data? Possibly not. Do you simply try to remember it, like an Italian waiter remembering a dinner order? I don't think so. We either need to provide portable data entry devices, or we accept paper transcription for these elements.


3). Differentiating Source from eSource


This does open up one area of concern. If we have some data as source, and some as eSource, how do we know which is which? When a Monitor goes looking for the source data and doesn't find it, does that make it eSource? No. There needs to be a very simple flagging and visual indication system built into such a mixed source/eSource system. I have seen this in one system, but it is very rare. Come on, EDC vendors - your turn here!
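
As a sketch of how such flagging might be carried in the data itself: ODM 1.3 already allows coded annotations on captured values, so an eSource marker could ride along without inventing new machinery. The code list and flag value below are hypothetical:

<ItemData ItemOID="IT.SYSBP" Value="128">
  <!-- Hypothetical flag marking this value as entered directly (eSource) -->
  <Annotation SeqNum="1">
    <Comment>Entered directly into the EDC system at the point of care</Comment>
    <Flag>
      <FlagValue CodeListOID="CL.SOURCETYPE">ESOURCE</FlagValue>
    </Flag>
  </Annotation>
</ItemData>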


4). Other systems


Health Record systems, amongst others, will often be the first point of entry for data that may apply to an eCRF. The current approach for organizations such as CDISC and HL7 is to create interfaces between systems. This will be a slow burner, I predict. There are so many hurdles in the way. It requires active cooperation from both sides - EDC providers may be fully onboard, but I am not so sure about most EHR providers.


We may see a company emerge (maybe this is what some former PhaseForward employees are up to - who knows!) to develop an online Clinical Development Electronic Health Record system that is immediately EDC ready. Of course, with such a small deployment potential, I am not sure that we will see this appear at all sites in a global study, but for domestic application - say within a single country, inside research hospitals - it could work. I digress - back to the integration issue. Yes, when we have standards, the feeds will start to happen, but not for many years.


5). Patient Recorded Information


ePRO, Patient Diaries, etc. These are increasingly realistic, for the right sort of study, for the accurate capture of patient data. If used appropriately, they can cut down the volume of data that needs to be transposed into EDC and therefore source data verified.


On a side note, I am sure we will see a downloadable iPhone app that will make the current hardware/software-dependent or basic PDA browser-based systems seem old and tired.


The advantage of diary based systems is that instead of an investigator quizzing a patient, writing down the responses, transposing the responses, etc., the information is captured 'at source', often considerably closer to the time of the event.


Expanding on my previous post, I expect to see systems like PatientsLikeMe expand onto portable devices and act as an entry point for both patient identification and data entry. Internet-enabled device pervasiveness will simply make this happen.




Conclusion


A lack of eSource has a direct impact on the time it takes to lock data. With adaptive clinical trials executed by forward thinking sponsor companies, the point at which an adaptation can occur corresponds directly with how long it takes for the end-point significant datapoints behind the sample size calculation to achieve a locked status. To simplify - the less eSource you have, the longer your studies will take. Forget a few days - we are talking wasted months.






Tuesday, July 21, 2009

Applying Social Networking system principles to eClinical

Social Networking software is one of those technologies that has just sneaked up over the last 5 or so years.  We have sites like Twitter, Facebook, Bebo, MySpace and LinkedIn all vying for market share and attention.  Like all new business areas, segmentation is occurring, with Bebo aimed towards kids, MySpace at adults and celebrities, etc.  Over time, I am sure we will see accepted leaders in the same way Google leads the search engine pack.

For a number of years, we have seen small clusters of information sharing sites aimed at specific disease types.  PatientsLikeMe was one of the early success stories, with an initial focus on ALS (Lou Gehrig's disease).  It is beautifully written, with a user interface specifically designed to be accessible and approachable. [It is also open source, and written on the modern new platform Ruby on Rails.]  Sharing experiences, discussing issues with other sufferers, and learning of potential therapies when dealing with chronic debilitating diseases is an ideal target for the social networking concept.  For something like ALS, where patients are geographically dispersed, a site like this can act as an electronic 'drop in centre', providing a level of support not possible with any other medium.

An example on PatientsLikeMe is the focus community that supports Devic's neuromyelitis optica.  This is an incredibly rare disease, often confused with MS, that affects only a few thousand people in the world today. As of July 2009, the site has a community of 140 sufferers, more than 5 times the population of the largest published clinical study.  The home page for this community is a lesson in how to provide a site that the target users will return to:

[Screenshot: the community home page, listing the treatments and symptoms shared by members]

Without having to log in or register, the site shows a list of treatments and symptoms that have been shared by the 140 patients that have previously registered.  What better way to encourage mutual sharing and involvement!

A replacement for Traditional Clinical Trials?

To a degree, PatientsLikeMe and other health focused social network sites are providing an open source equivalent to clinical trials.  Clinical trials are highly structured, organized affairs, tightly bound by regulations and rules.  In this way, they are similar to a formal software development approach where specifications are prepared, code developed and testing carried out.  Compare this to the open source community approach in software development: thousands of individuals all contribute, self regulating, to create products that many would argue are as good as, or even better than, their commercial cousins. With social networking health sites you have a very large community all contributing on an ongoing basis, both in terms of the raw data and the analysis of the raw data to determine trends.

With a typical clinical trial, a subject will be interviewed by an investigator, measurements taken, and information recorded into a system according to a pre-prepared protocol.  Under the terms of the protocol, the investigator will be in a position to assess whether the subject is sufficiently compliant for the corresponding data to be considered part of the overall data set going forward.  With a patient community approach, compliance with a fixed regimen, or protocol, will be limited.   The sample size must therefore be considerably larger in order to make any kind of assessment based on the results.  Also, there are considerable challenges in determining the degree of compliance. Statisticians would most likely struggle to offer up a statistically safe trend based on the variability of the surrounding conditions, many of which would potentially be unrecorded.

How about as an entry point?

Nothing new here of course - PatientsLikeMe are already partnering with commercial companies, but a potentially more open approach would be to create a bridge between the community and the commercial world by leveraging standards to create an opt-in gateway.  Sponsor companies could publish sets of entry criteria and schedules for clinical trials through a protocol standard language (CDISC PRD?) in a push approach. Community health portals could use these criteria to determine a filtered list of individuals that comply, and then give them the opportunity to opt into the published study.

Maybe it would be a shame if the 'big bad' commercial world started to spoil these not-for-profit, community led efforts.  I am not sure.  I think both could benefit.  Patients that suffer from these diseases are actively looking for therapies and help.  If the drug companies are better able to target and develop these study therapies, then surely it is a win/win.

Wednesday, June 10, 2009

eClinical Vendors - Build or Merge

Over the last year, we have seen a spate of eClinical companies either swallowing up smaller players or merging.   The recent procurement of eTrials brought this to mind. From a capability checklist perspective, this looks good.   As a sponsor, instead of going to many companies for many systems, they can be procured from a single source. If problems occur with a system, there is only one number to call.  If the systems aren't speaking, then no finger pointing - the single vendor is responsible.

However, with today's eClinical systems demands, I believe some challenges exist particularly related to scalability and metadata management.

In former days, when Client/Server solutions were the norm and the cost of setup and integration was just an understood overhead of working with eClinical systems, mergers made a lot of sense.  It often took months to fully configure a platform, carry out validation and adjust the configurable settings to make it work as required.  If the system came from 2 sources rather than 1, it didn't really matter as much, as it was expected that it would take considerable time and cost to make everything work anyway.

The technology landscape today is very different.  In a Software or Platform as a Service model, sponsor companies are looking for a number of capabilities that can potentially conflict with the abilities of separate products coming from different sources.

How long should it take to set up an eClinical product so that it is ready to be configured to support a clinical study?  3 months, 1 month, 1 week, 1 day?  In a capacity managed, multi-tenanted Platform as a Service environment, anything more than a day is too long. Organizations are increasingly expecting instant capacity support and immediate responsiveness.

Let us imagine a hypothetical case study. 

A company has developed a nice multi-tiered Java EE web app, with horizontal scalability - things are looking good...  the operational configuration and management of the platform has reached a point where an instant 'on' is a reality for clients. There is a single location for user and metadata management. Security is looking solid, a development plan is in place to expand and extend the core software... and then bang...

... a merger occurs...  The company's IT hosting group are handed an entirely new platform to co-exist with the current platform... the R&D group inherit a new technology.  This one is .Net, on a different database, using a different app server and web server.   Both applications are large and complex - a complete re-design is out of the question.  So, what next?

Well, the first thing that might happen is that the systems are made to appear integrated.  A common User Interface, often called a 'portal', is created that gives the impression that the independent systems are operating closely together.

Next, an interface.   Integration would be nice but, as the two products were designed in isolation, they don't share a common architecture or platform stack, so an interface is the only viable solution.  Neither product is built around standard interfaces, so hard-coded bi-directional feeds are put in place.

Ok, so they share an access point and they share data, but do they provide an effective solution?

CDISC & Standards

CDISC may offer potential to organizations that choose to buy rather than build.   One of the key objectives of the CDISC standards is to achieve cross system interoperability.   If the disparate systems are capable of offering CDISC ODM integration - both data and metadata - then the challenge may be less steep.

We are also seeing the emergence of Web Services, often leveraging CDISC standards, that in theory will allow the non-programmed bi-directional interchange of data and metadata between supporting systems.
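
For reference, the payload such a service would carry is compact. A minimal sketch of an ODM 1.3 transactional data fragment - all OIDs invented for illustration:

<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3"
     FileOID="F.2009.001" FileType="Transactional"
     CreationDateTime="2009-06-10T12:00:00" ODMVersion="1.3">
  <ClinicalData StudyOID="ST.001" MetaDataVersionOID="MDV.1">
    <SubjectData SubjectKey="SUBJ.001">
      <StudyEventData StudyEventOID="SE.VISIT1">
        <FormData FormOID="FRM.VITALS">
          <ItemGroupData ItemGroupOID="IG.VS">
            <!-- One captured value; the metadata travels separately in the Study section -->
            <ItemData ItemOID="IT.SYSBP" Value="128"/>
          </ItemGroupData>
        </FormData>
      </StudyEventData>
    </SubjectData>
  </ClinicalData>
</ODM>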

The question, ultimately, is: will the resulting systems deliver solutions in the way the customer wants a solution, or will the customers end up having to work in a convoluted way, because that is the way the systems came together and that is the way they need to work?

I will not expand on this commentary for now as I am interested in hearing experiences from those that have faced, or are facing these integration challenges.

Friday, May 8, 2009

Too busy for EDC?

 

When things are busy, people tend to become more reactive than proactive.  Instead of working on a list of tasks that are set out according to a pre-prepared plan, tasks are tackled as they come through the door.   Email, for example, is often a curse in this regard.   The priority of work tackled is often related to the order in which an email appears in your inbox, rather than any real priority of the underlying work.

With a blog, it is necessary to proactively go and monitor feedback.  If you don't go to the blog site, you don't see the feedback.

EDC can be impacted negatively by this.   Imagine you have very busy site personnel - not hard... Typical online EDC systems contain workflow that often demands proactive interaction.   A Monitor might, for example, create a query on data that requires a response from site personnel.   The turnaround time for the query is dependent on the frequency with which the site personnel access the EDC system.   For smaller sites without dedicated study personnel, there is a real likelihood of an investigator simply not finding the time to log in to an online system to look for pending actions.

ECO made a suggestion a few months back regarding the potential use of RSS Feeds - a simple concept with a technical name - whereby such feeds could be used to notify a participant of a particular action.  For example, if a Monitor raised a Query, then this would trigger - at the discretion of the recipient - an RSS / email notification that the action exists.  The recipient would then click on the link, log in, and carry out the action.

So, questions -

Would this cause security concerns?  If the feed included information that might be considered patient confidential - then yes. However, the actual requirement to include any real details in the communication is limited.   The message could simply say 'You have a new Query on Subject xyz'.
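
To illustrate, a minimal RSS 2.0 item for such a notification might look like the sketch below - the URLs and identifiers are invented, and note that it carries nothing beyond the wording suggested above:

<rss version="2.0">
  <channel>
    <title>EDC Action Items</title>
    <link>https://edc.example.com/actions</link>
    <description>Pending actions for your sites</description>
    <item>
      <!-- No clinical detail - just enough to prompt a login -->
      <title>You have a new Query on Subject xyz</title>
      <link>https://edc.example.com/login</link>
      <guid isPermaLink="false">query-12345</guid>
      <pubDate>Fri, 08 May 2009 09:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>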

How would this be implemented?   Certainly a system or study level switch for the sponsor to make it available.  Next, it should be optional at the user level - when a user signs up, they could opt in for correspondence when things require action, by type.  After that, it would be relatively automatic for the users.   As far as the technology goes - relatively easy for web based systems, provided outbound communications don't cause security issues.

Would this be called an 'RSS Feed'? Why should it be?  It tends to be teenagers that recognize the term.  On the other hand, ask an investigator if they would like to be notified of actions they need to perform - that is understandable.

So, a question for blog readers.   Is anyone aware of an EDC system that is already doing this?   Maybe this is already an existing feature in the Medidata Rave or PhaseForward InForm products?    Like to share some experiences?

Monday, March 23, 2009

EDC Study Startup Time

EDC Guru left a comment on the Top 10 mistakes made when implementing EDC posting.  In the comment, it is suggested that one of the potential failure areas is not leaving sufficient startup time when switching from Paper to EDC.  I fully agree, although, from a sales perspective, the alternative is that the sponsor goes to another CRO or vendor that will offer the required timeline.

If you speak to an EDC company, one of the metrics they often pride themselves on is the time to First Patient In.  12 weeks, 10 weeks, 8 weeks or even 6 weeks are bandied about.  This is clearly a metric worth considering, but what actually is included in these weeks?   Is that how long it takes the vendor or CRO to *do* the study build, or is that how long it takes for a sponsor to be ready for EDC?

I would put forward that any sponsor company that goes from no experience to executing EDC studies in 12 weeks will not demonstrate the full benefits of EDC. In fact, there is a danger that the experience will be sufficiently poor as to impact the ongoing acceptance of EDC in the organization. Unwisely implemented EDC can result in similar metrics to paper studies, but at considerably extra cost.

So - what is the solution?  Do I think that drug programs should be delayed for the benefit of EDC technologies?     Well - no. But should an EDC based study be delayed until the organization is sufficiently in tune with the experience? Yes.

So, taking the above as accepted, what are the barriers to achieving a better prepared company?

1). Senior Management within sponsor organizations will measure the success of a drug development program by the compounds that progress through the various phases of study.  Placing a delay in startup will appear like a failure.

2). It is very difficult to develop effective EDC processes :-

  • Sponsor companies new to EDC do not have the experience to determine where the eClinical tools bring benefits and where other methods are better.
  • Vendor companies tackle the process problems from the perspective of technology rather than by tackling the actual business requirements.
  • Consulting companies deliver unadventurous, palatable process solutions based on regurgitated generic concepts.

3). Silo structured organizations may not offer broad departmental support for the process changes that will make a company wide eClinical approach successful.

 

Ok, so I have managed to point out 3 potential problem areas, so, what is the solution.... or rather, what is my take on a potential solution?

1) Senior Management -

The measurement of success is as important as the success itself.   I have seen this attempted, but I have not seen a lot of success.  If everything changes, how do you compare 'apples with apples'?  If nothing changes, then what is the point? You need to look for the lowest common denominator. Faster development and lower costs is a start...

2) Processes -

How many readers go along to DIA or similar conferences and either avoid, or sleep through, the Process topics?  That is a shame, as it is still the Process, Workflow and Change elements of eClinical that are causing bottlenecks.

You get Technology Gurus that create brilliant eClinical systems. Why not empower a Change Guru that 'gets' the technology and 'understands' the potential for changing an organization?

3). I have just answered 3) with 2).

So - back to EDC Guru's comment.  There is a place for rapid start EDC where the vendor or CRO acts just as they did for an old Paper study and takes on the bulk of the work to make an effective deployment.  Some benefits will be realized, but certainly not all. I don't think the term 'Start Up' should be defined as study start-up.  It should be implementation program start-up.  A study will appear, down the road, but not in the same sentence.

Sunday, March 8, 2009

Open Source eClinical - Myths & Facts

I came across a blog article posted here that provides a Q&A with Ben Baumann of Akaza Research.  On reading the article, I feel obliged to respond to some of the comments made.

First of all, I believe that Open Source based systems do provide a valid alternative to closed source products.  I have both developed and used Open Source systems in the past. Both Open and Closed Source based systems are designed to create a financial return for the companies that develop and support them.  With the Open Source model, the revenue comes from support and consulting, with the reduced cost of development shared across a number of organizations.   With commercial software, revenue comes from licenses, support and consulting.  The cost of the development is higher, but this cost is typically covered by the corresponding license revenue.

Back to the article. There are a few points that jump out at me as arguments for OpenClinica:

In addition to a full set of EDC and CDM features one might expect in such a system, OpenClinica has built-in features that give users the ability to set-up their own studies.

Any EDC or CDM system worth its salt will provide functions that give users the ability to set up their own studies.  Good systems will also ensure that an absolute and clear separation is maintained between the configuration of the study and the tool itself. If the tool has to be modified in any way, then you have a validation nightmare each time you run a study.

In short, an organization can make a rapid and highly informed decision whether or not to use OpenClinica without having to go through lengthy vendor-biased demonstrations and negotiations, and rely on a vendor in order to get their studies configured appropriately.

Well... instead of going through a demonstration that presents the features of the product, the user really needs to start digging into the source code to gain a good understanding of what goes on. So, as a user, you need a good understanding of fairly complex 3rd generation language programming.   To fully understand how a tool works takes some time. For a large EDC or CDM system I would put that estimate, for an experienced programmer, at weeks if not months of detailed analysis.

Enhanced validation. Validation can be much more thorough with open source software. Buying proprietary software is like buying a car with the hood welded shut - you don't know what's really going on behind the scenes. Open source provides the highest level of transparency, making it possible to truly validate a system from end-to-end.

"Oh! so I need to validate the system myself?" Validation is complicated and expensive.  All systems in the eClinical world require that the development runs within a structured process. You generally need to maintain a trace-ability matrix that takes the software through a lifecycle from specification through to coding and on to completed and recorded testing.   As with InHouse systems, maintaining this will can be challenging

Typical eClinical systems split the core software product, which is developed and validated, from the configuration of the product, which is typically just 'tested'.  In this situation, you don't need to re-validate the software product following a configuration of the product for a particular deployment (i.e. a study).  The validation of a full EDC product might take 200 man days. The testing of a configuration might only take 10 days.   If you mix the configuration with changes to the core software product, you need to be very careful that you don't compromise the core product validation.

As I said at the start, open source solutions have a place in the eClinical business area.  They can be the right solution in certain circumstances, but they are not always suitable.

Thursday, February 26, 2009

Modelling ODM Metadata - a response

Today, I would like to respond to a recent comment from XML4Pharma regarding the use, or otherwise, of ODM in modelling CDISC based studies. XML4Pharma makes some valid points, but some of the concerns are based on a misunderstanding of the proposals.

First of all, thank you XML4Pharma for your input – all input is good input as far as I am concerned.

Hopefully, with this posting, I can clarify that I do know the difference between ODM and SDTM!... and yes, I do believe that ODM and SDTM complement each other.

I think you have misunderstood how I am suggesting SDTM be used versus ODM. If you look at my recent post, what I am suggesting is precisely what you have been evangelizing about - thinking about the outputs - SDTM - in order to create the inputs. The principle defined was for the modelling of metadata - how you potentially get to an ODM based definition of a study - not how the metadata is used and processed in an EDC product.

The difference between my proposal and yours is that I am suggesting a 3 tier model in order to achieve the underlying definition of forms and rules in a study. The end result may well be ODM, but how the ODM is prepared is what I am suggesting. To understand where I believe the challenge lies, we need to think of the definition of a whole study.

Hypothetically, a typical EDC study build includes, let's say, 8 days of forms development and 20 days of rules development. Looking at just the forms, the re-use can be effective from study to study... a second study can be 4 days, a 3rd study 2 days, etc. But what about all the associated rules? How will they work when the visit structures change, new forms are included, and fields are taken away? Will we see 20 days going down to 10 days and then down to 5 days? That depends on whether the use and content of forms for a study are impacted... but not if the logic is hanging off the forms.

Let's take an example problem. I have 5 similar forms that all need to populate the same SDTM domain. With the proposed model, we start with the definition of the SDTM domain (Tier 1). This contains the definition of what we are aiming for. It contains all fields, not just the ones we might use on a form. Next comes the definition of a superset logical structure that contains all of the appropriate fields that might be used by a sponsor, together with logic that is applied regardless of the capture method (Tier 2). Finally, at Tier 3, we have the 5 different forms. These all subset the logical structure defined at Tier 2 and inherit its rules. As we have a consistent thread from Tier 3 through Tier 2 to Tier 1, it is possible to create a definition that can be used by an eventual target EDC product to populate the same Tier 1 (SDTM domain) regardless of the form structure or logic.

At Tier 1, you define as much information as you can that will be consistent across Tiers 2 and 3 - field name, data type, etc.  Tier 2 inherits information from Tier 1 and adds relationships and rules.  You might have many logical structures to capture the same information, but the information in Tier 1 is only defined once.  Tier 3 applies the same idea.  You might have many instances of a form that in turn apply the same rules, but one form might be visualized differently from another.
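
To make the inheritance concrete, here is a rough sketch of the three tiers. The vocabulary is entirely my own for illustration - it is neither ODM nor SDTM syntax:

<!-- Tier 1: the target SDTM domain, defined once -->
<DomainDef Name="VS">
  <Variable Name="VSORRES" DataType="text"/>
</DomainDef>

<!-- Tier 2: a logical superset structure; relationships and rules live here -->
<LogicalDef OID="LG.VITALS" DomainRef="VS">
  <Field OID="FLD.SYSBP" VariableRef="VSORRES"/>
  <Rule AppliesTo="FLD.SYSBP">raise a query when the value exceeds 200</Rule>
</LogicalDef>

<!-- Tier 3: two presentations of the same logical field, inheriting its rule -->
<PresentationDef OID="FRM.VITALS_FULL" LogicalRef="LG.VITALS">
  <FieldRef FieldOID="FLD.SYSBP" Control="NumericInput"/>
</PresentationDef>
<PresentationDef OID="FRM.VITALS_PDA" LogicalRef="LG.VITALS">
  <FieldRef FieldOID="FLD.SYSBP" Control="Spinner"/>
</PresentationDef>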

That was a simple example. Let's imagine I wanted to capture data for 3 domains on the same form. We could simply say to the Investigator, "sorry, we need to split this into separate pages, because that is what SDTM demands..." when the true answer is 'this is what our modelling structure demands'.  I don't think the model should demand it.

I could have an AE form that is captured on a single page in one study, but captured across 3 pages in another. I could even reach the point where I want to capture AE information on a PDA (though unlikely!). The eventual SDTM data domain is the same, and the rules are the same (I only want to define them once). The only thing that is changing is the presentation.

As you say, ODM does contain cross references back to SDTM - fields and domains. This will allow you to map information back to SDTM.  With the 3 tier model suggested, the Field and Domain (SDSVarName/Domain) are defined through inheritance.

I won't respond to the audit trail and 21 CFR Part 11 comments that XML4Pharma made. Hopefully, by this point, my explanations have corrected the misunderstanding.

XML4Pharma's last point regarding HL7-v3...

we do already have a format for exchange of clinical data (or is submission not "exchange"?). So formatting SDTM as ODM is a very nice, simple and efficient way.

If I were part of CDISC and wanted to work closely with HL7, or if I wanted to be considered a bridge builder to Electronic Health Record systems, I would probably embrace HL7-v3 without even opening the specification. It may or may not be the best solution... but on paper it is probably the most marketable.  In response, I would recommend that you constructively and diplomatically present the benefits of an ODM/SDTM based standard over HL7-v3, with further real-world examples. Without this sort of approach, XML4Pharma - as a leading proponent of ODM based systems - may come across as simply showing bias due to home grown interests. Personally, my gut feeling is that you are correct, but I do not yet know enough about HL7-v3 to form a fully considered opinion. However, I believe there are sufficient intelligent individuals in the CDISC organization to take a considered technical argument onboard, provided it is delivered in the right way.

Tuesday, February 10, 2009

Applying XForms to CDISC

In a previous posting, I made reference to a technology called XForms.  I mentioned that it might prove of some value in addressing the challenges that eClinical systems face in meeting business demands.

XForms is an interesting technology in a number of ways.  To understand what it brings, one should first understand what the traditional method of browser data presentation lacks.  HTML remains the standard means of presenting form information on a screen.  In its present state, it fundamentally dates back to 1995, when the Internet really took off. As far as broad standards for forms based data capture go, things have not really moved forward greatly since then.

HTML forms are built in hundreds of different ways by software residing on the server.  Typically, developers will take lots of strings containing things like <Head>, <TR> or <BR> and glue them together to make up the codes required to present a form in a browser. Admittedly, there are lots of 'helper' technologies to make things easier, such as Extensible Stylesheet Language Transformations (XSLT), Dynamic HTML, etc., but ultimately there is no one way to effectively create and deploy forms to a browser. In particular, there is no easy way to separate what a form does from how a form looks.

The W3C recognized this challenge and created the XForms standard, released in October 2003.

'So, yet another standard that will disappear into obscurity!' I hear you say.  Well, maybe, but even if it does, it has some attributes that are worth understanding when considering how things should work in the ideal world.

Benefits of XForms

  1. User Interfaces built on XForms require fewer round-trips from client to server - they are more self contained than pure HTML implementations.  This leads to a better, faster user experience.
  2. Mobile device capabilities vary significantly.  Creating a form interface that works on an iPhone as well as on a 1400x1200 resolution desktop is very difficult. XForms provides the separation to allow this to occur with common code.
  3. JavaScript is the tool many developers use, on the client browser side, to work around the limitations of HTML.  However, JavaScript is implemented differently by different browsers and can even be disabled for security reasons.  XForms provides a means to avoid this limitation.

More about XForms

All sounds cool... so, let's find out more about XForms...

XForms implements the Model/View/Controller concept.   Sounds like mumbo jumbo, yes, so let's translate that into something meaningful.

When you capture information from forms, you can define 3 distinct layers to efficiently achieve a design.  

Presentation - 'the View'

Starting at the user side, you have the presentation.  On a regular desktop browser, you will maybe have a form presented in a free form layout with 20 questions, nice shaded graphics, etc.  On a PDA, you want something simple that scrolls well - so you would have a smaller set of questions, presented with no fancy graphics, in a simple list layout.  The data you are after is the same.  The logical rules you want to apply to the data are the same - it is only the presentation medium that is changing.

Logic - 'the Controller'

Next, we have the brain - the part where all the rules exist.   This is not 'eClinical' specific but, to use an EDC requirement, this is where all the edit check rules exist.  The logic is attached here, separated from the presentation layer, in order to ensure that regardless of the layout of the form, or the device used, the same rules can be applied and (most importantly) re-used.    This controller can be split into two areas - handling requests from the presentation for dynamic activities, as well as processing the results in a common way.

Data - 'the Model'

Finally, the 'what' in the structure. The data resides in the Model - or rather, the definition of what data. Aligning with the eClinical analogy, SDTM would be an appropriate specification for defining what might exist here.
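
Pulling the three layers together, here is a minimal XForms fragment. The markup follows the XForms 1.0 specification; the instance shape and the vital signs field are my own illustration:

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:xf="http://www.w3.org/2002/xforms"
      xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <head>
    <xf:model>
      <!-- Model: the data being captured -->
      <xf:instance>
        <vitalsigns xmlns="">
          <sysbp/>
        </vitalsigns>
      </xf:instance>
      <!-- Controller: logic bound to the data, independent of any layout -->
      <xf:bind nodeset="/vitalsigns/sysbp" type="xs:integer"
               required="true()" constraint=". &gt; 40 and . &lt; 300"/>
    </xf:model>
  </head>
  <body>
    <!-- View: presentation only; a PDA layout could reuse the same model -->
    <xf:input ref="/vitalsigns/sysbp">
      <xf:label>Systolic blood pressure (mmHg)</xf:label>
    </xf:input>
  </body>
</html>

The same model and bind could sit behind a 20-question desktop form or a 4-question PDA form - only the body changes.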

XForms and CDISC - applying the Model

So, how might XForms be applied to CDISC in general?

To some degree, CDASH defines a set of fields that will be captured without consideration of the device that might capture them.  You could have a situation where a PDA might capture 4 questions at a time whereas a desktop browser would capture 20.  The definition of the form itself will differ due to the medium.  So CDASH, in this perspective, doesn't fully fit.  However, as a means to standardize for the 90% rule - most forms are completed on a full size browser - it is fine. CDASH provides a recommendation of what would be captured together in the presentation layer for one type of device.

So, where would CDASH fit in the XForms Model / View / Controller paradigm?  

Let's look at a possible model for metadata that could be implemented with XForms:-

[Diagram: SDTM (Model) on the left, CDASH plus logic (Controller) in the middle, Forms (View) on the right]

Starting from the left - the obvious model for the Data (Model) is SDTM.  This would include a list of all of the standard domains. It could be extended as required, and include sponsor specific extensions structured as per the SDTM standards specification.

Next, in the middle, we have the Logic (Controller). CDASH could potentially work in this middle tier, although you would need to add the logic elements.  A module could equal a CDASH domain with all of the potential (required and optional) fields included.

Finally, at the Presentation (View) layer, we have the Form. This would be a representation of the CDASH domain with specific controls (drop downs, radios, etc.) added, and fields not required for a specific instance removed. Multiple instances of the form might exist, with the same rules and relationship to the Data Model carried forward.
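
As a rough sketch of how the left and middle boxes might meet in XForms, with the instance shaped after SDTM Vital Signs variables and the logic held in a bind (the naming convention here is my own, not part of any CDISC specification):

<xf:model>
  <!-- Data (Model): an instance shaped after a subset of the SDTM VS domain -->
  <xf:instance>
    <VS xmlns="">
      <VSTESTCD>SYSBP</VSTESTCD>
      <VSORRES/>
      <VSORRESU>mmHg</VSORRESU>
    </VS>
  </xf:instance>
  <!-- Logic (Controller): CDASH-level rules, reusable across form layouts -->
  <xf:bind nodeset="/VS/VSORRES" required="true()"
           constraint=". &gt; 40 and . &lt; 300"/>
</xf:model>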

 

Conclusion

XForms in the context of CDISC and eClinical is not so much a means of capturing the data. Rather, it is a means of modelling the metadata to better support the separation of the data that is needed from the logic that must be applied, and from the presentation through which this is achieved.

With a model like the above applied, SDTM data sets could be achieved quickly, while at the same time offering flexibility in the medium used to capture the data, together with consistency in the rules enforced.

I have purposely left out references to ODM in the above article.  It may have a place here, but I will expand on this in a future posting.

Monday, January 26, 2009

Green gone

Following on from the announcement that Datatrak had resolved the contractual disagreement regarding the ownership of the Clickfind technology, Jeff Green has stepped down as Chairman and CEO.

So, who will buy the remaining assets of DataTrak?   At 15 cents a share and a market cap of 2.6M USD, some company might see value.  I suspect a mid tier CRO might come in and see the technology as a sort of 'House Wine' implementation of EDC for sponsors that do not show a preference for one of the big players.

Wednesday, January 7, 2009

Datatrak back on Track?

After a few months of uncertainty, DataTrak announced at the end of December that they have resolved their legal dispute with the former owners of the ClickFind EDC product they procured in 2006.  I am sure that Jeff Green and his team will be somewhat more positive about the prospects for 2009 than they were 2 months ago.  The share price, though, remains very low, with a potential de-listing due to occur, pending a hearing with Nasdaq.

Hopefully, the company will pull through.  It would be sad to lose one of the early providers of EDC systems.