Improving clinical trial reporting – the Edinburgh experience
Many institutions across Europe are currently working to improve their clinical trial reporting and uploading the results of trials onto trial registries. At the University of Edinburgh in Scotland, we started this process several years ago.
This blog outlines what we did, the challenges we faced, and our plans for the future.
Cleaning up EudraCT records
We spent a lot of time on closing out past records on the European trial registry EudraCT and building a system to ensure that future results are rapidly uploaded, only to find that Brexit was making all of that largely redundant!
Essentially our process was:
(1) Identify studies missing results
(2) Contact Principal Investigators to ask if they wanted help
(3) Contact them again
(4) Try to find the study publication in academic journals
(5) If all else failed, sit down with the investigator while they filled in the details
A major issue was that investigators considered their work done once the results had been published in a peer-reviewed journal. A few of them argued that if the results had been made public in a journal, it did not matter that the record was not closed out in EudraCT. Personally, I think a "tidy" set of study records is also a reflection of a "tidy" research management process, so we should have both.
We are now changing our internal rules to make closing out registry records an explicit requirement. We are also updating our training materials accordingly.
Cleaning up ClinicalTrials.gov records
The American ClinicalTrials.gov registry presents more complicated challenges:
- Some of the studies registered there are RCTs which fall under U.S. Food and Drug Administration reporting rules (making it a legal requirement to upload results)
- But many studies are RCTs which do not fall under those rules.
- Some studies are not even RCTs; researchers simply used ClinicalTrials.gov as a "register of convenience" - which just creates a mess.
We have used the ClinicalTrials.gov API to maintain a continually updated list of which studies are due for reporting. We remind investigators to close out records, chase them up when they do not, and if problems persist, we offer to help - sometimes quite forcefully!
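The monitoring step described above could be sketched roughly as follows. This is a minimal illustration, not the system we actually run: it assumes the public ClinicalTrials.gov v2 API (`/api/v2/studies`) and its query parameters as currently documented, and the sponsor search string is a hypothetical placeholder.

```python
# Sketch: list completed ClinicalTrials.gov studies with no posted results.
# Assumes the public v2 API; the sponsor name below is a placeholder.
from urllib.parse import urlencode

API_BASE = "https://clinicaltrials.gov/api/v2/studies"

def build_query_url(sponsor: str, page_size: int = 100) -> str:
    """Build a v2 API URL for completed studies from one sponsor,
    requesting only the fields needed to spot missing results."""
    params = {
        "query.spons": sponsor,                  # search by sponsor/collaborator
        "filter.overallStatus": "COMPLETED",     # only completed studies
        "fields": "NCTId,OverallStatus,HasResults",
        "pageSize": page_size,
    }
    return f"{API_BASE}?{urlencode(params)}"

def studies_missing_results(payload: dict) -> list[str]:
    """Given one page of the JSON response, return the NCT IDs of
    studies that have no posted results."""
    missing = []
    for study in payload.get("studies", []):
        if not study.get("hasResults", False):
            nct_id = study["protocolSection"]["identificationModule"]["nctId"]
            missing.append(nct_id)
    return missing
```

In practice the response is paginated (via a `nextPageToken`), so a real monitoring job would loop over pages and feed the resulting list into whatever reminder workflow the institution uses.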
Sometimes all that is required is a simple update to the record, but even then we find studies registered by PhD students who have since left. Their old academic email addresses are usually dead, so even when we do get in contact (I have gone as far as using Facebook and LinkedIn to track people down), they may be unable to get direct access to their record.
So we have put a lot of work into prospectively educating researchers on the most appropriate system for registering a study, for example encouraging them to use the Open Science Framework (OSF) for studies that are not RCTs.
Current approach
Our current close out "map" for trials is:
(1) Get your results out early using a preprint server like medRxiv
(2) Get your results published, open access if at all possible, making trial datasets available
(3) Get your trial registration record closed out
We are trying to embed this into everyone's everyday practice. We do not have many "carrots" at trial close-out compared with the up-front approval stage, so this is education and support more than arm-twisting, and we are very aware that we are only at the beginning of the journey.
Future plans
We have not yet started tackling observational studies. An external review of the clinical trial and research pathway in Edinburgh recommended investment in a research design service or similar for non-CTIMPs. How this might best be delivered is currently under consideration.
We also hope to address systematic review registrations on PROSPERO. I hope that community is a little more tuned in to these issues, but I am prepared to be surprised.
I think new dashboard tools like the one currently being developed by Nico Riedel and his colleagues at QUEST will really help at an institutional level. While I am sceptical about university rankings in general, I do think that timely publication and record keeping are integral to good science, and if people want to "game" the system by demonstrating that they perform well on these measures, I am comfortable with that!
However, the most important metric for me is not how we compare to other institutions, but how our performance has improved compared to one or two years ago.
This guest blog was written by Malcolm MacLeod, Professor of Neurology and Translational Neuroscience at the University of Edinburgh and a member of the Edinburgh CAMARADES group. He can be contacted on Twitter. TranspariMED would like to thank him for sharing his experiences with the wider medical research community.