Researchers Create New Tool to Better Evaluate Implementation Science Research Proposals

Researchers at Boston University’s Evans Center for Implementation and Improvement Sciences (CIIS) have developed new scoring criteria for evaluating the quality of scientific research proposals. Termed the ImplemeNtation and Improvement Sciences Proposals Evaluation CriTeria (INSPECT), this new approach aims to improve the identification of high-quality proposed research that advances improvements in health care delivery and patient outcomes.

Research proposals are traditionally evaluated using National Institutes of Health (NIH) criteria for impact, significance, innovation and approach. These criteria work well for evaluating the quality of research seeking to test the effectiveness of new interventions. However, the CIIS team found the NIH criteria were not specific enough to evaluate research that tests strategies for promoting the uptake of evidence-based practices in real-world settings.

Corresponding author Erika Crable, MPH

“Implementation science is the study of strategies applied at the patient, provider, organization or health system level that promote the systematic uptake of evidence-based practices which are otherwise underused,” said corresponding author Erika Crable, MPH, research fellow at CIIS. “Once we have evidence that an intervention works, implementation science asks, ‘How do we get people to use the intervention, with fidelity, in a sustainable way?’”

To test the reliability of INSPECT, CIIS researchers at Boston University School of Medicine (BUSM) independently applied the new criteria to 30 grant proposals. Overall, the proposals scored high on the INSPECT criterion evaluating the significance of the care or quality gap to be addressed by the proposed research. However, the proposals scored poorly across most other criteria, signaling a need to expand implementation science education and training at academic medical centers.

“Our study suggests that the traditional efficacy/effectiveness grant scoring lens is insufficient to evaluate key aspects of research seeking to promote the use of evidence-based practices in real-world settings. Instead, we suggest new grant scoring criteria that are reliable in evaluating the specific goals of implementation science research,” Crable said.

The researchers believe that a reliable, implementation science-specific set of scoring criteria will be a valuable tool both for grant reviewers evaluating proposed implementation science research and for grant writers seeking guidance on how to effectively communicate implementation science research approaches.