Abstract
Background: Research ethics review committees (RERCs) and Human Research Protection Programs (HRPPs) are responsible for protecting the rights and welfare of research participants while avoiding unnecessary inhibition of valuable research. Evaluating RERC/HRPP quality is vital to determining whether they are achieving these goals effectively and efficiently, as well as what adjustments might be necessary. Various tools, standards, and accreditation mechanisms have been developed in the United States and internationally to measure and promote RERC/HRPP quality.
Methods: We systematically reviewed 10 quality assessment instruments, examining their overall approaches, factors considered relevant to quality, how they compare to each other, and what they leave out. For each tool, we counted the number of times each of 34 topics (divided into structure, process, and outcome categories) was mentioned. We generated lists of which topics are most and least mentioned for each tool, which are most prevalent across tools, and which are left unmentioned. We also conducted content analysis for the 10 most common topics.
Results: We found wide variability among instruments, a common emphasis on process and structure with little attention to participant outcomes, and a failure to identify clear priorities for assessment. The most frequently mentioned topics are Review Type; IRB Member Expertise; Training and Educational Resources; Protocol Maintenance; Record Keeping; and Mission, Approach, and Culture. Participant Outcomes is unmentioned in 8 tools; the remaining 2 tools include assessments based on adverse events, failures of informed consent, and consideration of participant experiences.
Conclusions: Our analysis confirms that RERC/HRPP quality assessment instruments largely rely on surrogate measures of participant protection. To prioritize among these measures and preserve limited resources for evaluating the most important criteria, we recommend that instruments focus on elements relevant to participant outcomes, robust board deliberation, and the procedures most likely to address participant risks. Validation of these approaches remains an essential next step.