AIBS has been awarded a Standard Grant from the National Science Foundation on the science of peer review, entitled “Reliability, Risk Aversion, and Bias in Grant Peer Review.”
Agencies that fund scientific research use peer review to identify the most meritorious research projects for support. Peer review involves multiple expert scientists evaluating applications against pre-determined criteria (e.g., the quality of the methodology, the qualifications of the scientists) and assigning each a score. Because research funding is highly competitive, peer reviewers must identify the most outstanding projects among the many that deserve funding. Peer reviewers’ judgments, however, are subject to individual biases and preferences, and little research currently exists on reviewer preferences and how they may influence these judgments. In this project, peer review is treated as a process of evaluating and weighing a project’s risks and benefits. The hypothesis is that reviewers’ attitudes toward risk, i.e., their risk preferences, influence their judgments and scores. Science can proceed in smaller steps or in larger leaps, and risk is important to scientific progress: innovative and novel projects may carry higher risk and can end in great achievement or in failure. Peer reviewer risk preference may therefore influence the rate of scientific progress. This project involves an experiment in which peer reviewers judge the merits of projects that vary in their level of risk. The findings will inform peer reviewer training and evaluation criteria. In addition, a module for undergraduate science classes will educate students about peer review and potential biases.
The hypothesis is tested by having peer reviewers evaluate and score a set of fictitious NIH-style overall impact statements (i.e., summary critiques) for an R01-type research grant application. Different versions of the impact statement are constructed to systematically vary the source of risk (methodology versus investigator) and the level of risk (high versus low) associated with the application, and the text is varied to describe either a female or a male applicant. Peer reviewer participants also answer questions about their demographics, expertise, and self-assessed level of bias, and complete a measure of risk preference. Reviewer demographics and risk preference are then tested as potential predictors of reviewers’ scores, and the interactions of these predictors with the different sources and levels of risk are examined. Finally, the analysis examines the effect of applicant gender on reviewers’ scores and how gender interacts with reviewer risk preference and with the type and level of risk. The results are expected to account for more of the variability in reviewers’ scores than is evident in the extant research and to inform efforts to orient peer reviewers regarding risk preference and biases.
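The design described above amounts to a factorial experiment analyzed with interaction terms. The sketch below (Python with pandas and statsmodels) is purely illustrative and rests on assumptions: the variable names, the 1-9 scoring scale, and the ordinary-least-squares model form are placeholders rather than the project’s actual analysis plan.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data standing in for reviewer ratings of the impact statements.
rng = np.random.default_rng(0)
n = 400  # assumed number of reviewer-by-statement ratings
df = pd.DataFrame({
    "score": rng.normal(5, 1.5, n).clip(1, 9),   # NIH-style impact score (1-9, assumed scale)
    "risk_source": rng.choice(["methodology", "investigator"], n),
    "risk_level": rng.choice(["high", "low"], n),
    "applicant_gender": rng.choice(["female", "male"], n),
    "risk_preference": rng.normal(0, 1, n),      # standardized reviewer risk-preference measure (assumed)
})

# Linear model mirroring the analysis sketched in the abstract: reviewer risk
# preference crossed with the source and level of risk, plus applicant gender
# and its interactions with risk preference and the risk manipulation.
model = smf.ols(
    "score ~ risk_preference * C(risk_source) * C(risk_level)"
    " + C(applicant_gender) * risk_preference"
    " + C(applicant_gender) * C(risk_source) * C(risk_level)",
    data=df,
).fit()
print(model.summary())

A mixed-effects version (e.g., smf.mixedlm with a random intercept per reviewer) would be the more natural choice if each reviewer rates several statements; the OLS form is used here only to keep the sketch minimal.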
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.