162 Analytical Tools and Approaches for (Multidimensional) Scholarly Research Assessment and Decision Support in the Biomedical Enterprise.
Number of Anticipated Awards: 2-3.
Fast-Track proposals and Direct to Phase II proposals will be accepted.
Budget (total costs):
Phase I: $225,000 for 6 months
Phase II: $1,500,000 for 2 years
Fast-Track budget may not exceed $1,725,000 and Fast-Track duration may not exceed 3 years.
It is strongly suggested that proposals adhere to the above budget amounts and project periods. Proposals with budgets exceeding the above amounts and project periods may not be funded.
Contemporary science both evaluates and is itself subject to evaluation; research assessment is increasingly becoming an integral part of any scientific activity. Among the reasons for such attention is the growing demand by the public and government for institutions, especially publicly funded ones, to demonstrate the cost-benefit of their research programs. Policy makers now explicitly expect science to demonstrate its value to society. Another reason is the current economic climate, in which budgets are strained and funding is difficult to secure, making ongoing, diverse, and thorough assessment immensely important for the progression of scientific and research programs.
The unprecedented availability of, and ability to collect and analyze, large-scale datasets also contributes to the increased interest in research assessment. Whereas a decade ago scientific evaluation relied mainly on publication and citation counts, most of which were compiled manually, today these data are not only available digitally but can also be triangulated with other data types. For example, publication and citation counts can be combined with collaboration indicators, text analysis, and econometric measures to produce a multi-level view of an institution, a program, or an individual. Research funders are beginning to expect not only publications but also other indicators to be reported as the proposed outputs and outcomes of research in proposals, signaling that other forms of scholarly products and novel metrics may play an important part in research evaluation. Appropriately, in its 2016-2020 Strategic Plan, NIH announced the intent to take greater leadership in developing and validating the methodologies needed to evaluate scientific investments and to use transparent, scientific approaches in decision making.
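The triangulation described above can be illustrated with a minimal sketch: standardizing several indicators measured on different scales so that they form a multi-indicator profile rather than a single citation count. All names and values below are hypothetical, chosen only to demonstrate the idea.

```python
from statistics import mean, stdev

# Hypothetical indicator data for three researchers; the names and
# numbers are illustrative only, not drawn from any real dataset.
records = {
    "A": {"citations": 120, "coauthors": 15, "altmetric": 40},
    "B": {"citations": 300, "coauthors": 8,  "altmetric": 10},
    "C": {"citations": 60,  "coauthors": 30, "altmetric": 95},
}

def zscores(values):
    """Standardize values so indicators on different scales are comparable."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

indicators = ["citations", "coauthors", "altmetric"]
names = list(records)
profile = {n: {} for n in names}
for ind in indicators:
    column = [records[n][ind] for n in names]
    for n, z in zip(names, zscores(column)):
        profile[n][ind] = round(z, 2)

# Each researcher now has a multi-level profile combining several
# indicator types instead of a single raw count.
for n in names:
    print(n, profile[n])
```

Any real system would of course need field normalization, time windows, and far richer indicator sets; the point is only that triangulating indicators yields a profile a single metric cannot.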
This RFP solicits the research and development of advanced, sophisticated analytical models, tools, and metrics to enhance professional evaluation and decision making in life sciences management and administration. These metrics must be developed so that they can be embraced broadly by the life science community, be readily understandable by non-scientists, and be grounded in outcomes that are highly valued by the general public, funders, and policy makers. It is envisioned that, if proven, these metrics will be used by NGOs/disease foundations, advocacy groups, research funders, policy makers, and academic institutional bodies (e.g., promotion committees).
Examples of projects may include, but are not limited to:
• Studies to define and validate metrics that specifically measure how virtuous the research is (quality, transparency, reproducibility, integrity, and potential for translation/application).
• Studies to compare and investigate the relationship between traditional metrics, such as text citations and expert evaluations, and webometrics/altmetrics, such as social media usage analysis.
• Tools and approaches to quantify relationships between publications and registered products (drugs, devices, diagnostics, etc.) in order to increase public appreciation of the societal value of life science discoveries, provide instructive insights for policy makers, and guide funding decisions and path selection that would accelerate progress toward cures.
• Application of advanced empirical methods to altmetrics: large-scale studies assessing the reliability, validity and context of the metrics.
• Analytical approaches answering the question of how research-productive scientists can be identified, clustered, and configured for optimal research synergies.
• Sophisticated technologies to accurately analyze the demographics of research users (e.g., scholar or non-scholar, career stage, which research product they actually used, and why).
• Sophisticated approaches and tools that, based on bibliometrics or otherwise, would enable the meaningful nomination of research studies for replication.
• Sophisticated approaches and tools for the standardized evaluation of evidence in large numbers of biomedical research documents (project progress reports, research manuscripts, etc.).
• For student education, building models of good and bad scientific behavior, with demonstrations of the possible consequences of each.
• Products that track a variety of scholarly activities, such as teaching and service, correlating them with lecture attendance and the popularity of reading lists.
• Approaches to directly compare or intelligently combine metrics (bibliometric or altmetric) and peer review.
• Studies to investigate new forms of impact measurement that are broader, faster, and more diversified than traditional metrics.
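Several of the bullets above (comparing traditional metrics with altmetrics, and assessing their reliability and validity) amount to measuring agreement between indicator rankings. A minimal sketch of that kind of analysis is a Spearman rank correlation between citation counts and an altmetric; the data below are hypothetical and for illustration only.

```python
# Sketch: how strongly does a hypothetical altmetric (social-media
# mentions) agree with traditional citation counts, measured by
# Spearman rank correlation? Implemented with the standard library.

def ranks(values):
    """Rank values from 1 (smallest). Ties are not handled, which is
    acceptable for this illustrative data."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rho via 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

citations = [250, 40, 95, 10, 300, 60]   # traditional metric (hypothetical)
mentions  = [180, 30, 120, 5, 260, 20]   # altmetric (hypothetical)

rho = spearman(citations, mentions)
print(f"Spearman rho = {rho:.2f}")
```

A large-scale study of the kind the RFP envisions would extend this to thousands of papers, multiple altmetric sources, and tied ranks, and would interpret low correlations as evidence that the altmetric captures a different dimension of impact rather than simply noise.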