FR 2021-02240

Overview

Title

Agency Information Collection Activities: Comment Request; Merit Review Survey-2021 and 2023 Assessment of Applicant and Reviewer Experiences

Agencies

National Science Foundation (NSF)

ELI5 AI

The National Science Foundation wants to ask people about their experience with a process, like how happy or frustrated they are, and they need the government to say it's okay to do this survey. They are going to ask a lot of people and it will take a lot of time and money, but it's important so they can try to make things better for everyone.

Summary AI

The National Science Foundation (NSF) is requesting public comments on a proposed information collection related to its Merit Review Survey for the years 2021 and 2023. This survey aims to evaluate the experience of applicants and reviewers in the NSF's merit review process, focusing on aspects like satisfaction, perceptions of burden, and the quality of reviews. The survey will target 87,000 participants for each survey round, with an expected response rate of 40%. Comments are invited on the necessity, accuracy, and methods of the information collection, and should be submitted within 30 days of the notice's publication.

Abstract

The National Science Foundation (NSF) has submitted the following information collection requirement to OMB for review and clearance under the Paperwork Reduction Act of 1995. This is the second notice for public comment; the first was published in the Federal Register, and no comments were received. NSF is forwarding the proposed submission to the Office of Management and Budget (OMB) for clearance simultaneously with the publication of this second notice.

Type: Notice
Citation: 86 FR 8045
Document #: 2021-02240
Date:
Volume: 86
Pages: 8045-8047

Analysis AI

Summary of the Document

The National Science Foundation (NSF) has proposed an information collection effort concerning its Merit Review Surveys for 2021 and 2023. These surveys aim to gather feedback from applicants and reviewers involved in NSF's rigorous merit review process. The focus areas include applicant and reviewer satisfaction, perceptions of the review's burden, and the quality of feedback received. This initiative plans to engage around 87,000 participants per survey cycle with an anticipated 40% response rate.
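As a quick sanity check (this figure is derived here, not stated in the notice), the stated sample size and response rate imply roughly the following number of completed responses per survey round:

\[
87{,}000 \text{ invited participants} \times 0.40 \approx 34{,}800 \text{ expected responses per round}
\]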

Significant Issues or Concerns

First, there is a substantial estimated cost associated with this data collection. The total burden is projected at 23,200 hours across the two survey years, resulting in an estimated cost of roughly $440,800 per survey year. This estimate rests on the assumption that most participants are postsecondary teachers, so it may not reflect the actual mix of respondents and could skew the figures.

The document employs specialized terminology that might be incomprehensible to the general public. Terms such as "FastLane" and criteria like "Intellectual Merit" and "Broader Impacts" are specific to NSF's internal processes and might reduce clarity.

Additionally, while the document briefly touches upon potential biases in the review process, it does not offer a comprehensive analysis of these biases or the strategies in place to mitigate them. Consequently, it remains unclear how these reviews are ensuring fairness across all demographic groups.

Lastly, while the survey is intended to aid "continual improvement activities," the document lacks concrete details on how exactly this data will influence policy or procedural changes, leaving the practical benefits somewhat ambiguous.

Public Impact

On a broad level, this document highlights NSF's intent to refine its merit review process by actively engaging those involved in submitting and reviewing proposals. The process affects many stakeholders within the scientific and academic communities, potentially leading to a more efficient and fairer system that could benefit the community at large.

Impact on Specific Stakeholders

For applicants and reviewers, the survey could provide an opportunity to have their feedback influence the NSF's evaluation process positively. However, without clear information on resulting actions or changes, stakeholders might feel uncertain about the impact of their participation. Moreover, the assumption-based cost estimates for conducting these surveys might not accurately capture the full range of academic professionals involved, potentially affecting the perceived legitimacy of the survey findings.

There is also mention of potential changes to the proposal deadline structure, but the implications for reviewer and applicant burdens are not outlined. This could result in mixed reactions depending on individual or institutional reliance on existing deadlines.

Overall, while the effort to collect feedback is commendable, the lack of detailed analysis and clear outcomes could concern those looking for tangible improvements in the NSF's funding processes.

Financial Assessment

The document under review concerns the National Science Foundation (NSF) seeking approval for a Merit Review Survey, specifically assessing the experiences of applicants and reviewers involved in NSF's merit review process. The financial implications of this survey are addressed through burden estimates, which provide insights into the anticipated costs related to collecting and analyzing survey data.

Summary of Financial References

The document estimates a total burden of 23,200 hours for collecting responses across the two survey years, 2021 and 2023. The burden calculation assumes that most survey respondents are postsecondary teachers, so the document uses the annual mean wage for that occupation, $79,540, to derive an hourly rate of approximately $38.00. On that basis, the estimated cost of conducting the survey is approximately $440,800 per year.
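For readers tracing the arithmetic, a rough reconstruction is sketched below; the 2,080-hour work year and the even split of the 23,200-hour total across the two survey rounds are assumptions made here, not figures stated in the notice:

\[
\begin{aligned}
\text{hourly rate} &\approx \frac{\$79{,}540}{2{,}080\ \text{hours}} \approx \$38.00\\
\text{hours per survey year} &\approx \frac{23{,}200}{2} = 11{,}600\\
\text{cost per survey year} &\approx 11{,}600 \times \$38.00 = \$440{,}800
\end{aligned}
\]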

Relation to Identified Issues

These financial references relate to several potential issues:

  1. Cost Justification and Transparency: The calculated cost of $440,800 per survey year could be perceived as high, particularly if the benefits or returns of conducting such a survey are not explicitly justified or apparent. The document does not clearly elaborate on how the survey's findings will lead to specific improvements, which may obscure the perceived value of this financial outlay.

  2. Assumptions on Respondent Demographics: The calculation assumes respondents are generally postsecondary teachers, which may not represent all survey participants accurately. If this assumption does not hold true across the respondent pool, it could result in skewed cost estimates and potentially undermine the survey's budgetary provisions.

  3. Use of Financial Terms: The references to financial allocations are somewhat technical. While the breakdown of the hourly wage calculation is straightforward, the general public might not immediately grasp the implications of assuming a specific respondent demographic based solely on the wage average of postsecondary teachers.

  4. Implications of Removing Proposal Deadlines: The document alludes to investigating the impact of eliminating annual proposal deadlines but does not clarify how this might affect fiscal outcomes or processes. A more detailed explanation could guide an understanding of whether these changes could potentially offer financial benefits or require additional resources.

In essence, while the document lays out its cost estimate explicitly, the broader financial implications, the justification for the expense, and the underlying assumptions warrant a clearer exposition to reassure stakeholders of the survey's financial prudence and transparency.

Issues

  • The document estimates a total burden of 23,200 hours for collecting survey responses across the two survey years, which at an hourly wage of about $38 works out to roughly $440,800 per survey year. This could be interpreted as a high cost for survey analysis, especially if the return on investment is unclear.

  • The document assumes that most respondents will be postsecondary teachers and uses their average wage for estimating costs. This assumption may not apply to all respondents, potentially skewing cost estimates.

  • The document uses complex terms and industry-specific jargon (e.g., 'FastLane', 'Intellectual Merit criterion', 'Broader Impacts criterion') that may be difficult for those unfamiliar with NSF processes to understand, potentially reducing clarity for a general audience.

  • The methodology and potential biases in the NSF merit review process are discussed only briefly, making it unclear how biases are managed and whether all demographic groups are equally satisfied with the process.

  • There is no detailed explanation of how the data collected from the survey will translate into improvements or specific changes in policy or practice, which might appear to lack transparency or leave the benefits unclear.

  • The potential impact of eliminating annual proposal deadlines is mentioned, but how this would affect the merit review process and its associated costs or benefits is not thoroughly explained.

  • The survey findings are intended to inform 'continual improvement activities,' a phrase vague enough that the tangible outcomes or changes expected to result from the survey are not clearly communicated.

Statistics

Size

Pages: 3
Words: 1,889
Sentences: 73
Entities: 137

Language

Nouns: 652
Verbs: 162
Adjectives: 79
Adverbs: 27
Numbers: 95

Complexity

Average Token Length: 5.26
Average Sentence Length: 25.88
Token Entropy: 5.46
Readability (ARI): 20.08

Reading Time

about 7 minutes