Writing Research and Evaluation Plans for NSF Grants: How are they similar and different?*
by
Kavita Mittapalli, Ph.D.
Several National Science Foundation (NSF) solicitations in the Directorate for
Education & Human Resources, including:
Improving Undergraduate STEM Education (IUSE),
Robert Noyce Teacher Scholarship Program (Noyce),
Advancing Informal STEM Learning (AISL),
Innovative Technology Experiences for Students and Teachers (ITEST), and
Scholarships in Science, Technology, Engineering and Mathematics Program (S-STEM)
may require separate research and evaluation plans. For a competitive proposal, it is important
to be able to distinguish between the two and to write both sections well.
Research vs. Evaluation
Several fellow researchers and evaluators have done a tremendous job of explaining the
fundamental differences between the two, including John LaVelle (AEA365 blog post), Sandra
Mathison’s book chapter, and Patricia Rogers’ Better Evaluation blog post.
Research and evaluation are conducted using similar data collection and analytic methods. The
difference is in the intent or purpose and the audience to which results will be reported.
In essence, the research aspect of an NSF grant is intended to test a hypothesis, and its results
are meant to be generalizable. Research measures participant-level metrics as well as mediating
and moderating factors in a study.
Research asks: What’s so? How does it work?
Typically, results are reported at the end of the research project and are meant to inform the
field of study.
The evaluation aspect of an NSF grant, by contrast, particularizes: it is designed to improve the
program itself.
Evaluation assesses the value or merit of the program and asks the So what? and How well does
it work? questions.
Therefore, evaluation results should be provided to PIs, Co-PIs, other project leads, and major
stakeholder groups throughout the life of the study and not just at the end.
Let’s dissect a typical research plan.
A research plan typically comprises:
Hypothesis: The hypothesis examines the project’s/intervention’s outcomes by measuring
participants’ achievement in STEM courses, STEM teaching and learning, engagement, training,
careers/jobs, and the gains in knowledge that may result from the intervention(s). (Hint: frame
it as an if/then statement.)
Research Questions: Research questions take forms such as:
a) What is….?
b) How does/do….?
c) In what ways do/es (a particular intervention) work?
d) How does it show its intended effects on participants’/stakeholders’ cognition
and/or affective behaviors?
e) Whether and how participants’ background characteristics (e.g., demographics, academic
performance measures, affective factors) play a role in decision-making about choosing
STEM majors, engaging in training opportunities, and utilizing available resources,
services, and careers/jobs when exposed to the intervention(s)?
Study Plan: The study plan refers to the type of research design the researchers intend to use.
It could be a single-case pre-post design, a quasi-experimental design, or a randomized controlled
trial. The study design will depend on the type of research questions being asked, the feasibility
of undertaking a complex study, time, and budget, among other factors.
Analytical Techniques: Analytical techniques comprise the types of analyses the researcher will
undertake to derive findings from the study. Qualitative data collected via interviews, focus
groups, and other means would require coding to develop common themes and build
narratives. Quantitative data can be analyzed using descriptive or predictive analyses; the
choice depends on data quality and quantity (e.g., sample size) and on the outcome measures,
some of which may require more complex analytical techniques.
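To make this concrete, here is a minimal, purely illustrative sketch of what a simple quantitative
analysis might look like for a single-case pre-post design. The file name, column names, and the
choice of a paired t-test are assumptions for the example, not part of any NSF requirement; the
appropriate analysis always depends on the design, sample size, and data quality.

import pandas as pd
from scipy import stats

# Hypothetical data: one row per participant with pre- and post-intervention scores.
# The file and column names ("prepost_scores.csv", "pre_score", "post_score") are assumed.
df = pd.read_csv("prepost_scores.csv")

# Descriptive analysis: summary statistics for the pre and post measures.
print(df[["pre_score", "post_score"]].describe())

# One common inferential step for a pre-post design: a paired t-test on matched scores.
t_stat, p_value = stats.ttest_rel(df["pre_score"], df["post_score"])
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

A researcher might pair a sketch like this with qualitative coding of interview or focus group
transcripts to triangulate the findings.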
Let’s dissect a typical evaluation plan.
An evaluation plan typically comprises:
Evaluation Questions: Evaluation questions take forms such as:
a) To what extent…?
b) With what fidelity…?
c) Has an intervention and/or a program model been planned and implemented as
intended?
d) What worked (well) and what are the lessons learned?
e) With what quality are the research activities planned and completed?
f) What is the scope for a broader impact of the intervention/program model?
g) To what extent has the program brought about community or institutional
change/transformation?
Evaluation Approach/Design: Among other options, approaches include formative and
summative evaluations.
Formative evaluation, usually conducted in the first two years of the grant, typically examines
program attributes such as adherence, delivery, and quality as proposed, and provides
just-in-time feedback to help improve the program in its formative stages of planning and
implementation.
Summative evaluation, usually completed in the latter years of the grant, focuses on the
type and number of outputs or products, as well as whether the intended outcomes were achieved
and how well. It also assesses the program model, its merit, and its broader impacts insofar as
its viability to be sustained beyond its funding cycle and/or scaled up.
Data Collection & Analytical Techniques: Data collection sources and analytical techniques are
similar to those in the research plan and will depend on the evaluation questions, the feasibility
of undertaking the study, budget, and time, among other factors.
As I envision it, NSF grant proposal work is a triangle, with the PI/Co-PI (grant writer) at the top
vertex and a researcher and an evaluator at the other two vertices, working in close concert to
plan and develop a winning proposal.
Always plan to engage a researcher and an evaluator early and often!
External Evaluator vs. Advisory Team/Board
In some cases/tracks, principal investigators (PIs) may choose to have an advisory team/board,
instead of an external evaluator, conduct the project evaluation in addition to serving in an
advisory role.
Let’s talk about the selection process, roles, and functions of an advisory team/board when that
case arises. Please note that this is not to say that projects do not need a separate advisory
team/board; advisory teams/boards and external evaluators serve different roles in different
tracks. Please read the proposal solicitation closely before making a decision.
Ask these questions before selecting an advisory team/board in addition to or instead of an
external evaluator:
o Who will serve on the team/board? (e.g., community members, faculty, K-12 teachers,
evaluators, industry partners, other)
o Why do you need an advisory team/board instead of or in addition to an evaluator?
(e.g., nature of the project, budgetary constraints, others)
o What are their roles/functions in the project? (e.g., purely advisory, rubber-stamping,
and/or conducting data collection and reporting out)
o What is in it for them and you? (e.g., what is the mutual benefit of engaging the
team/board?)
o What is the nature of work the team/board will do? (e.g., clearly defining their roles,
functions, expectations, and deliverables, others)
Once the advisory team/board is selected, clearly outline their specific roles, expectations, and
stipends/compensation in the budget narrative.
In cases where there are both a separate advisory team/board and an external evaluator, it is
important to understand their different roles, functions, and contributions. External evaluators
and advisory board members generally contribute to the accountability component of project
management and evaluation(s). Both parties may be asked to provide the project team with
feedback (on both as-needed and annual bases) on a range of issues, including the project’s
implementation, research/evaluation methods and analysis, dissemination processes, how to
improve engagement with a project’s target audiences, and how to effectively bridge
boundaries in collaborations and partnerships.
Although we cannot advocate engaging one group/team over the other to get the work done,
it is important to understand their roles/functions and needs within a grant before making a
decision.
*An earlier version of this piece appeared in the August 2020 issue of HSISTEMHub at New Mexico
State University and it can be accessed here: https://hsistemhub.org/portfolio-item/august-2020-newsletter/
Useful evaluation resources:
NSF 2010 User-Friendly Handbook for Project Evaluation, https://www.evalu-ate.org/wp-content/uploads/formidable/Doc_2010_NSFHandbook.pdf
CDC Approach to Evaluation, https://www.cdc.gov/eval/approach/index.htm
Evaluation Resources from Ohio University, https://www.ohio.edu/sites/default/files/sites/community-engaged-research/Evaluation%20Resources%20_Final.pdf
W.K. Kellogg Foundation Logic Model Development Guide,
https://www.aacu.org/sites/default/files/LogicModel.pdf
Reimagining S-STEM Project Assessment and Knowledge Generation, webinar presented by Kavita
Mittapalli, as part of the AAAS S-STEM initiative. Slides,
https://www.sstem.aaas.org/s/November-2020-S-STEM-Webinar.pdf
MN Associates, Inc. is a small, woman-owned
education research and evaluation firm in Northern
Virginia. MNA is headed by Kavita Mittapalli, Ph.D.,
who brings over 18 years of experience in conducting
R & E work for various programs and initiatives
across the country. She worked at various consulting
firms before founding MNA in 2004. Kavita started
her career in Agricultural Sciences in India before
becoming an Applied Sociologist and a mixed
methodologist with an interest in research design in
K-20. She brings her multi-disciplinary skills and
knowledge to all the work she does at MNA. She is
supported by five team members who bring their
very diverse backgrounds, academic training, and professional experiences to MNA. To date,
MNA has evaluated 32 NSF grants in 16 tracks (e.g., ATE, AISL, CAREER, Conference, CREST,
ECR-CORE, ERN, HBCU-UP, HSI, IUSE, INCLUDES/DDLP, ITEST, MSP/STEM+C, Noyce, PREM, and
S-STEM), in addition to evaluating several medium to large grants funded by other agencies (e.g.,
USDE, DOL, NASA, DODEA, and DOT). Kavita can be reached at [email protected].
Connect with her on LinkedIn (linkedin.com/in/kavitamittapalli) and on Twitter @KavitaMNA.