Survey Methods For Educators: Collaborative Survey Development (Part 1 of 3)
Clare W. Irwin
Erin T. Stafford
REL 2016–163
The National Center for Education Evaluation and Regional Assistance (NCEE) conducts
unbiased large-scale evaluations of education programs and practices supported by federal
funds; provides research-based technical assistance to educators and policymakers; and
supports the synthesis and the widespread dissemination of the results of research and
evaluation throughout the United States.
August 2016
This report was prepared for the Institute of Education Sciences (IES) under Contract ED-IES-12-C-0009 by Regional Educational Laboratory Northeast & Islands administered by
Education Development Center, Inc. The content of the publication does not necessarily
reflect the views or policies of IES or the U.S. Department of Education, nor does mention
of trade names, commercial products, or organizations imply endorsement by the U.S.
Government.
This REL report is in the public domain. While permission to reprint this publication is
not necessary, it should be cited as:
Irwin, C. W., & Stafford, E. T. (2016). Survey methods for educators: Collaborative survey
development (part 1 of 3) (REL 2016–163). Washington, DC: U.S. Department of Education,
Institute of Education Sciences, National Center for Education Evaluation and Regional
Assistance, Regional Educational Laboratory Northeast & Islands. Retrieved from http://ies.ed.gov/ncee/edlabs.
Summary

This guide describes a five-step collaborative process that educators can use with other educators, researchers, and content experts to write or adapt questions and develop surveys for education contexts. This process allows educators to leverage the expertise of individuals within and outside of their organization to ensure a high-quality survey instrument
that meets the policy or practice goals of the organization. Examples from collaborative
survey development projects are highlighted for each step.
This guide is the first in a three-part series of survey method guides for educators. The
second guide in the series covers sample selection and survey administration, and the third
guide in the series covers data analysis and reporting.
Contents

Summary
Why this guide?
Collaborative survey development
Step 1: Identify topics of interest
Step 2: Identify relevant, existing survey items
Step 3: Draft new survey items and adapt existing survey items
Step 4: Review draft survey items with stakeholders and content experts
Step 5: Refine the draft survey with pretesting using cognitive interviewing
  Organize data and refine the survey items based on cognitive interview data
Appendix A. Additional survey development resources
Appendix B. Sample table of specifications: Excerpt from the Early Childhood Education Research Alliance collaborative survey development project
Appendix C. Sample analysis plan: Excerpt from the Northeast Rural Districts Research Alliance collaborative survey development project
Appendix D. Sample feedback form: Excerpt from the Early Childhood Education Research Alliance collaborative survey development project
Appendix E. Sample cognitive interview protocol from the Northeast Rural Districts Research Alliance collaborative survey development project
Appendix F. Sample cognitive interview analysis codebook from the Northeast Rural Districts Research Alliance collaborative survey development project
References

Boxes
1 Three research alliance projects provide examples of collaborative survey development
2 Definitions of key terms
3 Step 1 in practice: Identifying policy-relevant questions with the English Language Learner Alliance
4 Step 2 in practice: Identifying existing survey items related to online learning
5 Considerations for item stems and response options
6 Step 3 in practice: Drafting and adapting survey items with early childhood educators
7 Step 4 in practice: Reviewing draft items with stakeholders and experts in early childhood education
8 Step 5 in practice: Cognitive interviewing with online and distance learning educators

Figures
1 Three stages in the survey research process
2 Five steps in the collaborative survey development process

Tables
1 Common response option types and considerations
B1 Sample table of specifications from the Early Childhood Education Research Alliance
C1 Sample analysis plan: Survey item-to-research-question mapping from the Northeast Rural Districts Research Alliance
D1 Sample survey feedback form from the Early Childhood Education Research Alliance
F1 Relevance: The extent to which the survey and its items tap into appropriate policies and practices
F2 Length: The number of items and the time taken to complete the survey
F3 Flow: The survey format and grouping and ordering items
F4 Overview and instruction pages
F5 Single item: Question 1: In school year 2012/13, were any students in your school enrolled in online courses?
Why this guide?
Increasingly, state and local educators are using data to inform policy decisions (Hamilton
et al., 2009; Knapp, Swinnerton, Copland, & Monpas-Huber, 2006; U.S. Department of
Education, 2010). Educators need access to a wide variety of data, some of which may not
be in their data systems. Survey data can be particularly useful for educators by offering a
relatively inexpensive and flexible way to describe the characteristics of a population. To
obtain such information, educators may need to conduct their own survey research.
The survey research process includes survey development, sample selection and survey
administration, and data analysis and reporting (figure 1). The activities undertaken in
each stage may vary depending on the type of information being collected (Fink, 2013).
For example, if educators want to conduct a survey on a topic that has been widely surveyed in similar settings, they may be able to use an existing survey instrument and skip
the survey development stage altogether. Or they may decide to use an existing survey but
add items of particular interest to their context. If no surveys exist for the topic, educators
will need to develop a new survey in stage 1 (see figure 1).
The three guides in this series correspond to the three stages in figure 1 and provide an
overview of survey methodologies for a nontechnical audience of educators. However, the
resources needed to complete some of the activities may make the series most relevant to
larger school districts, state departments of education, and other large agencies. This first
guide in the series covers survey development, the second guide in the series covers sample
selection and survey administration (Pazzaglia, Stafford, & Rodriguez, 2016a), and the
third guide in the series covers data analysis and reporting (Pazzaglia, Stafford, & Rodriguez, 2016b).
These guides are intended to lead to the use of survey findings in decisions about policy
and practice in schools and local or state education agencies; however, offering guidance
on how decisionmakers can use the results is not the focus.
Collaborative survey development
Methodological resources for survey researchers found in theoretical texts may not guide
educators through a collaborative approach to survey research in practical education
contexts. The Regional Educational Laboratory (REL) Northeast & Islands collaborative survey development process is one in which educators work with one another, researchers, and content experts to write or adapt survey items to create a new survey instrument. This five-step process allows educators to leverage the expertise of individuals within and outside their organization to ensure a high-quality survey and reduce the burden on any one individual (figure 2).
The core survey development team is a small group of individuals who lead development of
the survey through this process. The team should be made up of three to eight individuals,
some with experience in survey development, some with expertise in the content area to
be surveyed, and some with knowledge of the local context. The size of the team may vary
by project. The five steps are intended to guide a core development team through developing a survey.
To address the needs identified by the three research alliances (box 1), REL Northeast &
Islands organized core survey development teams that implemented the five-step process.
This guide draws on examples from these survey development projects conducted with
educators across multiple research alliances of REL Northeast & Islands. The three collaborative survey development efforts described in box 1 were guided by the steps proposed in
this guide and provide examples for the guide. See box 2 for definitions of key terms used
in this guide.
Box 1. Three research alliance projects provide examples of collaborative survey
development
Throughout this guide, examples illustrate each step of the collaborative survey development
process. Examples are from Regional Educational Laboratory Northeast & Islands research
alliance projects described below.
• English learner students survey development project. In a state with a growing population of English learner students, state education administrators wondered whether they should provide school leaders with direct guidance on issues related to the education of these students. They wanted to know school leaders' policies for identifying and monitoring English learner students, evaluating teachers who have English learner students in their classrooms, and implementing response-to-intervention services with English learner students, among other things. Because the state did not mandate collection of this information, the administrators wanted to survey principals to gather information about school policies and practices for English learner students. This project was conducted as a part of the English Language Learners Alliance.
• Online learning survey development project. A partnership of educators in New York
wanted to know if they should encourage state leaders to invest in support for distance
education and online learning. Because the state's longitudinal data system did not distinguish distance education or online courses from face-to-face courses, stakeholders had
little data about how and how much distance education and online courses were being
used. Accordingly, the practitioners wanted to survey schools to collect basic information,
such as the number of online and distance education course enrollments by academic
domain. This project was conducted as part of the Northeast Rural Districts Research
Alliance.
• Early childhood survey development project. Leaders in state departments of education
and departments of early learning wanted to understand how early childhood educators
in their states were using child assessment data. Specifically, they wanted to know what
assessments were being administered, which center or school staff members were administering the assessments, and how assessment data were being used, among other things. Because no such data existed in state data systems, the state leaders wanted to survey early childhood classroom educators and program administrators to better understand how assessments were being used with children from birth through grade 3. This
project was conducted as part of the Early Childhood Education Research Alliance.
Box 2. Definitions of key terms

Analysis plan. A preparatory survey tool that maps topics of interest to survey items and suggests types of data analyses and results presentation.
Anonymous survey responses. Survey responses that cannot be linked to the survey respondent.
Cognitive interviewing. A method of identifying and correcting problems with surveys that
involves administering a draft survey to a respondent while interviewing the respondent to
determine whether the survey items are eliciting the intended information.
Cognitive interviewing codebook. A record of the codes used to categorize feedback gathered
during cognitive interviewing.
Confidential survey responses. Survey responses that can be linked to respondents but that
will not be shared outside the survey team or other designated individuals.
Core survey development team. The group of individuals primarily responsible for developing
a survey. This team is responsible for scheduling and conducting meetings, developing the
survey items, eliciting stakeholder feedback, and making final decisions regarding the survey.
Item stem. The part of a survey item that a respondent is asked to answer or rate.
An example item stem reads, “I feel supported by my principal.” The item stem is usually
followed by a set of response options (for example, strongly agree, agree, disagree, strongly
disagree).
Research question. A clearly stated question that articulates the problem to be addressed or
hypothesis to be tested by a research activity and guides the research.
Response option. The types of answers a respondent can provide to a survey item. Common
response option types include multiple choice, rating scales, and open response (table 1).
Stakeholder group. The subset of stakeholders who are collaborating on the survey development and providing feedback to the core survey development team.
Stakeholders. Specific members of the education community who have an interest in the
survey and its outcomes. Depending on the purpose of the survey, this group may include
teachers, administrators, parents, students, researchers, policymakers, and others.
Survey administration format. The format in which the survey is administered. Common
formats are paper-and-pencil surveys, online surveys, and phone surveys.
Survey items. Questions or statements that make up the survey and are used to gather
information.
Target population. The entire group the survey team wants to obtain information about by generalizing from the survey results.
Step 1: Identify topics of interest
After the need for a survey instrument has been identified by stakeholders, articulate the
goal or goals for the survey and resulting data (for example, a goal may be to identify
challenges to implementing online learning). From this goal, identify a set of topics that
the survey instrument will address. To ensure that the final instrument reflects the needs
of the stakeholders, follow the suggestions below. An example of implementing step 1 is
shown in box 3.
• Consider collecting demographic information from respondents so that it can be used in the data analysis. Examining differences in answers from groups with different demographic characteristics may be interesting and useful. For example, survey data related to satisfaction with a district's maternity leave policy could be examined collectively or by men and women separately.
Box 3. Step 1 in practice: Identifying policy-relevant questions with the English Language Learner Alliance

A group of educators formed the English Language Learner Alliance to discuss researching the effectiveness and integrity of implementation of programs for English learner students. A group member responsible for providing teachers and principals in her state with optional professional development for the education of English learner students expressed concerns that
the lack of participation in this professional development program might limit educators’ ability
to adhere to the state’s research-based guidelines and standards for educating English learner
students. Accordingly, alliance members and researchers developed policy-relevant topics of
interest for a survey of school leaders, such as: How does the school assign students to
English learner programs? What certification and professional development requirements are
there for teachers of English learner students?
Step 2: Identify relevant, existing survey items
After topics of interest are identified and the table of specifications finalized, the team can
begin developing survey items. Survey items are the questions or statements that make up
the survey and are used to gather information. To save time, review available surveys for relevant, previously evaluated items before brainstorming new ones. An example
of this step is shown in box 4.
Box 4. Step 2 in practice: Identifying existing survey items related to online learning
Members of the Northeast Rural Districts Research Alliance identified online and distance
learning as their topic of interest. After identifying their topics and subtopics, the team conducted a literature review and a web search using key terms (for example, online learning, distance education, virtual education, and surveys), and contacted other researchers and professional organizations with similar interests. Based on their professional networks, the team
contacted the authors of recent relevant surveys. The team identified four relevant surveys and
incorporated items from them into their table of specifications.
• Determine whether existing item stems or response options (for example, always, often, sometimes, never) will need to be modified for the new survey instrument.
• Gather all potential items by topic, keeping track of the sources of original survey items in case they need to be referenced later (a minimal sketch of one way to organize such an item bank follows).
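One way to keep track of candidate items and their sources is shown below. This is an illustrative sketch only; a spreadsheet works just as well, and the items and source notes here are hypothetical (the example wording is adapted from the items in appendix C).

```python
# Illustrative item bank: candidate survey items grouped by topic, with the source of each
# item recorded so the original wording can be checked (and credited) later.
item_bank = {
    "online course enrollment": [
        {"item": "For school year 2012/13, report the number of students in your school "
                 "who were enrolled in online courses.",
         "source": "adapted from an existing survey identified in the literature review"},
    ],
    "reasons for offering online courses": [
        {"item": "How important were the following reasons for having online courses "
                 "in your school in 2012/13?",
         "source": "drafted by the core survey development team"},
    ],
}

# Print the bank grouped by topic so the team can review coverage of each topic.
for topic, items in item_bank.items():
    print(topic)
    for entry in items:
        print(f"  - {entry['item']}  [{entry['source']}]")
```

However the bank is kept, recording the source of every borrowed item makes it easier to revisit the original instrument when adapting wording in step 3.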
Step 3: Draft new survey items and adapt existing survey items
The core survey development team now shifts to developing and adapting items for the
survey. Multiple considerations must be kept in mind, several of which are listed in box 5
and table 1. An example of how this step was implemented is shown in box 6.
Table 1. Common response option types and considerations

Open response
Description: Respondent is asked to type or write a response to a question.
Considerations: Although providing the opportunity for respondents to write a response ensures coverage of all possible responses, the data provided can be difficult to code consistently for analyses. For example, the same response for an item that asks respondents to specify the age of the oldest child in their classroom may be recorded in several different ways, such as three, 3, 3-yrs-old, three years old. Whenever possible, open response survey items should be avoided for ease of data collection, restructuring, and interpretation.

Multiple choice
Description: Respondent is presented with a set of possible responses and asked to choose one or more.
Considerations: Multiple-choice options provide data that are easier than open response options to restructure and interpret; however, if the response options do not cover the universe of possible responses to the item, information may be lost. Providing an "other" option and allowing respondents to define what it means is one way to ensure information is not lost, though it presents the same need to restructure data as open response items. In general, response options such as "not applicable" or "don't know" should be avoided because they may generate unnecessary missing data. These response options may be appropriate in some circumstances; for example, when trying to gauge the extent to which respondents are knowledgeable about a topic or to avoid forcing a response that may not be accurate.

Rating scales
Description: Respondent is asked to rate a statement presented in the item stem on a scale.
Considerations: Sometimes referred to as Likert scale items, rating scales are a good way to gauge respondents' satisfaction with something, level of agreement with a statement, or frequency of behaviors. If a rating scale has an odd number of response options, the middle point (for example, "neither agree nor disagree") may serve as a default option. Developers may consider offering an even number of scale responses to force respondents to take a position. To encourage respondents to carefully read and respond to each survey item, developers can word some items negatively and others positively. Because respondents may interpret numbers associated with each level of the scale as evaluative, it is best to leave the possible responses unnumbered.
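The coding burden that open response items create (table 1) can be seen in a small example. The sketch below is illustrative only and assumes nothing about any particular survey tool; it normalizes the free-text age responses mentioned in the table into numbers before analysis.

```python
import re

# Hypothetical free-text answers to an item asking the age of the oldest child in the classroom
raw_responses = ["three", "3", "3-yrs-old", "three years old", "4 years"]

WORDS_TO_NUMBERS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def normalize_age(text):
    """Return an integer age if one can be recovered from the free text, else None."""
    cleaned = text.strip().lower()
    digits = re.search(r"\d+", cleaned)           # handles "3", "3-yrs-old", "4 years"
    if digits:
        return int(digits.group())
    for word, value in WORDS_TO_NUMBERS.items():  # handles "three", "three years old"
        if word in cleaned:
            return value
    return None                                   # response needs manual review

print([normalize_age(r) for r in raw_responses])  # [3, 3, 3, 3, 4]
```

Even this small amount of cleanup illustrates why the table recommends multiple-choice or rating scale options where possible: closed response options arrive already structured.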
Box 6. Step 3 in practice: Drafting and adapting survey items with early childhood
educators
After the Regional Educational Laboratory (REL) Northeast & Islands Early Childhood Education
Research Alliance team developed and refined a table of specifications, it drafted survey items.
At the outset, the team decided that open response options would be avoided and, to the
extent possible, that multiple-choice and rating scale response options would be created. The
notes in the table of specifications (see appendix B) were helpful in drafting survey items and
developing response options. To reduce the time commitment for educators, REL Northeast &
Islands researchers took the lead on drafting the survey items using the table of specifications
as a guide. Throughout the item-writing process, the initial topics of interest were revisited to ensure that the items would provide the information necessary to address those topics.
Step 4: Review draft survey items with stakeholders and content experts
After the core survey development team drafts the items, it should obtain feedback from
the stakeholder group on item wording and language, the method of survey administration
(paper and pencil, web based, or phone interview), and the final number of items. An
example of how this step was implemented is shown in box 7.
Box 7. Step 4 in practice: Reviewing draft items with stakeholders and experts in
early childhood education
After the table of specifications was finalized and feedback on item stems and response
options was provided by stakeholders, Early Childhood Education Research Alliance researchers drafted an initial set of items to be reviewed by the early childhood stakeholders. Feedback was provided over the course of three one-and-a-half-hour phone meetings. When the allotted time was not enough to gather all the feedback needed for the development of the surveys, additional feedback regarding item wording, content, and the importance of each item for measuring the topics of interest was gathered using the form shown in appendix D, which was sent to stakeholders via email. Some stakeholders shared the form with their colleagues and asked them to provide additional feedback, thus expanding the feedback to the core team. The feedback was used to further refine survey items and delete items that team members felt were not essential.
• Consider which survey administration format is most appropriate for the target population. For example, although web surveys can reduce administration, data entry, and analysis costs, they may not be appropriate for respondents with limited Internet connectivity. Choose the survey format easiest for respondents to use.
• While developing the survey, think about supporting materials, such as a letter
of introduction and reminders, that must be developed along with the survey.
Respondents often see the supporting materials prior to the survey. Supporting
materials are addressed in part 2 of this series.
Step 5: Refine the draft survey with pretesting using cognitive interviewing
Cognitive interviewing is a method for identifying and correcting problems with surveys
that involves administering a draft survey while interviewing the respondent to determine
whether the survey items elicit the information their author intends (Beatty & Willis,
2007). This methodology can improve the clarity, relevance, length, and coverage of
survey items. The goal is to reduce potential sources of confusion by identifying and correcting problems before administering a large-scale survey. An example of how this step
was implemented is reported in box 8.
Box 8. Step 5 in practice: Cognitive interviewing with online and distance learning
educators
The online and distance learning core survey team decided to conduct 60-minute cognitive interviews with educators. They used standardized probes to elicit feedback about the language,
comprehensibility, relevance, and comprehensiveness of survey items. Cognitive interviews
were conducted with four school staff members in the Greater Capital Region of New York
recruited by core survey team members. The team used this information to further refine the
survey items and survey length. After additional feedback from stakeholders, the team finalized
the survey. Types of revisions made based on cognitive interviewing included: distinguishing
between distance learning courses that were hosted by the responding school versus received
by the school, clarifying that “core courses” are those required for a Regents diploma in New
York, and using different font colors for the questions focused on online learning and distance
learning.
• Decide how each cognitive interview session will be conducted (for example,
whether the interview will occur in person, over the phone, or with the assistance
of web-based technology). If possible, the interviews should be conducted using
the questionnaire format that will be used in the survey (for example, if the survey
is to be administered online, cognitive interview respondents should be asked to
complete a web-based survey).
• Determine how participant feedback will be captured (for example, by the interviewers taking notes using paper and pencil or by the interview team making an
audio or video recording of the session).
• Ensure participants have access to equipment needed to complete the survey (for
example, paper and pencil, computer, or phone) and that the session is conducted
at a time convenient to the respondent.
Organize data and refine the survey items based on cognitive interview data
The core survey development team should organize the feedback from the cognitive interview sessions to facilitate discussion of what changes are needed to the draft survey. It may be helpful to develop a codebook of the system used to categorize feedback.
• Create a system to analyze feedback across all interviews. Create specific codes related to the survey overall (for example, survey length, clarity, and relevance) or to item-by-item feedback (for example, item clarity, item relevance, and coverage of response options). A sample codebook used for the online and distance learning survey is provided in appendix F. Organize the comments according to the coded topics (a minimal tally of coded comments is sketched after this list).
• After the interview feedback is discussed, determine how to revise the survey to
address the concerns raised.
• Keep a record of the revisions based on the cognitive interview feedback.
• Update the analysis plan developed during step 3 to match the final survey items.
• Compile the data from the pretest administered during the cognitive interview and examine it to ensure that the mechanics of the survey are functioning properly (for example, that items appear in the right order).
• Create a revised version of the survey and conduct a final review with the stakeholder group.
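As a purely illustrative sketch (the guide does not require any particular software, and the codes and comments below are hypothetical), coded feedback can be kept in a simple table and tallied by code so the team can see where comments cluster before discussing revisions.

```python
from collections import Counter

# Hypothetical coded feedback records: (scope, topic, classification, comment)
coded_feedback = [
    ("overall", "length", "issue or question",
     "Not sure busy staff will take the time to finish all of these questions."),
    ("item 1", "clarity", "suggestion for improvement",
     "Repeat the definition of online learning here."),
    ("item 1", "coverage", "suggestion for improvement",
     "Add a 'not sure' response option."),
    ("overall", "flow", "support",
     "Questions seemed to be in the right order."),
]

# Tally comments by (scope, topic) to see where revisions may be most needed.
tallies = Counter((scope, topic) for scope, topic, _, _ in coded_feedback)
for (scope, topic), count in tallies.most_common():
    print(f"{scope:10s} {topic:10s} {count}")
```

Counts like these only point to where comments cluster; the team still discusses each comment and records the revisions it makes, as the bullets above describe.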
Educators can use this guide (the first in a three-part series) to work with other educators, researchers, and content experts to write or adapt survey items for a survey tailored
for their use. The new survey can be refined using content-expert review and pretesting
through cognitive interviewing. This process allows educators to leverage the expertise
of different individuals within and outside of their organizations to ensure a high-quality
product that meets the goals of the survey development team. The second guide in the
series covers sample selection and survey administration (Pazzaglia et al., 2016a), and the
third guide in the series covers data analysis and reporting (Pazzaglia et al., 2016b).
This guide has two limitations. First, while it provides educators with an overview of
methodologies that can be used to develop survey instruments collaboratively in practical
education contexts, those interested in piloting their surveys, conducting complex statistical analyses of survey results, or creating measures made up of multiple survey items will need
to seek additional resources. See appendix A for suggested resources.
Second, the guide does not provide all resources for pretesting survey items. Some activities
associated with this process may require resources such as screen-sharing software, survey
administration software, or technology to assist with the recording and transcription of
cognitive interviews. Many of these resources can be obtained through freely available
audio or video conferencing technologies via the web.
Appendix A. Additional survey development resources
Suggested resources for survey development are provided in this appendix, including references to textbooks and to professional organizations and university departments with expertise in these topics.
Useful texts
Czaja, R., & Blair, J. (2005). Designing surveys: A guide to decisions and procedures (2nd ed.).
Thousand Oaks, CA: Sage.
Dillman, D., Smyth, J., & Christian, L. (2008). Internet, mail, and mixed-mode surveys: The
tailored design method (3rd ed.). Hoboken, NJ: Wiley.
Fink, A. (2013). How to conduct surveys: A step-by-step guide (5th ed.). Thousand Oaks,
CA: Sage.
Fowler, F. J. (2008). Survey research methods (4th ed.). Thousand Oaks, CA: Sage.
Groves, R. M., Fowler, F. J., Couper, M. P., & Lepkowski, J. M. (2009). Survey methodology.
New York: Wiley.
Rea, L. M., & Parker, R. A. (2005). Designing and conducting survey research: A comprehensive guide (3rd ed.). San Francisco: Jossey-Bass Publishers.
Marsden, P. V., & Wright, J. D. (Eds.). (2010). Handbook of survey research. Bingley, UK: Emerald Group Publishing.
Duke Initiative on Survey Methodology at the Social Science Research Institute: Instrument design and development
http://www.dism.ssri.duke.edu/question_design.php
Appendix B. Sample table of specifications: Excerpt
from the Early Childhood Education Research Alliance
collaborative survey development project
Table B1. Sample table of specifications from the Early Childhood Education Research Alliance
Respondent: Administrator
Topic: Program type
Description: Type of program, accreditation status
Notes: Type of program. Licensing definitions by state can be used. Program accreditation status if applicable. Other program information?

Respondent: Administrator
Topic: Program characteristics
Description: Program statistics such as number of educators, classrooms, and student–teacher ratio
Notes: Number of part-time and full-time educators employed in program or center. Percent of employees who are new each year. Number of classrooms in program or center. Student–teacher ratio. Does the ratio depart from state requirements? If so, in what way? Will differ by children's ages.

Respondent: Administrator
Topic: Program educator qualifications
Description: Percent of staff with various credentials and participation in higher education and professional development
Notes: Percent of staff at center or school who have an early childhood certificate. Is certification a school or program requirement or a state requirement? If there are state requirements, do the school or program requirements depart from the state requirements? If so, in what ways do they depart from the state requirements? Percent of staff at center or school who have obtained a bachelor's degree or higher.

Respondent: Administrator and teacher/educator
Topic: Child assessment policies
Description: Policies related to use, assessment development, and leadership around assessments
Notes: Whether the program or school has any policies related to assessment use for early childhood students. May differ by children's age; response options will vary. Types of child assessments used in program or school. The questions will distinguish between assessments used for formative purposes and standardized assessments. Frequency of assessment. The questions will distinguish between the frequency of formative assessments and standardized assessments. Respondents might respond that formative assessment is ongoing, while standardized assessments are used on an as-needed basis to determine whether children might need additional evaluation.

Respondent: Administrator and teacher/educator
Topic: Child assessment use
Description: Why assessments are administered and how the data are used
Notes: Purpose of the child assessments used in program or school. May differ by children's age; responses will vary. Ways the assessment information is used.

Respondent: Administrator and teacher/educator
Topic: Child assessment use
Description: Administrator role in assessment use
Notes: Monitoring data from assessments—frequency and purpose. Types of decisions made on the basis of assessment results.

Respondent: Administrator
Topic: Users of assessment data
Description: Access to the assessment data
Notes: Whether it is the program's policy to share assessment data with the state (may be required when there is state funding), with children's kindergarten program, or with parents.
Appendix C. Sample analysis plan: Excerpt from the Northeast Rural
Districts Research Alliance collaborative survey development project
Developing an analysis plan prior to calculating summary statistics (for example, frequencies, percentages, means, or standard deviations) ensures that the resulting information will help address the survey development team's topics of interest. To develop an analysis plan, first map each survey item to each topic of interest. Second, think about what
summary statistics, tables, and figures will be most useful for presenting survey results in
a manner that will best address the topics of interest and will be accessible for multiple
audiences. The analysis plan will continue to be refined as the survey items are revised
and refined. An example of item-to-research-question mapping with potential analyses and
presentation formats from the Northeast Rural Districts Research Alliance’s survey study
is presented in this appendix.
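To make the mapping concrete, the sketch below computes the mean and standard error for one hypothetical item tied to research question 1a, the summary statistics named in table C1. The responses are invented for illustration; any statistics package or spreadsheet produces the same figures.

```python
import math
import statistics

# Hypothetical responses to item 1: number of online course enrollments reported by each school
item_1_responses = [12, 0, 45, 8, 23, 5, 17]

mean = statistics.mean(item_1_responses)
# Standard error of the mean = sample standard deviation / sqrt(number of responding schools)
std_error = statistics.stdev(item_1_responses) / math.sqrt(len(item_1_responses))

print(f"Item 1 (research question 1a): mean = {mean:.1f}, standard error = {std_error:.1f}")
```

Mapping each item to a research question before any data arrive, as table C1 does, keeps analyses like this one focused on the topics of interest rather than on whatever happens to be easiest to compute.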
Table C1. Sample analysis plan: Survey item-to-research-question mapping from the Northeast Rural Districts Research Alliance

Research question 1a: How are rural and nonrural high schools in the Greater Capital Region of New York using online learning in the 2012/13 school year? Specifically, what is the number of student enrollments?
Survey items: Items 1–4. Example item: For school year 2012/13, report the number of students in your school who were enrolled in online courses.
Potential analysis methods: Mean; standard error
Potential method of presentation: Table

Research question 1b: What percentage of students successfully complete the online courses?
Survey items: Item 5. Example item: Of these enrollments, how many resulted in successful course completions with a passing grade?
Potential analysis methods: Mean; standard error
Potential method of presentation: Table

Research question 1c: What are the academic domains for which online courses are being utilized (for example, math, science, or foreign language)?
Survey items: Item 6. Example item: For each box, report the number of online course enrollments in school year 2012/13 in each of the following academic areas.
Potential analysis methods: Mean; standard error
Potential method of presentation: Table; stacked bar graph

Research question 1d: What is the academic purpose for which students are taking these courses (for example, to recover credit or access advanced placement courses)?
Survey items: Items 8–9. Example item: For each box, report the number of online course enrollments in school year 2012/13 in each of the following course categories.
Potential analysis methods: Mean; standard error
Potential method of presentation: Table; stacked bar graph

Research question 2: Why are rural and nonrural schools using online course options?
Survey items: Item 7. Example item: How important were the following reasons for having online courses in your school in 2012/13? (Check one on each line.)
Potential analysis methods: Mean; standard error
Potential method of presentation: Table; pie chart

Research question 3: What are the policies and practices these schools employ to monitor students enrolled in online courses?
Survey items: Items 11–18. Example item: In 2011/12, did your school monitor student progress in online courses in any of the following ways? (Check one on each line.)
Potential analysis methods: Mean; standard error
Potential method of presentation: Table
Appendix D. Sample feedback form: Excerpt from the Early Childhood
Education Research Alliance collaborative survey development project
The feedback form in this appendix was created for the Early Childhood Education
Research Alliance's collaborative survey development project. It allowed the survey development team to obtain input on item wording and importance across all stakeholders despite limited time to meet with the stakeholder group. (A brief sketch showing how such ratings can be tallied follows the form.)
Table D1. Sample survey feedback form from the Early Childhood Education Research Alliance
Column 3: How important is this item for measuring the topic in relation to standards implementation?
0 = not at all important, do not include item
1 = somewhat important, item may be useful
2 = important, item should probably be included
3 = very important, item needs to be included
Column 5: Comments
Please include any comments or further clarification if necessary.
Columns: Topic | Item | How important is this item for measuring the topic? (0–3) | To what extent are this item and response choices clearly written? (0–2) | Comments

All of the items below are mapped to the topic "Knowledge of the standards"; the importance, clarity, and comments columns are left blank for stakeholders to complete.

Does your state have early learning standards?
• Yes
• No
• Not sure

What age groups are covered by your state's early learning standards? (Check all that apply.)
• Birth to 3 years old
• 3–5 years old
• 5–8 years old
• Not sure

When was the latest version of your state's early learning standards revised?
• Currently under revision
• Within the past year
• Within the past 3 years
• Within the past 5 years
• Over 5 years ago
• Not sure

Did you participate in your state's early learning standards revision process?
• Yes
• No
• Not applicable

In what way did you participate in your state's early learning standards revision process? (Check all that apply.)
• Attended a focus group or hearing
• Was part of the revision team
• Sent written feedback
• Other: ______________________

Does the state provide you with a copy of the early learning standards?
• Yes
• No
• Not sure

By what methods has the state provided you a copy of the early learning standards? (Check all that apply.)
• Email
• United States Parcel Service Mail
• Licensor provided me a copy
• Available on state website
• Other: ______________________

How familiar are you with your state's latest version of the early learning standards as they pertain to children from birth to grade 3?
• Very familiar
• Somewhat familiar
• Not very familiar
• Not at all familiar
• Not applicable
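One way the core team can digest completed feedback forms like this one is to average the importance ratings and flag low-rated items as candidates for deletion, much as the alliance dropped items stakeholders felt were not essential. The sketch below is purely illustrative; the item labels, rating values, and threshold are hypothetical.

```python
# Hypothetical importance ratings (0-3) from five stakeholders for three draft items
ratings = {
    "Does your state have early learning standards?": [3, 3, 2, 3, 3],
    "When was the latest version of the standards revised?": [1, 2, 1, 0, 1],
    "Did you participate in the revision process?": [2, 2, 3, 2, 1],
}

KEEP_THRESHOLD = 1.5  # items averaging below this are candidates for deletion

for item, scores in ratings.items():
    average = sum(scores) / len(scores)
    decision = "keep" if average >= KEEP_THRESHOLD else "flag for possible deletion"
    print(f"{average:.1f}  {decision:27s} {item}")
```

Averages are only a starting point; written comments on wording and clarity still need to be read and discussed before any item is cut.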
Appendix E. Sample cognitive interview protocol from the Northeast Rural
Districts Research Alliance collaborative survey development project
Taking the survey (15 minutes)
• I’d like to begin by having you complete the online and distance learning survey.
Please click the arrow at the bottom of the page to proceed to the first page of the
survey and begin.
• I would like you to complete the survey as you would if I were not with you, but I
would like you to think out loud while completing the survey. For example, if the
question says, “What is your favorite color?” you might say, “I used to like red when
I was young, but now it is blue, so I would pick blue.” Then make your selection.
• While you may ask me questions, I may or may not answer them. The intent of
this session is to see how people would take the survey without someone watching.
If you ask questions that I do not answer, I will answer them after you have completed the session.
Prompts for use during survey taking. During the session, mark any questions where the
respondent was confused, hesitated, or did not respond to the question. Use the conditional probes (CP) for follow-up during the item response section. If the respondent responds
“other” to any of the questions, ask the person to enter text or at minimum verbalize what
they would enter in the “other” category.
• General probe. Please remember that there are no right or wrong answers. Do your
best.
• Sticking point. At this point, what would you do if you were not taking the survey
with me listening?
• Additional probe: If the participant’s response is anything but “I’d close the
survey,” say, “Then why don’t you try that?”
• Additional probe: If the participant’s response is “I would quit the survey at
this point,” ask the participant to skip the question and move to the next
question. Note that question for follow-up.
• Think-aloud reminder. I know this may be uncomfortable, but please try to think
aloud while answering the survey items.
Relevance (extent to which survey items tap into appropriate policies and practices)
• On a scale of 1–10 (10 = most relevant), how relevant were the survey questions
to online and/or distance learning in your school? Tell me what influenced you to
choose that number.
• CP: What do you think were the most relevant components?
• CP: What parts were irrelevant to your school?
Length (the number of items and the time taken to complete the survey)
• If you were completing this survey on your own, how many minutes do you think
you would spend on it?
• Based on your experience, how willing will school staff members in positions
similar to yours be to complete this survey?
Allow time for participant to flip through the four Overview and Instructions pages.
• Coverage: Did the overview and instructions cover what you needed to know?
• CP: If no, what additional information would have been helpful to you?
• Clarity: What, if anything, was confusing about any of these sections?
• Coverage: What, if anything, did you feel was unnecessary in the overview or
instructions?
• Clarity: Let’s look specifically at the part about the school year: was it clear to you
what the timeframe of the survey was?
• CP: If no, please explain.
• Clarity: After reading all of the instructions, was it clear what was meant by online
courses and distance learning courses?
• CP: If no, please explain.
• Clarity: Was it clear what the difference is between online courses and distance
learning courses?
Let’s start with questions 1 and 2: In school year 2012/13 were any students in your school
enrolled in online courses? In school year 2012/13 were any students in your school enrolled
in distance learning courses?
• Clarity: After reading these questions, was it clear to you whether your school had
online courses or distance learning courses?
• CP: If no, please explain.
• Clarity: Was there anything confusing about these two questions?
• CP: If yes, do you have any suggestions to make it clearer?
(Only to be used for questions noted while the respondent was taking the survey.)
• When you were responding to this question, I noticed that you seemed to
(…hesitate, spend a while on it, change your answer). Tell me what you were
thinking about while answering it.
• CP: Was there something about the question that was unclear to you?
• CP: Was there a response option that you were looking for?
• CP: Did you not know the answer to the question?
• CP: Was the question too difficult to complete?
• When you were taking the survey, I noticed you skipped this question.
• CP: Can you tell me what made you decide to skip this?
• CP: Was there a response option that you were looking for?
• CP: What can we do to improve this question?
Appendix F. Sample cognitive interview analysis
codebook from the Northeast Rural Districts Research
Alliance collaborative survey development project
A codebook is a record of the codes used to categorize feedback gathered during cognitive interviewing. A sample from a codebook used with the online and distance learning survey developed by the Northeast Rural Districts Research Alliance through the collaborative survey development process is presented in this appendix.
Codebook overview
Through qualitative analysis of cognitive interview data, the core survey team seeks to
explore respondents’ overall perceptions of the survey; identify and revise items that lack
clarity, relevance, flow, or coverage; and limit nonresponse bias and nonsampling error in
the large-scale administrations of the final surveys.
Data are assigned codes (described in the following text) based on the following coding
hierarchy: overall or item specific, topical area of the feedback (for example, relevance,
length, flow, or clarity), and classification (as support, issue or question, or suggestion for
improvement). If data are not covered by the predeveloped coding hierarchy, they are
marked “other.”
Protocol questions targeting respondents' overall perceptions are in the "Overall perception" and "Wrap-up" sections of the cognitive interview. Responses to these questions as
well as information provided in the “think aloud” exercise will be categorized into three
topical areas: relevance, length, and flow. Feedback that does not fit into one or more of
the topical categories will be coded as “overall perceptions—other.” To assist with analysis,
responses coded into one of the topical areas will then be coded as one of the following
subcategories: “support” for survey, “issue or question,” “suggestion for improvement,” or
“other.” The three topical categories are shown in tables F1–F3 along with subcategories
and examples from the cognitive interview record.
Data coded for a survey item will reflect both information from the “think aloud” portion
and the item-by-item follow-up section of the cognitive interview protocol. Protocol questions pertaining to specific survey items include both standardized and conditional probes.
Responses will be categorized into three topical areas: clarity, coverage, and relevance. For
the purpose of this project, these topics are defined as follows:
• Clarity. The extent to which the survey item is readable and understandable to
the respondent.
• Coverage. The extent to which the survey item and its response options are sufficient to cover the range of respondent experiences and situations.
• Relevance. The extent to which the survey item and its response options are pertinent to the topic and tap into appropriate policies and practices.
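The coding hierarchy described above (scope, then topical area, then classification) can be written down as a small set of allowed values so that everyone coding interview data applies the same categories. The sketch below is illustrative only; it simply restates the categories named in this codebook.

```python
# Allowed values at each level of the coding hierarchy described in the codebook
SCOPES = {"overall", "item specific"}
OVERALL_TOPICS = {"relevance", "length", "flow", "other"}
ITEM_TOPICS = {"clarity", "coverage", "relevance", "other"}
CLASSIFICATIONS = {"support", "issue or question", "suggestion for improvement", "other"}

def validate_code(scope, topic, classification):
    """Check that a coded piece of feedback uses only categories defined in the codebook."""
    if scope not in SCOPES:
        return False
    topics = OVERALL_TOPICS if scope == "overall" else ITEM_TOPICS
    return topic in topics and classification in CLASSIFICATIONS

# Example: an item-specific comment suggesting a new response option
print(validate_code("item specific", "coverage", "suggestion for improvement"))  # True
```

Keeping the allowed categories in one place makes it easy to spot feedback that falls outside the predeveloped hierarchy and should therefore be coded "other."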
Table F1. Relevance: The extent to which the survey and its items tap into appropriate policies and practices

Support: This survey covers exactly what we are doing at our school. I think it is very relevant.
Issue or question: It is sort of relevant for us, but I wasn't sure about the part that asked about monitors. I am not sure that makes sense for my school.
Suggestion for improvement: The survey questions were relevant for me, but I might add more options to some of the questions to make it work for all of the schools in my district.

Table F2. Length: The number of items and the time taken to complete the survey

Support: I think it went quickly. I had no problem with the length.
Issue or question: I am not sure if people in schools will take the time to complete all of these questions with their busy schedules.
Suggestion for improvement: It would have gone more quickly for me if I knew what specific information I would need to complete the survey. So maybe you could add that information to the directions.

Table F3. Flow: The survey format and grouping and ordering items

Support: The questions made sense to me and seemed to be in the right order.
Issue or question: I couldn't get the survey to let me go to the next page several times. I couldn't figure out how to make it work.
Suggestion for improvement: Maybe you could put the questions about the enrollment at the end so that I could then go and look up the information after I completed the rest of the survey.
Responses that cannot be coded into one of these three areas will be coded as “question
X—other.” To assist with analysis, responses coded to one of the topical areas will be further
coded into subcategories, including “support” for item, “issue or question,” “suggestion for
improvement,” and “other.” Codebook sample entries for the overview and instruction pages
are shown in table F4; sample entries for an individual item are shown in table F5.
Table F4. Overview and instruction pages

Clarity
  Support: I didn't find anything confusing about what was presented.
  Issue or question: I wasn't exactly sure what an Iowa school number is so I decided to leave it blank.
  Suggestion for improvement: Maybe say more upfront that blended learning should not be included. It wasn't clear to me if I should include those courses.

Coverage
  Support: These pages made sense to me. I covered what I needed to know.
  Issue or question: So we use Plato. I am not sure if my courses count for this survey.
  Suggestion for improvement: I think the instructions were too long. I would suggest cutting them down and just cover the basics.

Relevance
  Support: The instructions made sense for how we do online learning at my school.
  Issue or question: These instructions don't seem to address how we do online learning. I am not sure if I would fill out the survey after reading these.
  Suggestion for improvement: I might make it clearer that all schools are supposed to fill this out no matter how many online courses you have. Maybe underline that.
Table F5. Single item: Question 1: In school year 2012/13, were any students in your school enrolled in online courses?

Clarity
  Support: This question was pretty straightforward.
  Issue or question: Does this include students who are enrolled part-time in the local community college as well?
  Suggestion for improvement: Maybe you could repeat the definition of online learning here to make it clear what you are covering.

Coverage
  Support: This question has what I need to answer it.
  Issue or question: What if I am not sure if we offer these?
  Suggestion for improvement: I would add "not sure" as an option here.

Relevance
  Support: This question makes sense for our school.
  Issue or question: Hmm … I am not sure if this is relevant for us since I don't know what type of online courses we offer.
  Suggestion for improvement: Maybe indicate that you should not count blended learning here.
References
Beatty, P. C., & Willis, G. B. (2007). Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly, 71(2), 287–311.
Blair, J., & Conrad, F. G. (2011). Sample size for cognitive interview pretesting. Public
Opinion Quarterly, 75(4), 636–658.
DeVellis, R. F. (2011). Scale development: Theory and applications (Applied Social Research Methods) (3rd ed.). Thousand Oaks, CA: Sage.
Fink, A. (2013). How to conduct surveys: A step-by-step guide (5th ed.). Thousand Oaks,
CA: Sage.
Hamilton, L., Halverson, R., Jackson, S., Mandinach, E., Supovitz, J., & Wayman, J. (2009). Using student achievement data to support instructional decision making (NCEE No. 2009–4067). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance. http://eric.ed.gov/?id=ED506645
Knapp, M. S., Swinnerton, J. A., Copland, M. A., & Monpas-Huber, J. (2006). Data-informed leadership in education. Seattle, WA: University of Washington, Center for the Study of Teaching and Policy. http://eric.ed.gov/?id=ED494198
Pazzaglia, A. M., Stafford, E. T., & Rodriguez, S. (2016a). Survey methods for educators: Sampling respondents and survey administration (part 2 of 3) (REL 2016–160). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northeast & Islands. Retrieved from http://ies.ed.gov/ncee/edlabs.
Pazzaglia, A. M., Stafford, E. T., & Rodriguez, S. (2016b). Survey methods for educators:
Analysis and reporting of survey data (part 3 of 3) (REL 2016–164). Washington, DC:
U.S. Department of Education, Institute of Education Sciences, National Center for
Education Evaluation and Regional Assistance, Regional Educational Laboratory
Northeast & Islands. Retrieved from http://ies.ed.gov/ncee/edlabs.