
Delphi Technique

The document describes how to conduct a modified Delphi study and analyze its results. It discusses: 1) the steps of a modified Delphi, which include identifying items through literature review, establishing expert consensus through questionnaires, and finalizing the list in a consensus meeting; 2) how the author conducted a modified Delphi over two phases with experts from different countries to establish consensus on curriculum viability indicators; and 3) how the data from the Delphi rounds were analyzed, determining consensus from percentages of adjacent Likert-scale scores and rank-ordering indicators by mean values.


How to do a modified Delphi?

How the author conducted it.
What are the differences in the steps of Delphi and Nominal group technique?
How to analyse the data in the Delphi Technique
Dr. Sadia
Dr. Samia
Dr. Chaman
How to do a modified Delphi?
The modified Delphi method is a group consensus strategy that
systematically uses
• literature review
• opinion of stakeholders
• the judgment of experts within a field to reach agreement.
It comprises three steps:
• (1) identifying the list of features of measurement plans that were
potential candidates for inclusion based on literature review and the
study team’s experience;
• (2) a two-round modified Delphi exercise with a panel of experts to
establish consensus on the importance of these features; and
• (3) a small in-person consensus group meeting to finalize the list of
features.
How the author conducted it
• Two phases.
• In the first phase, a pilot and two rounds of modified Delphi were
conducted to establish consensus on curriculum viability indicators.
• In the second phase, the indicators upon which consensus was
developed were rank-ordered according to their relative importance.
This process is depicted in Figure 1.
• The duration of the study was 11 months, including its conception,
data collection, and reporting. The data were collected in 7 months.
• The gap between the first and second rounds of the Delphi study was
4 months;
• the gap between the second round of the Delphi study and the
second phase of the study was 3 months.
• Ethical approval was obtained from the Ethical Review Committee of
Riphah International University (Reference #
participants
• Based on their formal qualifications and experience in Education, 34
experts were sent a request to participate in the Delphi study.
• Among the 25 experts who agreed to participate, 12 held PhDs in educational sciences, 10 held Masters degrees in health professions education, and one each in education and psychology.
• One participant held a PhD in Internal Medicine but had been involved in medical education for 35 years.
• Their educational experience ranged from 14 to 48 years, with a mean of 19 years and a median of 14 years.
• All experts had experience in curriculum design; 10 had experience in
program evaluation, and 8 also had experience in accreditation.
• The experts included 15 males and 10 females from seven medical
universities and two organizations, from both developing and
developed countries.
• This was done to maximize the diversity of participants with exposure to different curricula in different regional and social contexts, and also because standards and inhibitors may differ across these regions.
• The countries where they were working included Australia, Egypt, Malaysia, The Netherlands, Pakistan, Sri Lanka, and the United States of America.
Materials
• To answer our first research question, a questionnaire containing 44
items was constructed based on a scoping review (Khan et al., 2019).
The main headings constituted broad areas of the medical curriculum,
whereas the subheadings comprised standards and inhibitors.
• This questionnaire was modified for a second round based on the consensus developed and the feedback provided by the experts in the first round.
• For the second phase of the study, the questionnaire was based on
the 40 items on which experts agreed following the Delphi method.
• It asked the experts to order the indicators in descending order of their importance in affecting curriculum viability, with 1 being the highest rank.
• Phase 1—Pilot study and Delphi rounds. This study was based on a
modified Delphi method (Esmaily et al., 2008; Skulmoski et al., 2007).
We developed the content of the questionnaire for the first round through an extensive literature search done for the scoping review. This differs from a traditional Delphi study, in which the first round explores the content for the questionnaire through the opinions of experts obtained via face-to-face discussion or distant communication modes such as email.
• Pilot study. A pilot study was done before Delphi Round 1 involving five
participants who had done a Master’s program or equivalent course
in health professions education or who had more than five years of
experience in education. The questionnaire was sent to them via a
link through email. Participants were asked to provide feedback on
the questionnaire based on language, structure, understanding of the
questions, accessibility of the questionnaire on the website
(www.qualtrics.com), ease of browsing, and time required to fill it
out. They reported satisfaction regarding the questionnaire through
face-to-face meetings and via phone with the primary researcher and
suggested no changes to it.
• Delphi round 1. After the pilot study, the link to the questionnaire was sent to the selected experts through email for the first round. Anonymity among the expert participants was ensured to minimize bias. They were requested to score each item according to its importance to an undergraduate medical program, based on a 5-point Likert scale (1 = extremely important, 2 = very important, 3 = moderately important, 4 = slightly important, and 5 = not at all important). They were also asked to provide a justification if they selected the options “extremely important” or “not at all important.” This was done to gain an understanding of the reason behind choosing an extreme value on the Likert scale, so that the quantitative data obtained through selecting an option were further strengthened by the qualitative data, mentioned in the results section as “representative quotes” (a minimal representation of such a response is sketched below).
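As a concrete illustration of this scoring protocol, here is a minimal Python sketch; the field names and the example item are hypothetical and are not taken from the authors' Qualtrics instrument. It records one expert's response to one item and checks that an extreme score is accompanied by a justification.

    # Minimal sketch of a Round 1 response check; the item text and field names
    # are hypothetical, not the authors' actual questionnaire.
    LIKERT = {1: "extremely important", 2: "very important", 3: "moderately important",
              4: "slightly important", 5: "not at all important"}

    def validate_response(item: str, score: int, justification: str = "") -> None:
        """Reject invalid scores and extreme scores that lack a justification."""
        if score not in LIKERT:
            raise ValueError(f"{item}: score must be 1-5, got {score}")
        if score in (1, 5) and not justification.strip():
            raise ValueError(f"{item}: '{LIKERT[score]}' requires a justification")

    # Example: an "extremely important" rating must carry a reason.
    validate_response("Assessment aligned with learning outcomes", 1,
                      "Alignment drives what students actually learn.")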
• Delphi round 2. For the second round, expert participants were again
sent an email containing their individual and group results in an Excel
sheet and a link to the second questionnaire, which included
questions based on the items for which no consensus was reached.
Anonymity was again ensured. Statements in the questionnaire that
required more explanation or were not clear to the expert
participants were modified by the primary author in consultation with
co-authors, based on the responses from Round 1.
• Expert participants were asked to provide a reason if they changed
their response from the previous round. This was done to understand
their considerations so that we could better interpret the data.
Between each round, those who did not respond were sent two to
three reminders after a gap of three weeks. This helped increase the
participation of the experts.
• After two rounds, consensus was reached for 40 out of 44 items based on predetermined criteria, as explained in the data analysis section. Hence, a third round was not conducted.
• Phase 2: Ranking the curriculum viability indicators. To answer our
second research question, a 40-item questionnaire comprising
standards and inhibitors on which consensus was reached in the first
two rounds was sent to all 25 expert participants. They were asked to rank the items within the eight areas specified above so that the relative importance of these items could be determined.
What are the differences in the steps of
Delphi and Nominal group technique?
How to analyse the data in the Delphi
Technique
• The consensus agreement was predetermined. For the Delphi study, the first and second rounds were analyzed by examining the percentages of combinations of adjacent Likert scores.
• A combined percentage of 80 or more on two adjacent scores was considered agreement on that particular item. Hence, a combined percentage of 80 or more for “extremely important” and “very important,” “very important” and “moderately important,” “moderately important” and “slightly important,” or “slightly important” and “not at all important” was used to measure consensus on a particular item (a worked example of this computation is sketched below).
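To make this predetermined criterion concrete, the following minimal Python sketch (with hypothetical ratings, not the authors' analysis code) computes the combined percentage for every pair of adjacent Likert categories on one item and flags consensus at the 80% threshold.

    from collections import Counter

    def has_consensus(scores, threshold=80.0):
        """Return (consensus reached?, best adjacent pair, combined percentage)."""
        counts = Counter(scores)
        n = len(scores)
        best_pair, best_pct = None, 0.0
        for low in range(1, 5):  # adjacent pairs (1,2), (2,3), (3,4), (4,5)
            pct = 100.0 * (counts[low] + counts[low + 1]) / n
            if pct > best_pct:
                best_pair, best_pct = (low, low + 1), pct
        return best_pct >= threshold, best_pair, best_pct

    # Hypothetical ratings from 25 experts for one item (1 = extremely important).
    ratings = [1] * 12 + [2] * 9 + [3] * 3 + [4] * 1
    print(has_consensus(ratings))  # (True, (1, 2), 84.0)

In this hypothetical example, 12 + 9 = 21 of 25 responses (84%) fall in the adjacent categories “extremely important” and “very important,” so the item would count as having reached consensus.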
• The expert feedback (i.e., quotes) was gathered by the primary researcher
(RAK) for synthesis through the Qualtrics website and shared with co-authors.
• Quotes were selected by three authors (RAK, UM & MAL) independently, and
then consensus was reached on representative quotes, which was further
validated by two co-authors (AS and JVM).
• Quotes that were illustrative of one of the indicators (standards or inhibitors) brought up by the experts were considered representative quotes.
• The criteria for selecting them were clarity and alignment with the indicators, which helped address discrepancies between the experts' quotes.
• In Phase 2 of the study, the mean values of indicators were calculated
to order them under each area addressing curriculum viability.
• The mean was calculated by dividing the sum of the scores given to a particular standard or inhibitor by the total number of participants who responded.
• The indicators were then arranged in descending order of priority, with the lowest mean value indicating the highest priority, because “1” was given the highest rank order number (see the sketch below).
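The Phase 2 calculation can be illustrated with a minimal Python sketch; the indicator names and ranks below are hypothetical, not study data. It computes each indicator's mean rank across respondents and lists the indicators from highest to lowest priority, i.e. lowest mean first.

    # Hypothetical ranks (1 = highest) assigned by five responding experts.
    rankings = {
        "clear learning outcomes": [1, 2, 1, 3, 2],
        "adequate faculty resources": [2, 1, 3, 2, 3],
        "frequent unplanned changes": [3, 3, 2, 1, 1],
    }

    # Mean = sum of the ranks given to an indicator / number of respondents.
    mean_rank = {name: sum(r) / len(r) for name, r in rankings.items()}

    # Lowest mean value = highest priority, because rank "1" is the most important.
    for name, mean in sorted(mean_rank.items(), key=lambda kv: kv[1]):
        print(f"{name}: mean rank {mean:.2f}")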
