Chapter 2 Class Slides V2


Chapter 2:
Foundations of Recruitment and Selection I:
Reliability and Validity
Learning Objectives
At the conclusion of this lesson you should be able to:
• Summarize the basic components that make up a traditional employee selection model
• Describe the concepts of reliability and validity in the context of recruitment and selection
• Identify common strategies used to provide evidence of the reliability and validity of measures used in recruitment and selection activities

The Goal of Recruitment and Selection

• An employer’s goal is to hire an applicant who possesses the knowledge, skills, abilities, and other attributes (KSAOs) required to perform the job

Discussion Question

Is it better to base a selection system on science rather than a “gut feeling”? Why or why not?

Table 2.1
Science versus Practice in Selection

Copyright © 2021 by Top Hat


A Systems Approach



Constructs and Variables
• Construct
• refers to ideas or concepts constructed or invoked to
explain relationships between observations
• For example, the construct “extraversion” has been
invoked to explain the relationship between “social
forthrightness” and sales.
• Variable
• refers to how someone or something varies on the
construct of interest
• For example, the variable “IQ” is used to represent
variability in intelligence.
Copyright © 2019 by Nelson Education Ltd.
What makes a good assessment?
An employment assessment is considered “good” if the
following can be said about it:
• It measures what it claims to measure.
• It measures what it claims to measure consistently or reliably.
• The assessment is job-relevant.
• By using the assessment, more effective employment decisions
can be made about individuals.
The degree to which an assessment has these qualities is
indicated by two technical properties: reliability and validity.

Reliability
• Refers to how dependably or consistently an assessment
measures a characteristic (If a person takes the assessment
again, will he or she get similar results?)
• An assessment that yields similar results for a person who
repeats the assessment is said to measure a characteristic
reliably.

Copyright © 2016 by Nelson Education Ltd.


Factors Affecting Reliability

Why don’t people always get exactly the same results every time
they take an assessment?

Factors Affecting Reliability
Reasons why individuals would not get exactly the same results
every time they take an assessment:

• Temporary Individual Characteristics


• Lack of Standardization
• Chance

Interpreting Reliability Coefficients
• The score obtained on any one administration (i.e., the “observed score”) is composed of the person’s “true” score on the attribute assessed plus some amount of “measurement error”.
• So…
• An observed score is a combination of a true score and an error score
• True score: the average score that an individual would earn on an infinite number of administrations of the same test or parallel versions of the same test
• Error score (measurement error): the hypothetical difference between an observed score and a true score; comprises both random error and systematic error.

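The observed = true + error decomposition can be illustrated with a short simulation (a hypothetical sketch, not from the slides; all numbers are invented): generate true scores, add random error on one administration, and note that reliability corresponds to the share of observed-score variance that is true-score variance.

```python
import random
import statistics

random.seed(42)

# Hypothetical true scores for 10,000 test takers (mean 100, sd 15)
true_scores = [random.gauss(100, 15) for _ in range(10_000)]
# Random measurement error added on a single administration (mean 0, sd 5)
errors = [random.gauss(0, 5) for _ in range(10_000)]
observed = [t + e for t, e in zip(true_scores, errors)]

var_true = statistics.variance(true_scores)
var_err = statistics.variance(errors)
var_obs = statistics.variance(observed)

# Classical test theory: Var(observed) ≈ Var(true) + Var(error),
# and reliability is the proportion of observed variance that is true variance
reliability = var_true / var_obs
print(round(reliability, 2))  # roughly 0.90 here, since 15**2 / (15**2 + 5**2) = 0.9
```

Shrinking the error standard deviation in this sketch drives the reliability toward 1.00; inflating it drives reliability toward 0.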
Reliability Coefficients
• The reliability of a test is indicated by the reliability coefficient.
• It is denoted by the letter “r” and is expressed as a number ranging between 0 and 1.00, with r = 0 indicating no reliability and r = 1.00 indicating perfect reliability.
• The larger the reliability coefficient, the more repeatable or reliable the test scores.

Methods of Estimating Reliability
• There are several types of reliability estimates.
• Test developers have the responsibility of reporting the
reliability estimates that are relevant for a particular test.
• Before deciding to use a test, read the test manual and any
independent reviews to determine if its reliability is acceptable.

Methods of Estimating Reliability
• Parallel (Alternate) Forms
• Test and Retest
• Internal Consistency
• Inter-rater reliability



Parallel (Alternate) Forms
Test and Retest
• Parallel (Alternate) Forms
• Two equivalent versions of the same test are constructed and their scores compared; for example, when instructors give different forms of a test to different class sections.
• Test and Retest
• The same test and measurement procedure are used to assess the same attribute for the same group of people on two different occasions.

Internal Consistency and
Inter-Rater reliability
• Internal Consistency
• the extent to which responses to items intended to measure the same construct are correlated; rather than select any one pair of items, the correlations are calculated between all possible pairs of items and then averaged.
• Inter-Rater Reliability
• the consistency of scores assigned by different raters; for example, how likely is it that two managers providing independent performance ratings for each of several employees would assign the same ratings?

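The internal-consistency idea above is usually summarized by Cronbach's alpha, which is computed from item and total-score variances rather than by literally averaging all pairwise correlations. A minimal sketch with invented item responses:

```python
import statistics

# Hypothetical responses of six people to four 5-point items
# intended to measure the same construct (rows = respondents)
responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 3],
    [1, 2, 1, 2],
]

k = len(responses[0])                    # number of items
items = list(zip(*responses))            # columns = items
totals = [sum(row) for row in responses] # each person's total score

# Cronbach's alpha: an internal-consistency estimate based on the
# ratio of summed item variances to total-score variance
item_var_sum = sum(statistics.variance(item) for item in items)
alpha = (k / (k - 1)) * (1 - item_var_sum / statistics.variance(totals))
print(round(alpha, 2))  # roughly 0.95 for these invented data
```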
Validity
• most important issue in selecting an assessment
• refers to what characteristic the assessment measures and how
well the assessment measures that characteristic
• tells you if the characteristic being measured by an assessment is
related to job qualifications and requirements
• gives meaning to assessment scores
• it can tell you what you may conclude or predict about someone
from his or her score on the assessment
• describes the degree to which you can make specific conclusions or predictions about people based on their assessment scores; indicates the usefulness of the assessment
Validation Strategies
• Three methods or strategies of validation:
• Criterion-related validation
• Content-related validation
• Construct-related validation
• All inter-related
• Construct and content-related validation strategies provide
evidence based on assessment content
• Criterion-related validation strategies provide evidence based on relationships to other variables

Validation Strategies
• Content validity
• whether the items on a test appear to match the content or
subject matter they are intended to assess; assessed through
judgments of experts in the subject area

• Construct validity
• the degree to which a test or procedure assesses an
underlying theoretical construct it is supposed to measure;
assessed through multiple sources of evidence showing that it
measures what it purports to measure and not other
constructs; e.g., an IQ test must measure intelligence and not
personality

Validation Strategies
Criterion-related validation:
• requires demonstration of a correlation or other statistical
relationship between assessment performance and job
performance – in other words, individuals who score high on
the assessment tend to perform better on the job than those who
score low on the assessment.
• If the criterion is obtained at the same time the test is given, it
is called concurrent validity; if the criterion is obtained at a
later time, it is called predictive validity.

Criterion-Related Validation

Validation Strategies: Face
• Face validity
• the degree to which the test takers (not subject matter
experts) view the content of a test or test items as relevant to
the context in which the test is being administered

Interpreting Validity Coefficients

• The validity coefficient is a number between 0 and 1.00 that indicates the
strength of the relationship between the test and a measure of job
performance (criterion).
• The larger the validity coefficient, the more confidence you can have in
predictions made from the test scores.
• As a general rule, the higher the validity coefficient the more beneficial it
is to use the test.
Validity of Assessment Methods

Discussion Activity

Can an invalid selection test be reliable? Why or why not?

Validity & Reliability Evidence

Evaluation of Assessment Methods
Criteria for Selecting and Evaluating Assessment Methods:

1. Validity: the extent to which the assessment method is useful for predicting subsequent job performance.
2. Adverse impact: the extent to which protected group members
score lower on the assessment than majority group members
(bias).
3. Cost: both to develop and to administer the assessment.
4. Applicant reactions: the extent to which applicants react
positively versus negatively to the assessment method (fairness).

Source: Pulakos, E. (2005). Selection Assessment Methods. Society for Human Resources Management

Review: Reliability & Validity
Coefficients
• In your groups, access the “Caliper White Paper” document in the week 8
folder of DC Connect (under recommended reading).
• What method of estimating reliability was used? Define this method.
• What is the assessment’s reliability coefficient? What does this tell
you?
• What is the assessment’s validity coefficient? What does this tell you?

Summary
• The best way to predict which job applicants will do well on the job is to use scientifically derived selection methods that predict work performance in a non-discriminatory manner
• To determine appropriate selection methods, HR practitioners
must be familiar with measurement, reliability, and validity
issues
• The reliability and validity of the information used as part of
personnel selection procedures must be established empirically

