
Chapter 4: Data Analysis

4. Data Analysis

4.1 Introduction:

The major goal of the data analysis is to test the proposed conceptual framework. Two statistical and analytical tools were used: SPSS (Statistical Package for the Social Sciences) and Smart PLS (Partial Least Squares). SPSS was used to perform data screening, while Smart PLS was utilized to evaluate the relationships between the latent constructs and to determine whether the proposed hypotheses have a substantial impact on other constructs. PLS-SEM has become more widely used in recent years across a variety of areas; its modest minimum sample size requirements and the use of content validity appear to be the most frequent reasons for its adoption. The PLS-SEM software has recently been updated to facilitate more complex conceptual models and to address data issues such as unobserved heterogeneity (Hair Jr et al., 2014).

4.2 Reliability (Pilot Study):

A pilot study is a smaller version of a larger study; it is also known as a feasibility study or as the explicit pre-testing of a research instrument such as a questionnaire. According to Van-Teijlingen and Hundley (2001), a pilot study is a critical component of high-quality research, and it is typically used to assess the reliability of the latent constructs. Reliability refers to the degree to which the various elements of the research yield consistent and predictable results, with Cronbach's Alpha, named after its inventor Lee Cronbach (1951), being the most widely used measure of reliability. Initially, 50 questionnaires were distributed to respondents via LinkedIn and email, and the collected data was analyzed in SPSS to determine Cronbach's Alpha. The value of Cronbach's Alpha must be higher than 0.7; as shown in Table I (Appendix B), all indicators have satisfactory consistency and stability, which was the primary goal of the pilot testing.

Reliability Statistics

Cronbach's Alpha    N of Items
.952                39
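To make the calculation concrete, the following is a minimal Python sketch of Cronbach's Alpha, assuming the 39 pilot items are stored as columns of a pandas DataFrame with one row per respondent; the file name and data layout are hypothetical, since the study itself obtained the value of .952 from SPSS.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]                          # number of items (39 in the pilot study)
    item_vars = items.var(axis=0, ddof=1)       # variance of each individual item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical usage:
# pilot = pd.read_csv("pilot_responses.csv")    # 50 respondents x 39 items
# print(cronbach_alpha(pilot))                  # should exceed 0.7 (reported value: 0.952)
```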

4.3 Data Screening:

A total of 250 questionnaires were distributed through email and LinkedIn to the appropriate respondents in order to collect the data, and approximately 92.8% of them responded to the survey. The data screening process was then applied to these 232 usable responses. Before running the statistical analysis, it is essential to screen the data; SPSS was used for this purpose, and the data was checked for missing values as well as univariate and multivariate outliers.

4.3.1 Missing Values:

Missing data is a concern in this study because a few sensitive questions may go unanswered due to a lack of comprehension, stress, or fatigue on the part of respondents; any unanswered item appears in the data as a missing value. If the researcher does not manage missing values appropriately, the problem must be corrected before further analysis can be conducted, since unhandled missing data is very likely to produce an inaccurate interpretation and results that differ from the actual ones. In this study, however, no values were missing, and all of the data was meticulously organized.
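For completeness, the sketch below shows how such a missing-value check can be run in Python, assuming the survey responses are loaded into a pandas DataFrame; the file name is an assumption, and the study itself performed this step in SPSS.

```python
import pandas as pd

def report_missing(responses: pd.DataFrame) -> pd.Series:
    """Return the count of missing answers per questionnaire item (only items with gaps)."""
    counts = responses.isna().sum()
    return counts[counts > 0]   # an empty Series means no missing values, as in this study

# Hypothetical usage:
# responses = pd.read_csv("survey_data.csv")
# print(report_missing(responses))
```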

4.3.2 Univariate and Multivariate Outliers:

Data screening should be performed before data coding and analysis to ensure data integrity. The goal of data screening is to strengthen the evidence and reduce contamination by identifying, correcting, and eliminating errors, which entails checking for and detecting faults in the data. Univariate outliers are individual data points that do not fit with the rest of the data; they were identified in SPSS using Z-scores. According to Tabachnick and Fidell (2007), the absolute Z-score of an item must lie between -3.29 and +3.29. After a total of 18 outliers were eliminated, the sample size for this research was 232, which was used for further analysis.
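The following is a minimal sketch of univariate outlier screening using the |z| < 3.29 rule of Tabachnick and Fidell (2007); the DataFrame layout and file name are assumptions, as the actual screening was carried out in SPSS.

```python
import pandas as pd

def flag_univariate_outliers(items: pd.DataFrame, cutoff: float = 3.29) -> pd.Series:
    """Return a boolean Series marking respondents with any |z-score| above the cutoff."""
    z = (items - items.mean()) / items.std(ddof=1)   # standardise every item
    return (z.abs() > cutoff).any(axis=1)            # True => at least one extreme answer

# Hypothetical usage:
# data = pd.read_csv("survey_data.csv")
# clean = data[~flag_univariate_outliers(data)]      # outlying cases removed before analysis
```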

4.4 Descriptive Analysis and Interpretations:

In this research, the data was collected from a variety of manufacturing industries, because the objective of this particular research is associated with the manufacturing sector. The descriptive analysis was conducted on the sample of 232 respondents. The table below presents the demographics: gender, age, education, income, and favourite restaurant. The data was mainly collected from respondents who frequently visit restaurants.

Table II-

Descriptive statistics (N=232)

Demographics Frequency Percentage

Gender

Male 124 53.4

Female 108 46.6


Age

21-30 58 25.0

31-40 50 21.3

41-50 59 25.4

Above 50 65 28.0

Education

Intermediate 41 17.7

Graduate 49 21.1

Masters 45 19.4

Ph.D 42 18.1

Others 55 23.7

Income

Less than 5000 39 16.8

5000-10000 57 24.6

10001-15000 46 19.8

15001-25000 51 22.0

Above 25001 39 16.8

Favourite Restaurant
KFC 60 25.9

McDonald’s 51 22.0

Burger King 53 22.8

Pizza Hut 68 29.3

Source: Author’s estimation

4.5 Analysis:

Smart PLS was utilized to further evaluate the data. The outer model was evaluated first, after which the data underwent hypothesis testing.

4.5.1 Outer Model Measurement:

The objective of the outer model is to establish which observable indicators measure the fundamental (latent) constructs. PLS-SEM was recommended by Hair et al. (2011) for exploratory and complex models. The outer model is used to evaluate the validity and reliability of the data; consequently, in order to confirm that a given construct is valid, it is required to check the appropriateness of its indicators (Churchill, 1979). The internal consistency of the variables is measured through composite reliability, while validity covers convergent validity, measured through the average variance extracted and cross loadings, and discriminant validity, assessed with the Fornell-Larcker criterion and HTMT (Hair et al., 2011; Henseler et al., 2015). The researcher employed PLS-SEM for estimating such a sophisticated model (Ringle et al., 2015).

4.5.1.1 Reliability Testing:

Composite reliability is used to assess internal consistency (Neuman, 2007). Hair et al. (2011) explained that the value of CR must be at least 0.7; composite reliability (CR) is considered a better measure of internal consistency than Cronbach's Alpha. Table III shows that all CR values are greater than the indicated threshold (CR > 0.7), falling in the range of 0.821-0.935, which validates the reliability of the data (O'Leary-Kelly and Vokurka, 1998).
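As a rough illustration of how composite reliability follows from the standardised outer loadings, the sketch below uses the three Affective Experience loadings reported later in Table III; this is a hand calculation for illustration only, since the CR values in Table III were produced by Smart PLS and may differ slightly.

```python
def composite_reliability(loadings: list[float]) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    sum_loadings_sq = sum(loadings) ** 2
    error_variance = sum(1 - l ** 2 for l in loadings)   # error variance of each indicator is 1 - loading^2
    return sum_loadings_sq / (sum_loadings_sq + error_variance)

print(round(composite_reliability([0.731, 0.861, 0.884]), 3))  # exceeds the 0.7 threshold
```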

4.5.1.2 Convergent Validity Analysis:

Convergent validity is measured using the AVE (average variance extracted) and the item loadings (Hair et al., 2011). Hair et al. (2010) determined that for convergent validity the factor loadings must be greater than 0.65, whereas the AVE should be 0.5 or greater; Hair et al. (2014) state that factor loadings should preferably exceed 0.7. Table III highlights that the AVE values are greater than the benchmark value (AVE > 0.5).

Bagozzi et al. (1991) advise deleting indicators with outer loadings lower than 0.4, taking into account the impact of their elimination on reliability and validity. If an outer loading lies between 0.4 and 0.70, according to Hair et al. (2014), the item should only be eliminated from the scale when doing so raises the AVE above the advised threshold value of 0.5 (Hair et al., 2014).
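The AVE itself is simply the mean of the squared standardised loadings. The short sketch below reproduces the Sensory Experience value from Table III below; the loadings are taken from that table, and the calculation is shown only for illustration.

```python
def average_variance_extracted(loadings: list[float]) -> float:
    """AVE = mean of the squared standardised loadings; should be 0.5 or higher."""
    return sum(l ** 2 for l in loadings) / len(loadings)

print(round(average_variance_extracted([0.868, 0.903, 0.827]), 3))  # ~0.751, above the 0.5 benchmark
```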

Table III:

Reliability Testing & Convergent Validity

Constructs Items Loadings CR AVE

Affective Experience AE1 0.731 0.784 0.686

AE2 0.861

AE3 0.884


Attitudinal Loyalty AL2 0.885 0.861 0.782

AL3 0.899

AL4 0.870

Behavioural Loyalty BL1 0.825 0.827 0.743

BL3 0.878

BL4 0.882

Cognitive Engagement CE2 0.853 0.807 0.719

CE3 0.859

CE4 0.831

Cognitive Style CS2 0.830 0.764 0.675

CS3 0.887

CS4 0.741

Aesthetical Experience Ae.E1 0.684 0.819 0.649

Ae.E3 0.851

Ae.E4 0.856

Ae.E5 0.819

Emotional Engagement Emo.E3 0.754 0.902 0.709

Emo.E4 0.873

Emo.E5 0.877

Emo.E6 0.838

Emo.E7 0.863

Sensory Experience SE1 0.868 0.834 0.751


SE2 0.903

SE3 0.827

Source: Author’s estimation

Figure (I) Algorithm

4.5.1.3 Discriminant Validity:

Discriminant validity explains how distinct a construct is; it is used to confirm that the variables do not correlate too strongly with each other (Hair Jr et al., 2014), i.e. each variable should be clearly different from the other variables. According to Hair Jr et al. (2014) and Henseler et al. (2015), the Fornell-Larcker criterion, the Heterotrait-Monotrait ratio of correlations (HTMT), and the cross loadings between the items are used to determine discriminant validity. These tests confirm the accuracy of the data and ensure that there are no major statistical overlaps in the results (Henseler et al., 2015). The basic premise is that two variables should not correlate strongly with each other, so that the constructs can be differentiated from one another (Hair Jr et al., 2014).

The Fornell-Larcker criterion recommends that an individual construct should share less variance with the other constructs than it does with its own items (Hair Jr et al., 2014). The criterion compares the square root of the AVE with the inter-construct correlations, and the square root of the AVE should be greater than every such correlation (Hair et al., 2011). The table below demonstrates that discriminant validity exists, since each diagonal value is greater than the other values in its row and column.

Table IV:

Fornell and Larcker (1981)

        AE      AL      BL      CE      CS      Ae.E    EmoE    SE
AE 0.828
AL 0.673 0.884
BL 0.707 0.658 0.862
CE 0.648 0.625 0.690 0.848
CS 0.434 0.469 0.599 0.386 0.822
Ae.E 0.793 0.662 0.650 0.615 0.508 0.805
EmoE 0.699 0.776 0.688 0.663 0.482 0.688 0.842
SE 0.526 0.563 0.473 0.522 0.349 0.597 0.572 0.867

Source: Author’s estimation

There has been discussion regarding the Fornell-Larcker criterion and its efficacy in determining discriminant validity. As a consequence, Henseler et al. (2015) developed a new approach known as the HTMT correlation ratio, which has fewer limitations and is more rigorous in determining discriminant validity, with the benchmark value being less than 0.9 (Henseler et al., 2015). Discriminant validity is therefore established when all of the HTMT values are below this threshold, and the table below demonstrates that all of the values are indeed less than 0.90.
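As a rough illustration, the sketch below computes the HTMT ratio for one pair of constructs from raw item correlations, following the definition in Henseler et al. (2015): the mean between-construct item correlation divided by the geometric mean of the average within-construct item correlations. The DataFrame of item responses and the item codes are assumptions, since Table V was obtained directly from Smart PLS.

```python
import numpy as np
import pandas as pd

def htmt(items: pd.DataFrame, construct_a: list[str], construct_b: list[str]) -> float:
    """Heterotrait-Monotrait ratio for two constructs given their item columns."""
    corr = items.corr()
    hetero = corr.loc[construct_a, construct_b].to_numpy().mean()   # between-construct item correlations
    mono_a = corr.loc[construct_a, construct_a].to_numpy()[np.triu_indices(len(construct_a), k=1)].mean()
    mono_b = corr.loc[construct_b, construct_b].to_numpy()[np.triu_indices(len(construct_b), k=1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)

# Hypothetical usage; the item column names mirror the questionnaire codes:
# print(htmt(data, ["AE1", "AE2", "AE3"], ["SE1", "SE2", "SE3"]))   # should stay below 0.90
```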

Table V:

Heterotrait- Monotrait Ratio (HTMT) Results


        AE      AL      BL      CE      CS      Ae.E    EmoE    SE
AE
AL 0.822
BL 0.887 0.780
CE 0.817 0.747 0.836
CS 0.578 0.582 0.761 0.487
Ae.E 0.887 0.780 0.784 0.751 0.637
EmoE 0.835 0.896 0.800 0.780 0.585 0.794
SE 0.650 0.662 0.568 0.638 0.427 0.720 0.656

Source: Author’s estimation

Examining the cross loadings of the items is another way to verify discriminant validity. Each item should load more strongly on its own construct than on any other construct (Hair Jr et al., 2014). According to Gefen and Straub (2005), an item's loading on its own construct should differ by at least 0.1 from its loadings on the other constructs. The cross loadings of all items are shown in Table VI, and a sketch of the check follows the table.

Table VI:

Cross Loadings (Factor Analysis)

        AE      AL      BL      CE      CS      Ae.E    EmoE    SE
AE1 0.731 0.456 0.529 0.452 0.410 0.657 0.444 0.346
AE2 0.861 0.631 0.592 0.576 0.335 0.643 0.663 0.496
AE3 0.884 0.572 0.631 0.571 0.348 0.679 0.610 0.451
AL2 0.607 0.885 0.567 0.578 0.422 0.619 0.873 0.575
AL3 0.630 0.899 0.584 0.568 0.402 0.588 0.877 0.463
AL4 0.548 0.870 0.594 0.511 0.419 0.548 0.838 0.456
BL1 0.568 0.540 0.825 0.495 0.654 0.563 0.561 0.353
BL3 0.656 0.587 0.878 0.679 0.440 0.563 0.605 0.411
BL4 0.600 0.572 0.882 0.608 0.456 0.553 0.612 0.459
CE2 0.553 0.482 0.522 0.853 0.257 0.544 0.522 0.436
CE3 0.506 0.547 0.519 0.859 0.311 0.477 0.576 0.490
CE4 0.582 0.555 0.696 0.831 0.402 0.539 0.582 0.405
CS2 0.392 0.411 0.451 0.328 0.830 0.486 0.414 0.397
CS3 0.357 0.379 0.548 0.353 0.887 0.412 0.407 0.301
CS4 0.319 0.364 0.478 0.266 0.741 0.348 0.365 0.146
Ae.E1 0.526 0.597 0.522 0.507 0.385 0.684 0.591 0.650
Ae.E3 0.697 0.516 0.495 0.516 0.399 0.851 0.552 0.445
Ae.E4 0.675 0.565 0.595 0.507 0.490 0.856 0.590 0.386
Ae.E5 0.647 0.424 0.457 0.431 0.337 0.819 0.454 0.431
EmoE3 0.557 0.647 0.541 0.553 0.346 0.475 0.754 0.372
EmoE4 0.607 0.885 0.567 0.578 0.422 0.619 0.873 0.575
EmoE5 0.630 0.899 0.584 0.568 0.402 0.588 0.877 0.463
EmoE6 0.548 0.870 0.594 0.511 0.419 0.548 0.838 0.456
EmoE7 0.603 0.784 0.610 0.585 0.435 0.656 0.862 0.529
SE1 0.447 0.467 0.401 0.451 0.242 0.511 0.484 0.868
SE2 0.454 0.468 0.385 0.472 0.264 0.513 0.479 0.903
SE3 0.462 0.524 0.440 0.433 0.391 0.525 0.520 0.827
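The sketch below illustrates the cross-loading check applied to a table laid out like Table VI above: for every item, the loading on its own construct should be the row maximum. The item-to-construct mapping passed in is an assumption for illustration.

```python
import pandas as pd

def cross_loading_check(loadings: pd.DataFrame, item_construct: dict[str, str]) -> pd.Series:
    """Return True per item when its own-construct loading is the largest in its row."""
    return pd.Series({item: loadings.loc[item, own] == loadings.loc[item].max()
                      for item, own in item_construct.items()})

# Hypothetical usage with the Table VI layout:
# ok = cross_loading_check(table_vi, {"AE1": "AE", "AE2": "AE", "AL2": "AL", ...})
# print(ok.all())   # True => no indicator loads more strongly on a foreign construct
```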


Figure (2) Bootstrapping

4.5.2 Inner Model Measurement and Hypothesis Testing:

Once the outer model measurements have been verified, the data is further processed for the measurement of the inner model (Henseler et al., 2009; Hair et al., 2011). The PLS-SEM (Partial Least Squares) approach with bootstrapping in Smart PLS was employed to evaluate the hypotheses (Haenlein and Kaplan, 2004). In bootstrapping (Hair Jr et al., 2014), a large number of sub-samples, i.e. 5000, are drawn from the original data, ensuring the stability of the results. Figure 2 shows the bootstrapping output.
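The sketch below illustrates the resampling idea behind bootstrapping, using a plain OLS slope as a stand-in for the Smart PLS path estimate; it is not the Smart PLS procedure itself, only a demonstration of how 5000 sub-samples drawn with replacement yield a mean estimate and a standard deviation (STDEV).

```python
import numpy as np

def bootstrap_path(x: np.ndarray, y: np.ndarray, n_boot: int = 5000, seed: int = 0):
    """Bootstrap a single path coefficient; returns (mean estimate, standard deviation)."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), size=len(x))     # resample respondents with replacement
        xb, yb = x[idx], y[idx]
        estimates.append(np.polyfit(xb, yb, 1)[0])     # slope as the stand-in path coefficient
    estimates = np.array(estimates)
    return estimates.mean(), estimates.std(ddof=1)
```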

4.5.2.1 Predictive Relevance of the Model:

The quality of the inner model is determined by its ability to predict the endogenous constructs (Hair Jr et al., 2014). The consistency of the inner model was assessed using cross-validated redundancy (Q2) as well as the coefficient of determination (R2) (Hair et al., 2011; Hair Jr et al., 2014; Henseler et al., 2009). R2 represents the amount of variance in the dependent variable explained by the independent variables (Hair Jr et al., 2014). Sanchez (2013) divided R2 into three categories: high, moderate, and low. R2 is deemed high when it is greater than 0.6, moderate when the value lies between 0.3 and 0.6, and low when the value falls below 0.3. The R2 values in Table VII indicate that the model fits well.

Another way to verify the model's predictive accuracy is to use cross-validated redundancy (Q2). According to Hair Jr et al. (2014), Q2 assesses the predictive relevance of the inner model. Q2 is calculated using the blindfolding procedure and should be greater than zero, according to Henseler et al. (2009). The model's fitness is confirmed by the Q2 values in Table VII, which are all greater than zero.
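The small helper below simply encodes the R2 bands attributed to Sanchez (2013) and the Q2 > 0 rule of thumb used above, applied to the Behavioural Loyalty values from Table VII; it is a convenience for reading the table, not part of the Smart PLS output.

```python
def classify_r2(r2: float) -> str:
    """Sanchez (2013) bands: high above 0.6, moderate between 0.3 and 0.6, low below 0.3."""
    if r2 > 0.6:
        return "high"
    if r2 >= 0.3:
        return "moderate"
    return "low"

def has_predictive_relevance(q2: float) -> bool:
    return q2 > 0   # Q2 above zero indicates predictive relevance

print(classify_r2(0.647), has_predictive_relevance(0.496))   # BL row of Table VII: 'high', True
```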

Table VII:

Predictive power of construct

R-square Q-square
AL 0.953 0.519
BL 0.647 0.496
CE 0.475 0.458
CS 0.263 0.236
EmoE 0.568 0.555
Source: Author’s estimation

4.5.2.2 Hypothesis Testing:

This research includes eleven hypotheses, which were examined using the structural equation model (SEM). For this particular research, Smart PLS was used to test the model (Hair et al., 2011). Table VIII highlights the results of the hypothesis testing.
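For orientation, the T statistics in Table VIII follow from dividing the original sample estimate by the bootstrap standard deviation, T = |O / STDEV|. The sketch below reproduces the AE -> CE row using a standard normal approximation for the two-tailed p value; Smart PLS may use a t-distribution, so the reported figures can differ slightly in the last decimals.

```python
from scipy import stats

def t_and_p(original: float, stdev: float) -> tuple[float, float]:
    """T statistic and two-tailed p value from the bootstrap estimate and its standard deviation."""
    t = abs(original / stdev)
    p = 2 * (1 - stats.norm.cdf(t))   # two-tailed p value under a normal approximation
    return t, p

t, p = t_and_p(0.402, 0.086)          # AE -> CE row of Table VIII
print(round(t, 3), round(p, 3))       # ~4.67 and p < 0.001, consistent with the accepted decision
```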

Table VIII:

Hypothesis Testing

              Original      Standard deviation   T statistics     P
              sample (O)    (STDEV)              (|O/STDEV|)      values   Decision

AE -> CE       0.402         0.086                4.668            0.000    Accepted
AE -> CS       0.076         0.108                0.704            0.482    Rejected
AE -> EmoE     0.383         0.078                4.940            0.000    Accepted
CE -> AL      -0.038         0.018                2.089            0.037    Accepted
CE -> BL       0.381         0.060                6.309            0.000    Accepted
CS -> AL       0.001         0.016                0.073            0.942    Rejected
CS -> BL       0.316         0.046                6.875            0.000    Accepted
EE -> CE       0.171         0.095                1.803            0.071    Rejected
EE -> CS       0.410         0.107                3.845            0.000    Accepted
EE -> EmoE     0.253         0.078                3.242            0.001    Accepted
EmoE -> AL     1.000         0.014               72.361            0.000    Accepted
EmoE -> BL     0.283         0.058                4.922            0.000    Accepted
SE -> CE       0.208         0.072                2.880            0.004    Accepted
SE -> CS       0.064         0.074                0.864            0.388    Rejected
SE -> EmoE     0.220         0.064                3.435            0.001    Accepted

Source: Author’s estimation

On the basis of the above analysis, it has been observed that H2, H6, H8 and H16 were rejected, as these paths did not show a statistically significant effect. The accepted hypotheses, on the other hand, indicate a significant impact of the respective constructs on one another.
