Course Code
1. (a) What do you understand by forecast control? What could be the various methods to ensure
that the forecasting system is appropriate?
Forecast Control refers to the systematic process of monitoring and assessing the performance of a
forecasting system to ensure its alignment with real-world data and organizational objectives. It aims
to identify discrepancies between forecasted and actual outcomes, understand the causes of
deviations, and implement corrective measures to enhance the system's accuracy. Forecast control is
a crucial concept in decision-making, strategic planning, and operations management.
4. Feedback Mechanism: Incorporating insights from past errors to refine future forecasts.
Benefits of Forecast Control
1. Improved Accuracy:
o Example: A retail chain forecasts demand for a seasonal product. Forecast control
helps detect inaccuracies in previous forecasts, improving future predictions.
2. Risk Mitigation:
o Example: An airline uses forecast control to adjust ticket pricing strategies based on
passenger demand patterns.
3. Resource Optimization:
4. Decision Support:
o Example: A logistics firm leverages forecast control to optimize delivery routes and
schedules.
Methods to Ensure an Appropriate Forecasting System
An effective forecasting system requires regular evaluation and refinement. Below are several
methods to ensure the forecasting system is appropriate:
1. Error Analysis
Definition: Involves calculating and analysing forecast errors to evaluate the system's
performance.
Metrics Used: common measures include Mean Absolute Deviation (MAD), Mean Squared Error (MSE), and Mean Absolute Percentage Error (MAPE).
Example: A supply chain manager calculates MAPE for monthly demand forecasts and adjusts the model to reduce errors (see the worked sketch below).
2. Tracking Signal
Definition: The ratio of the cumulative forecast error to the mean absolute deviation (MAD), used to detect persistent bias in forecasts.
Example: A retail business monitors tracking signals for sales forecasts to determine if adjustments are needed (see the worked sketch below).
3. Feedback Loops
Definition: Incorporating actual performance data into the forecasting process to refine
future predictions.
Example: A weather forecasting agency updates its models based on real-time data to
improve accuracy.
4. Scenario Analysis
Definition: Evaluating forecasts under alternative future conditions (e.g., best case, worst case, most likely) to test their robustness.
Example: An investment firm uses scenario analysis to test portfolio performance under
fluctuating economic conditions.
5. Model Validation
Definition: Assessing the forecasting model's assumptions, methodologies, and data inputs
for relevance and accuracy.
6. Technology Integration
o Artificial Intelligence (AI): Uses machine learning algorithms to identify patterns and predict future trends.
7. Benchmarking
Example: A financial institution benchmarks its risk forecasting system against industry
leaders to enhance its methodologies.
Retail Industry: A retail chain forecasts sales for the holiday season. By monitoring actual sales data against predictions, the company identifies discrepancies and adjusts inventory levels accordingly.
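To make the error-analysis and tracking-signal methods concrete, here is a minimal Python sketch; the actual and forecast figures are hypothetical and chosen only to show how MAD, MAPE, and the tracking signal are computed.

```python
# Minimal sketch of forecast control metrics (illustrative data only).
actual   = [100, 120, 110, 130, 125, 140]   # hypothetical actual demand
forecast = [95, 118, 115, 128, 130, 135]    # hypothetical forecasts

errors = [a - f for a, f in zip(actual, forecast)]

# Mean Absolute Deviation (MAD) and Mean Absolute Percentage Error (MAPE)
mad = sum(abs(e) for e in errors) / len(errors)
mape = sum(abs(e) / a for e, a in zip(errors, actual)) / len(actual) * 100

# Tracking signal = cumulative error / MAD; values far from zero (often beyond
# roughly +/-4) are commonly read as a sign that the forecasting model is biased.
tracking_signal = sum(errors) / mad

print(f"MAD = {mad:.2f}, MAPE = {mape:.2f}%, Tracking signal = {tracking_signal:.2f}")
```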
Conclusion
Forecast control keeps a forecasting system accurate and relevant by continuously comparing predictions with actual outcomes and applying corrective measures. Techniques such as error analysis, tracking signals, feedback loops, scenario analysis, model validation, and benchmarking together help ensure that the system remains appropriate for decision-making.
1. (b) What do you understand by the term correlation? Explain how the study of correlation helps
in forecasting demand for a product.
Correlation refers to a statistical measure that describes the relationship between two or more
variables. Specifically, it measures how changes in one variable correspond to changes in another
variable. Correlation is typically measured on a scale from -1 to 1, where:
1 indicates a perfect positive correlation: as one variable increases, the other also increases.
-1 indicates a perfect negative correlation: as one variable increases, the other decreases.
0 indicates no linear correlation: changes in one variable are not associated with changes in the other.
Types of correlation: correlation may be positive, negative, or zero; it may also be linear or non-linear, and simple, partial, or multiple depending on the number of variables involved.
Understanding correlation is vital in many fields, particularly in forecasting. In the context of demand
forecasting, correlation helps businesses understand how demand for a product might be influenced
by various factors, such as price, marketing efforts, seasonal variations, or external economic
conditions.
1. Identifying Key Drivers of Demand: Correlation analysis allows businesses to identify which
factors most significantly affect product demand. For example, a company may find a strong
positive correlation between advertising expenditure and sales. This means that when the
company increases its advertising budget, demand for the product tends to rise.
5. Market Trends: Businesses use correlation analysis to study broader market trends. For
instance, a positive correlation between economic growth and consumer spending could
help businesses forecast demand for luxury goods during economic booms.
6. Building Predictive Models: Correlation can also help in building predictive models for
demand forecasting. In multiple regression analysis, for instance, businesses use correlation
coefficients to understand the strength and direction of relationships between the
dependent variable (e.g., demand) and multiple independent variables (e.g., price,
advertising, seasonality, etc.).
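As a minimal sketch of how such a relationship can be quantified, the example below computes the Pearson correlation coefficient between advertising expenditure and sales; the monthly figures are hypothetical and serve only to illustrate the calculation that would feed a regression-based demand forecast.

```python
import statistics as st

# Hypothetical monthly advertising spend (in lakhs) and units sold.
advertising = [10, 12, 15, 18, 20, 25]
sales = [200, 220, 260, 300, 310, 380]

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = st.mean(x), st.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(advertising, sales)
print(f"r = {r:.3f}")  # a value close to +1 indicates a strong positive relationship
```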
Conclusion
Correlation is a vital statistical tool in understanding relationships between variables and forecasting
demand for products. By analysing the strength and direction of relationships, businesses can
develop predictive models, make informed decisions, and align their operations with anticipated
demand. While correlation analysis has limitations, when combined with other forecasting
techniques, it offers valuable insights into market behaviour. In an increasingly dynamic business
environment, mastering correlation and its application to demand forecasting is essential for staying
competitive. By effectively leveraging this tool, organizations can navigate uncertainties, meet
consumer needs, and achieve long-term success.
2. (a) Explain the terms ‘Population’ and ‘Sample.’ Why is it sometimes necessary and often
desirable to collect information about the population by conducting a sample survey instead of
complete enumeration?
Population refers to the entire set of individuals, items, or data points that are the subject of
a statistical study. It encompasses all possible units that meet a particular criterion. For
example, in a study of student preferences for online learning, the population would consist
of all students enrolled in the school or university.
Sample refers to a subset of the population that is selected for analysis. A sample is often
chosen because studying the entire population is not feasible due to time, cost, or logistical
constraints. A well-chosen sample should ideally represent the population accurately,
allowing conclusions to be generalized.
Methods of Sampling
1. Random Sampling: Every individual has an equal chance of being selected. Example: Lottery
system for choosing survey participants.
2. Stratified Sampling: Population is divided into strata, and samples are taken from each
group. Example: Selecting students from different academic streams in a university.
3. Systematic Sampling: Individuals are selected at regular intervals from a list. Example:
Surveying every 10th customer entering a store.
4. Cluster Sampling: Dividing the population into clusters and randomly selecting clusters for
study. Example: Surveying households in randomly chosen neighbourhoods.
5. Convenience Sampling: Selecting participants based on ease of access. Example: Feedback
from customers visiting a nearby store.
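A minimal sketch of how a few of these methods could be simulated in Python is shown below; the student roster and academic streams are hypothetical and serve only to illustrate simple random, systematic, and stratified selection.

```python
import random

random.seed(42)  # for a reproducible illustration

# Hypothetical population: 500 students tagged with an academic stream.
population = [{"id": i, "stream": random.choice(["Arts", "Science", "Commerce"])}
              for i in range(1, 501)]

# 1. Simple random sampling: every student has an equal chance of selection.
simple_random = random.sample(population, 50)

# 2. Systematic sampling: every 10th student from the ordered list.
systematic = population[::10]

# 3. Stratified sampling: draw roughly 10% from each stream.
stratified = []
for stream in ("Arts", "Science", "Commerce"):
    stratum = [s for s in population if s["stream"] == stream]
    stratified.extend(random.sample(stratum, max(1, len(stratum) // 10)))

print(len(simple_random), len(systematic), len(stratified))
```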
Why a Sample Survey Is Often Preferred over Complete Enumeration
1. Cost and Time Efficiency: In many cases, conducting a survey of the entire population would
be prohibitively expensive and time-consuming. For example, if a company wanted to
understand consumer preferences for a new product, surveying every potential customer
worldwide would be impractical. A sample survey provides a cost-effective solution by
gathering data from a smaller, manageable subset of the population.
2. Practicality: It is often physically or logistically impossible to collect data from every member
of a population. For example, in studies involving a large geographical area or in situations
where participants are difficult to reach (such as in remote locations), a sample survey
provides a practical alternative.
4. Minimizing Bias: In some cases, obtaining a complete enumeration may introduce bias or
skewed results, especially if there is a non-response or voluntary participation bias. A well-
designed sample survey can use random sampling techniques to avoid such biases, ensuring
that the sample is representative of the population.
5. Speed of Data Collection: When information is required quickly, sampling can often yield
results much faster than a complete enumeration. This is especially important in fast-paced
industries where timely decision-making is crucial.
6. Accuracy and Feasibility: With a smaller sample size, it is often easier to control the quality
of data collection. Efforts can be focused on ensuring that the sample is well-chosen, and the
data is accurate, which may not always be feasible in a large-scale census.
Conclusion
The terms population and sample are fundamental in research and statistics. While studying the
entire population (complete enumeration) provides comprehensive insights, it is often impractical
due to constraints of time, cost, and resources. Sample surveys, when conducted systematically, offer
an efficient and effective alternative for data collection. By using appropriate sampling methods and
minimizing biases, researchers can draw meaningful conclusions, aiding decision-making in various
fields, from business and healthcare to public policy and education without the costs and limitations
of surveying an entire population.
2. (b) How would you conduct an opinion poll to determine student reading habits and preferences
towards daily newspapers and weekly magazines?
An opinion poll is a research method that involves surveying a sample population to infer their
attitudes, preferences, or behaviours on a specific subject. Conducting an opinion poll to determine
student reading habits and preferences towards daily newspapers and weekly magazines involves
several steps to ensure accuracy, reliability, and actionable insights.
The first step in designing an opinion poll is to clearly define the objectives of the survey. In
this case, the objective is to understand student preferences regarding daily newspapers and
weekly magazines. Specifically, you may want to explore how often students read, which format (print or digital) they prefer, which publications they choose, and what content interests them most.
Decide on the desired sample size, which should be representative of the student population
you want to study. Consider factors such as the level of confidence and margin of error you
are willing to accept.
Since surveying the entire population of students is impractical, you would need to select a
sample. Sampling techniques could include:
o Random Sampling: Selecting students randomly to ensure every individual has an equal
chance of being surveyed. This reduces bias.
o Stratified Sampling: Dividing students into categories (e.g., by age, year of study, or
gender) and sampling proportionately from each category.
The questionnaire should be designed to gather relevant information and should be concise
and clear. Questions could be both quantitative and qualitative. Some examples of questions
are:
o Do you read newspapers or magazines online, or do you prefer the printed format?
o Why do you choose to read a daily newspaper over a weekly magazine (or vice versa)?
Questions should also include demographic information such as age, gender, and academic
discipline to analyze how preferences might differ across various student groups.
6. Data Collection:
o Online Surveys: Using platforms like Google Forms or SurveyMonkey, which allow students to fill out the survey at their convenience.
7. Data Analysis:
Once the data is collected, it should be analyzed to uncover trends, preferences, and
patterns. Quantitative data can be analyzed using statistical methods like percentages, mean
scores, and frequency distribution. Qualitative data can be coded and categorized to identify
common themes or responses.
For example, you may analyze the percentage of students who prefer reading newspapers
online versus print or compare preferences between different academic disciplines.
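A minimal sketch of the quantitative side of this analysis is shown below; the responses are hypothetical and only illustrate how overall percentages and a simple breakdown by discipline might be tabulated.

```python
from collections import Counter

# Hypothetical poll responses: (preferred medium, academic discipline).
responses = [
    ("online", "Commerce"), ("print", "Arts"), ("online", "Science"),
    ("online", "Commerce"), ("print", "Science"), ("online", "Arts"),
    ("online", "Science"), ("print", "Commerce"), ("online", "Arts"),
]

# Overall percentage preferring each medium.
medium_counts = Counter(medium for medium, _ in responses)
for medium, count in medium_counts.items():
    print(f"{medium}: {count / len(responses) * 100:.1f}%")

# Frequency distribution of preferences within each discipline.
by_discipline = Counter(responses)
print(by_discipline)
```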
The results of the poll should be presented clearly and concisely. Findings should include
both statistical summaries and descriptive insights. For example, if a significant number of
students prefer reading newspapers online due to convenience, this insight can help guide
the production and distribution of news content.
Present the findings in charts, graphs, or tables for clarity. Summarize key points, such as the
most preferred type of content and preferred reading medium.
Based on the poll results, you can make recommendations for improving student
engagement with reading materials. For example, if the poll reveals that students prefer
digital formats, schools or publishers might consider offering digital subscriptions or apps.
Conclusion
Conducting an opinion poll on students' reading habits involves meticulous planning, from defining
objectives and designing questionnaires to analysing data. By carefully selecting a representative
sample and using diverse data collection methods, one can gain valuable insights into students'
preferences. For instance, findings might reveal a shift toward digital consumption, emphasizing the
need for newspapers and magazines to adapt to digital trends. Such insights are invaluable for
publishers, educational institutions, and marketers aiming to cater to the preferences of the student
demographic effectively.
(a) "Different issues arise while analyzing decision problems under uncertain conditions of
outcomes."
For instance, when launching a new product in a highly competitive market, a company may face
uncertainty about customer response, competitors’ reactions, and market dynamics.
1. Lack of Complete Information
Definition: Uncertainty often arises due to incomplete or insufficient data about the
environment, alternatives, or outcomes.
2. Complexity of Alternatives
Definition: The presence of multiple decision options with interconnected outcomes adds
complexity to the decision process.
3. Ambiguity in Outcomes
Definition: When the relationship between actions and outcomes is unclear or ambiguous,
predicting the results becomes challenging.
4. Dynamic Environment
Definition: Rapid changes in the external environment increase unpredictability and make
past data less relevant.
5. Behavioural and Cognitive Biases
Definition: Cognitive biases influence how individuals perceive and respond to uncertainty,
leading to irrational decisions.
Impact: These biases can skew analysis and lead to poor decision outcomes.
6. Conflicting Objectives
Definition: Decisions often involve multiple stakeholders with differing goals and priorities.
Impact: Conflicts can delay decision-making or result in compromises that fail to satisfy all
parties.
7. Unforeseen External Events
Definition: External factors such as natural disasters, pandemics, or political upheavals can
disrupt predictions.
Impact: High costs may deter organizations from in-depth analysis, increasing the risk of
errors.
Approaches to Analysing Decisions under Uncertainty
2. Decision Trees: A graphical representation of decision options and possible outcomes, used to evaluate alternatives systematically. For example, deciding whether to expand operations domestically or internationally (see the sketch after this list).
3. Sensitivity Analysis: Assessing how changes in one variable affect outcomes. For example, analysing the impact of raw material price fluctuations on profit margins.
5. Heuristics and Rules of Thumb: Simplifying complex decisions using practical rules. For
example: Allocating a fixed percentage of resources to high-risk projects.
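A minimal sketch of how a decision tree and a sensitivity check might be evaluated numerically is shown below; the probabilities and payoffs are hypothetical and serve only to illustrate an expected-value comparison of the two expansion options mentioned above.

```python
# Hypothetical payoffs (in crores) and probabilities for two expansion options.
options = {
    "domestic":      [(0.6, 50), (0.4, 20)],    # (probability, payoff)
    "international": [(0.5, 90), (0.5, -10)],
}

def expected_value(branches):
    """Expected value of a decision branch: sum of probability-weighted payoffs."""
    return sum(p * payoff for p, payoff in branches)

for name, branches in options.items():
    print(f"{name}: expected value = {expected_value(branches):.1f}")

# Simple sensitivity check: how does the international option's expected value
# change as its probability of success varies?
for p in (0.3, 0.5, 0.7):
    ev_int = expected_value([(p, 90), (1 - p, -10)])
    print(f"P(success) = {p:.1f} -> international EV = {ev_int:.1f}")
```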
Startups: A new business deciding between bootstrapping and seeking venture capital without
knowing market acceptance.
Conclusion
Analysing decision problems under uncertainty involves navigating incomplete information, complex and interdependent alternatives, ambiguous outcomes, dynamic environments, and behavioural biases. Structured techniques such as decision trees, sensitivity analysis, and heuristics help organizations make more robust choices despite these issues.
3. (b) "Why is sampling considered an attractive means of drawing conclusions about a population?"
Sampling is a fundamental concept in statistics and research that involves selecting a subset of
individuals, items, or observations from a larger population to analyse and draw conclusions about
the entire population. The process is widely regarded as efficient, cost-effective, and practical,
particularly when dealing with large populations. The attractiveness of sampling stems from its
ability to provide reliable and actionable insights without the need to examine every member of the
population. Sampling is attractive in drawing conclusions about the population due to several
reasons:
2. Time Efficiency: Sampling allows for quicker data collection and analysis compared to conducting a
complete enumeration of the population. It enables researchers to obtain results in a shorter time
frame, which is particularly important when time constraints exist.
3. Feasibility: In some cases, conducting a complete census of the population may be impractical or
even impossible. For example, if the population is geographically dispersed or inaccessible, sampling
provides a practical solution to gather representative data from a subset of the population.
4. Accuracy: With proper sampling techniques and adequate sample sizes, sampling can provide
accurate estimates of population parameters. The principles of probability and statistics ensure that
valid inferences can be drawn from the sample to the population when proper sampling methods are
employed.
5. Non-Destructive: Sampling allows for the collection of data without the need to disturb or disrupt
the entire population. This is particularly useful when studying sensitive or endangered populations,
as it minimizes any potential harm or impact on the population.
6. Practicality: Sampling provides a practical approach for data collection in situations where it is not
feasible or practical to collect data from the entire population. By selecting a representative sample,
researchers can obtain reliable information and make valid inferences about the population as a
whole.
7. Generalizability: Properly conducted sampling ensures that the sample is representative of the
population, allowing for the generalization of findings from the sample to the larger population. This
allows researchers to draw meaningful conclusions about the population based on the characteristics
observed in the sample.
8. Flexibility: Sampling provides flexibility in terms of sample size, sampling techniques, and data
collection methods. Researchers can adapt their sampling approach based on the specific research
objectives and available resources, allowing for a customized and efficient data collection process.
By utilizing sampling techniques, researchers can obtain reliable and representative data from a
subset of the population, enabling them to make accurate inferences and draw meaningful
conclusions about the entire population.
3. (c) "What is variability, and why does it need to be measured?"
Variability is a measure of the spread or dispersion within a dataset, indicating how much individual
data points differ from the average or mean. Measuring variability is critical in statistical analysis as it
provides insights into the spread, consistency, and reliability of data.
For example:
In a class of students, if the marks range from 40 to 90, the variability is high.
1. Range:
Definition: The difference between the maximum and minimum values in a dataset.
Example: In a dataset [15, 22, 35, 40, 55], the range is 55 - 15 = 40.
2. Variance:
Definition: The average of the squared differences between each data point and the mean.
Example: For the data set {4, 8, 6} with mean 6,
Variance = [(4 − 6)² + (8 − 6)² + (6 − 6)²] / 3 = (4 + 4 + 0) / 3 = 8/3 ≈ 2.67 (see the sketch after this list).
3. Standard Deviation:
Definition: The square root of variance, providing a measure of spread in the same units as
the data.
4. Interquartile Range (IQR):
Definition: The range between the first quartile (Q1) and the third quartile (Q3).
Formula: IQR = Q3 − Q1
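These measures can be reproduced with the Python statistics module; the short sketch below uses the datasets from the examples above.

```python
import statistics as st

data = [15, 22, 35, 40, 55]          # dataset from the range example
small = [4, 8, 6]                    # dataset from the variance example

range_ = max(data) - min(data)       # Range = 55 - 15 = 40
pop_var = st.pvariance(small)        # population variance = 8/3 ≈ 2.67
pop_sd = st.pstdev(small)            # standard deviation = sqrt(8/3)

q1, q2, q3 = st.quantiles(data, n=4) # quartile cut points; IQR = Q3 - Q1
iqr = q3 - q1

print(range_, round(pop_var, 2), round(pop_sd, 2), round(iqr, 2))
```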
Importance of Measuring Variability
1. Understanding Data Distribution: Variability reveals whether data points are tightly clustered around the mean or widely dispersed. For example, high variability in patient recovery times may indicate treatment inconsistencies.
2. Assessing Reliability and Consistency: Low variability suggests consistency, which is crucial in quality control and testing. For example, a factory producing screws with minimal size variation ensures product reliability.
3. Improving Predictive Models: Variability helps fine-tune predictive models by accounting for the spread of data. For example, in stock price prediction, standard deviation is used to estimate volatility.
4. Enabling Comparisons: Measures like the coefficient of variation (CV), the standard deviation expressed relative to the mean, allow comparisons across datasets with different scales. For example, comparing income inequality across countries with different average incomes.
5. Identifying Outliers and Trends: High variability may indicate the presence of outliers or anomalies. For example, in sales data, unusually high variability may indicate a seasonal trend.
6. Decision-Making: Variability informs risk assessment and resource allocation. For example, in finance, a high standard deviation of returns may signal a risky investment.
Applications of Variability Measures
1. Regression Analysis: Variance is used to assess the goodness of fit of a regression model. For example, explaining variations in house prices based on location and size.
3. Risk Management: Standard deviation and CV are used to assess financial risks. For example, analysing the variability of returns in a stock portfolio.
4. Quality Control: Variability measures ensure production processes meet specifications. For example, monitoring variations in car engine performance.
Conclusion
Measuring variability is not merely a statistical exercise; it is a cornerstone of advanced data analysis
and decision-making. By quantifying the spread of data, researchers and practitioners can gain
insights into patterns, predict future trends, and make informed decisions. Whether it is in finance,
healthcare, manufacturing, or social sciences, variability provides the lens through which data
becomes meaningful. Without understanding variability, analysis would lack depth, and conclusions
would risk being unreliable. Thus, it plays an indispensable role in advancing knowledge and practice
across domains.
(d) "Test the significance of the correlation coefficient using a t-test at a significance level of 5%."
The correlation coefficient (r) measures the strength and direction of the linear relationship between
two variables. While the correlation coefficient provides a numerical value, determining its statistical
significance is essential to assess whether the observed relationship is due to chance or is truly
significant. The t-test for correlation is a method to test the significance of r by examining whether it
differs significantly from zero (indicating no correlation).
Definition: A statistic that quantifies the strength and direction of a linear relationship
between two variables.
Range: −1 ≤ r ≤ 1
The t-Statistic for Testing r
t = r √(n − 2) / √(1 − r²)
Where:
r: sample correlation coefficient.
n: Sample size.
t: t-statistic, which follows a t-distribution with df = n − 2 degrees of freedom.
The calculated t-value is compared against the critical t-value from the t-distribution table at the
chosen significance level (α)
Steps to Test the Significance of r Using a t-Test
1. State the Hypotheses: H0: ρ = 0 (no correlation) versus H1: ρ ≠ 0.
2. Choose the Significance Level: typically α = 0.05.
3. Compute the t-Statistic using t = r √(n − 2) / √(1 − r²).
4. Determine the Degrees of Freedom: df = n − 2.
5. Find the Critical t-Value from the t-distribution table at the chosen α.
6. Compare t-Values: compare the calculated t-value with the critical value.
7. Draw a Conclusion: reject H0 if the calculated value exceeds the critical value; otherwise, fail to reject H0.
Problem: A researcher observes a correlation coefficient r=0.6 between study hours and exam scores
for a sample of 10 students. Test the significance of this correlation at α = 0.05.
Solution:
1. Hypotheses:
H0: ρ = 0 (no correlation)
H1: ρ ≠ 0
2. Significance Level:
α = 0.05
3. Degrees of Freedom:
df = n − 2 = 10 − 2 = 8
4. Compute the t-Statistic:
t = r √(n − 2) / √(1 − r²) = 0.6 × √8 / √(1 − 0.36) = (0.6 × 2.828) / √0.64 = 1.697 / 0.8 = 2.121
5. Compare t-Values:
The critical value for a two-tailed test at α = 0.05 with df = 8 is approximately 2.306. Since 2.121 < 2.306, the calculated t-value does not exceed the critical value.
6. Conclusion:
Fail to reject H0. The correlation r = 0.6 is not statistically significant at α = 0.05.
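A minimal sketch of the same test in Python, assuming SciPy is available for the critical-value lookup, is shown below; it reproduces the calculation for r = 0.6 and n = 10.

```python
from math import sqrt
from scipy import stats

r, n, alpha = 0.6, 10, 0.05
df = n - 2

# t-statistic for testing H0: rho = 0
t_stat = r * sqrt(df) / sqrt(1 - r ** 2)

# Two-tailed critical value from the t-distribution
t_crit = stats.t.ppf(1 - alpha / 2, df)

print(f"t = {t_stat:.3f}, critical value = {t_crit:.3f}")
if abs(t_stat) > t_crit:
    print("Reject H0: the correlation is statistically significant.")
else:
    print("Fail to reject H0: the correlation is not statistically significant.")
```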
Conclusion
Testing the significance of the correlation coefficient using a t-test is a vital statistical procedure to
validate observed relationships between variables. It helps researchers distinguish genuine
relationships from random associations. By following systematic steps and considering the
significance level, decision-makers and analysts can draw reliable conclusions. This process,
therefore, forms a cornerstone of advanced statistical analysis and applied research, ensuring both
accuracy and relevance in interpreting data relationships.
a) Arithmetic Mean and Median
The arithmetic mean is the sum of all data points divided by the number of points. It is sensitive to
extreme values (outliers), which can skew the result.
The median is the middle value when the data is sorted in ascending order. If there is an even
number of data points, the median is the average of the two middle numbers.
While they serve similar purposes, they have different mathematical properties. The mathematical properties of each measure are as follows:
Arithmetic Mean:
1. Additivity: The arithmetic mean has the property of additivity. This means that if we have two sets
of data with their respective means, the mean of the combined data set can be obtained by taking
the weighted average of the individual means.
2. Sensitivity to Magnitude: The arithmetic mean is influenced by the magnitude of all values in the
data set. Adding or subtracting a constant value to each data point will result in a corresponding
change in the mean.
3. Sensitivity to Outliers: The arithmetic mean is highly sensitive to outliers or extreme values. A
single outlier can have a significant impact on the mean value, pulling it towards the extreme value.
4. Unique Solution: The arithmetic mean is a unique value that represents the center of the data set.
There is only one value that satisfies the condition of minimizing the sum of squared deviations from
the mean.
Median:
1. Order Preservation: The median has the property of order preservation. It only considers the
position or rank of values and does not rely on their actual magnitudes. As a result, the median is not
affected by the specific values but rather the relative order of the values.
2. Robustness: The median is a robust measure of central tendency. It is less sensitive to outliers or
extreme values compared to the mean. Even if there are extreme values in the data set, the median
tends to remain relatively stable.
3. Non-Uniqueness: The median is not always a unique value. In the case of an odd number of
values, the median is the middle value. However, in the case of an even number of values, there are
two middle values, and the median is the average of these two values.
It's important to note that both the arithmetic mean and median have their strengths and
weaknesses. The choice between them depends on the nature of the data, the presence of outliers,
and the research question at hand. The arithmetic mean provides a more comprehensive view of the
data, but it can be heavily influenced by extreme values. The median, on the other hand, is more
robust to outliers and extreme values but may not capture the full picture of the data set.
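The difference in sensitivity to outliers can be seen in a short experiment; the salary figures below are hypothetical.

```python
import statistics as st

salaries = [30, 32, 35, 36, 38]        # hypothetical salaries (in thousands)
with_outlier = salaries + [250]        # one extreme value added

print(st.mean(salaries), st.median(salaries))          # 34.2 and 35
print(st.mean(with_outlier), st.median(with_outlier))  # mean jumps to about 70.2, median only 35.5
```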
b) Standard Error of the Mean (SEM)
The Standard Error of the Mean (SEM) measures how much the sample mean (X̄) is likely to deviate
from the actual population mean (μ). It is an estimate of the variability of the sample mean in
repeated sampling and helps assess the precision of the sample mean as an estimate of the
population mean.
SEx̄ = σ / √n
Where:
σ = Population standard deviation
n = Sample size
If the population standard deviation (σ) is unknown, it is estimated using the sample standard deviation (s):
SEx̄ = s / √n
Key Points about the SEM
1. Measures Precision:
A higher SEM suggests that the sample mean is more variable and less reliable.
As the sample size (n) increases, the SEM decreases, meaning the sample mean becomes a
more accurate estimate of the population mean.
Example: If two studies have sample sizes of 100 and 400, the study with 400 samples will
have a lower SEM and a more precise estimate.
2. Relationship with Standard Deviation:
The SEM is always smaller than the standard deviation (σ) of the population.
While standard deviation measures variability within a dataset, SEM measures variability in
sample means.
3. Use in Confidence Intervals:
The SEM is used to construct confidence intervals for the population mean:
X̅ ± Z × SEX̅
Where Z is the Z-score for the desired confidence level (e.g., 1.96 for 95%).
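A minimal sketch of the SEM and confidence-interval calculation, using a small hypothetical sample, is shown below.

```python
import statistics as st
from math import sqrt

sample = [52, 48, 51, 50, 47, 53, 49, 50]   # hypothetical measurements

n = len(sample)
mean = st.mean(sample)
s = st.stdev(sample)        # sample standard deviation (n - 1 in the denominator)

sem = s / sqrt(n)           # standard error of the mean
z = 1.96                    # Z-score for a 95% confidence level

lower, upper = mean - z * sem, mean + z * sem
print(f"mean = {mean:.2f}, SEM = {sem:.3f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```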
Conclusion:
The Standard Error of the Mean (SEM) is an essential statistical measure that quantifies the accuracy
of a sample mean in estimating the population mean. It is influenced by sample size and standard
deviation and is widely used in hypothesis testing and confidence interval estimation. Understanding
SEM helps researchers and analysts determine the reliability of sample data when making inferences
about a population.
c) Linear Regression
Linear regression is a statistical method used to model the relationship between a dependent
variable (Y) and one or more independent variables (X) using a straight-line equation. It is commonly
used in predictive analytics, trend analysis, and data modeling to determine the effect of one or
more predictor variables on an outcome variable.
1. Simple Linear Regression – Involves one independent variable (X) and one dependent
variable (Y).
2. Multiple Linear Regression – Involves multiple independent variables (X1, X2, ..., Xn) and one dependent variable (Y).
Y = a + bX + e
Where:
Y = dependent variable, X = independent variable, a = intercept, b = slope (regression coefficient), and e = error term (a worked sketch appears after the lists below).
Assumptions:
1. Linearity – The relationship between the independent and dependent variable must be
linear.
Advantages:
2. Widely used in forecasting – Useful in sales predictions, economic analysis, and financial modelling.
3. Computationally efficient – Simple to implement using statistical tools like Excel, Python,
and R.
Limitations:
1. Assumes a linear relationship – May not work well for non-linear data.
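A minimal sketch of simple linear regression, assuming NumPy is available and using hypothetical advertising-versus-sales data, is shown below; np.polyfit estimates the slope b and intercept a by ordinary least squares.

```python
import numpy as np

# Hypothetical data: advertising spend (X) and sales (Y)
X = np.array([10, 12, 15, 18, 20, 25], dtype=float)
Y = np.array([200, 220, 260, 300, 310, 380], dtype=float)

# Fit Y = a + bX by ordinary least squares
b, a = np.polyfit(X, Y, deg=1)   # polyfit returns the highest-degree coefficient first

print(f"Y = {a:.2f} + {b:.2f}X")
print("Predicted sales at X = 22:", a + b * 22)
```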
Conclusion:
Linear regression is a fundamental statistical tool used to understand and predict relationships
between variables. It is highly effective for data-driven decision-making but requires careful
validation of assumptions and consideration of data quality.
d) Time Series Analysis
Time Series Analysis is a statistical technique used to analyse and interpret data points collected or
recorded at successive time intervals. The primary goal of time series analysis is to identify patterns,
trends, and seasonal variations over time to make informed predictions about future data points.
1. Time Dependency – Observations are recorded sequentially over time, meaning past values
influence future values.
4. Cyclic Behaviour – Recurrent but non-fixed fluctuations in the data over time.
1. Trend Component (T) – Represents the long-term movement of data (e.g., increasing
population, inflation rate).
2. Seasonal Component (S) – Captures periodic fluctuations occurring at regular intervals (e.g.,
increased ice cream sales in summer).
3. Cyclical Component (C) – Long-term fluctuations that are not fixed (e.g., economic business
cycles).
4. Irregular Component (I) – Random, unpredictable variations (e.g., sudden economic crises).
1. Moving Averages Method – Smooths out fluctuations to identify the underlying trend.
2. Exponential Smoothing – Assigns more weight to recent observations for trend forecasting.
3. Decomposition Method – Breaks down a time series into its components (Trend, Seasonality,
Cycles, and Irregularity).
4. ARIMA (Auto-Regressive Integrated Moving Average) – A widely used statistical model for
forecasting time series data.
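As a minimal sketch of the moving-averages method, the example below smooths a short hypothetical monthly sales series with a 3-month window.

```python
# Hypothetical monthly sales figures
sales = [120, 135, 150, 145, 160, 180, 210, 190, 175, 205, 230, 250]

window = 3
moving_avg = [
    sum(sales[i:i + window]) / window
    for i in range(len(sales) - window + 1)
]

# Each value averages the current month and the two preceding months,
# smoothing short-term fluctuations so the underlying trend is easier to see.
print([round(v, 1) for v in moving_avg])
```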
Example: A retail company tracks monthly sales revenue over the last 5 years.
Seasonality: Higher sales are observed during festive seasons (e.g., Diwali, Christmas).
Irregular Variations: A sudden drop in sales due to economic slowdown or unforeseen events
(e.g., COVID-19 pandemic).
Applications:
2. Financial Markets – Stock price prediction, risk analysis, and economic forecasting.
3. Demand Forecasting – Useful for seasonal demand prediction in industries like tourism, retail, and agriculture.
Limitations:
Historical data dependency – Predictions rely solely on past trends, which may not always hold true.
Conclusion:
Time series analysis is a powerful tool used across various fields to analyse historical data, detect
patterns, and make future predictions. It helps businesses and policymakers make data-driven
decisions, although it requires careful selection of models and consideration of external influencing
factors.
Discrete vs. Continuous Frequency Distribution
Definition: A discrete frequency distribution represents countable values with specific frequencies, whereas a continuous frequency distribution represents data that can take any value within a given range.
Nature of Data: Discrete data consists of distinct and separate values (e.g., number of students in a class); continuous data falls within intervals (e.g., heights of students).
Representation: A discrete distribution is tabulated as individual values with their frequencies; a continuous distribution uses class intervals with corresponding frequencies.
Example: Number of cars owned by families (0, 1, 2, 3, etc.) versus heights of students (140-150 cm, 150-160 cm, etc.).
Arithmetic Mean vs. Median (choice by type of data)
Type of Data Used: The mean works well for symmetrical distributions, while the median is more useful for skewed distributions.
Example: The mean is used in financial analysis for returns distributions; the median is used in social sciences where extreme values affect the distribution.
Random (Probability) Sampling vs. Non-Random (Non-Probability) Sampling
Bias: Random sampling involves less bias because selection is random; non-random sampling carries higher bias due to subjective selection.
Use Case: Random sampling is used in large-scale surveys and research; non-random sampling is used in exploratory studies and qualitative research.
Example: Selecting students randomly from a school for a survey versus selecting the first 50 customers entering a store.
Class Limits vs. Class Interval
Definition: Class limits are the smallest and largest values that define a class in a frequency distribution; the class interval is the difference between the upper and lower class limits.
Components: Class limits consist of the lower-class limit and the upper-class limit; the class interval is defined as the range (width) of a class.
Example: In a class of 10-20, 10 is the lower limit and 20 is the upper limit; the class interval is 10 (20 − 10).