Data Structure UNIT-1
[BCS-301]
Shyam B. Verma
(Assistant Professor, CSE Department)
Syllabus
• Introduction: Basic Terminology, Elementary Data
Organization, Built in Data Types in C. Algorithm,
Efficiency of an Algorithm, Time and Space Complexity,
Asymptotic notations: Big Oh, Big Theta and Big Omega,
Time-Space trade-off. Abstract Data Types (ADT)
Text Books
• Aaron M. Tenenbaum, Yedidyah Langsam and Moshe J.
Augenstein, “Data Structures Using C and C++”, PHI Learning
Private Limited, Delhi India
• Horowitz and Sahni, “Fundamentals of Data Structures”,
Galgotia Publications Pvt Ltd, Delhi, India.
• Lipschutz, “Data Structures”, Schaum’s Outline Series, Tata
McGraw-Hill Education (India) Pvt. Ltd.
• Thareja, “Data Structure Using C” Oxford Higher Education.
Course Outcomes [CO]
• CO1: Describe how arrays, linked lists, stacks, queues, trees,
and graphs are represented in memory, how they are used by
algorithms, and their common applications
• CO2: Understand and implement linear data structures such as
Stack, Queue and Priority Queue.
• CO3: Analyze the computational efficiency of the Sorting and
Searching algorithms.
• CO4: Implement Trees and Graphs and perform various
operations on these data structures.
• CO5: Identify alternative implementations of data structures
with respect to their performance to solve a real-world problem.
Introduction
• Data is a collection of facts and figures or a set of values that can be
recorded.
• Data Structure is a way of collecting and organizing data in such a way
that we can perform operations on these data in an effective way.
• Data Structure is a particular way of storing and organizing data in the
memory of the computer so that these data can easily be retrieved and
efficiently utilized in the future when required.
• The choice of a good data structure makes it possible to perform varieties of
critical operations effectively.
• An efficient data structure also uses minimum memory space and minimum
execution time to process the task. The main goal is to reduce the space
and time complexities of different tasks.
Why is Data Structure important?
• Data structure is important because it is used in almost every program or
software system.
• Data Structures are necessary for designing efficient algorithms.
• Using appropriate data structures can help programmers save a good
amount of time while performing operations such as storage, retrieval, or
processing of data.
• Manipulation of large amounts of data is easier.
• Data structures can help improve the performance of our code by reducing
the time and space complexity of algorithms.
• Data structures can help you solve complex problems by breaking
them down into smaller, more manageable parts.
• Data structures provide a way to abstract the underlying complexity of a
problem and simplify it, making it easier to understand and solve.
Basic Types of Data Structures
• As we have discussed above, anything that can store data can be called
a data structure; hence Integer, Float, Boolean, Char etc. are all data
structures. They are known as Primitive Data Structures.
• Then we also have some complex data structures, which are used to store
large and connected data. Some examples of Abstract Data
Structures are:
▫ Arrays
▫ Linked List
▫ Tree
▫ Graph
▫ Stack, Queue etc.
• All these data structures allow us to perform different operations on data.
We select these data structures based on which type of operation is
required.
Characteristics of Data Structures
• Linear: In linear data structures, the data items are arranged in a linear,
sequential order. Example: Array, Linked List
• Non-Linear: In non-linear data structures, the data items are not arranged
in a sequence. Example: Tree, Graph
• Homogeneous: In homogeneous data structures, all the elements are of the
same type. Example: Array
• Non-Homogeneous: In non-homogeneous data structures, the elements may
or may not be of the same type. Example: Structures
• Static: Static data structures are those whose size and associated memory
locations are fixed at compile time. Example: Array
• Dynamic: Dynamic data structures are those which expand or shrink
depending upon the program's need during execution. Also, their
associated memory locations can change. Example: Linked List
created using pointers
Operations on Data Structure
The major or the common operations that can be performed on the data structures are:
• Traversal: To access each data element exactly once.
• Searching: We can search for any element in a data structure.
• Sorting: We can sort the elements of a data structure either in an ascending or
descending order.
• Insert: We can also insert the new element in a data structure.
• Update: We can also update the element, i.e., we can replace the element with another
element.
• Delete: We can also perform the delete operation to remove the element from the data
structure.
Basic Terminologies
• Data: We can define data as an elementary value or a collection of values. For
example, the Employee's name and ID are the data related to the Employee.
• Data Items: A Single unit of value is known as Data Item.
• Group Items: Data Items that have subordinate data items are known as Group
Items. For example, an employee's name can have a first, middle, and last name.
• Elementary Items: Data Items that cannot be divided into sub-items are known
as Elementary Items. For example, the ID of an Employee.
• Entity and Attribute: A class of certain objects is represented by an Entity. It
consists of different Attributes. Each Attribute symbolizes the specific property of
that Entity.
• Meaningful or processed data is called information. The
collection of data is organized into the hierarchy of fields, records
and files. A single elementary unit of information representing an
attribute of an entity is called a Field.
• Records are the collection of field values of a given entity.
Collection of records of the entities in a given entity set is called
a file. Each record may contain a certain field that uniquely
represents that record. Such a field is called a primary key.
• Based on their length, records may be classified into two types:
▫ Fixed-length records: All records contain the same data items,
with the same amount of space assigned to each item.
▫ Variable-length records: Records may contain data items of
different lengths.
What is an Algorithm ?
• An algorithm is any well-defined computational procedure that takes some value, or set of
values, as input and produces some value, or set of values, as output.
• An algorithm is a set of well-defined instructions to solve a particular problem. It takes a set of
input(s) and produces the desired output.
• Properties of an algorithm:
• Input- There should be 0 or more inputs supplied externally to the algorithm.
• Output- There should be at least 1 output obtained.
• Definiteness- Every step of the algorithm should be clear and well defined.
• Finiteness- The algorithm should have a finite number of steps.
• Correctness- Every step of the algorithm must generate a correct output.
Efficiency of an Algorithm
• The efficiency of an algorithm depends on its design,
implementation, resources, and memory required by it for the
computation.
• An algorithm is said to be efficient and fast, if it takes less time to
execute and consumes less memory space. The performance of an
algorithm is measured on the basis of following properties :
• Time Complexity
• Space Complexity
Space Complexity
• It is the amount of memory space required by the algorithm, during
the course of its execution. Space complexity must be taken seriously
for multi-user systems and in situations where limited memory is
available.
• Auxiliary Space is the extra space or temporary space used by an
algorithm.
• Space Complexity of an algorithm is the total space taken by the
algorithm with respect to the input size. Space complexity includes
both Auxiliary space and space used by input.
• Note: Space complexity depends on a variety of things such as the
programming language, the compiler, or even the machine running
the algorithm.
Time Complexity
• Time Complexity is a way to represent the amount of time required
by the program to run till its completion.
• It's generally a good practice to keep the time required to a
minimum, so that our algorithm completes its execution in the
minimum time possible.
• The time complexity of an algorithm depends on the behavior of
input:
▫ Worst-case
▫ Best-case
▫ Average-case
Best, Average and Worst-case
Analysis of Algorithms
• We all know that the running time of an algorithm increases (or
remains constant) as the input size (n) increases.
• Sometimes, even if the size of the input is the same, the running
time varies among different instances of the input.
• In that case, we perform best, average and worst-case analysis.
• The best case gives the minimum time, the worst case running time
gives the maximum time and average case running time gives the
time required on average to execute the algorithm.
int LinearSearch(int a[], int n, int item) {
    int i;
    for (i = 0; i < n; i++) {
        if (a[i] == item) {
            return a[i];
        }
    }
    return -1;
}
Best case happens when the item we are looking for is in the very first
position of the array: Θ(1)
Worst case happens when the item we are searching is in the last position
of the array or the item is not in the array: Θ(n)
Average case considers all possible inputs, calculates the running time
for each input, adds up all the calculated values and divides the sum by
the total number of inputs: (1+2+3+…+n+(n+1))/(n+1)
= O(n)
Example-1
int a = 0, b = 0;
for (i = 0; i < N; i++) {
    a = a + 1;
}
for (j = 0; j < M; j++) {
    b = b + 1;
}
• O(N + M) time, O(1) space
• The first loop is O(N) and the second loop is O(M).
Since N and M are independent variables, we can’t say which one is
the leading term. Therefore the time complexity of the given problem
will be O(N+M).
Since the variables’ sizes do not depend on the size of the input,
the space complexity will be constant, or O(1).
Example-2
int i, j, k = 0;
for (i = 1; i <= n; i++) {
    for (j = 1; j <= n; j = j + 1) {
        k = k + n / 2;
    }
}
• The inner loop runs n times for each of the n iterations of the outer
loop, so the complexity of this fragment is O(n²).
Example-3
counter = 0;
for (i = 1; i <= n; i++)
{
    if (A[i] == 1)
        counter++;
    else {
        f(counter);
        counter = 0;
    }
}
• The complexity of this program fragment is θ(n): assuming f(m) takes
θ(m) time, the total work done by all calls to f is bounded by the total
number of increments of counter, which is at most n.
Asymptotic Notations
• When it comes to analyzing the complexity of any algorithm in
terms of time and space, we can never provide an exact number to
define the time required and the space required by the algorithm,
instead we express it using some standard notations, also known
as Asymptotic Notations.
• When we analyse any algorithm, we generally get a formula to
represent the amount of time required for execution or the time
required by the computer to run the lines of the code, number
of memory accesses, number of comparisons, temporary
variables occupying memory space etc.
• Asymptotic Notations allow us to analyze an algorithm's running
time by identifying its behavior as its input size grows. This is also
referred to as an algorithm's growth rate.
Let us take an example: if some algorithm has a time complexity of
T(n) = n² + 3n + 4, which is a quadratic equation, then for large values of n, the
3n + 4 part will become insignificant compared to the n² part.
Growth rate of functions
• Logarithmic Function - log n
• Linear Function - an + b
• Quadratic Function - an² + bn + c
• Polynomial Function - aₖnᵏ + . . . + a₂n² + a₁n + a₀, where k is some
constant
• Exponential Function - aⁿ, where a is some constant
GATE-CS-2021
Consider the following three functions.
f1 = 10ⁿ
f2 = n log n
f3 = n√n
Arrange the functions in increasing order of asymptotic growth rate:
f2 < f3 < f1
AKTU Questions
• Define best case, average case and worst case for analyzing
the complexity of a program. [2022-23] [2 Marks]
• Rank the following typical bounds in increasing order of
growth rate: O(log n), O(n4), O(1), O(n2 log n) [2021-22] [2
Marks]
• Define the following terms: (i) Time complexity (ii) Space
complexity (iii) Asymptotic notation (iv) Big O notation
[2019-20] [10 Marks]
What is Asymptotic Behavior?
• The word Asymptotic means approaching a value or curve
arbitrarily closely (i.e., as some sort of limit is taken).
• In asymptotic notations, we use a model that ignores
the constant factors and insignificant parts of an
expression, to devise a better way of representing
complexities of algorithms in a single term, so that
comparison between algorithms can be done easily.
• Let's take an example to understand this:
• If we have two algorithms with the following expressions representing the
time required by them for execution:
• Expression 1: (20n² + 3n - 4)
• Expression 2: (n³ + 100n - 2)
• Now, as per asymptotic notations, we should just worry about how the
function will grow as the value of n (input) grows, and that will entirely
depend on n² for Expression 1, and on n³ for Expression 2. Hence, we
can clearly say that the algorithm whose running time is represented by
Expression 2 will grow faster than the other one, simply by analyzing
the highest power of n and ignoring the constant coefficients (20 in 20n²)
and the insignificant parts of the expressions (3n - 4 and 100n - 2).
• The main idea behind casting aside the less important part is to make
things manageable.
• All we need to do is first analyze the algorithm to find an expression that
defines its time requirements, and then analyse how that expression will
grow as the input (n) grows.
Types of Asymptotic Notations
We use three types of asymptotic notations to represent the
growth of any algorithm, as input increases:
▫ Big Theta (Θ)
▫ Big Oh(O)
▫ Big Omega (Ω)
Upper Bounds: Big-O
• This notation is known as the upper bound of the algorithm, or a
Worst Case of an algorithm.
• It tells us that the running time of the algorithm will never exceed the
bounding function, for any value of input n beyond a certain point.
• Consider Linear Search algorithm, in which we traverse an array
elements, one by one to search a given number.
• In Worst case, starting from the front of the array, we find the
element we are searching for at the end, which will lead to a time
complexity of n, where n represents the number of total elements.
• If the element we are searching for is the first element of the array,
the time complexity will be constant.
• So, the time complexity is O(n) in worst case.
Let f(n) and g(n) be two nonnegative functions indicating the running
time of two algorithms. We say g(n) provides an upper bound to f(n)
if there exist some positive constants c and n₀ such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀. It is denoted as f(n) = Ο(g(n)).
Lower Bounds: Omega
• Big Omega notation is used to define the lower bound of any
algorithm or we can say the best case of any algorithm.
• This always indicates the minimum time required for any
algorithm for all input values, therefore the best case of any
algorithm.
• In simple words, when we represent the time complexity of an
algorithm in the form of big-Ω, we mean that the algorithm will take
at least this much time to complete its execution. It can definitely
take more time than this too.
Let f(n) and g(n) be two nonnegative functions indicating the running time of two
algorithms. We say the function g(n) is a lower bound of function f(n) if there exist
some positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀. It is
denoted as f(n) = Ω(g(n)).
Tight Bounds: Theta
• When we say tight bounds, we mean that the time complexity
represented by the Big-Θ notation is like the average value or range
within which the actual time of execution of the algorithm will be.
• For example, if for some algorithm the time complexity is
represented by the expression 3n² + 5n, and we use the Big-Θ
notation to represent this, then the time complexity would be Θ(n²),
ignoring the constant coefficient and removing the insignificant part,
which is 5n.
• Here, in the example above, a complexity of Θ(n²) means that the
running time for any input n will remain between c₁·n² and c₂·n²,
where c₁ and c₂ are two constants, thereby tightly binding the expression
representing the growth of the algorithm.
Let f(n) and g(n) be two nonnegative functions indicating the running time of two
algorithms. We say the function g(n) is a tight bound of function f(n) if there exist
some positive constants c₁, c₂, and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all
n ≥ n₀. It is denoted as f(n) = Θ(g(n)).
ISRO CS 2015
int recursive(int n)
{
    if (n == 1)
        return (1);
    else
        return (recursive(n - 1) + recursive(n - 1));
}
Since each call makes two recursive calls on n - 1, the recurrence for the running
time is T(n) = 2T(n−1) + 1, which gives a time complexity of O(2ⁿ). (The function
itself returns 2ⁿ⁻¹.)
[Figure: an array of length 10; the element at index 6 has the value 27.]
As per the above illustration, the following are the important points to be considered:
1. The index starts with 0.
2. The array length is 10, which means it can store 10 elements.
3. Each element can be accessed via its index. For example, we can fetch the element at index 6, which is 27.
Calculation of address
• The address of any particular element in the 1-D array can be
calculated by the following equation:
Address of element a[k] = Base address + W*E
• Here, W is the size of each element of the array, and E is the
effective index (E=k-LB) of element we need in the array.
• Example:
We wish to find the address of element a[6] of the 1-dimensional
array ‘a[10]’, whose base address is 1000. The size of each element
in this example is 4 bytes. So the calculation can be
performed in the following way:
Address of element a[6] = 1000 + 4*6
= 1000 + 24
= 1024
Basic Operations
Following are the basic operations supported by an array.
• Traversal − Process all the array elements one by one.
• Insertion − Adds an element at the given index.
• Deletion − Deletes an element at the given index.
• Search − Search an element using the given index or by the
value.
• Update − Updates an element at the given index.
Traversal of Array
• The method of processing each element in the array exactly
once for a specific purpose is known as Traversal. In an array, traversal
starts from the first element and ends at the last element of
the array.
• For example: printing every element, counting the total number of
elements, or performing any process on these elements.
• The time complexity of the traversal algorithm is O(n) in all cases.
Algorithm for Array Traversal
Step 1: START = 0
Step 2: Repeat Step-3 while (START<N)
Step 3: Read A[START]
START = START + 1
Program for Array Traversal
#include<stdio.h>
#define N 5
void traverse(int *A);
int main()
{
    int A[N] = {10, 20, 30, 40, 50};
    traverse(A);
}
void traverse(int *A)
{
    int START = 0;
    while (START < N)
    {
        printf("%d\n", A[START]);
        START = START + 1;
    }
}
Insertion in an Array
Following can be a situation with array insertion:
▫ Insertion at the beginning of an array
▫ Insertion at the given index of an array
▫ Insertion after/before the given index of an array
Insertion at the Beginning of an Array
• When the insertion happens at the beginning, it causes all the
existing data items to shift one step towards right.
• We assume A is an array with N elements. The maximum number
of elements it can store is defined by MAX. We shall first check whether
the array has any empty space to store an element, and then
proceed with the insertion process. The time complexity is O(n).
Algorithm
1. Begin
2. IF N == MAX
return
3. ELSE N = N + 1
a. For all Elements in A Move to next adjacent location
b. A[FIRST] = New Element
4. End
Insertion at the given index of an array
• In this scenario, we are given the exact location (index) of an array where a new
data element (value) needs to be inserted. First we shall check if the array is full, if
it is not, then we shall move all data elements from that location to one step towards
right. This will make room for a new data element.
• We assume A is an array with N elements. The maximum number of elements it
can store is defined by MAX.
Algorithm
1. Begin
2. IF N == MAX
return
3. ELSE N = N + 1
a. Input index
b. For All Elements from A[index] to A[N], Move to next adjacent location
c. A[index] = New Element
4. End
Insertion after the given index of an array
• In this scenario we are given a location (index) of an array after which a
new data element (value) has to be inserted. Only the seek process varies,
the rest of the activities are the same as in the previous example.
• We assume A is an array with N elements. The maximum number of
elements it can store is defined by MAX.
• Algorithm
1. Begin
2. IF N == MAX
return
3. ELSE
N = N + 1
a. Input index
b. For All Elements from A[index + 1] to A[N] Move to next adjacent location
c. A[index + 1] = New Element
4. End
The time complexity of the insertion algorithm is O(1) in the best case
(insertion at the end when space is available) and O(n) in the worst case
(insertion at the beginning, which shifts all n elements).
The time is linearly dependent on the number of elements, which is not bad, but
not that good either.
Binary Search
• Binary Search is useful when there are large numbers of elements in
an array and they are sorted. So a necessary condition for Binary
search to work is that the list/array should be sorted.
GATE-CS-2015 (Set 2)
• An unordered array contains n distinct elements. The number of
comparisons to find an element in this list that is neither maximum nor
minimum is:
• Θ(1)
• Θ(nlogn)
• Θ(n)
• Θ(logn)
• Θ(1). We only need to consider any 3 elements and compare them. Using
3 comparisons, we can find the middle element of the three, which is neither
the maximum nor the minimum. So the number of comparisons is constant,
which makes the time complexity Θ(1).
UGC NET CS 2016 July – III
Let A[1...n] be an array of n distinct numbers. If i < j and A[i] > A[j],
then the pair (i, j) is called an inversion of A. What is the expected
number of inversions in any permutation on n elements ?
▫ n(n-1)/2
▫ n(n-1)/4
▫ n(n+1)/4
▫ 2n[logn]
▫ n(n-1)/4
GATE CS 2005
• A program P reads in 500 integers in the range
[0..100] representing the scores of 500 students. It then
prints the frequency of each score above 50. What would be
the best way for P to store the frequencies?
▫ An array of 50 numbers
▫ An array of 100 numbers
▫ An array of 500 numbers
▫ A dynamically allocated array of 550 numbers
▫ An array of 50 numbers
Advantages and disadvantages of Arrays
Advantages
1. Reading an array element is simple and efficient, because any
element can be accessed directly using its index.
2. Arrays are a foundation for other data structures. For example, other
data structures such as Linked List, Stack, Queue etc. can be
implemented using arrays.
3. All the elements of an array can be accessed using a single name
(the array name) along with the index, which is readable, user-friendly
and efficient, rather than storing those elements in separate
variables.
4. All the elements in array are stored at contiguous memory
locations.
Disadvantages
1. While using an array, we must decide the size of
the array in the beginning (static memory allocation), so if we
are not aware how many elements we are going to store in the array, it
makes the task difficult.
2. The size of the array is fixed, so if at a later point we need to store
more elements in it, it can’t be done. On the other hand, if we
store fewer elements than the declared size, the remaining
allocated memory is wasted.
3. Insertion and deletion operations require O(n) shifting of
elements.
Two dimensional (2D) arrays
• An array of arrays is known as a 2D array. The two-dimensional (2D) array is also
known as a matrix. A matrix can be represented as a table of rows and columns.
• A two-dimensional array is stored in the form of a row-
column matrix, where the first index designates the row and
the second index designates the column.
These matrices are stored in the memory as given below.
▫ Row-Major order Implementation
▫ Column-Major order Implementation
Row-Major order Implementation
• In Row-Major implementation of arrays, the array is
stored in memory row by row, i.e. the first
row of the array is stored in memory first, then the second, and so on.
Suppose we have an array named arr having 3 rows and 4 columns;
then it is stored in memory in the order
arr[0][0], arr[0][1], …, arr[0][3], arr[1][0], …, arr[2][3].
Column-Major Implementation
• In Column-Major implementation of arrays, the array is stored in
memory column by column, i.e. the first column of the array is
stored in memory first, then the second, and so on.
UGC NET CS 2014 Dec - II
• Consider a two-dimensional array A[20][10]. Assume 4 words per
memory cell, the base address of array A is 100, elements are stored
in row-major order and first element is A[0][0]. What is the address
of A[11][5] ?
▫ 560
▫ 460
▫ 570
▫ 575
▫ 560
GATE-CS-2014-(Set-3)
Let A be a square matrix of size n x n. What is the expected output of the following program?
C = 100
for i = 1 to n do
    for j = 1 to n do
    {
        Temp = A[i][j] + C
        A[i][j] = A[j][i]
        A[j][i] = Temp - C
    }
for i = 1 to n do
    for j = 1 to n do
        Output(A[i][j]);
• The matrix A itself
• Transpose of matrix A
• Adding 100 to the upper diagonal elements and subtracting 100 from
diagonal elements of A
• None of the above
• The matrix A itself: every pair (i, j) is swapped twice (once for (i, j) and
once for (j, i)), so the matrix is restored, and the +C and -C cancel out.
• Suppose multidimensional arrays P and Q are declared as P(-2:12, 4:22) and
Q(1:9, -5:15, -3:8), stored in column-major order.
(i) Find the length of each dimension of P and Q.
(ii) Find the number of elements in P and Q.
(iii) Assuming base address(Q) = 400 and W = 4, find the effective indices E1, E2 and E3 and
the address of element Q[3, 3, 3]. [AKTU-2018][10 Marks]
Applications of Array
• Array stores data elements of the same data type.
• Arrays help to maintain large data under a single variable name. This
avoids the confusion of using multiple variables.
• Arrays can be used for sorting data elements. Different sorting
techniques like Bubble sort, Insertion sort, Selection sort, etc. use
arrays to store and sort elements easily.
• Arrays can be used for performing matrix operations.
• Many databases, small and large, consist of one-dimensional and
two-dimensional arrays whose elements are records.
• Arrays can be used for CPU scheduling.
• Arrays are also used to implement other data structures like Stacks,
Queues, Heaps, Hash tables, etc.
Sparse Matrix
• A matrix that contains more ZERO values than NON-ZERO values is known
as a sparse matrix.
In other words, a sparse matrix is a matrix which contains very few non-zero elements.
• When a sparse matrix is represented with a 2-dimensional array, we waste a lot
of space to represent that matrix.
https://visualgo.net/en/list?slide=1
https://cmps-people.ok.ubc.ca/ylucet/DS/Algorithms.html
Singly Linked List
• Singly linked lists contain nodes which have a data part as well as an
address part i.e. next, which points to the next node in the
sequence of nodes.
• A singly linked list is formed when many such nodes are linked
together to form a chain. Each node points to the next node present
in the order. The first node is always used as a reference to traverse
the list and is called HEAD. The last node points to NULL.
Declaring a Linked list
In C language, a linked list can be implemented using structure and pointers.
struct LinkedList
{
    int data;
    struct LinkedList *next;
};
• The above definition is used to create a node in the list. The data field stores the
value and the next is a pointer to store the address of the next node.
• Noticed something unusual with next?
• In place of a data type, struct LinkedList is written before next. That's
because it's a self-referencing pointer: a pointer that points to the same
kind of structure it is a part of. Here next is a part of a node and it will
point to the next node.
Inserting a node in Singly Linked List
A node can be added in three ways:
1) At the front (beginning) of the linked list
2) At a given position k.
3) At the end of the linked list.
Insertion at the beginning of the list
• In the first case, we make a new node, point its next to the head
of the existing list, and then change the head to the newly added node.
It is similar to the picture given below.
• Time complexity of this algorithm is O(1) as it does constant
amount of work.
Insertion of a node at a given position
• Time complexity of this algorithm is O(n) as n can be any
position.
Insertion at the end of the linked list
• Time complexity of this algorithm is O(n) where n is the number
of nodes in linked list. Since there is a loop from head to end, the
function does O(n) work.
• This method can also be optimized to work in O(1) by keeping an
extra pointer to tail (end) of the linked list.
ISRO CS 2014
Consider a singly linked list where F and L are pointers to the first and last
elements respectively of the linked list. The time for performing which of the
given operations depends on the length of the linked list?
▫ Delete the first element
▫ Interchange the first two elements
▫ Delete the last element
▫ Add an element at the end of the list
▫ Delete the last element: we must traverse the whole list to find the node
just before L.
Polynomial Representation
A polynomial p(x) is an expression in variable x of the form (axⁿ + bxⁿ⁻¹ +
…. + jx + k), where a, b, c, …, k are real numbers and n is a non-negative
integer, called the degree of the polynomial. An essential
characteristic of the polynomial is that each term in the polynomial expression
consists of two parts:
• Coefficient
• Exponent
Representation of Polynomial
Polynomial can be represented in the various ways. These are:
• By the use of arrays
• By the use of Linked List
Representation of Polynomial Using Arrays
Addition of Two Polynomial
• Adding two polynomials using arrays is a straightforward
method, since both arrays may be added up element-
wise, beginning from 0 to n-1, resulting in the addition of the
two polynomials.
• Addition of two polynomials using linked list requires
comparing the exponents, and wherever the exponents are
found to be same, the coefficients are added up.
• For terms with different exponents, the complete term is
simply added to the result thereby making it a part of
addition result.
Multiplication of Two Polynomial
• Multiplication of two polynomials however requires
manipulation of each node such that the exponents are
added up and the coefficients are multiplied.
• After each term of first polynomial is operated upon with
each term of the second polynomial, then the result has to be
added up by comparing the exponents and adding the
coefficients for similar exponents and including terms as
such with dissimilar exponents in the result.
Representation of Polynomial Using Linked List
To understand this concept better, let's first brush up on the basic contents
that are required.
• A linked list is a data structure that stores each element as an object in a
node of the list. Every node contains two parts: data and a link to the next
node.
• A polynomial is a mathematical expression that consists of variables and
coefficients, for example x² - 4x + 7.
• In the polynomial linked list, the coefficients and exponents of the
polynomial are stored in the data part of each node.
• For adding two polynomials that are stored as linked lists, we need to
add the coefficients of variables with the same power. Each node
contains three members: the coefficient, the exponent, and a link to the
next node. A linked list that is used to store a polynomial such as
4x⁷ + 12x² + 45 has one node per term.
Addition of two polynomials using Linked list
• When adding two polynomials that are represented by linked lists,
we compare the exponent values of the nodes. For equal values
of the exponent, we add the coefficients.
• Example:
Input: p1 = 13x⁸ + 7x⁵ + 32x² + 54, p2 = 3x¹² + 17x⁵ + 3x³ + 98
Output: 3x¹² + 13x⁸ + 24x⁵ + 3x³ + 32x² + 152
Multiplication of two polynomials using Linked list
Two Variables Polynomial
• Polynomials in two variables are algebraic expressions consisting of
terms of the form axⁿyᵐ. The degree of each term in a polynomial in
two variables is the sum of the exponents in the term, and
the degree of the polynomial is the largest such sum.
• Here are some examples of polynomials in two variables and their
degrees:
x²y − 6x³y¹² + 10x² − 7y + 1, degree: 15
6x⁴ + 8y⁴ − xy², degree: 4
x⁴y² − x³y³ − xy + x⁴, degree: 6
6x¹⁴ − 10y³ + 3x − 11y, degree: 14
Advantages of Linked Lists
• They are dynamic in nature, i.e. memory is allocated when
required.
• The list is not required to be contiguous in
memory. A node can reside anywhere in memory and be
linked with the others to make a list. This achieves optimized
utilization of space.
• The list size is limited only by the memory size and doesn't need to be
declared in advance.
• Insertion and deletion operations can be easily implemented.
• Stacks and queues can be easily implemented.
Disadvantages of Linked Lists
• Memory is wasted, as the pointers require extra memory
for storage.
• No element can be accessed randomly; we have to access
the nodes sequentially.
• Not cache friendly. Since array elements are contiguous
locations, there is locality of reference which is not there in
case of linked lists.
• Reverse Traversing is difficult in linked list.
Applications of Linked Lists
• Linked lists are used to implement stacks, queues, graphs, etc.
• Polynomial representation
• Representation of sparse matrices
• Symbol table creation in compiler
• Mailing list
• Memory management
• Linked lists let you insert elements at the beginning and end of the
list.
• In Linked Lists we don't need to know the size in advance.