Control Structures: Control structures are fundamental components of programming languages that
determine the flow of execution in a program. They allow developers to control how and when certain
blocks of code are executed, enabling decision-making, repetition, and branching.
1. The sequence structure
The first type of control structure is the sequence structure, the most elementary of the three. In a sequence structure the steps of an algorithm are carried out one after another, so no condition step is required. The sequence structure is the logical equivalent of a straight line.
Example 1: Suppose you are required to design an algorithm for finding the average of six numbers, and the
sum of the numbers is given. The pseudocode will be as follows:
Start
Get the sum
Average = sum / 6
Output the average
Stop
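A minimal Java sketch of the same algorithm follows; the class name and the example value of the sum are assumptions for illustration, not part of the pseudocode above.

// A minimal Java sketch of the sequence structure: no branches, no loops.
public class AverageOfSix {
    public static void main(String[] args) {
        double sum = 57.0;          // Get the sum (assumed value for illustration)
        double average = sum / 6;   // Average = sum / 6
        System.out.println("Average = " + average);  // Output the average
    }
}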
2. Decision Structure or Selection Structure
The decision structure, more commonly known as a selection structure, is the case where the algorithm must choose between two alternatives by making a decision based on a given condition. Selection structures are also called CASE selection structures when there are more than two alternatives to choose from. This structure is often illustrated with a flowchart.
In pseudocode form:
If condition is true
Then do Task-A
Else
Do Task-B
In this example, the condition is evaluated; if the condition is true, Task-A is executed, and if it is false, Task-B is executed.
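A hedged Java sketch of the same selection structure; the condition and the two println calls are placeholders standing in for Task-A and Task-B.

public class SelectionDemo {
    public static void main(String[] args) {
        int x = 10;                        // assumed example value
        if (x > 0) {                       // the condition
            System.out.println("Task-A");  // executed when the condition is true
        } else {
            System.out.println("Task-B");  // executed when the condition is false
        }
    }
}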
3. Repetition or Iteration Structure
A third structure causes certain steps to be repeated. The repetition structure can be implemented using:
• Repeat Until Loop
• The While Loop
• The For Loop
Any program instruction that repeats some statement or sequence of statements a number of times is called an iteration or a loop. The commands used to create iterations or loops are all based on logical tests. There are three constructs for iterations or loops in our pseudocode.
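As a sketch, the three loop constructs might look like this in Java. Java has no repeat-until loop, so a do-while (whose test is the logical negation of "until") is shown as the closest analogue.

public class LoopDemo {
    public static void main(String[] args) {
        // The While Loop: the test runs before each iteration.
        int i = 0;
        while (i < 3) { System.out.println("while: " + i); i++; }

        // The For Loop: counter-controlled repetition.
        for (int j = 0; j < 3; j++) { System.out.println("for: " + j); }

        // Repeat Until: no direct Java equivalent; do-while is the closest
        // analogue (the body runs at least once; note the inverted test).
        int k = 0;
        do { System.out.println("do-while: " + k); k++; } while (k < 3);
    }
}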
Programming Language Paradigms
Paradigm means an example that serves as a pattern, approach or model. A programming paradigm therefore means a pattern that serves as a school of thought for the programming of computers. It is the preferred approach to programming that a language supports, and also a classification of programming languages based on their features, even though most popular languages support multiple paradigms. A good programmer might write great software using any programming paradigm.
1. Imperative paradigm
The word 'imperative' can be used both as an adjective and as a noun. As an adjective it means 'expressing a command or plea', in other words, asking for something to be done. As a noun, an imperative is a command or an order, for example, 'First do this and next do that'. This short phrase describes, in a nutshell, the spirit of the imperative paradigm. The basic idea is the command, which has a measurable effect on the program state. The phrase also reflects that the order of the commands is important: 'first do that, then do this' would be different from 'do this, then do that'. It is the oldest but still the dominant paradigm. Examples of imperative languages are FORTRAN (FORmula TRANslation), COBOL (Common Business-Oriented Language), Pascal, C, and Ada.
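As a brief illustration (an assumed example, not from the original text), the following Java fragment shows commands whose order matters because each one updates program state:

public class ImperativeSketch {
    public static void main(String[] args) {
        // "First do this and next do that": each command updates program
        // state, and the order of the commands matters.
        int balance = 100;
        balance = balance + 50;   // first do this -> 150
        balance = balance * 2;    // next do that  -> 300
        // Reversing the order gives a different result: (100 * 2) + 50 = 250.
        System.out.println(balance);
    }
}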
2. Functional Paradigm
In the functional paradigm, a program is considered a mathematical function. In this context, a function is a black box that maps a list of inputs to a list of outputs (Figure 1). For example, 'summation' can be considered a function with n inputs and only one output: the function takes the n inputs, adds them, and creates the sum. Functional programming is in many respects a simpler and cleaner programming paradigm than the imperative one. The reason is that the paradigm originates from a purely mathematical discipline: the theory of functions. The paradigm also fits well with computations driven by need (lazy evaluation). Examples of functional languages are LISP (LISt Processing), Scheme, Haskell, F#, etc.
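Java is not one of the functional languages listed, but its streams can sketch the style: summation expressed as a mapping from n inputs to one output, with no state mutated along the way.

import java.util.List;

public class FunctionalSketch {
    public static void main(String[] args) {
        // "Summation" as a function from n inputs to one output,
        // expressed without mutating any variable.
        List<Integer> inputs = List.of(1, 2, 3, 4);
        int sum = inputs.stream().reduce(0, Integer::sum);
        System.out.println(sum);  // 10
    }
}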
3. Logic paradigm
The logic paradigm is dramatically different from the other three main programming paradigms. It uses the principle of logical reasoning to answer queries and is based on formal logic. The logic paradigm fits extremely well in problem domains that deal with the extraction of knowledge from basic facts and relations, but it seems less natural in the more general areas of computation. For example, a famous rule of deduction in logic is:
If (A is B) and (B is C), then (A is C)
One famous logic language is PROLOG (PROgramming in LOGic)
4. Object-oriented paradigm
The object-oriented paradigm has gained great popularity in recent decades. The primary and most direct reason is undoubtedly its strong support for encapsulation and for the logical grouping of program aspects. These properties are very important when programs become larger and larger. The underlying, and somewhat deeper, reason for the success of the object-oriented paradigm is probably its conceptual anchoring: an object-oriented program is constructed around concepts that are important in the problem domain of interest, and it draws on the theory of concepts and on models of human interaction with real-world phenomena.
There are four key object-oriented programming concepts: data abstraction, encapsulation, inheritance, and polymorphism. Data abstraction focuses on the essential characteristics of some object, which yields clearly defined boundaries; it is relative to the perspective of the viewer. Encapsulation is the compartmentalisation of structure and behaviour so that the details of an object's implementation are hidden. Inheritance allows classes to use their parent classes' behaviour and structure; it improves reliability and manageability and allows code reusability. Polymorphism ensures that different implementations can be hidden behind a common interface. This means we can define several operations with the same name that do different things in related classes. Some object-oriented programming languages include C++, Java, C#, etc.
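The following Java sketch illustrates these concepts on an assumed Shape example (the hierarchy is not from the text): encapsulation via private fields, inheritance via Circle and Square extending Shape, and polymorphism via one call that behaves differently in related classes.

// A minimal Java sketch of inheritance and polymorphism.
abstract class Shape {
    abstract double area();               // common interface (polymorphism)
}

class Circle extends Shape {
    private final double r;               // encapsulation: state is hidden
    Circle(double r) { this.r = r; }
    double area() { return Math.PI * r * r; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    double area() { return side * side; }
}

public class ShapeDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape s : shapes) {
            // The same call does different things in related classes.
            System.out.println(s.area());
        }
    }
}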
Compiler
A compiler translates a program written in one high-level language, the source code, into another language, the object code. Most compilers are organized into three stages: a front end, an optimizer, and a back end. The front end is responsible for understanding the program: it makes sure the program is valid and transforms it into an intermediate representation, a data structure used by the compiler to represent the program. The optimizer improves the intermediate representation to increase the speed or reduce the size of the executable which is ultimately produced by the compiler. The back end converts the optimized intermediate representation into the output language of the compiler. Compilation differs from interpretation: a compiler reads in a program but, instead of running the program, translates it into some other language, such as bytecode or machine code. The translated code may either be directly executed by hardware, or serve as input to another interpreter or another compiler (Figure 2).
Figure 2: Compilation Process
Different Phases of Compilation
The major phases of the compilation process are lexical analysis, syntactic analysis, semantic analysis, code optimisation, and code generation.
A. Lexical analysis
The aim of lexical analysis is to read the symbols (characters) forming the program sequentially from the input and to group these symbols into meaningful logical units, which we call tokens. For example, the lexical analyser of C or Java, when presented with the string x = 1 + y++; will produce 7 tokens: the identifier x, the assignment operator =, the number 1, the addition operator +, the identifier y, the auto-increment operator ++ and finally the command termination token ;.
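A toy hand-written scanner, given as a sketch only (real lexical analysers are usually generated from regular expressions), reproduces the seven tokens above:

public class TinyLexer {
    public static void main(String[] args) {
        String src = "x = 1 + y++;";
        int i = 0;
        while (i < src.length()) {
            char c = src.charAt(i);
            if (Character.isWhitespace(c)) { i++; continue; }
            int start = i;
            if (Character.isLetter(c)) {                 // identifier
                while (i < src.length() && Character.isLetterOrDigit(src.charAt(i))) i++;
            } else if (Character.isDigit(c)) {           // number
                while (i < src.length() && Character.isDigit(src.charAt(i))) i++;
            } else if (c == '+' && i + 1 < src.length() && src.charAt(i + 1) == '+') {
                i += 2;                                  // auto-increment ++
            } else {
                i++;                                     // single-char token: =, +, ;
            }
            System.out.println("token: " + src.substring(start, i));
        }
    }
}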
B. Syntactic analysis
Once the list of tokens has been constructed, the syntactic analyser (or parser) seeks to construct a derivation
tree for this list. This is, clearly, a derivation tree in the grammar of the language. Each leaf of this tree must
correspond to a token from the list obtained by the scanner (Figure 3).
Figure 3: Scanning and Parsing
C. Semantic analysis
The derivation tree (which represents the syntactic correctness of the input string) is subjected to checks of the language's various context-based constraints. As we have seen, it is at this stage that declarations, types, the number of function parameters, etc., are processed. As these checks are performed, the derivation tree is augmented with the information derived from them and new structures are generated.
D. Code Optimisation
The code obtained from the preceding phases by repeatedly traversing the derivation tree is fairly inefficient.
There are many optimisations that can be made before generating object code. Typical operations that can be
performed are:
i. Removal of useless code (dead code removal).
ii. In-line expansion of function calls.
iii. Subexpression factorisation.
iv. Loop optimisations.
E. Code generation
Starting with the optimised intermediate form, the final object code is generated. There follows, in general, a
last phase of optimisation which depends upon the specific characteristics of the object language. In a
compiler that generates machine code, an important part of this last phase is register assignment (decisions
as to which variables should be stored in which processor registers). This is a choice of enormous
importance for the efficiency of the final program.
Attributes/Characteristics of a Good Programming Language
Simplicity : Easy to learn and use.
Readability : Code should be easy to understand and maintain.
Efficiency : Programs should execute quickly and use minimal resources.
Portability : Code should run on different platforms without modification.
Reliability : Should minimize errors and handle exceptions gracefully.
Extensibility : Ability to add new features or libraries.
Security : Protect against vulnerabilities like memory leaks or buffer overflows.
Language Definition Structure
As a beginning programmer, you are expected to have a clear picture of how you want your program to look, or how it should behave after compilation. It is important for every programmer to have an understanding of how a programming language's definition is structured.
Grammars : In formal language theory, a grammar is a set of production rules for strings in the language. The rules describe how to form, from the language's alphabet, strings that are valid according to the language's syntax; the grammar consists of a finite set of production rules. Formal language is an important topic area in the study of programming languages. Every programming language has its grammar, which should be well defined while building the language translator (the compiler in particular).
A parse tree is also known as a derivation tree. It is defined as an ordered, rooted tree that represents the syntactic structure of a string according to some context-free grammar. The parse tree is a concept best understood in the context of compiler construction: this tree shows the procedure that every compiler uses to parse the statements of a program being fed into it.
Grammar
A grammar lets us transform a program, which is normally represented as a linear sequence of ASCII
characters, into a syntax tree. Only programs that are syntactically valid can be transformed in this way. This
tree will be the main data-structure that a compiler or interpreter uses to process the program. By traversing
this tree the compiler can produce machine code, or can type check the program, for instance. And by
traversing this very tree the interpreter can simulate the execution of the program.
The main notation used to represent grammars is the Backus-Naur Form (BNF). This notation, invented by
John Backus and further improved by Peter Naur, was first used to describe the syntax of the ALGOL
programming language. A BNF grammar is defined by a four-element tuple represented by (T, N, P, S). The
meaning of these elements is as follows:
T is a set of tokens. Tokens form the vocabulary of the language and are the smallest units of syntax.
These elements are the symbols that programmers see when they are typing their code, e.g., the while's,
for's, +'s, ('s, etc.
N is a set of nonterminals. Nonterminals are not part of the language per se. Rather, they help to
determine the structure of the derivation trees that can be derived from the grammar. Usually we enclose
these symbols in angle brackets, to distinguish them from the terminals.
P is a set of production rules. Each production is composed of a left-hand side, a separator and a
right-hand side, e.g., <non-terminal> := <expr1> ... <exprN>, where ':=' is the separator. For
convenience, productions with the same left-hand side can be abbreviated using the symbol '|'. The pipe,
in this case, is used to separate different alternatives.
S is a start symbol. Any sequence of derivations that ultimately produces a grammatically valid
program starts from this special non-terminal. As an example, below we have a very simple grammar that recognizes arithmetic expressions. In other words, any program in this simple language represents the product or the sum of names such as 'a', 'b' and 'c'.
<exp> ::= <exp> "+" <exp>
<exp> ::= <exp> "*" <exp>
<exp> ::= "(" <exp> ")"
<exp> ::= "a"
<exp> ::= "b"
<exp> ::= "c"
This grammar could be also represented in a more convenient way using a sequence of bar symbols, e.g.:
<exp> ::= <exp> "+" <exp> | <exp> "*" <exp> | "(" <exp> ")" | "a" | "b" | "c"
Notice that context-free grammars are not the only kind of grammar that computers can use to recognize languages. In fact, there exists a whole family of formal grammars, first studied by Noam Chomsky, which today form what we usually call the Chomsky hierarchy. Some members of this hierarchy, such as the regular grammars, are very simple and recognize a relatively small number of languages. Nevertheless, these grammars are still very useful.
Regular grammars are at the heart of a compiler's lexical analysis, for instance. Other types of grammars are very powerful. As an example, the unrestricted grammars are as computationally powerful as Turing machines. Nevertheless, here we will focus on context-free grammars, because they are the main tool that a compiler uses to convert a program into a format that it can easily process.
DATA DECLARATION
A variable is a symbolic name assigned to a data item by the programmer. At any particular time, a variable stands for one particular data item, called the value of the variable, which may change during a computing process. The value of a variable may change many times during the execution of a program. A variable must be declared, that is, its data type (and, where required, its initial value) must be stated.
A data type is a classification of data, which can store a specific type of information. Data types are
primarily used in computer programming in which variables are created to store data. There are a number of
traditional data types found in most languages. The act of defining a variable is called data declaration.
Declarations provide information about the name and type of data objects needed during program execution.
Every language supports a set of primitive data types. Usually these include integer, real, Boolean, and
character or string. A language standard determines the minimum set of primitive types that the language
compiler must implement. There are two types of declaration: implicit and explicit. An implicit declaration, or default declaration, is one performed by the compiler when no explicit, user-defined declaration is given.
Scope, visibility, and lifetime of variables (in the C language, for example) are closely related to one another, but there are distinct properties associated with each. Scope determines the region of a program where a variable is available for use; visibility relates to the accessibility of a variable in a particular scope of the program; and lifetime is how long a variable remains in the system's memory.
Local Variables
When we declare variables inside a function or an inner block of code, these variables are known as local variables. They can only be used inside the function scope or the block scope.
Global Variables
When we declare variables outside of all functions, these variables are known as global variables. Global variables have file scope and can be accessed anywhere in the program; they also remain in memory until the execution of the program finishes.
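A small Java sketch of scope and lifetime; note that Java has no C-style file-scope globals, so a static field is used here as the closest analogue (an assumption for illustration):

public class ScopeDemo {
    // A static field is Java's closest analogue to a global variable:
    // it lives for the whole run of the program and is visible class-wide.
    static int globalCount = 0;

    static void increment() {
        int step = 1;              // local variable: exists only while
        globalCount += step;       // increment() is executing
    }

    public static void main(String[] args) {
        increment();
        increment();
        System.out.println(globalCount);  // 2
        // System.out.println(step);      // error: step is out of scope here
    }
}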
Tokenization, in security, is a technique that replaces sensitive data, like credit card numbers, with a nonsensitive substitute called a token. This token is a random string of characters that has no inherent value and cannot be used to access the original data without a specific system; it is used in applications such as payment processing and data security. The term is also used, with a different meaning, in natural language processing and lexical analysis, where it refers to splitting text into tokens.
Static binding, also known as early or compile-time binding, occurs when the compiler resolves method
calls and variable references at compile time. This means the specific code to be executed is determined
before the program runs, based on the declared type of the variable or object.
Dynamic binding, also known as late binding, is a programming concept where the specific method to be
called is determined at runtime, not at compile time. This means the exact method execution isn't known
until the program is running.
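A short Java sketch contrasting the two; the Animal/Dog hierarchy is an assumed example. Overload resolution happens at compile time (static binding), while override resolution happens at runtime (dynamic binding).

class Animal {
    void speak() { System.out.println("generic sound"); }
}

class Dog extends Animal {
    @Override
    void speak() { System.out.println("woof"); }   // overriding: dynamic binding
}

public class BindingDemo {
    // Overloaded methods are resolved at compile time (static binding):
    static void greet(Animal a) { System.out.println("an animal"); }
    static void greet(Dog d)    { System.out.println("a dog"); }

    public static void main(String[] args) {
        Animal pet = new Dog();
        pet.speak();   // "woof": the override is chosen at runtime
        greet(pet);    // "an animal": the overload is chosen at compile time,
                       // from the declared type Animal, not the runtime type Dog
    }
}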
A palindrome is a word, sentence, verse, or even number that reads the same backward or forward. It derives from Greek roots that literally mean 'running back' (palin, 'again, back', and dromos, 'running').
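A small Java sketch of a palindrome test; the two-index technique shown is one common approach, not the only one.

public class PalindromeCheck {
    static boolean isPalindrome(String s) {
        int i = 0, j = s.length() - 1;
        while (i < j) {                              // compare ends, moving inward
            if (s.charAt(i) != s.charAt(j)) return false;
            i++; j--;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome("level"));   // true
        System.out.println(isPalindrome("hello"));   // false
    }
}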
In computing, a metacharacter is a special character with a defined meaning or function when interpreted by a program such as a shell or a regular expression engine. These characters are not interpreted literally but are used to manipulate or modify the way a pattern or command is processed.
REGULAR EXPRESSIONS
The lexical analyzer needs to scan and identify only a finite set of valid strings/tokens/lexemes that belong to the language at hand. It searches for the patterns defined by the language rules. Regular expressions have the
capability to express finite languages by defining a pattern for finite strings of symbols. The grammar
defined by regular expressions is known as regular grammar. The language defined by regular grammar is
known as a regular language. A regular expression is an important notation for specifying patterns. Each pattern
matches a set of strings, so regular expressions serve as names for a set of strings. Programming
language tokens can be described by regular languages. The specification of regular
expressions is an example of a recursive definition. Regular languages are easy to understand and have
efficient implementation. There are a number of algebraic laws that are obeyed by regular expressions,
which can be used to manipulate regular expressions into equivalent forms.
Operations
The various operations on languages are:
Union of two languages L and M is written as
L U M = {s | s is in L or s is in M}
Concatenation of two languages L and M is written as
LM = {st | s is in L and t is in M}
The Kleene closure of a language L is written as
L* = the set of all strings formed by concatenating zero or more strings from L (so ε is always in L*)
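For instance, with L = {a, b} and M = {c, d} (an assumed example): L U M = {a, b, c, d}, LM = {ac, ad, bc, bd}, and L* = {ε, a, b, aa, ab, ba, bb, ...}.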
Notations
If r and s are regular expressions denoting the languages L(r) and L(s), then
Union : (r)|(s) is a regular expression denoting L(r) U L(s)
Concatenation : (r)(s) is a regular expression denoting L(r)L(s)
Kleene closure : (r)* is a regular expression denoting (L(r))*
(r) is a regular expression denoting L(r)
Precedence and Associativity
*, concatenation (.), and | (pipe sign) are left associative
* has the highest precedence
Concatenation (.) has the second highest precedence.
| (pipe sign) has the lowest precedence of all.
Representing valid tokens of a language in regular expression
If x is a regular expression, then:
x* means zero or more occurrences of x,
i.e., it can generate { ε, x, xx, xxx, xxxx, … }
x+ means one or more occurrences of x,
i.e., it can generate { x, xx, xxx, xxxx, … }; equivalently, x+ = x.x*
x? means at most one occurrence of x,
i.e., it can generate either {x} or {ε}.
[a-z] is all the lower-case letters of the English alphabet.
[A-Z] is all the upper-case letters of the English alphabet.
[0-9] is all the natural digits used in mathematics.
Representing occurrence of symbols using regular expressions
letter = [a-z] or [A-Z]
digit = 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 or [0-9]
sign = [ + | - ]
Representing language tokens using regular expressions
Decimal = (sign)?(digit)+
Identifier = (letter)(letter | digit)*
The only problem left with the lexical analyzer is how to verify the validity of a regular expression used in
specifying the patterns of keywords of a language. A well-accepted solution is to use finite automata for
verification.
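The identifier and decimal patterns above can be tried out directly with java.util.regex; the character classes used here mirror the definitions given earlier (the choice of library and test strings is an assumption for illustration).

import java.util.regex.Pattern;

public class TokenPatterns {
    public static void main(String[] args) {
        // Identifier = (letter)(letter | digit)*  and  Decimal = (sign)?(digit)+
        Pattern identifier = Pattern.compile("[a-zA-Z][a-zA-Z0-9]*");
        Pattern decimal    = Pattern.compile("[+-]?[0-9]+");

        System.out.println(identifier.matcher("count1").matches());  // true
        System.out.println(identifier.matcher("1count").matches());  // false
        System.out.println(decimal.matcher("-42").matches());        // true
    }
}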
Runtime Consideration in Programming Languages
Runtime programming means being able to specify program logic during application execution, without going through the code-compile-execute cycle; supporting it (in Java, for example) requires dedicated runtime infrastructure. Programmers are expected to consider the runtime cost of their choice of implementation in the chosen programming language.
Binding in Programming Languages
We may start this part by asking the question, 'What is binding in programming languages?' Recall that a variable is a storage location for storing data in a program. Binding simply means the association of attributes with program entities. Binding can be static or dynamic; C and Java are examples of programming languages that support both kinds. Dynamic binding allows greater flexibility but at the expense of readability, efficiency and reliability.
Exception Handling in Programming Languages
Exception handling is a language feature designed to handle runtime errors, providing a structured way to catch completely unexpected situations, as well as predictable errors or unusual results, without the amount of inline error checking required in languages without it. More recent advancements in runtime engines enable automated exception handling, which provides 'root-cause' debug information for every exception of interest and is implemented independently of the source code, by attaching a special software product to the runtime engine. When an error occurs in a Java program, it results in an exception being thrown, which can then be handled using various exception handling techniques.
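A minimal Java sketch of these techniques; the out-of-bounds array access is an assumed example of a predictable error.

public class ExceptionDemo {
    public static void main(String[] args) {
        int[] data = {1, 2, 3};
        try {
            System.out.println(data[5]);          // predictable error: bad index
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("Handled: " + e.getMessage());
        } finally {
            System.out.println("finally always runs");  // cleanup goes here
        }
    }
}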
A finite automaton is a state machine that takes a string of symbols as input and changes its state accordingly. Finite automata are recognizers for regular expressions. When a regular expression string is fed into a finite automaton, it changes its state for each literal. If the input string is successfully processed and the automaton reaches a final state, the string is accepted, i.e., the string just fed is a valid token of the language at hand.
The mathematical model of a finite automaton consists of:
Finite set of states (Q)
Finite set of input symbols (Σ)
One Start state (q0)
Set of final states (qf)
Transition function (δ)
Finite automata are broadly classified into two main types: Deterministic Finite Automata (DFA) and Non-
Deterministic Finite Automata (NFA). DFAs have a single, unique transition for each input symbol, while
NFAs can have multiple possible transitions.
1. Deterministic Finite Automata (DFA):
A DFA is a state machine that has a single, well-defined transition for each input symbol: the next state is uniquely determined by the current state and the input symbol, so a DFA always moves to a specific next state for a given input. DFAs are used, for example, in lexical analyzers and regular expression engines.
2. Non-Deterministic Finite Automata (NFA):
An NFA can have multiple possible transitions for a given input symbol: it can have several next states, or even no transition, for a given input. The NFA's behaviour is not fully determined, as the next state depends not only on the current state and input but also on a choice among the possible transitions. NFAs are often used in the design of regular expression engines and in compiler development.
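A DFA can be simulated with a few lines of code. The sketch below, an assumed example, recognizes identifiers (a letter followed by letters or digits) with three states; because every (state, symbol) pair yields exactly one next state, the automaton is deterministic.

public class DfaDemo {
    // States: 0 = start, 1 = accepting, 2 = dead (reject and stay).
    static boolean accepts(String input) {
        int state = 0;
        for (char c : input.toCharArray()) {
            switch (state) {
                case 0: state = Character.isLetter(c) ? 1 : 2; break;
                case 1: state = Character.isLetterOrDigit(c) ? 1 : 2; break;
                default: break;   // dead state: no way back
            }
        }
        return state == 1;        // accepted only if we end in the final state
    }

    public static void main(String[] args) {
        System.out.println(accepts("x1"));   // true
        System.out.println(accepts("1x"));   // false
    }
}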
Parsing (Syntactic Analysis)
Top-down Parsing
When the parser starts constructing the parse tree from the start symbol and then tries to transform the start
symbol to the input, it is called top-down parsing.
Recursive descent parsing : It is a common form of top-down parsing. It is called recursive because it uses recursive procedures to process the input. Recursive descent parsing suffers from backtracking (a sketch appears after the next definition).
Backtracking : It means that if one derivation of a production fails, the syntax analyzer restarts the process using different rules of the same production. This technique may process the input string more than once to determine the right production.
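A sketch of a recursive descent parser for the arithmetic grammar given earlier. The grammar's left-recursive productions would make a naive recursive parser loop forever, so the sketch assumes the standard left-recursion-free rewriting shown in the comments:

// Rewritten grammar (an assumption; it also fixes * over + precedence):
//   exp    -> term ("+" term)*
//   term   -> factor ("*" factor)*
//   factor -> "(" exp ")" | "a" | "b" | "c"
public class RecursiveDescent {
    private final String src;
    private int pos = 0;

    RecursiveDescent(String src) { this.src = src; }

    private char peek() { return pos < src.length() ? src.charAt(pos) : '\0'; }

    private void expect(char c) {
        if (peek() != c) throw new RuntimeException("expected " + c + " at " + pos);
        pos++;
    }

    void exp()  { term();   while (peek() == '+') { pos++; term(); } }
    void term() { factor(); while (peek() == '*') { pos++; factor(); } }

    void factor() {
        char c = peek();
        if (c == '(') { pos++; exp(); expect(')'); }
        else if (c == 'a' || c == 'b' || c == 'c') pos++;
        else throw new RuntimeException("unexpected " + c + " at " + pos);
    }

    public static void main(String[] args) {
        RecursiveDescent p = new RecursiveDescent("(a+b)*c");
        p.exp();                              // throws on a syntax error
        System.out.println(p.pos == p.src.length() ? "valid" : "trailing input");
    }
}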
Bottom-up Parsing
As the name suggests, bottom-up parsing starts with the input symbols and tries to construct the parse tree
up to the start symbol.
In computer programming, a strongly typed language enforces strict type checking, meaning the type of a
variable must be defined and cannot be changed during runtime. This helps prevent errors and makes code
more predictable.
In essence, a weakly typed programming language is one where variables are not bound to a specific data
type, meaning that the type of a variable can change during runtime. This flexibility, while sometimes
convenient, can also lead to unexpected errors or behaviors.
In programming, static binding (early binding) resolves the call to a method during compile time, while
dynamic binding (late binding) resolves it during runtime. Static binding is faster but less flexible, as the
method call is determined before execution. Dynamic binding offers greater flexibility but incurs some
runtime overhead.
In programming, primitive and reference types represent different ways data is stored in
memory. Primitive types store values directly, while reference types store memory addresses (pointers) to
objects. This difference affects how values are copied and modified.
Primitive types are the fundamental, built-in data types provided by a programming language. In Java, these
include byte, short, int, long, float, double, boolean, and char. In JavaScript, they
are number, string, boolean, undefined, null, and symbol.
Reference types are objects or data structures, and they store a reference (address) to a memory location
where the actual object is stored. In Java, these include String, Array, and any class you create. In JavaScript,
they are object (including arrays and functions).
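The following Java sketch, with assumed values, shows the practical consequence: copying a primitive copies the value, while copying a reference leaves both names pointing at one object.

import java.util.Arrays;

public class CopyDemo {
    public static void main(String[] args) {
        // Primitive: the value itself is copied.
        int a = 1;
        int b = a;
        b = 99;
        System.out.println(a);                    // 1: a is unaffected

        // Reference: only the address is copied; both names see one array.
        int[] xs = {1, 2, 3};
        int[] ys = xs;
        ys[0] = 99;
        System.out.println(Arrays.toString(xs));  // [99, 2, 3]
    }
}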
Stack memory is a form of memory allocation that the system manages automatically and uses to store local variables in LIFO order. Heap memory, on the other hand, is a type of dynamic memory allocation used for storing objects and data structures that require a longer lifespan than stack-allocated data.
Stack Allocation: The allocation happens on contiguous blocks of memory. We call it a stack memory
allocation because the allocation happens in the function call stack. The size of memory to be allocated is
known to the compiler and whenever a function is called, its variables get memory allocated on the stack.
And whenever the function call is over, the memory for the variables is de-allocated. This all happens using
some predefined routines in the compiler. A programmer does not have to worry about memory allocation
and de-allocation of stack variables. This kind of memory allocation is also known as Temporary memory
allocation because as soon as the method finishes its execution all the data belonging to that method flushes
out from the stack automatically. This means any value stored in the stack memory scheme is accessible as
long as the method hasn’t completed its execution and is currently in a running state.
Key Points:
It's a temporary memory allocation scheme where the data members are accessible only while the method that contains them is running.
It allocates or de-allocates the memory automatically as soon as the corresponding method completes its
execution.
Stack memory allocation is considered safer as compared to heap memory allocation because the data
stored can only be accessed by the owner thread.
Memory allocation and de-allocation are faster as compared to Heap-memory allocation.
Stack memory has less storage space as compared to Heap-memory.
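A short Java sketch of where things live; the variable names are assumptions for illustration.

public class MemoryDemo {
    public static void main(String[] args) {
        int n = 42;                  // primitive local: lives on the stack,
                                     // freed automatically when main() returns
        int[] data = new int[1000];  // the reference 'data' is on the stack,
                                     // but the array object lives on the heap
        helper();                    // helper's locals get their own stack frame
        System.out.println(n + " " + data.length);
    }   // heap objects with no remaining references are garbage collected

    static void helper() {
        double local = 3.14;         // gone as soon as helper() returns
        System.out.println(local);
    }
}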
QUEUES AND TREES
A tree is a nonlinear hierarchical data structure comprising a collection of entities known as nodes. Each node in the tree data structure is connected to others using "edges", which may be directed or undirected.
Trees: A tree is a hierarchical data structure where each node (except the root) has a parent node and can
have multiple child nodes.
Hierarchical Structure: Trees are used to represent hierarchical relationships, such as family trees or
organizational charts.
Nodes and Edges: Trees consist of nodes connected by edges.
Types: Binary tree, binary search tree (BST), and general tree.
Binary Tree: A binary tree is a tree where each node can have at most two children, typically called the left
child and the right child.
Binary Search Tree (BST): A BST is a binary tree where the values in the left subtree are less than the
parent node's value, and the values in the right subtree are greater.
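A minimal BST sketch in Java; insertion follows the rule above (smaller keys left, larger keys right), and an in-order traversal then visits the keys in sorted order. The class and values are assumptions for illustration.

public class Bst {
    int value;
    Bst left, right;

    Bst(int value) { this.value = value; }

    void insert(int v) {
        if (v < value) {
            if (left == null) left = new Bst(v); else left.insert(v);
        } else {
            if (right == null) right = new Bst(v); else right.insert(v);
        }
    }

    void inOrder() {   // visits the keys in sorted order
        if (left != null) left.inOrder();
        System.out.print(value + " ");
        if (right != null) right.inOrder();
    }

    public static void main(String[] args) {
        Bst root = new Bst(5);
        root.insert(3); root.insert(8); root.insert(1);
        root.inOrder();   // prints: 1 3 5 8
    }
}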
QUEUES
A queue is a linear data structure that stores elements sequentially. It uses the FIFO (First In, First Out) approach for accessing elements. Queues are typically used to manage threads in multithreading and to implement priority queuing systems.
A queue is an important data structure in programming. A queue follows the FIFO (First In First Out)
method and is open at both of its ends. Data insertion is done at one end, the rear end or the tail of the queue,
while deletion is done at the other end, called the front end or the head of the queue.
Consider an everyday example of a queue: a line of people waiting to buy tickets at a cinema hall. A new person joins the line from the end, and the person standing at the front is the first to get a ticket and leave the line. Similarly, in a queue data structure, data added first leaves the queue first.
Some other applications of the queue in real-life are:
People on an escalator
Cashier line in a store
A car wash line
One way exit.
Types of Queues in Data Structure
Simple Queue
Simple Queue is a linear data structure that follows the First-In-First-Out (FIFO) principle, where elements
are added to the rear (back) and removed from the front (head).
An ordered collection of comparable data items.
The queue structure is FIFO (First In, First Out).
When a new element is added, all elements added before it must be removed before the new element can be removed.
Circular Queue
A circular queue is a special case of a simple queue in which the last member is linked to the first. As a result, a circle-like structure is formed.
The last node is connected to the first node.
Also known as a Ring Buffer, the nodes are connected end to end.
Insertion takes place at the rear of the queue, and deletion at the front.
Circular queue application: cycling through the days of the week.
Priority Queue
In a priority queue, the nodes have some predefined priority. The node with the highest priority is the first to be removed from the queue. Insertion takes place in the order of arrival of the nodes.
Some of the applications of priority queue:
Dijkstra’s shortest path algorithm
Prim’s algorithm
Data compression techniques like Huffman code
Deque (Double Ended Queue)
In a double-ended queue, insertion and deletion can occur at both the queue's front and rear ends.
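The queue variants above can be tried with the standard java.util classes; ArrayDeque serves as both a simple FIFO queue and a deque, and PriorityQueue removes the smallest element first (the values used are assumptions for illustration).

import java.util.ArrayDeque;
import java.util.PriorityQueue;

public class QueueDemo {
    public static void main(String[] args) {
        // Simple FIFO queue: insert at the rear, remove from the front.
        ArrayDeque<String> queue = new ArrayDeque<>();
        queue.addLast("first"); queue.addLast("second");
        System.out.println(queue.removeFirst());   // "first"

        // Priority queue: the smallest element is removed first.
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        pq.add(7); pq.add(2); pq.add(5);
        System.out.println(pq.poll());             // 2

        // Deque: both ends are open for insertion and deletion.
        ArrayDeque<Integer> deque = new ArrayDeque<>();
        deque.addFirst(1); deque.addLast(2);
        System.out.println(deque.removeLast());    // 2
    }
}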
Lists: A list is a linear data structure that stores elements in a sequential order. Each element in the list has a position, and the elements can be accessed based on their index or position.
Types of Lists: There are two main types of lists.
Array-Based Lists (Static Lists): elements are stored in a contiguous block of memory and accessed by index; the capacity is typically fixed when the list is created.
Linked Lists: In computer science, a linked data structure is a data structure which consists of a set of data records (nodes) linked together and organized by references (links or pointers).
Types of linked lists:
1. Singly linked list:
In a singly linked list, each node contains a reference to the next node in the sequence. Traversing a singly linked list is done in a forward direction.
2. Doubly linked list:
In a doubly linked list, each node contains references to both the next and previous nodes. This allows for
traversal in both forward and backward directions, but it requires additional memory for the backward
reference.
3. Circular linked list:
In a circular linked list, the last node points back to the head node, creating a circular structure. It can be
either singly or doubly linked.
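A minimal singly linked list sketch in Java; the Node class and values are assumptions for illustration.

public class LinkedListDemo {
    // Each node holds a value and a reference to the next node;
    // traversal is forward only.
    static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    public static void main(String[] args) {
        Node head = new Node(1);          // build 1 -> 2 -> 3
        head.next = new Node(2);
        head.next.next = new Node(3);

        for (Node n = head; n != null; n = n.next) {  // forward traversal
            System.out.print(n.value + " ");
        }
        // Making the last node point back to head would turn this into a
        // circular linked list.
    }
}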