What is an Automorphism?
An automorphism is a special kind of function or transformation of a structure that:
1. Maps the structure to itself,
2. Preserves its structure (for example, a group's operation or a graph's adjacency), and
3. Is a bijection (both one-to-one and onto).
In simpler terms:
An automorphism is a "symmetry" of an object that doesn't change the object but may rearrange its
parts.
Automorphism in Different Contexts
1. In Abstract Algebra (Groups, Rings, Fields)
• An automorphism of a group is an isomorphism from the group to itself.
• It respects the group operation:
If f: G → G is an automorphism, then for all a, b ∈ G,
f(ab) = f(a)f(b)
Example (Group Automorphism):
Let G=Z, the group of integers under addition.
Define f(x)=−x. This is an automorphism since:
• It maps integers to integers.
• It’s bijective.
• It preserves addition:
f(a + b) = −(a + b) = (−a) + (−b) = f(a) + f(b)
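A quick way to convince yourself of this is to test the defining property on sample values. Below is a minimal Python sketch (the range of test values is an arbitrary choice) that checks f(a + b) = f(a) + f(b) for f(x) = −x.

```python
# Sketch: numerically check that f(x) = -x preserves addition on the integers.
def f(x):
    return -x

# Verify f(a + b) == f(a) + f(b) on a sample of integer pairs.
for a in range(-10, 11):
    for b in range(-10, 11):
        assert f(a + b) == f(a) + f(b)

print("f(x) = -x respects addition on all sampled pairs")
```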
2. In Graph Theory
• An automorphism is a relabeling of the graph’s vertices that preserves its structure
(adjacency).
Example (Graph Automorphism):
In a square graph (cycle with 4 vertices), rotating or reflecting the square gives an automorphism
because the connections (edges) remain the same.
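As a small illustration (the vertex labels 0–3 around the square are an assumption for this sketch), the Python check below confirms that a rotation of the 4-cycle maps edges to edges, while a map that breaks adjacency is rejected.

```python
# Sketch: test whether a vertex permutation is an automorphism of the 4-cycle.
C4_EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]
EDGE_SET = {frozenset(e) for e in C4_EDGES}

def is_automorphism(perm):
    # perm must be a bijection on the vertex set {0, 1, 2, 3} ...
    if sorted(perm.values()) != [0, 1, 2, 3]:
        return False
    # ... and must send every edge onto an edge.
    mapped = {frozenset((perm[u], perm[v])) for u, v in C4_EDGES}
    return mapped == EDGE_SET

rotation = {0: 1, 1: 2, 2: 3, 3: 0}   # rotate the square by one position
bad_swap = {0: 0, 1: 2, 2: 1, 3: 3}   # swapping 1 and 2 breaks adjacency

print(is_automorphism(rotation))  # True
print(is_automorphism(bad_swap))  # False
```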
Homomorphism
A homomorphism is a structure-preserving map between two algebraic structures (like groups,
rings, or vector spaces). It doesn't have to be bijective (one-to-one and onto), but it must respect
the operations defined on the structures.
In simple terms: A homomorphism is a function that translates operations in one system into
equivalent operations in another.
Formal Definition (Group Theory)
Let (G, ∗) and (H, ⋅) be two groups.
A function f: G → H is a group homomorphism if:
f(a ∗ b) = f(a) ⋅ f(b) for all a, b ∈ G
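A classic example is reduction modulo n, which is a homomorphism from (Z, +) to (Z_n, +). The sketch below (n = 5 is an arbitrary choice) checks the defining equation on a range of sample pairs.

```python
# Sketch: f(x) = x mod n is a group homomorphism from (Z, +) to (Z_n, +).
n = 5

def f(x):
    return x % n

# Check f(a + b) == f(a) + f(b), where "+" on the right is addition mod n.
for a in range(-20, 21):
    for b in range(-20, 21):
        assert f(a + b) == (f(a) + f(b)) % n

print("x -> x mod 5 respects addition on all sampled pairs")
```

This map is onto but not one-to-one, so in the terminology listed further below it is an epimorphism but not an isomorphism.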
Graph Homomorphism
A function f:V(G)→V(H) between two graphs is a graph homomorphism if:
• Whenever (u,v)∈E(G), then (f(u),f(v))∈E(H)
This means edges are preserved under vertex mapping.
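For example, any bipartite graph admits a homomorphism into the single-edge graph K2: map each vertex to its color in a 2-coloring. A minimal sketch (vertex labels are illustrative) checks the edge-preservation condition for the 4-cycle.

```python
# Sketch: a graph homomorphism from the 4-cycle C4 into the single edge K2.
G_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # C4
H_edges = {frozenset((0, 1))}                # K2

f = {0: 0, 1: 1, 2: 0, 3: 1}                 # a 2-coloring of C4

# Every edge (u, v) of G must map to an edge (f(u), f(v)) of H.
print(all(frozenset((f[u], f[v])) in H_edges for u, v in G_edges))  # True
```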
Types of Homomorphisms
• Monomorphism: An injective (one-to-one) homomorphism
• Epimorphism: A surjective (onto) homomorphism
• Isomorphism: A bijective homomorphism (also structure-preserving in both directions)
• Endomorphism: A homomorphism from a structure to itself
• Automorphism: A bijective endomorphism (i.e., isomorphism of a structure with itself)
Eulerian Paths and Cycles
Eulerian Path
An Eulerian Path is a path in a graph that visits every edge exactly once (but can repeat vertices).
Eulerian Cycle (Circuit)
An Eulerian Cycle is an Eulerian Path that starts and ends at the same vertex.
Euler’s Theorem – Conditions (for Undirected Graphs)
• A graph has an Eulerian cycle if and only if:
o It is connected
o Every vertex has even degree
• A graph has an Eulerian path (but not a cycle) if and only if:
o It is connected
o Exactly two vertices have odd degree
Example:
Suppose we have a graph with the following vertex degrees:
Vertex | Degree
A      | 2
B      | 4
C      | 2
D      | 2
All degrees are even ⇒ Eulerian cycle exists
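These degree conditions are easy to check mechanically. The sketch below classifies a graph from its degree table (connectivity is assumed rather than tested, and the degrees are taken from the example above).

```python
# Sketch: classify a connected graph by counting odd-degree vertices.
def eulerian_type(degrees):
    odd = sum(1 for d in degrees.values() if d % 2 == 1)
    if odd == 0:
        return "Eulerian cycle exists"
    if odd == 2:
        return "Eulerian path exists (but no Eulerian cycle)"
    return "Neither an Eulerian path nor a cycle exists"

# Degrees from the example table (the graph is assumed to be connected).
print(eulerian_type({"A": 2, "B": 4, "C": 2, "D": 2}))  # Eulerian cycle exists
```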
Hamiltonian Paths and Cycles
Hamiltonian Path
A Hamiltonian Path is a path that visits every vertex exactly once.
Hamiltonian Cycle
A Hamiltonian Cycle is a Hamiltonian Path that starts and ends at the same vertex.
Note:
• Unlike Eulerian paths, there is no simple necessary-and-sufficient condition to determine
whether a Hamiltonian path/cycle exists.
• It is an NP-complete problem in computer science.
Sufficient Condition (Dirac’s Theorem)
Let G be a simple graph with n≥3 vertices.
If every vertex has degree ≥ n/2, then G has a Hamiltonian cycle.
Example:
Let’s we have 5 vertices, and every vertex is connected to at least 3 others (i.e., degree ≥ 5/2 = 2.5)
Satisfies Dirac’s condition ⇒ Likely to have a Hamiltonian cycle
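Because there is no simple characterization, small instances are usually settled by brute force. The sketch below (the 5-vertex graph is an assumed example in which every vertex has degree at least 3) tries every vertex ordering and reports whether one of them closes into a Hamiltonian cycle.

```python
# Sketch: brute-force Hamiltonian cycle search (only feasible for small graphs).
from itertools import permutations

def has_hamiltonian_cycle(vertices, edges):
    edge_set = {frozenset(e) for e in edges}
    first = vertices[0]
    # Fix the first vertex so rotations of the same cycle are not re-checked.
    for order in permutations(vertices[1:]):
        cycle = (first,) + order + (first,)
        if all(frozenset((cycle[i], cycle[i + 1])) in edge_set
               for i in range(len(cycle) - 1)):
            return True
    return False

# A 5-vertex graph where every vertex has degree >= 3, so Dirac's condition holds.
V = [0, 1, 2, 3, 4]
E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (1, 3), (2, 4)]
print(has_hamiltonian_cycle(V, E))  # True
```

The factorial number of orderings is exactly why this approach, unlike the degree check for Eulerian cycles, does not scale.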
Comparison Table
Feature            | Eulerian Path          | Hamiltonian Path
Visits             | Every edge once        | Every vertex once
Repeats vertices?  | Yes                    | No
Easy to check?     | Yes (based on degrees) | No (NP-complete)
Real-world example | Postman route          | Traveling salesman
Rooted Tree
A rooted tree is a tree (an acyclic connected graph) where one node is specially designated as the
root.
Once a root is chosen, the tree becomes hierarchical, meaning:
• Nodes have a parent-child relationship
• You can define levels, depth, height, etc.
Basic Terminology in Rooted Trees
Term           | Meaning
Root           | The starting node (top of the hierarchy)
Parent         | The node one level above, on the path to the root
Child          | The node one level below a parent
Siblings       | Nodes that share the same parent
Leaf           | A node with no children
Internal node  | A node that has at least one child (not a leaf)
Subtree        | A tree formed by any node and all its descendants
Depth of node  | Number of edges from the root to that node
Height of tree | Maximum depth of any node in the tree
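These notions translate directly into code. The sketch below stores an assumed example tree as a parent-to-children dictionary and computes the depth of every node and the height of the tree.

```python
# Sketch: compute node depths and tree height for a small rooted tree.
children = {            # example tree rooted at "A"
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": [],
    "D": [],
    "E": [],
}

def depths(root):
    out, stack = {}, [(root, 0)]
    while stack:
        node, d = stack.pop()
        out[node] = d                       # depth = edges from the root to node
        for c in children.get(node, []):
            stack.append((c, d + 1))
    return out

d = depths("A")
print(d["E"])            # 2: E is two edges below the root
print(max(d.values()))   # 2: the height of the tree is the maximum depth
```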
Types of Rooted Trees
1. Binary Tree – Each node has at most two children (left and right)
2. k-ary Tree – Each node has at most k children
3. Full Tree – Every node has 0 or k children
4. Complete Tree – All levels filled except possibly the last, filled from left to right
5. Balanced Tree – The height is kept as small as possible (the difference in height between left and right subtrees is small)
Undirected Tree
An undirected tree is a special type of graph with the following properties:
1. It is connected (there is a path between any two vertices)
2. It contains no cycles (i.e., it's acyclic)
So, it’s a connected acyclic undirected graph.
Basic Properties of Undirected Trees
Property    | Description
Edges       | If the tree has n vertices, it has n − 1 edges
Unique Path | There is exactly one simple path between any two vertices
Acyclic     | No loops or cycles
Connected   | All nodes are reachable from any other node
Example:
    1
   / \
  2   3
 / \
4   5
- 5 vertices
- 4 edges ⇒ satisfies the condition (n - 1)
- No cycles
- Connected ⇒ It's a valid undirected tree
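A minimal way to verify these properties programmatically (the edge list below matches the drawing above) is to compare the edge count with n − 1 and confirm connectivity with a traversal.

```python
# Sketch: check that an undirected graph is a tree (connected, exactly n-1 edges).
def is_tree(n_vertices, edges):
    if len(edges) != n_vertices - 1:
        return False                       # wrong edge count: cycle or disconnected
    adj = {v: [] for v in range(1, n_vertices + 1)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {1}, [1]                 # depth-first search from vertex 1
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    # connected with exactly n-1 edges implies acyclic, hence a tree
    return len(seen) == n_vertices

print(is_tree(5, [(1, 2), (1, 3), (2, 4), (2, 5)]))  # True
```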
What is a Spanning Tree?
A Spanning Tree of a connected undirected graph is a subgraph that:
1. Includes all the vertices of the original graph
2. Is a tree (connected & acyclic)
3. Has exactly n − 1 edges, where n is the number of vertices
Properties of a Spanning Tree
Property            | Description
Covers all vertices | No vertex left out
No cycles           | It's a tree!
Minimum edges       | Exactly n − 1 for n vertices
Not unique          | A graph can have many different spanning trees
Minimum Spanning Tree (MST)
A Minimum Spanning Tree is a spanning tree with the least total edge weight (used in weighted
graphs).
Real-Life Applications:
• Designing least-cost network connections (like cables, roads)
• Clustering in Machine Learning
• Reducing circuit complexity
• Network protocols that build loop-free topologies (e.g., the Spanning Tree Protocol in Ethernet)
MST Algorithms
1. Kruskal’s Algorithm
Greedy approach that picks the smallest available edge that doesn’t form a cycle.
Steps:
1. Sort all edges by weight (ascending)
2. Pick the smallest edge that doesn’t form a cycle (use Union-Find/DSU)
3. Repeat until you have n−1 edges in the tree
Time Complexity:
O(E log E), where E is the number of edges
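A compact Python sketch of these steps (the edge weights are illustrative; the union-find here is a minimal version with path compression only):

```python
# Sketch: Kruskal's algorithm with a simple union-find (DSU).
def kruskal(n_vertices, edges):
    # edges: list of (weight, u, v) with vertices labeled 0 .. n_vertices - 1
    parent = list(range(n_vertices))

    def find(x):                           # find the set representative of x
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):          # 1. consider edges in ascending weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # 2. skip edges that would close a cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
        if len(mst) == n_vertices - 1:     # 3. stop once n - 1 edges are chosen
            break
    return mst, total

edges = [(1, 0, 1), (3, 0, 2), (2, 1, 2), (4, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # MST edges with total weight 1 + 2 + 4 = 7
```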
2. Prim’s Algorithm
Builds the tree by growing from one node, adding the minimum weight edge at each step.
Steps:
1. Start from any vertex
2. Use a min-priority queue to pick the smallest edge to a new vertex
3. Repeat until all vertices are included
Time Complexity:
O(E log V) using a min-heap/priority queue
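A corresponding sketch of Prim's algorithm, using Python's heapq as the min-priority queue (the adjacency list below describes the same example graph as in the Kruskal sketch):

```python
# Sketch: Prim's algorithm with a min-heap, starting from an arbitrary vertex.
import heapq

def prim(adj):
    # adj: {vertex: [(weight, neighbor), ...]}
    start = next(iter(adj))
    visited = {start}
    heap = list(adj[start])
    heapq.heapify(heap)
    total, mst_edges = 0, []
    while heap and len(visited) < len(adj):
        w, v = heapq.heappop(heap)        # cheapest edge leaving the current tree
        if v in visited:
            continue                      # stale entry: v was added meanwhile
        visited.add(v)
        total += w
        mst_edges.append((w, v))
        for edge in adj[v]:               # push edges from the newly added vertex
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return total, mst_edges

adj = {
    0: [(1, 1), (3, 2)],
    1: [(1, 0), (2, 2), (4, 3)],
    2: [(3, 0), (2, 1), (5, 3)],
    3: [(4, 1), (5, 2)],
}
print(prim(adj))   # total weight 7 for the same graph as the Kruskal example
```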
Kruskal vs Prim
Feature        | Kruskal's Algorithm | Prim's Algorithm
Works with     | Edge list           | Adjacency list/matrix
Approach       | Greedy by edge      | Greedy by vertex
Best for       | Sparse graphs       | Dense graphs
Data structure | Union-Find (DSU)    | Min-Heap / Priority Queue
Tractable Problems
A problem is tractable if it can be solved in a reasonable (polynomial) amount of time.
• That means: There exists an algorithm to solve it in time O(n^k), where k is a constant and n is the input size.
• "Reasonable" = Efficient = Polynomial time
Examples:
• Sorting (Merge Sort: O(n log n))
• Shortest path (Dijkstra's: O((V + E) log V))
• Matrix multiplication
• Searching in a sorted array (Binary search: O(log n))
Intractable Problems
A problem is intractable if it cannot be solved in polynomial time, or is very hard to solve
efficiently.
• These problems often require exponential or super-polynomial time
• Solutions exist, but they’re too slow for large inputs.
Examples:
• Travelling Salesman Problem (TSP)
• Knapsack Problem (without approximation)
• Sudoku Solver (in general case)
• Graph Coloring
• Subset Sum Problem
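To see where the blow-up comes from, consider Subset Sum solved by brute force: checking all 2^n subsets is fine for a handful of numbers but doubles in cost with every extra input element. A minimal sketch:

```python
# Sketch: brute-force Subset Sum -- examines all 2^n subsets, so the running
# time doubles each time one more number is added to the input.
from itertools import combinations

def subset_sum(numbers, target):
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return combo              # a subset hitting the target
    return None                           # no subset sums to the target

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # (4, 5)
```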
Classes of Problems
Class       | Meaning
P           | Problems solvable in polynomial time (tractable)
NP          | Problems whose solutions can be verified in polynomial time
NP-hard     | At least as hard as the hardest problems in NP (believed intractable)
NP-complete | Problems that are both in NP and NP-hard
Russell’s Paradox is a contradiction discovered by philosopher and logician Bertrand Russell in 1901.
It exposed a fundamental flaw in naive set theory, which assumes that any definable collection of
objects is a set.
The Paradox Explained (Informally)
Let’s define a set R as:
R={x∣x is a set and x∉x}
In plain English:
“Let R be the set of all sets that do not contain themselves.”
Now ask the question:
Does R contain itself?
• If R∈R: Then by definition of R, R∉R (Contradiction)
• If R∉R : Then again by definition of R, R∈R (Contradiction)
Either way, we get a contradiction.