Semantic Networks
What is a Semantic Network?
A Semantic Network is a graph-based knowledge representation method in which:
• Nodes represent concepts or entities (e.g., Patient, Diabetes)
• Edges (links) represent relationships between them (e.g., has, is-a, causes)
It is used to structure knowledge visually and logically so that a system (or a human) can interpret the relationships between different concepts, as sketched below.
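A minimal sketch of such a network in Python (networkx is an assumption here; the nodes and relations are the ones from the bullets above):

import networkx as nx

# Build a tiny semantic network: nodes are concepts/entities, edge labels are relationships
G = nx.DiGraph()
G.add_edge("Patient123", "Diabetes", relation="has")
G.add_edge("Diabetes", "Chronic Disease", relation="is-a")
G.add_edge("Smoking", "Heart Disease", relation="causes")

# Walk the links to read the knowledge back out
for head, tail, data in G.edges(data=True):
    print(head, data["relation"], tail)   # e.g., Patient123 has Diabetes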
Need for Semantic Networks
Semantic Networks help AI systems:
• Represent knowledge in a structured, human-readable form
• Perform reasoning and inference, e.g., if Chest Pain and Diabetes are both present → High Risk (see the sketch after this list)
• Support explainable AI, since the structure shows how conclusions are derived
• Enable inheritance, similar to object-oriented models
They are especially useful in expert systems, natural language understanding, and decision support systems.
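A toy illustration of that kind of inference, assuming facts are stored as (subject, relation, object) triples and the risk rule is hard-coded for the example:

# Facts about one patient, stored as (subject, relation, object) triples
facts = {
    ("Patient123", "has", "Chest Pain"),
    ("Patient123", "has", "Diabetes"),
}

def infer_risk(patient, facts):
    # Rule: Chest Pain AND Diabetes together imply High Risk
    if (patient, "has", "Chest Pain") in facts and (patient, "has", "Diabetes") in facts:
        return "High Risk"
    return "Unknown"

print(infer_risk("Patient123", facts))   # High Risk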
Origin
• Introduced in the 1960s by M. Ross Quillian
• It was first used for natural language processing and
memory models in early AI systems.
• Inspired by how humans store and connect knowledge
in the brain.
Types of Semantic Networks
Definitional Networks
• Focus on is-a or subset relationships
• Example: Heart Disease is-a Cardiovascular Condition
Assertional Networks
• Include specific facts about instances
• Example: Patient123 has Diabetes
Inheritance Networks
• Represent class hierarchies and shared attributes
• Example: all Patients have Age, Gender, and Risk Level
Conceptual Graphs
• A more formal, logical extension of semantic networks
• Include relations and quantifiers
Case-Frame Networks
• Integrate syntax and semantics for NLP
• Include roles such as agent, object, etc.
Ontology
• An ontology is a structured way to represent knowledge about a specific domain (such as medicine or weather) using concepts, relationships, and rules.
Ontology = a map of knowledge. It shows:
• What things exist (concepts such as "Patient", "Heart Disease", "Blood Pressure")
• How they are related (e.g., "has_symptom", "causes", "is_a")
• What rules or constraints apply
Key Parts of an Ontology
Element             Example
Class/Concept       Patient, Disease, Symptom
Instance            Patient_123, Hypertension
Property/Relation   has_symptom, suffers_from
Hierarchy           Heart Disease is a type of Disease
Uses
Ontologies help machines understand and reason
about data.
They're used in:
• Medical diagnosis
• Semantic Web (like how Google understands
queries)
• Knowledge graphs (e.g., Google Knowledge
Panel)
Example
• Class: Disease
• Subclass: HeartDisease
• Class: Patient
• Property: suffers_from (Patient → Disease)
• Instance: Patient_001 suffers_from HeartDisease
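The same example expressed in Python with rdflib (an assumed tool choice; OWL editors such as Protégé can express the same structure):

from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")    # hypothetical namespace for this example
g = Graph()

g.add((EX.HeartDisease, RDFS.subClassOf, EX.Disease))        # Subclass: HeartDisease is a Disease
g.add((EX.suffers_from, RDFS.domain, EX.Patient))            # Property: suffers_from (Patient -> Disease)
g.add((EX.suffers_from, RDFS.range, EX.Disease))
g.add((EX.Patient_001, RDF.type, EX.Patient))                # Instance: Patient_001 is a Patient
g.add((EX.Patient_001, EX.suffers_from, EX.HeartDisease))    # Patient_001 suffers_from HeartDisease

print(g.serialize(format="turtle"))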
Difference between SN and Ontology
Feature             Semantic Network                Ontology
Structure           Simple graph                    Formal model (logic-based)
Purpose             Show associations               Enable reasoning, data sharing
Formalism           Informal or semi-formal         Highly formal
Languages           Graph structures, plain text    OWL, RDF, Description Logic
Reasoning Support   Limited or none                 Full logical inference supported
ANN Models for Semantic Networks
1. Knowledge Graph Embedding Models (KGEs)
2. Graph Neural Networks (GNNs)
3. Neuro-Symbolic Models
4. Transformer Models for Triplet/Relation Extraction
What is a Triple Score?
• In knowledge graphs or semantic networks, a triple is:
  (subject, relation, object)
  e.g., ("Smoking", causes, "Heart Disease")
• A triple score is a numerical value predicted by a neural network (such as TransE or DistMult) that indicates how likely the triple is to be true or valid; a minimal scoring sketch follows.
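A minimal sketch of how such a score could be computed, assuming toy hand-written embeddings (a real TransE model learns these vectors from training triples):

import numpy as np

# Toy 4-dimensional embeddings for the entities and the relation
emb = {
    "Smoking":       np.array([0.1,  0.4, -0.2, 0.3]),
    "causes":        np.array([0.2, -0.1,  0.5, 0.0]),
    "Heart Disease": np.array([0.3,  0.3,  0.3, 0.3]),
}

def transe_score(head, relation, tail):
    # TransE assumes head + relation ≈ tail for true triples,
    # so the score is the negative distance between (head + relation) and tail
    return -np.linalg.norm(emb[head] + emb[relation] - emb[tail])

print(transe_score("Smoking", "causes", "Heart Disease"))   # closer to 0 = more plausible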
1. Knowledge Graph Embedding Models (KGEs)
• These learn vector representations of nodes (concepts) and edges (relations) in a semantic network or ontology.

Model       Idea
TransE      Translates: head + relation ≈ tail
DistMult    Multiplies embeddings to score triples
ComplEx     Extends DistMult using complex numbers
RotatE      Models relations as rotations in vector space
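Sketches of the other scoring functions from the table, again with random toy vectors just to show the shape of each computation (the dimensions and values are arbitrary assumptions):

import numpy as np

rng = np.random.default_rng(0)
d = 4
h, r, t = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)              # real-valued embeddings
hc, tc = rng.normal(size=d) + 1j * rng.normal(size=d), rng.normal(size=d) + 1j * rng.normal(size=d)
rc = np.exp(1j * rng.uniform(0, 2 * np.pi, size=d))                               # unit-modulus relation (rotation)

def distmult(h, r, t):
    # DistMult: elementwise product summed up (a bilinear score)
    return np.sum(h * r * t)

def complex_score(h, r, t):
    # ComplEx: real part of the Hermitian product over complex embeddings
    return np.real(np.sum(h * r * np.conj(t)))

def rotate_score(h, r, t):
    # RotatE: the relation rotates the head in the complex plane; score = negative distance to the tail
    return -np.linalg.norm(h * r - t)

print(distmult(h, r, t), complex_score(hc, rc, tc), rotate_score(hc, rc, tc))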
2. Graph Neural Networks (GNNs)
• These are powerful for reasoning over graph-based data, such as semantic networks.
Popular GNN types:
• GCN (Graph Convolutional Network)
• GAT (Graph Attention Network)
• R-GCN (Relational GCN) → great for knowledge graphs
GNNs can:
• Predict missing links in a semantic network
• Classify nodes (e.g., is this concept a risk factor?)
• Learn patterns from the graph structure
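A minimal node-classification sketch with an R-GCN, assuming PyTorch and PyTorch Geometric are installed; the tiny graph, feature sizes, and the two output classes (risk factor / not) are invented for illustration:

import torch
from torch_geometric.nn import RGCNConv

# Tiny relational graph: 4 nodes, 2 relation types (e.g., relation 0 = "has", relation 1 = "is-a")
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])      # source and target node ids
edge_type = torch.tensor([0, 1, 0, 1])         # relation id of each edge
x = torch.randn(4, 8)                          # 8-dimensional input features per node

class RGCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = RGCNConv(8, 16, num_relations=2)
        self.conv2 = RGCNConv(16, 2, num_relations=2)   # 2 classes: risk factor / not

    def forward(self, x, edge_index, edge_type):
        h = torch.relu(self.conv1(x, edge_index, edge_type))
        return self.conv2(h, edge_index, edge_type)

logits = RGCN()(x, edge_index, edge_type)      # one class-score vector per node
print(logits.shape)                            # torch.Size([4, 2])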
3. Neuro-Symbolic Models
These combine symbolic reasoning (like ontologies)
with neural networks.
Examples:
• Neuro-symbolic Concept Learner (NS-CL)
• Logic Tensor Networks (LTN)
• DeepProbLog (Deep learning + Prolog logic)
Useful for:
• Reasoning over ontologies
• Merging human-readable knowledge with AI learning
4. Transformer Models for Triplet/Relation Extraction
Models like BERT, BioBERT, or SciBERT can be used to:
• Extract triples (Subject, Relation, Object) from text
• Automatically generate a semantic network from
clinical notes or patient records
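A heavily simplified sketch of that idea, assuming the Hugging Face transformers library and the generic dslim/bert-base-NER checkpoint (a clinical pipeline would instead use BioBERT or SciBERT fine-tuned for medical entities and relations):

from transformers import pipeline

# Generic BERT-based NER; it only tags persons/organizations/locations, so a
# domain model would be needed for diseases and symptoms in real clinical text
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

text = "John Smith was treated at Boston General Hospital."
entities = [e["word"] for e in ner(text)]

# Naive placeholder rule: pair consecutive entities with a dummy relation;
# a real system would classify the relation (has_symptom, causes, ...) with a second model
triples = [(entities[i], "related_to", entities[i + 1]) for i in range(len(entities) - 1)]
print(triples)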
Use Case: Al-Zaidi Sani Dataset
• Use BERT or rules to extract relations.
• Use TransE to embed them.
• Use R-GCN to analyze and reason over the graph.