Distributed Database Systems Concepts
This document provides a comprehensive overview of the specified units in Distributed Database
Systems, covering fundamental concepts, architecture, design, query processing, transaction
management, reliability, and object-oriented aspects.
Distributed Data Processing

Definition: Distributed Data Processing refers to the collection, processing, and storage of data
in multiple interconnected computer systems (nodes) that are geographically dispersed but work
together to achieve a common goal. Each node handles a portion of the overall data processing
task. Key Idea: It's about distributing computation and data management across different
machines. Example: A global retail chain processing sales transactions at local stores, then
aggregating data at regional centers, and finally at a central headquarters. Each level processes
data relevant to its scope.
A distributed DBMS accepts a user query and coordinates the database sites that hold the data:

```mermaid
graph TD
    UserQuery[User Query] --> DBMS1(Distributed DBMS)
    DBMS1 --> SiteA(Database Site A)
    DBMS1 --> SiteB(Database Site B)
    DBMS1 --> SiteC(Database Site C)
    SiteA -- Network --> SiteB
    SiteB -- Network --> SiteC
```
Promises of DDBSs (Advantages)
1. Increased Reliability/Availability: If one site fails, other sites can continue to operate,
or its data might be available from a replicated copy.
2. Increased Scalability: New sites/nodes can be added to handle increased data volume or
processing load, allowing horizontal scaling.
3. Reflects Organizational Structure: Data can be stored where it is generated and
primarily used, aligning with organizational distribution.
4. Improved Performance: Data located closer to users reduces access latency. Parallel
processing of queries across multiple sites can also speed up execution.
5. Economic Advantages: It can be cheaper to use a network of smaller computers than a
single large mainframe.
6. Local Autonomy: Each site can maintain a degree of control over its local data, subject
to global consistency constraints.
Architectural Models

1. Client-Server Architecture:
o Concept: The most common model. Clients submit requests, and servers process
data and return results. The database is often distributed among multiple servers.
o Variants:
Distributed Presentation: Client handles UI, server handles data.
Distributed Application: Client handles UI and some application logic,
server handles data and remaining logic.
Distributed Database: Client handles UI and application, servers manage
data. This is the focus for DDBMS.
o Diagram:

```mermaid
graph TD
    C1[Client App 1] --> S["Server (DDBMS)"]
    C2[Client App 2] --> S
    S --> DB1(Database Site 1)
    S --> DB2(Database Site 2)
    S --> DB3(Database Site 3)
```
2. Peer-to-Peer Architecture:
o Concept: Each node acts as both a client and a server. There is no central control.
Nodes communicate directly with each other to share data and processing.
o Pros: Highly fault-tolerant, scalable.
o Cons: Complex to manage consistency and discovery, often less suited for
traditional transactional databases.
o Example: Blockchain, some file-sharing systems.
3. Multi-database System (MDBS) Architecture:
o Concept: Integrates multiple existing, independent, and possibly heterogeneous
database systems. It sits on top of these local DBMSs, providing a unified view
without changing them.
o Types:
Federated Database System (FDBS): Provides a transparent, integrated
view of underlying heterogeneous databases. The FDBS has more control
over component schema integration.
Gateway Approach: Uses a gateway to translate queries and data
between a global DBMS and a local, heterogeneous DBMS. Less
integrated, more focused on connectivity.
o Diagram (Federated MDBS):

```mermaid
graph TD
    UserApp[User Application] --> FDBS[Federated DDBMS]
    FDBS --> LDBMS1(Local DBMS 1 - Oracle)
    FDBS --> LDBMS2(Local DBMS 2 - SQL Server)
    FDBS --> LDBMS3(Local DBMS 3 - MySQL)
```
Reference architecture of a DDBMS:

```mermaid
graph TD
    User[User/Application] --> GlobalQP[Global Query Processor]
    GlobalQP --> FragmentationSchema[Fragmentation Schema]
    GlobalQP --> AllocationSchema[Allocation Schema]
    GlobalQP --> GTM[Global Transaction Manager]
    GTM --> SiteA(Local DBMS A)
    GTM --> SiteB(Local DBMS B)
    SiteA --- Network(Communication Network) --- SiteB
    SiteA --> LocalSchemaA[Local Schema A]
    SiteB --> LocalSchemaB[Local Schema B]
```
Distributed Database Design Approaches

1. Top-Down Design:
o Concept: Starts with a global conceptual schema (enterprise-wide view) and then
iteratively fragments and allocates data to different sites.
o Steps: Global conceptual design -> Fragmentation design -> Allocation design.
o Pros: Leads to a consistent global view, well-suited for greenfield (new) systems.
o Cons: Can be complex, might not fit existing legacy systems.
2. Bottom-Up Design:
o Concept: Starts with existing local schemas (often heterogeneous) and then
integrates them to form a global conceptual schema.
o Steps: Local schema design -> Schema integration.
o Pros: Suitable for integrating existing databases (federated systems), preserves
local autonomy.
o Cons: Schema integration can be very challenging (semantic heterogeneity),
potential for inconsistencies.
3. Mixed Design:
o Combines elements of both top-down and bottom-up approaches. Might start with
a partial top-down design for core data and then integrate existing local data.
Key Distribution Design Issues

1. Fragmentation: Deciding how to break down global relations into smaller, manageable
units (fragments).
2. Allocation: Deciding where to store these fragments across different sites.
3. Replication: Deciding whether to store multiple copies of data for availability and
performance.
4. Location Transparency: Users should not need to know the physical location of data.
5. Fragmentation Transparency: Users should not need to know how relations are
fragmented.
6. Replication Transparency: Users should not need to know if data is replicated.
Fragmentation
The process of breaking a relation (table) into smaller pieces (fragments) that can be stored at
different sites. The goal is to improve performance, reliability, and local autonomy.
1. Horizontal Fragmentation:
o Concept: Divides a relation into subsets of tuples (rows) based on a predicate
(condition) on one or more attributes. Each fragment has the same schema as the
original relation.
o Types:
Primary Horizontal Fragmentation: Based on a predicate on the base
relation itself.
Derived Horizontal Fragmentation: Based on a join predicate with
another relation that is already fragmented.
o Example: EMPLOYEE table fragmented by DEPARTMENT_ID.
EMP_HR (employees in HR department)
EMP_SALES (employees in Sales department)
o Diagram:

```mermaid
graph TD
    EmployeeTable["Employee (EmpID, Name, DeptID, Salary)"] --> H1["Fragment 1 (DeptID = 'HR')"]
    EmployeeTable --> H2["Fragment 2 (DeptID = 'Sales')"]
```
2. Vertical Fragmentation:
o Concept: Divides a relation into subsets of attributes (columns) plus the primary
key to link back to the original relation. Each fragment has a subset of the original
columns.
o Goal: To improve performance by allowing queries to access only relevant
columns, reducing I/O.
o Example: EMPLOYEE table.
EMP_PERSONAL (EmpID, Name, Address, Phone)
EMP_JOB (EmpID, DeptID, Salary, JobTitle)
o Diagram:

```mermaid
graph TD
    EmployeeTable["Employee (EmpID, Name, DeptID, Salary, Address)"] --> V1["Fragment 1 (EmpID, Name, Address)"]
    EmployeeTable --> V2["Fragment 2 (EmpID, DeptID, Salary)"]
```
3. Mixed (Hybrid) Fragmentation:
o Concept: A combination of horizontal and vertical fragmentation. A relation is
first horizontally fragmented, and then some (or all) of these fragments are
vertically fragmented. Or vice-versa.
o Example: First fragment EMPLOYEE by DEPARTMENT_ID (horizontal), then
vertically fragment EMP_HR into EMP_HR_PERSONAL and EMP_HR_JOB.
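The three fragmentation styles above can be sketched in plain Python over an in-memory relation (the table contents and helper-function names are illustrative, not part of any DBMS API):

```python
# Sketch of horizontal, vertical, and mixed fragmentation of an
# in-memory EMPLOYEE relation. All data values are invented.
employee = [
    {"EmpID": 1, "Name": "Asha",  "DeptID": "HR",    "Salary": 50000},
    {"EmpID": 2, "Name": "Ravi",  "DeptID": "Sales", "Salary": 60000},
    {"EmpID": 3, "Name": "Meena", "DeptID": "HR",    "Salary": 55000},
]

def horizontal_fragment(relation, predicate):
    """Subset of tuples satisfying a predicate; schema is unchanged."""
    return [row for row in relation if predicate(row)]

def vertical_fragment(relation, attributes, key="EmpID"):
    """Subset of attributes; the key is kept so fragments can be rejoined."""
    cols = [key] + [a for a in attributes if a != key]
    return [{a: row[a] for a in cols} for row in relation]

emp_hr  = horizontal_fragment(employee, lambda r: r["DeptID"] == "HR")
emp_job = vertical_fragment(employee, ["DeptID", "Salary"])
# Mixed fragmentation: horizontal first, then vertical on a fragment.
emp_hr_job = vertical_fragment(emp_hr, ["DeptID", "Salary"])

print(len(emp_hr))         # 2 tuples in the HR fragment
print(sorted(emp_job[0]))  # ['DeptID', 'EmpID', 'Salary']
```

The key column is retained in every vertical fragment precisely so the original relation can be reconstructed by a join on that key.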
Allocation
The process of deciding at which site(s) each fragment (or non-fragmented relation) will be
stored.
1. Non-redundant Allocation:
o Concept: Each fragment is stored at exactly one site. There are no replicated
copies.
o Pros: Simplest to manage, no replication consistency issues.
o Cons: Low reliability (if site fails, data is unavailable), lower availability,
potentially slower query performance if data is remote.
o Diagram:

```mermaid
graph LR
    F1[Fragment 1] --> S1(Site 1)
    F2[Fragment 2] --> S2(Site 2)
    F3[Fragment 3] --> S3(Site 3)
```
2. Redundant Allocation:
o Concept: One or more fragments are stored at multiple sites (replicated). This is
used for reliability and performance.
o Types:
Replicated Allocation (Full Replication): Every fragment (or the entire
database) is stored at every site.
Pros: High availability, fast read queries (can query local copy).
Cons: High update cost (must update all copies), high storage cost,
complex concurrency control.
Diagram:

```mermaid
graph LR
    DB[Database] --> S1(Site 1)
    DB --> S2(Site 2)
    DB --> S3(Site 3)
```

Partially Replicated Allocation: Only selected fragments are stored at
multiple sites, typically those read frequently from several locations;
the rest are stored at a single site.
Example (Fragmentation, Allocation, and Replication combined):

Horizontal Fragmentation:
o EMP_CHENNAI: (EmpID, Name, City, Dept, Salary) where City = 'Chennai'
o EMP_DELHI: (EmpID, Name, City, Dept, Salary) where City = 'Delhi'
Allocation:
o EMP_CHENNAI allocated to Site_Chennai.
o EMP_DELHI allocated to Site_Delhi.
Replication (Partial): If EMP_CHENNAI is frequently accessed from Delhi, a copy might
be allocated to Site_Delhi as well.
o EMP_CHENNAI to Site_Chennai (primary)
o EMP_DELHI to Site_Delhi (primary)
o EMP_CHENNAI (replicated copy) to Site_Delhi
Query Processing: The activities involved in retrieving data from a database. In a DDBMS, this
becomes more complex because the data is spread across sites. The main objectives are:
1. Minimize Response Time: Reduce the time elapsed between submitting a query and
receiving the result.
2. Minimize Total Cost: Minimize the sum of I/O cost, CPU cost, and communication cost
(which is dominant in distributed systems).
3. Maximize Throughput: Handle as many queries as possible per unit of time.
Query Decomposition
The initial phase of query processing that transforms a high-level query (e.g., SQL) into an
equivalent relational algebra expression, checks its validity, and performs initial simplifications.
Steps:
1. Parsing and Translation: Translate the SQL query into an internal
representation (e.g., parse tree, relational algebra tree).
2. Semantic Analysis: Check for correctness (e.g., relations and attributes exist,
type compatibility).
3. Query Simplification: Remove redundant predicates, eliminate common
subexpressions.
4. Query Restructuring: Apply algebraic equivalences to transform the query into
a form that might be easier to optimize (e.g., push selections/projections down the
tree).
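The payoff of pushing a selection below a join can be seen in a toy example (the relation contents are invented; this is an illustration, not a real optimizer):

```python
# Toy illustration: pushing a selection below a join shrinks the
# intermediate result while producing the same answer.
emp  = [{"EmpID": i, "DeptID": "Sales" if i % 2 else "HR"} for i in range(1, 101)]
dept = [{"DeptID": "HR", "City": "Chennai"}, {"DeptID": "Sales", "City": "Delhi"}]

# Plan 1: join first, select afterwards -> 100 tuples flow through the join.
joined  = [{**e, **d} for e in emp for d in dept if e["DeptID"] == d["DeptID"]]
result1 = [t for t in joined if t["DeptID"] == "HR"]

# Plan 2: push the selection down -> only 50 tuples ever reach the join.
emp_hr  = [e for e in emp if e["DeptID"] == "HR"]
result2 = [{**e, **d} for e in emp_hr for d in dept if e["DeptID"] == d["DeptID"]]

assert result1 == result2           # same answer either way
print(len(joined), len(emp_hr))     # 100 vs 50 tuples processed by the join
```

In a distributed setting the saving is even larger, because the smaller intermediate result is also what must be shipped over the network.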
Query Optimization: The process of choosing the most efficient execution plan for a query. In
distributed systems, communication cost is typically the dominant factor.
Example (Distributed Join Optimization): Relations R(A, B) at Site 1 and S(B, C) at Site 2.
Query: R JOIN S. Candidate plans include:
1. Ship all of R to Site 2 and perform the join there.
2. Ship all of S to Site 1 and perform the join there.
3. Semijoin: ship the projection of S on B to Site 1, use it to filter out the tuples of R
that cannot match, then ship only the matching tuples of R to Site 2 for the final join.
The optimizer compares the estimated costs (dominated by communication cost) of these and
other possible plans to choose the best one.
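As a back-of-the-envelope sketch, the communication cost of shipping R, shipping S, or running a semijoin can be compared directly (all byte sizes below are invented for illustration):

```python
# Communication-cost comparison for R(A,B) at Site 1 joining S(B,C) at
# Site 2. All sizes (in bytes) are made up for the example.
size_R      = 1_000_000   # full relation R
size_S      =   200_000   # full relation S
size_S_B    =    10_000   # projection of S on the join attribute B
size_R_semi =    50_000   # tuples of R that actually match S on B

plans = {
    # Ship all of R to Site 2 and join there.
    "ship R to Site 2": size_R,
    # Ship all of S to Site 1 and join there.
    "ship S to Site 1": size_S,
    # Semijoin: send pi_B(S) to Site 1, filter R, send matches to Site 2.
    "semijoin": size_S_B + size_R_semi,
}

best = min(plans, key=plans.get)
print(best, plans[best])   # semijoin 60000
```

With these numbers the semijoin wins because the projection plus the matching tuples (60,000 bytes) is far smaller than either full relation; with different selectivities the ranking can flip, which is exactly what the optimizer estimates.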
Transaction: A logical unit of work that accesses and possibly modifies the contents of a
database. It is a sequence of operations (read, write, update, delete) that are performed as a
single, atomic unit.
1. Atomicity:
o Concept: A transaction is treated as a single, indivisible unit of work. Either all of
its operations are completed successfully (committed), or none of them are
(aborted/rolled back). There is no "half-finished" state.
o Example: A money transfer from account A to account B involves two
operations: Debit A and Credit B. If Debit A succeeds but Credit B fails, the
entire transaction is rolled back, and A's balance is restored.
2. Consistency:
o Concept: A transaction brings the database from one consistent state to another
consistent state. It ensures that any data written to the database must be valid
according to all defined rules and constraints (e.g., integrity constraints, business
rules).
o Example: In a banking system, the sum of balances in all accounts must remain
constant before and after a transfer, assuming no money is created or destroyed.
3. Isolation:
o Concept: Transactions are executed in isolation from each other. The
intermediate state of a transaction is not visible to other concurrent transactions
until it commits. This prevents interference problems (e.g., dirty reads, non-
repeatable reads, phantom reads).
o Example: If two transactions simultaneously try to update the same account
balance, isolation ensures that one transaction completes before the other's
changes are applied, or they are processed in a way that avoids conflicts, giving
the impression of sequential execution.
4. Durability:
o Concept: Once a transaction is committed, its changes are permanent and will
survive any subsequent system failures (e.g., power loss, system crash). This is
typically ensured by writing changes to non-volatile storage (e.g., disk) and
logging.
o Example: After a money transfer transaction commits, even if the system crashes
immediately after, the updated balances in both accounts will persist when the
system recovers.
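The atomicity and consistency examples above can be sketched as a transfer that either applies both the debit and the credit or neither (the account balances are invented):

```python
# Minimal sketch of an atomic transfer: commit both operations or
# roll both back. The snapshot plays the role of undo-log information.
accounts = {"A": 100, "B": 50}

def transfer(src, dst, amount):
    snapshot = dict(accounts)          # undo information, like a log record
    try:
        if accounts[src] < amount:
            raise ValueError("insufficient funds")
        accounts[src] -= amount        # debit
        accounts[dst] += amount        # credit
    except Exception:
        accounts.clear()               # rollback: restore the snapshot
        accounts.update(snapshot)
        return False
    return True                        # commit

assert transfer("A", "B", 30)          # commits: A=70, B=80
assert not transfer("A", "B", 500)     # aborts: balances unchanged
print(accounts)                        # {'A': 70, 'B': 80}
```

Note that the consistency invariant (the total of all balances stays at 150) holds after both the committed and the aborted transaction.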
Types of Transactions
1. Flat Transactions:
o Concept: The traditional transaction model. A single, indivisible unit of work. If
any part fails, the entire transaction aborts.
o Pros: Simple to implement and manage.
o Cons: Lacks flexibility for complex applications, limits concurrency for long-
running tasks.
2. Nested Transactions:
o Concept: A transaction can contain sub-transactions. If a sub-transaction aborts,
its effects are rolled back, but the parent transaction can continue, potentially
trying an alternative sub-transaction. Only the top-level transaction's commit is
permanent.
o Pros: Increased modularity, improved concurrency (sub-transactions can run in
parallel), better fault tolerance.
o Cons: More complex recovery and concurrency control.
3. Long-Duration (Long-Lived) Transactions:
o Concept: Transactions that execute for a long time (minutes, hours, or days),
often involving human interaction or external events. They violate the traditional
isolation property to allow other transactions to progress.
o Challenges: Traditional locking mechanisms would hold resources for too long,
causing poor concurrency.
o Approaches: Compensating transactions, sagas, loosening ACID properties (e.g.,
using weaker isolation levels), multi-version concurrency control.
o Example: Workflow management, CAD/CAM systems, complex scientific
simulations.
Serializability
Concept: The main correctness criterion for concurrency control. It ensures that the
concurrent execution of multiple transactions is equivalent to some serial (sequential)
execution of those same transactions.
Importance: If an execution is serializable, it means the database remains consistent,
even with concurrent access.
Global Serializability: In a DDBMS, not only must local executions at each site be
serializable, but their combined effect must also be globally serializable. This is often
achieved by ensuring that the order of commits of global transactions is the same at all
participating sites.
Timestamp-Based Concurrency Control

Concept: Each transaction is assigned a unique timestamp at its start. This timestamp
determines the serial order of transactions. Operations are validated based on these
timestamps.
Timestamp Ordering (TO):
o Each data item has a Read Timestamp (RTS) and a Write Timestamp (WTS)
indicating the timestamp of the last transaction that read/wrote it.
o Read Operation (T attempts to read X): If TS(T) < WTS(X), T is trying to read
an "old" version of X that has already been overwritten by a younger transaction.
T aborts and restarts with a new timestamp. Otherwise, read X and update RTS(X)
= max(RTS(X), TS(T)).
o Write Operation (T attempts to write X): If TS(T) < RTS(X) or TS(T) <
WTS(X), T is trying to write an "old" version or overwrite a value already
read/written by a younger transaction. T aborts and restarts. Otherwise, write X
and update WTS(X) = TS(T).
Pros: Does not cause deadlocks (transactions are simply aborted).
Cons: Can lead to cascade aborts, low concurrency for certain workloads, high restart
rate.
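The TO read/write rules above reduce to a few comparisons per data item; a minimal sketch (class and function names are illustrative):

```python
# Sketch of basic timestamp ordering. Each data item tracks the
# timestamps of its youngest reader (RTS) and youngest writer (WTS).
class Item:
    def __init__(self):
        self.rts = 0   # timestamp of the youngest transaction that read it
        self.wts = 0   # timestamp of the youngest transaction that wrote it

def to_read(ts, item):
    """Return True if the read is allowed; False means T must abort."""
    if ts < item.wts:                  # overwritten by a younger transaction
        return False
    item.rts = max(item.rts, ts)
    return True

def to_write(ts, item):
    """Return True if the write is allowed; False means T must abort."""
    if ts < item.rts or ts < item.wts: # a younger transaction got there first
        return False
    item.wts = ts
    return True

x = Item()
assert to_write(5, x)       # T5 writes X            -> WTS(X) = 5
assert not to_read(3, x)    # T3 is older than the writer: abort
assert to_read(7, x)        # T7 reads X             -> RTS(X) = 7
assert not to_write(6, x)   # T6 is older than the reader: abort
```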
Deadlock Management
Deadlock: A state where two or more transactions are indefinitely waiting for each other to
release resources (locks) that they need.
Example (Deadlock): T1 locks item X and then requests a lock on Y, while T2 locks Y and then
requests a lock on X. Each transaction waits for the other to release its lock, so neither can
ever proceed.
1. Deadlock Prevention:
o Concept: Design the system to ensure deadlocks can never occur.
o Techniques:
Pre-claiming: Transactions must acquire all necessary locks at once at the
beginning. If any lock is unavailable, none are acquired.
Ordering of Resources: Impose a total ordering on all resources.
Transactions must request locks in increasing order of resource numbers.
Wait-Die: If TS(T_i) < TS(T_j) (T_i is older) and T_i requests a lock
held by T_j, T_i waits. If TS(T_i) > TS(T_j) (T_i is younger), T_i dies
(aborts) and restarts.
Wound-Wait: If TS(T_i) < TS(T_j) (T_i is older) and T_i requests a
lock held by T_j, T_j is wounded (aborts) and releases its lock. If TS(T_i)
> TS(T_j) (T_i is younger), T_i waits.
o Pros: Guarantees no deadlocks.
o Cons: Can lead to low resource utilization or unnecessary aborts.
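The Wait-Die and Wound-Wait rules above each reduce to a single timestamp comparison; a sketch (smaller timestamp means older transaction):

```python
# Sketch of the two timestamp-based deadlock-prevention rules. The
# return value describes what happens when the requester hits a lock
# held by another transaction.
def wait_die(ts_requester, ts_holder):
    # Older requester waits; younger requester dies (aborts and restarts
    # later with its original timestamp).
    return "wait" if ts_requester < ts_holder else "die"

def wound_wait(ts_requester, ts_holder):
    # Older requester wounds (aborts) the holder; younger requester waits.
    return "wound holder" if ts_requester < ts_holder else "wait"

assert wait_die(1, 5) == "wait"            # old T1 waits for young T5
assert wait_die(5, 1) == "die"             # young T5 aborts
assert wound_wait(1, 5) == "wound holder"  # old T1 preempts young T5
assert wound_wait(5, 1) == "wait"          # young T5 waits
```

In both schemes the older transaction is never aborted, which guarantees that every transaction eventually makes progress.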
2. Deadlock Detection and Recovery:
o Concept: Allow deadlocks to occur, detect them, and then recover by aborting
one or more transactions.
o Steps:
1. Detection: Maintain a Wait-For Graph (WFG). Nodes are transactions,
directed edge T_i -> T_j exists if T_i is waiting for a resource held by
T_j. A cycle in the WFG indicates a deadlock.
2. Recovery: If a cycle is detected, select a "victim" transaction to abort. The
victim releases its locks, allowing other transactions to proceed. The
victim is then restarted.
Victim Selection Criteria: Minimum cost, least progress,
youngest transaction, etc.
o Distributed Deadlock Detection: More complex.
Centralized: A single site collects local WFGs from all sites and builds a
global WFG. Single point of failure.
Distributed: Sites cooperate to detect cycles. Each site maintains its local
WFG and exchanges probes/messages with other sites (e.g., the Chandy-
Misra-Haas edge-chasing algorithm).
Hierarchical: Combines centralized and distributed approaches for large
systems.
o Pros: Higher resource utilization, more efficient if deadlocks are rare.
o Cons: Performance overhead of detection, cost of aborting transactions.
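At a single site, the detection step above is just cycle detection in the wait-for graph; a minimal sketch:

```python
# Deadlock detection as cycle detection in a wait-for graph (WFG).
# An edge "T_i" -> "T_j" means T_i is waiting for a lock held by T_j.
def has_cycle(wfg):
    """Depth-first search; a back edge to a node on the current path
    means there is a cycle, i.e. a deadlock."""
    visiting, done = set(), set()

    def dfs(node):
        visiting.add(node)
        for nxt in wfg.get(node, []):
            if nxt in visiting:              # back edge: deadlock found
                return True
            if nxt not in done and dfs(nxt):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(t) for t in wfg if t not in done)

# T1 waits for T2 and T2 waits for T1: the classic two-transaction deadlock.
assert has_cycle({"T1": ["T2"], "T2": ["T1"]})
# T1 waits for T2, which waits for nobody: no deadlock.
assert not has_cycle({"T1": ["T2"], "T2": []})
```

The distributed variants differ only in how the edges of this graph are gathered: centrally at one site, or piecewise via probe messages.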
Reliability: The probability that a system will operate without failure for a specified time
interval. Availability: The fraction of time that a system is available for use (often estimated
as MTBF / (MTBF + MTTR)). Fault Tolerance: The ability of a system to continue operating
correctly even in the presence of failures. DDBSs are prone to more types of failures than
centralized systems because of their distributed components and network links.
1. Transaction Failures:
o Logical Errors: Bugs in application programs or database integrity violations.
o System Errors: Software bugs, resource exhaustion.
o User Errors: Incorrect input, accidental deletion.
o Action: Transaction rollback/abort.
2. System Failures (Site Failures):
o Concept: A single computer site in the DDBMS crashes (e.g., power failure,
operating system crash, hardware malfunction). The site stops functioning, but
other sites might still be operational.
o Impact: Transactions active at the failed site are lost. Data exclusively at that site
becomes unavailable.
o Recovery: Requires restarting the site, restoring its local database from
logs/backups, and coordinating with other sites for global consistency.
3. Media Failures:
o Concept: Non-volatile storage (disks) containing database data or logs becomes
corrupted or inaccessible.
o Impact: Loss of persistent data.
o Recovery: Requires restoring data from backups and replaying logs. Data
replication across sites is crucial for quick recovery.
4. Communication Failures (Network Partitioning):
o Concept: The network connecting sites fails, leading to lost messages or network
partitioning (the network splits into two or more disconnected components). Sites
within a component can communicate, but not with sites in other components.
o Impact: Transactions requiring communication across partitions cannot commit.
Can lead to "split-brain" syndrome if not handled carefully, where sites in
different partitions independently update data, leading to inconsistency.
o Recovery: Requires detecting partitions, resolving conflicts, and merging
consistent states after the network is restored.
Parallel Database System: A database system that runs on multiple processors and disks,
designed to perform operations in parallel, significantly improving query processing and
transaction throughput.
1. Shared-Memory Architecture:
o Concept: Multiple CPUs share a common main memory and common disks.
o Pros: Easy to program and load balance, low communication overhead (via
shared memory).
o Cons: Limited scalability (shared memory becomes a bottleneck), not fault-
tolerant beyond single node.
o Diagram:

```mermaid
graph TD
    CPU1[CPU 1] --> SharedMemory(Shared Memory)
    CPU2[CPU 2] --> SharedMemory
    CPU3[CPU 3] --> SharedMemory
    SharedMemory --> SharedDisk(Shared Disk Array)
```
Data Placement

How data is distributed across disks and nodes in a parallel database. Also known as Data
Partitioning or Data Distribution. Partitioning enables intra-query parallelism, e.g. scanning
each partition on a different node and combining the results:
```mermaid
graph TD
    Table[Large Table] --> P1[Partition 1]
    Table --> P2[Partition 2]
    Table --> P3[Partition 3]
    P1 --> Scan1["Scan Op (Node 1)"]
    P2 --> Scan2["Scan Op (Node 2)"]
    P3 --> Scan3["Scan Op (Node 3)"]
    Scan1 & Scan2 & Scan3 --> Combine[Combine Results]
```
Load Balancing
Concept: Distributing the workload evenly across the available resources (processors,
disks) in a parallel database system to prevent bottlenecks and maximize performance.
Techniques:
o Dynamic Data Migration: Move data partitions to less loaded nodes.
o Dynamic Query Assignment: Assign incoming queries to nodes with lower
current workload.
o Adaptive Query Execution: Adjust execution plans based on real-time load.
o Hashing/Range Partitioning: Used to initially distribute data evenly.
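The hashing and range partitioning mentioned above can be sketched as placement functions (the node count and the range boundaries are illustrative choices):

```python
# Sketch of hash and range partitioning for placing tuples on nodes.
N_NODES = 3

def hash_partition(key):
    """Spreads keys roughly uniformly across nodes; good for point
    lookups and load balance, poor for range scans."""
    return hash(key) % N_NODES

def range_partition(key, boundaries=(100, 200)):
    """Keys below 100 -> node 0, 100-199 -> node 1, the rest -> node 2.
    Good for range scans, but skewed keys can overload one node."""
    for node, upper in enumerate(boundaries):
        if key < upper:
            return node
    return len(boundaries)

assert range_partition(42)  == 0
assert range_partition(150) == 1
assert range_partition(999) == 2
assert 0 <= hash_partition("EmpID-7") < N_NODES
```

Note that Python's built-in `hash` for strings is salted per process, so the hash placement varies between runs; a real system would use a stable hash so every node computes the same placement.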
Database Clusters

A database cluster is a group of interconnected machines, each running a DBMS instance, that
together appear to applications as a single database. Clusters are commonly used for high
availability (failover between nodes) and for load balancing of queries.
Object-Oriented Database Concepts

1. Object:
o Concept: A discrete entity that combines both data (attributes/state) and behavior
(methods/operations) into a single unit.
o Example: An Employee object with attributes like name, salary and methods
like calculate_bonus(), update_salary().
2. Class:
o Concept: A blueprint or template for creating objects. It defines the structure
(attributes) and behavior (methods) that all objects of that class will have.
o Example: The Employee class.
3. Encapsulation:
o Concept: Bundling data and methods that operate on the data within a single unit
(the object), and restricting direct access to some of the object's components. It
hides the internal implementation details.
o Benefit: Data integrity, modularity, easier maintenance.
4. Inheritance:
o Concept: A mechanism by which one class (subclass/child class) can acquire the
attributes and methods of another class (superclass/parent class). It promotes code
reuse and creates a hierarchy.
o Example: Manager class inherits from Employee class. Manager has all
properties of Employee plus its own specific ones (e.g., department_managed).
5. Polymorphism:
o Concept: The ability of an object to take on many forms. Specifically, the ability
of methods to behave differently depending on the object on which they are
called.
o Example: A calculate_salary() method might exist in both Employee and
Manager classes, but its implementation might differ for managers (e.g., includes
a bonus calculation).
6. Object Identity:
o Concept: A unique, system-generated identifier for each object, independent of
its attribute values. This ID remains immutable even if the object's state changes.
o Importance: Allows objects to be referenced directly and enables complex object
relationships without relying on primary keys (which can change).
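The six concepts above fit in one short sketch; the Employee/Manager names follow the examples in the text, and the bonus figure is invented:

```python
# One sketch covering object, class, encapsulation, inheritance,
# polymorphism, and object identity.
import itertools

_oid = itertools.count(1)   # object identity: system-generated, immutable

class Employee:             # a class: blueprint for objects
    def __init__(self, name, salary):
        self.oid = next(_oid)        # identity, independent of attribute values
        self.name = name             # state encapsulated with behaviour
        self._salary = salary        # leading underscore: not for direct access
    def calculate_salary(self):      # a method (behaviour)
        return self._salary

class Manager(Employee):    # inheritance: Manager acquires Employee's features
    def __init__(self, name, salary, department_managed):
        super().__init__(name, salary)
        self.department_managed = department_managed
    def calculate_salary(self):      # polymorphism: same call, different result
        return super().calculate_salary() + 10_000   # invented manager bonus

e, m = Employee("Asha", 50_000), Manager("Ravi", 50_000, "Sales")
assert e.oid != m.oid                # distinct identities despite similar state
assert [p.calculate_salary() for p in (e, m)] == [50_000, 60_000]
```

Calling `calculate_salary()` through a common interface while each class supplies its own behaviour is exactly the polymorphism an OODBMS stores and dispatches inside the database.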
Distributed Object Database Design

1. Object Fragmentation:
o Horizontal Fragmentation: Grouping objects of the same class based on a
predicate (e.g., Employee objects with department='Sales').
o Vertical Fragmentation: Less common for objects as it breaks encapsulation.
Might involve grouping attributes of an object into different fragments, requiring
object reconstruction.
o Class Partitioning: Partitioning a class's instances.
o Schema Partitioning: Distributing parts of the schema across sites.
2. Object Allocation: Deciding where to store object fragments or full objects (similar to
relational allocation: non-redundant, replicated).
3. Complex Object Distribution: Handling objects that contain references to other objects
(complex nested structures). This requires careful consideration during fragmentation and
allocation to minimize distributed object access.
Architectural Issues
1. Object Naming and Identification: Ensuring global unique object IDs across all sites.
2. Schema Integration: Integrating heterogeneous object schemas.
3. Object Migration: Moving objects between sites.
4. Distributed Object Access: Efficiently locating and accessing objects that might be
fragmented or replicated across multiple sites.
5. Concurrency Control and Recovery: Adapting distributed transaction management
(2PC, locking) for object-oriented data structures and complex methods.
6. Query Processing for Complex Objects: Optimizing queries that traverse object
relationships.
Object Management
Object Creation and Deletion: Managing unique object IDs and distributed storage.
Object Versioning: Supporting multiple versions of an object (e.g., for design
applications).
Object Caching: Caching frequently accessed objects at client or intermediate sites to
reduce network traffic.
Object Granularity: Deciding whether to distribute entire objects, or sub-objects (if they
can be meaningfully fragmented).
Storage Models:
o Centralized Object Store: All objects stored in one central OODBMS (less
distributed).
o Fragmented Object Store: Objects or object fragments are distributed across
sites.
o Replicated Object Store: Copies of objects/fragments are stored at multiple sites
for availability and performance.
Addressing Objects: Using Object IDs (OIDs) to locate objects regardless of their
physical location.
Clustering Objects: Storing related objects together (e.g., parent-child objects) to
optimize access performance and minimize I/O for traversals.
Definition: An object-oriented data model applies concepts from object-oriented programming to
database design. It stores data as objects, enabling more complex data types and direct
representation of real-world entities and their behaviors. Its key features include
encapsulation, inheritance, and object identity.

OODBMS vs. ORDBMS Comparison
| Feature | OODBMS (Object-Oriented DBMS) | ORDBMS (Object-Relational DBMS) |
|---|---|---|
| Core Paradigm | Pure object-oriented (objects, classes, methods) | Relational model extended with object features |
| Data Model | Objects, classes, inheritance, encapsulation | Tables, rows, columns + object-like features |
| Schema | Defined by classes and their relationships | Defined by tables and their relationships |
| Object Identity | Fundamental, system-generated OIDs for every object | Primary keys (value-based) for row identity |
| Complex Data Types | Directly supports complex nested objects, collections | Supports user-defined types, arrays, LOBs, but less naturally integrated |
| Methods/Behavior | Stores methods as part of the schema, directly callable | Stores data; methods usually in the application layer (though some can store functions/procedures) |
| Inheritance | Directly supported and managed within the DBMS | Limited support, often simulated with tables |
| Query Language | Object Query Language (OQL) or language-specific extensions | SQL extensions (e.g., SQL:1999/SQL3) for objects |
| Persistence | Transparent persistence for programming languages | Requires Object-Relational Mapping (ORM) tools for object persistence |
| Maturity | Niche, less mature in widespread adoption | Highly mature, widely used, industry standard |
| Performance | Potentially faster for complex object traversals | Excellent for structured data, joins, and aggregations |
| Market Adoption | Limited, mainly in specialized domains (CAD/CAM, telecom) | Dominant in enterprise applications |
| Example Products | Versant, GemStone/S, Objectivity/DB | Oracle, PostgreSQL, IBM DB2, SQL Server |
| Impedance Mismatch | Low (direct mapping from OO language to DB) | High (need ORM to map objects to tables) |
Conclusion:
OODBMS: Ideal when the primary requirement is direct storage and manipulation of
complex objects, and close integration with object-oriented programming languages.
Used in specialized applications.
ORDBMS: A pragmatic evolution of relational databases, extending them to handle
object-like features while retaining the strengths of the relational model. Dominant in
most business applications.