Administration OPEN DATABASE 2024
Course outline
Chapter 1:
General information on the administration of databases
REMINDER: What is a database?
A database is a set of data stored in a structured, organized way, with as little redundancy as possible.
Why use databases?
- To exploit data
- Non-relational or NoSQL
What is a DBMS?
A DBMS is software that stores, manages, creates and manipulates data on a storage medium.
It supports the creation, description and structuring of data, the modification and updating of data, and the querying and searching of data, and it ensures the administration of the database (user management, access management, etc.).
DBMSs can also be proprietary systems owned by computer manufacturers and software publishers.
Definition
The DataBase Administrator (DBA) is the person responsible for the proper functioning of database servers, essentially relational (OLTP) or decision-support (OLAP), covering database design, validation testing, stakeholder coordination, operation, protection and usage control.
Primary objectives
- Ensure physical independence between the physical level and the conceptual level (you can modify servers, add or remove them, or transfer the data to the cloud without affecting the overall modeling of the database).
- Ensure logical independence between the conceptual level of the data and the external level (part of the data can be impacted without really affecting the conceptual schema).
- Allow data to be manipulated using non-procedural languages (understandable by all).
- Ease data administration thanks to facilitation tools (grant access, remove access, change the data structure).
Secondary objectives
- Ensure data consistency (an age limit or a minimum wage in a company, for example).
• External level: the vision that users or computer programs have of the data they can access; hence the notion of a view (a subset of the database). It ensures data security.
• Conceptual level: provides a global view of the database and the relationships between objects and the outside world (hence the conceptual schema).
• Physical (internal) level: corresponds to the data at the storage-media level and its structure.
All the schemas are saved in the Data Dictionary, which is distinct from the data in the database.
a) Data definition
The DBMS must provide the commands to create and define the different schemas, create the data dictionary, and create the database according to the data dictionary.
b) Data handling
The DBMS must provide the necessary elements for searching and updating the data.
c) Data transformation
The DBMS must be able to move from one level to another through level-crossing rules.
The DBMS must ensure compliance with integrity and uniqueness constraints (for example, the identity card number).
Transaction: a set of updates (queries) that run one after the other.
6. DBMS Architecture
What is a server?
A server is a machine that makes available to a set of users the data and resources it contains.
• 1-tier architecture
In the single-tier architecture, the user sits directly on the DBMS and uses it. All changes made here are applied directly to the DBMS itself. It does not provide practical tools for end users; database designers and programmers generally prefer a single-tier architecture.
• 3-tier architecture
A 3-tier architecture separates its tiers from each other based on the complexity of the users and
how they use the data in the database. This is the most widely used architecture for designing a
DBMS.
• Database (Data) Tier - At this level, the database resides with its query processing languages. We also have the relationships that define the data and their constraints at this level.
• Application (Middle) Tier - At this level reside the application server and the programs that access the database. For a user, this application tier presents an abstract view of the database. End users are unaware of the existence of the database beyond the application. At the other end, the database tier knows no users beyond the application tier. The application layer therefore sits in the middle and acts as a mediator between the end user and the database.
• User (Presentation) Tier - End users operate at this level, and they know nothing about the existence of the database beyond this layer. At this layer, multiple views of the database can be provided by the application. All views are generated by applications that reside in the application tier.
7. Tasks of an administrator
- Ensures that different users can easily use the database and that the system works as it should.
Chapter 2:
MYSQL DATABASE ADMINISTRATION
1. Versions and types of licenses
(For those who do not yet have MySQL on their machine)
MySQL Enterprise is a paid, licensed product, so it is better to download the Community edition, which needs no license.
After downloading, click Custom to customize the installation, then choose the type of setup. Choose a Development configuration, i.e. the database will not be accessible remotely, and by default leave the basic information about the ports.
Then choose the user name and password (root is better and easier).
For the mysql command to work at a command prompt or in PowerShell, we need to change the environment variables in your computer's settings, specifically the PATH, and add the path to mysql.exe.
(The same applies to MariaDB.)
2. Administration tools
Databases act as a central repository of information that authorized Web programs or services can
access, supplement, or modify. Professionals called database administrators are in charge of setting
up and maintaining these specialized software servers. Database administration tools are specialized
programs that allow them to access, modify, and update database software and configurations.
In the world of computer programming, a server is a program whose purpose is to provide
information to other software called clients. A server can run on its own machine, which clients access over the Internet or a local network. A server can also run as a background program on the
same machine as its client programs. The database software functions as a server that listens on a
specific port for incoming connections from client programs that want to access the information it
holds.
The database administration tools that professionals use to keep databases running are separate programs and are not part of the database software itself. Database administrators use these tools to issue commands that a particular database software is programmed to recognize and respond to with a corresponding action. Although the exact syntax of these commands varies from one database software to another, actions that can be performed with them include:
Add and remove credentials that programs can use to connect to a particular database
These utility programs can have full graphical user interfaces or be text-based terminal programs.
Database administration tools that use a text-based interface require the user to type the full syntax
for each servicing command. Programs with a full GUI allow users to configure commands by
clicking buttons and selecting different options, and then translate the action into the syntax needed
for a command in the database.
Database administration tools also include software that allows you to remotely connect to a
computer and execute system commands. These are necessary when the administrator needs to
update database software that runs on a machine without a computer monitor or GUI. Before an
administrator can apply an update, the administrator must shut down the database server, which is
equivalent to exiting a running application. Remote Connection software allows a database
administrator to access the computer on which the database is running to stop the database server,
update the software, and then restart the database software.
The most efficient system administrators use the right database management tools.
We live in the age of data and every business needs a team of system administrators who can
manage and organize highly complex databases.
As a database administrator or system administrator, you should always use the best tools available
on the market. It's all about efficiency and a flawless environment.
There are tons of outdated tools that are still in use in the industry. So, to help beginners and
experts, the following tools will help you do a remarkable job in database management.
a. PgAdmin4
You can't talk about database management tools without considering PgAdmin.
PgAdmin is the most powerful administration and development platform for PostgreSQL. If you
are just starting out and the command line administration seems too scary, PgAdmin can help you
get started.
Whenever I start a Django Project with PostgreSQL as my database, I use this tool to manage it.
That's because it's a simple and safe way to recover (or delete tables) if I've messed up the
migrations.
You can download the latest version of your operating system from the PgAdmin download page.
Features
Available for Windows, Mac and Linux
b. Adminer
Adminer, formerly known as phpMinAdmin, is a simple-to-use database manager, written in and configurable with PHP. Its current version supports several databases, both relational and non-relational.
The best thing about this tool is its simplicity. This is because of its distraction-free user interface
(UI) and the fact that you can configure it via a single PHP file.
Apart from that, it is highly customizable, has many designs and plugins that are easy to use.
The 5 steps to set up and run this tool on your website are:
Download the PHP file from the Adminer GitHub page, and check the name with ls
$ ls
adminer-4.8.0.php
Make sure you have PHP (5, 7 or 8) installed on your server.
Upload the single PHP file to the root folder of your server.
Open https://yourdomain.com/adminer-4.8.0.php
Connect to your database with your username and password.
And that's all you need to install Adminer on your server.
Note: If you are a phpMinAdmin user, you need to update now. The official documentation explains the reasons for switching to Adminer.
Features
Support for PHP 5, 7 and 8
c. DbVisualizer
As its official website proclaims, DbVisualizer is the universal database tool. The reason is that this tool has all the features a team of developers, analysts, and DBAs needs.
It is written in Java and its cross-platform advantage allows you to learn one tool and forget about
the rest.
One of the best things about this tool is the support for almost all modern databases, such as Redshift, H2, Exasol, Vertica, Informix, Netezza, etc. Those that are not yet supported can be managed with a generic driver.
You can download the current version from the download page.
Features
Advanced SQL Editor
d. Quest
Quest Software, founded in 1987, remains one of the most trusted IT solution providers in today's
software development era.
Their main products are database management tools—for example, the Foglight monitoring and
optimization solutions or the Toad tool sets.
The best thing about Quest is the targeting of each of their products: each platform or database gets its own set of specific toolkits. For example, if your company relies heavily on the Cassandra database, you can opt for the specialized product Foglight for Cassandra.
In conclusion, if you are looking for a specialized set of tools to maintain and evolve your growing
database, Quest products are perfect.
Features
A specific tool for each platform
e. ManageEngine
They offer a specific solution for each type of database. So, if you're looking for a monitoring tool
that's optimized for your database, feel free to review ManageEngine solutions.
In addition, these monitoring systems discover database calls made by Java, Node, Ruby on Rails,
DotNET, and PHP backend applications.
Features
20+ databases supported
Visual dashboard
Monitoring SQL statements
Advanced Analytics
f. Sequel Pro
If you work on a Mac, Sequel Pro is the right database management tool for you. It is an open-source project that gives you direct access to local and remote MySQL databases.
The best part about this tool is its simplicity and the fact that it is really lightweight. As a result, you
can quickly set up this tool on any Mac machine.
g. phpMyAdmin
phpMyAdmin is a free administration tool built with PHP. It can perform a large number of operations on MySQL and MariaDB databases.
You can use the GUI or use SQL statements to perform proper database management.
This tool has won several awards from 2002 to today. In addition, it has extensive documentation and a book translated into English and Spanish.
Note: Be careful not to confuse phpMyAdmin with phpMinAdmin (the former name of Adminer), because they are totally different projects.
You can try a simple demo here or download it from the official website.
Database management has become one of the most in-demand skills of the last decade, due to the increasing amount of data every business needs to grow and succeed.
When looking for the best tools to effectively manage databases, there are both free and paid platforms. You can choose the one that suits you best based on your needs as a system administrator or the needs of the company you work for.
3. MySQL Architecture
MySQL is a relational database management system, free open-source software under the GNU license, now supported by Oracle Corporation. It is a fast, scalable, easy-to-use database management system. MySQL supports many operating systems like Windows, Linux, macOS, etc.
MySQL uses Structured Query Language (SQL) to manipulate, manage and retrieve data with the help of various queries. MySQL was developed by MySQL AB, a Swedish company, and is written in the C and C++ programming languages. It was created by Michael Widenius and David Axmark. It is often said that MySQL is named after the daughter of co-founder Michael Widenius, whose name is 'My'.
The architecture of MySQL describes the relations among the different components of the MySQL system. MySQL follows a client-server architecture, designed so that end users, the clients, can access resources from a computer, the server, using various networking services. The architecture of MySQL contains the following major layers:
Client
Server
Storage Layer (Engine)
Client Layer:
This layer is the topmost layer of the architecture. The client sends request instructions to the server through the client layer, via the command prompt or a GUI screen, using valid MySQL commands and expressions. If the expressions and commands are valid, the output is shown on the screen. Some important services of the client layer are:
Connection handling (when a client sends a request to the server, the server accepts it and the client is connected. Once connected, the client gets its own thread for its connection, and all the queries from the client side are executed through this thread.)
Authentication (authentication is performed on the server side when a client connects to the MySQL server. It is done with the help of a username and password.)
Security (after successful authentication, the server checks whether that particular client has the privileges to issue certain queries against the MySQL server.)
Server Layer:
The second layer of the MySQL architecture is responsible for all the logical functionality of the relational database management system. This layer is also known as the "brain of the MySQL architecture": the client sends request instructions to the server, and the server returns the output as soon as the instruction is matched. The various sub-components of the MySQL server are:
- Thread Handling: when a client sends a request, the server accepts it and the client is connected; the client then gets its own thread for its connection. This thread is provided by the thread-handling module of the server layer, which also handles the client-side queries executed by the thread.
- Parser: a parser is a software component that builds a data structure (a parse tree) from the given input. Before parsing, lexical analysis is done, i.e. the input is broken into a number of tokens. Once the data is available as these smaller elements, the parser performs syntax analysis and semantic analysis, after which the parse tree is produced as output.
- Optimizer: as soon as parsing is done, various optimization techniques are applied at the optimizer block. These techniques may include rewriting the query, choosing the order in which tables are scanned, and choosing the right indexes to use.
- Query Cache: the query cache stores the complete result set for an input query statement. Even before parsing, the MySQL server consults the query cache. When a client writes a query, if the query is identical to one in the cache, the server simply skips parsing, optimization and even execution, and just returns the output from the cache.
- Buffer and Cache: the cache and buffer store the previous queries asked by users. When a user writes a query, it first goes to the query cache, which checks whether the same query is available in the cache. If it is, the result is returned without involving the parser or the optimizer.
- Metadata Cache: the metadata cache is a reserved area of memory used for tracking information on databases, indexes and objects. The greater the number of open databases, indexes or objects, the larger the metadata cache size.
- Key Cache: a key cache is an index entry that uniquely identifies an object in a cache. By default, edge servers cache content based on the entire resource path and a query string.
Storage Layer:
The storage engine layer of the MySQL architecture makes it unique and most preferable for developers; thanks to this layer, MySQL is counted among the most widely used RDBMSs. In the MySQL server, different types of storage engines are used for different situations and requirements, such as InnoDB, MyISAM, NDB, etc. These are pluggable storage engines: the tables created by users are plugged into them.
4. Features of MySQL
The MySQL language is easy to use compared to programming languages like C, C++ and Java. By learning a few basic commands we can create and interact with a database.
MySQL has a data security layer that protects the data from intruders. Passwords are also encrypted in MySQL.
MySQL follows a client-server architecture, where the client requests commands and instructions and the server produces the output as soon as the instruction is matched.
MySQL is free to use under its Community edition, so we can download it from the MySQL website and work with it freely.
MySQL uses multithreading, which makes it scalable. It can handle large amounts of data; the default file size limit is 4 GB, but we can increase it according to our needs.
MySQL is considered one of the fastest databases, as measured by a large number of benchmark tests.
MySQL is very flexible because it supports a large number of embedded systems.
MySQL is compatible with various operating systems such as Windows, macOS and Linux.
MySQL allows transactions to be rolled back and committed, and supports crash recovery.
It has a low memory-leakage problem, which increases its memory efficiency.
MySQL 8.0 provides dual password support, a current password and a secondary password, which lets us set a new password.
To know which engine is used by each table, type the command:
SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES;
MySQL historically saved information with the MyISAM storage engine by default (since MySQL 5.5 the default is InnoDB), but there are others, including InnoDB (the best known), ISAM, Heap, NDB, Berkeley DB, Merge, Memory, etc. The choice of a storage engine is important, and the differences are sometimes not well known. The main differences between MyISAM and InnoDB to understand when making this choice are:
a) MyISAM
Advantages :
Here is a non-exhaustive list of arguments in favor of MyISAM:
It was MySQL's default storage engine (before version 5.5), and this type of storage is the oldest available with MySQL
Very fast to perform SELECT or INSERT queries
Supports full-text indexing. Offers much better performance when searching text fields
Simpler administration
Possibility to lock, but only at the level of a whole table (cf. table lock)
More flexible in terms of data integrity (but this can have its flip side...)
Disadvantages
The flexibility of MyISAM leads to some negative points when it comes to preserving data integrity, as shown in the points below:
No transactions
No foreign keys
More difficult to recover after a crash (data integrity can easily be compromised).
Switch to MyISAM
If you want to pass a table to MyISAM just run this SQL query:
ALTER TABLE `table` ENGINE=MYISAM;
b) InnoDB
Advantages
The main advantages of using InnoDB are its ability to preserve data integrity well, even during a server crash:
Manages transactions (sets of grouped queries) with BEGIN or START TRANSACTION, COMMIT, ROLLBACK, etc.
Manages foreign keys and integrity constraints
Supports ACID, ensuring that all the writes in a transaction succeed or fail together, even during a failure
Recovery system in the event of a crash, thanks to a log-replay system
Disadvantages
However, data integrity complicates the administration of a database.
Stricter regarding data integrity
Does not support full-text indexing (cf. full text index)
Administration a little more complex
Larger storage engine (It requires more resources and is slower)
Ability to lock at row level (cf. row lock) to insert or update a record
Interesting to note that it is more recent, although it is not necessarily a disadvantage
Switch to InnoDB
To pass a table to InnoDB just run the following query:
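ALTER TABLE `table` ENGINE=INNODB;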
It is important to know that it is possible to have some tables in MyISAM and others in InnoDB in the same database. This even improves performance in some cases.
MyISAM: preferred when it is mainly necessary to perform queries to read or insert data.
For websites, this is often the recommended solution.
InnoDB: preferred for systems that should not contain errors and require foreign keys. For
an application that manages important data, such as a banking application, this may be the
recommended storage engine.
8. Setting up InnoDB
The settings discussed in this section do not necessarily need to be changed from the default
configuration, but it is important that you have thought about their good value for your application.
• Memory cache size (innodb_buffer_pool_size): this is InnoDB's primary buffer pool, where frequently accessed data and indexes are stored. For a server dedicated to MySQL, it is common to allocate most of the server's memory to it (for example, about 25 GB for a server with 32 GB of physical memory). The main idea is that this cache avoids disk access: the slower the disk, the more important the cache size.
If your database is small (a few tens of GB, for example), it is quite simple to use a server with enough memory for all InnoDB data and indexes to fit in the cache. Otherwise, try to cache the useful part of the data and indexes, that is, the part that is frequently used by the application. It is indeed quite common to have, for example, a 500 GB database containing the history...
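As a minimal sketch (assuming MySQL 5.7 or later, where the buffer pool can be resized at runtime; the size shown is only an example):
SET GLOBAL innodb_buffer_pool_size = 8589934592; -- 8 GB, illustrative value
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';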
a. Replication
(As a presentation)
b. Usage
The use of InnoDB also differs from the classic use of MySQL. InnoDB tables and indexes use different access methods: each table has a «clustered index», and the data is on the same page as this index.
With InnoDB, all activity occurs in a transactional context. Even though MySQL commits transactions automatically by default, they remain transactions: MySQL actually performs a COMMIT at the end of each query.
To prevent MySQL from using this behavior, you can start the script with START TRANSACTION
(at a command prompt or in Workbench) and end it with COMMIT or ROLLBACK. Another
method is to set the autocommit variable to 0 or off at the beginning of each session. This value can
be set in the config file of the MySQL client.
SET AUTOCOMMIT = 0;
An InnoDB table is created by specifying ENGINE = INNODB at the end of the table-creation statement:
CREATE TABLE nom_table
(. . .)
ENGINE = INNODB;
If an error occurs while converting a table, check whether the table has a full-text index. In that case, rather than using ALTER TABLE, drop the table (taking care to have DUMPED it first), then re-import the data, using SET autocommit = 0 before the inserts and a COMMIT at the end.
Note that MyISAM table compression is not supported on InnoDB.
InnoDB support allows the use of transactions and foreign key constraints, and allows locks to be managed at the level of records and tables.
To illustrate the point, we will create two tables to use as examples. This is an oversimplified management example in which each record of the second table is linked to one and only one user.
Example
SET autocommit = 1;
CREATE DATABASE innodbtest;
USE innodbtest;
CREATE TABLE users (
  id int(11) NOT NULL auto_increment,
  nom varchar(25) default NULL,
  prenom varchar(25) default NULL,
  age int(11) default NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

CREATE TABLE storage (
  idstorage int(11) NOT NULL auto_increment,
  num char(3) default NULL,
  stock int(11) default NULL,
  PRIMARY KEY (idstorage),
  INDEX user_ref (stock),
  FOREIGN KEY the_user (stock) REFERENCES users(id) ON DELETE SET NULL
) ENGINE=InnoDB;

GRANT SELECT, INSERT, UPDATE, DELETE ON innodbtest.* TO user1@localhost;
GRANT SELECT, INSERT, UPDATE, DELETE ON innodbtest.* TO user2@localhost;
FLUSH PRIVILEGES;
FLUSH TABLES;
c. Transactions
Here we will talk about the transaction control language (TCL). These are SQL commands used to handle changes that affect database data. Basically, we use these commands within a transaction, or to create a stable point during database changes to which we can roll the database back if necessary.
TCL commands:
START TRANSACTION – starts the transaction
COMMIT – makes database changes permanent
ROLLBACK – rolls back a transaction in the event of an error and returns the database to its previous state (once you have committed, you can no longer roll back)
SAVEPOINT – defines a save point within a transaction
SET TRANSACTION – specifies the characteristics of the transaction
To cancel auto-commit you must use the START TRANSACTION command.
If we leave MySQL, we realize that the changes on the table have not been saved. It is therefore necessary to use the COMMIT command to save the changes; until this is done, changes on the server are temporary.
The START TRANSACTION command must be used again to disable auto-commit, because the ROLLBACK and COMMIT commands are single-use.
You can also create multi-level save points with SAVEPOINT and go back to any of them as the database changes.
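A minimal sketch of multi-level save points, reusing the users table from the earlier example (the values are hypothetical):
START TRANSACTION;
UPDATE users SET age = 30 WHERE id = 1;
SAVEPOINT sp1;
UPDATE users SET age = 35 WHERE id = 2;
ROLLBACK TO SAVEPOINT sp1; -- cancels only the second update
COMMIT;                    -- makes the first update permanent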
One of the goals of InnoDB is to be able, at any time, to return to the previous state of a table after a
transaction. In other words, only committed transactions are considered durable and their data saved
in tables.
By default, the transaction isolation level is REPEATABLE READ. Versions above 4.0.4 allow you to change this isolation level to READ UNCOMMITTED, READ COMMITTED or SERIALIZABLE.
d. Isolation levels
REPEATABLE READ reads and locks index ranges, while leaving the ability to perform inserts.
READ UNCOMMITTED reads records without regard to whether the latest version has been committed (dirty reads).
READ COMMITTED: a statement can only see rows committed before it starts.
SERIALIZABLE: the current transaction can only see rows committed before the first query or data-change statement was executed in the transaction.
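For illustration, the isolation level can be changed for the session with standard MySQL syntax:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT @@transaction_isolation; -- named @@tx_isolation before MySQL 8.0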
9. Table types
(To do as a report)
10. Indexes
In computer science, in databases, an index is a data structure used and maintained by the database
management system (DBMS) to allow it to quickly find data. Using an index simplifies and
accelerates the search, sort, join, or aggregation operations performed by the DBMS.
The index placed on a table will allow the DBMS to access records very quickly, depending on the
value of one or more fields.
An index is an automatically maintained structure that makes it easy to locate records in a file. The
use of indexes is based on the following observation: to find a book in a library, instead of
examining each book one by one (which corresponds to a sequential search), it is faster to consult
the catalog where they are classified by theme, author and title. Each entry in an index has a value
retrieved from the data and a pointer to its original location. A record can thus be easily found by
searching its location in the index.
An index can be ordered, hashed, dense or sparse:
• An ordered index contains a list of values retrieved from a table and sorted.
• In a hashed index, the values are transformed by a hash function.
• In a dense index, all records in a table are referenced.
• In a sparse index, only a part of the records are referenced.
The primary index of a table is the first index used to locate records. A table can also contain secondary indexes in addition to the primary index. Each table can have one or more indexes, and an index can be built from a single field or combine multiple fields.
Indexes are used by DBMSs for many operations: they make it easy to sort, search, group, and join. The most common index structure is the B-tree. Other structures exist but are rarely used: ISAM, hash tables or bitmaps.
To speed up operations, index trees are saved in a block with a very large number of ramifications:
Hard disks, on which indexes are saved, read data in blocks of several kilobytes and the time
required to read a block is usually much less than the time needed to locate it. With this
construction, finding a record in a batch of several million then requires only 2 or 3 operations.
Indexes can be added by a SQL command. The existence of indexes is crucial to speeding up data manipulation operations. Improper use of indexes is the main cause of disappointment: the database management system (DBMS) will not find the indexes necessary to efficiently perform the operations requested of it.
One reason for inappropriate use is that DBMS documentation on index usage is often vague and succinct, highlighting disadvantages rather than advantages.
In particular, indexes are used by the query optimizer:
• The optimizer is the component of DBMS that looks for the most cost-effective way to run a
query. The optimizer looks at the different possible scenarios and estimates the number of
operations needed for each scenario, then chooses the scenario that requires the least. The number
of operations required depends on the presence of indexes, as well as the number of rows in the
table and the distribution of values
Index Types
• The most common structure for indexes is the B-tree. By storing the different values of the field in
a balanced tree, the DBMS will be able to prioritize records according to a field with an infinite (or
nearly infinite) range of values.
• Another type of index is the bitmap index. It consists of a simple table indicating, for each
possible value of the field, the list of records with that value for that field.
However, to be effective, it requires that the DBMS can directly access a given value. It is therefore
only applicable on columns for which the number of values is limited and ordered.
• There are also indexes by hash table. The major disadvantage of such an index is that it only allows selections by equality, since it does not preserve the notion of order. If n is the number of records in a table, using a balanced hash table can reduce the number of records that must be traversed to the square root of n (the table is then composed of hash values that each give access to √n records). The same efficiency note as for the bitmap index applies: the DBMS must be able to access a given hash value directly, without having to go through the list of possible hash values.
Impact on modification performance
When inserting or updating a record in the database, there is a slight degradation in performance:
the DBMS must update the indexes so that they continue to reflect the state of the records. For this
reason, when designing a database, care should be taken to define only the indexes that will be used
by the system.
These will only be well identified by an analysis of the system (and in particular the mechanisms of
interrogation of the database) with a view to its optimization.
To create an index, type the command CREATE INDEX index_name ON table_name (selected_attribute);
To see the list of indexes on a table, type SHOW INDEX FROM table_name;
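For example, on the users table created earlier:
CREATE INDEX idx_nom ON users (nom);
SHOW INDEX FROM users;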
11. Views
Views are virtual tables assembled from other tables according to criteria. Technically, views are created using a SELECT query. They do not store the data they contain; they just keep the query that creates them.
The SELECT query that generates the view references one or more tables. The view can therefore
be, for example, a join between different tables, aggregating or extracting certain columns from a
table. It can also be created from another view.
Views are often read-only and therefore only expose data for reading. However, MySQL allows the creation of editable views under certain conditions:
• The query that generates the view must allow MySQL to trace the record to be modified in the underlying table(s) as well as all the values of each column. Therefore, the SELECT query creating the view must not contain DISTINCT, GROUP BY, HAVING or other aggregation constructs. The full list is available in the MySQL documentation.
Views can be used for different reasons. They make it possible to:
• Control integrity by restricting access to data, improving confidentiality.
• Partition data vertically and/or horizontally to hide fields from users, which allows you to customize the display of information according to the type of user.
• Hide the complexity of the schema.
• Provide logical data independence, useful for giving users access to a set of relations presented as a single table; the data in the view are then fields from different joined tables, or the results of operations on those fields.
• Automatically publish computed data (sum(), avg(), max(), ...) and manipulate values calculated from other values in the schema.
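A minimal sketch using the users table created earlier (the view name is hypothetical):
CREATE VIEW adult_users AS
SELECT id, nom, prenom FROM users WHERE age >= 18;
SELECT * FROM adult_users;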
13. Locks
A lock is not about security or guarding against outside attacks: a lock solves concurrent data-access issues. For example, it avoids two travelers booking the same seat on a plane.
It avoids competing updates on the same value, and avoids displaying inconsistent information while the database is being updated. A lock is used to manage concurrent access (accessing the database at the same time). The lock is therefore a tool that lets us lock a part of the database, whether a table, a row or a column, leaving the user who locked this data able to use it while blocking it in the eyes of others. It is like a sign signalling that this data is temporarily unavailable; it becomes available again when the user is finished.
So either the reservation is confirmed and the seat is taken, or the user no longer wants it and the seat becomes available again.
Examples: the reservation of a travel ticket by two people (or a Yango ride), or managing a company's payroll.
A lock is used to temporarily block data (a table, or only a row of a table) from other MySQL sessions.
The ideal is to block as little as possible.
Other sessions will simply be denied access to the data. It is possible to block data for reading and for writing:
LOCK TABLES table_name READ; (read lock)
LOCK TABLES table_name WRITE; (read and write lock)
Once a lock is placed, the session can only read the data in the tables locked in READ. This avoids errors from modifying unauthorized data.
The UNLOCK TABLES query unlocks the data. (It is not possible to specify which tables to unlock; the statement removes all locks.)
For the rest of the transaction, until I release the locks, I cannot access the other tables in the database.
LOCK TABLES novel WRITE, series READ;
SELECT * FROM novel;
SELECT * FROM series;
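Once finished, release the locks:
UNLOCK TABLES;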
Chapter 3:
POSTGRESQL DATABASE ADMINISTRATION
1. Versions and types of licenses
PostgreSQL is a relational database system belonging to the class of open-source software, distinguished by its robustness, evolution and compliance with SQL standards.
Choose the installation directory and choose the components you want to install
Backup-specific commands:
pg_dump: backup method for backing up a single database, or certain tables and schemas within it
pg_dumpall: method for a global or full cluster backup
pg_restore: restores a database from an archive created by pg_dump (a pg_dumpall backup is restored with psql)
The wrappers:
createdb for creating a database
dropdb for deleting a database
createuser to create users
dropuser to delete users
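These wrappers correspond to plain SQL commands that you can also run from psql; as an illustrative sketch (database and user names are hypothetical):
CREATE DATABASE mydb;   -- what createdb executes
DROP DATABASE mydb;     -- what dropdb executes
CREATE USER alice;      -- what createuser executes
DROP USER alice;        -- what dropuser executes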
Maintenance:
For the psql command to work at a command prompt or in PowerShell, we need to change the environment variables in your computer's settings, specifically the PATH, and add the path to psql.exe.
Whenever I start a Django Project with PostgreSQL as my database, I use this tool to manage it.
That's because it's a simple and safe way to recover (or delete tables) if I've messed up the
migrations.
You can download the latest version of your operating system from the PgAdmin download page.
Features
Available for Windows, Mac and Linux
Open Source Tool
Easy-to-use interface
Online test version
A powerful set of tools for working with PostgreSQL databases.
4. POSTGRESQL Architecture
Client
Server
The client sends a request to the PostgreSQL server, and the PostgreSQL server responds to the client request. In a typical application, client and server are on different hosts, in which case they communicate over a TCP/IP network connection. The PostgreSQL server handles multiple concurrent sessions from clients; it achieves this by starting a new process for each connection. From that point on, the new server process and the client communicate without the intervention of another process. PostgreSQL has its own background processes to manage the PostgreSQL server.
Architecture of PostgreSQL
Shared Memory
Background processes
Data directory structure / Data files
1. Shared Memory
Shared memory refers to the memory reserved for database caching and transaction log caching. Shared memory consists of the following components:
Shared Buffers
We need to set some amount of memory on a database server for the use of shared buffers. Up to version 9.2, the default value of shared buffers is 32 megabytes (32 MB); from 9.3 onwards, the default is 128 megabytes (128 MB).
If we have a server dedicated to PostgreSQL, a reasonable starting value for shared buffers is 25% of total memory. The purpose of shared buffers is to minimize server disk I/O.
WAL Buffers
WAL buffers temporarily store changes in the database; the changes in the WAL buffers are written to the WAL file at a predetermined time. WAL buffers and WAL files are significant for recovering the data at a given point in time during backup and recovery.
The minimum value of wal_buffers is 32 KB. If we set this parameter as wal_buffers = -1, it will be sized based on shared_buffers.
Work Memory
Work memory sets the memory available to each client connection for internal sort operations and hash tables before writing data to temporary disk files.
Up to version 9.3, the default value of work memory is 1 megabyte (1 MB); from 9.4 onwards, the default is 4 megabytes (4 MB).
Maintenance Work Memory
We need to specify the maximum amount of memory for database maintenance operations such as VACUUM, ANALYZE, ALTER TABLE, CREATE INDEX, ADD FOREIGN KEY, etc.
Up to version 9.3, the default value of maintenance work memory is 16 megabytes (16 MB); from 9.4 onwards, the default is 64 megabytes (64 MB).
It is safe to set maintenance work memory large compared to work memory: larger settings improve the performance of maintenance operations (VACUUM, ANALYZE, ALTER TABLE, CREATE INDEX, ADD FOREIGN KEY, etc.).
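As an illustrative sketch, these parameters can be inspected, and the session-level ones adjusted, from psql (the values are only examples):
SHOW shared_buffers;
SHOW wal_buffers;
SET work_mem = '8MB';                -- per-session sort/hash memory
SET maintenance_work_mem = '256MB';  -- for VACUUM, CREATE INDEX, etc.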
2. Background Processes
Below are the background processes of PostgreSQL. Each process has its own features and role in PostgreSQL internals, described as follows:
Background Writer process: in PostgreSQL 9.1, the background writer also regularly performed checkpoint processing, but in PostgreSQL 9.2 the checkpointer process was separated from the background writer. The background writer periodically writes dirty buffers to disk.
WAL Writer: this process periodically writes and flushes the WAL data in the WAL buffer to persistent storage.
Logging Collector: this process, also called the logger, writes server log messages to log files.
autovacuum Launcher: when autovacuum is enabled, this process is responsible for the autovacuum daemon carrying out vacuum operations on bloated tables. It relies on the stats collector process for table analysis.
Archiver: if archive mode is enabled, this process is responsible for copying the WAL log files to a specified directory.
Stats Collector: this process collects statistics information such as pg_stat_activity and pg_stat_database. The information is passed from the stats collector to requesting processes via temporary files.
Checkpointer Process: in PostgreSQL 9.2 and later versions, the checkpoint process is separate. Its actual work is that, when a checkpoint occurs, it writes dirty buffers to a file.
Checkpointer: the checkpointer writes all dirty pages from memory to disk and cleans the shared buffer area. If the PostgreSQL database crashes, we can measure the data loss between the last checkpoint time and the time PostgreSQL stopped. The CHECKPOINT command forces an immediate checkpoint when executed manually; only a database superuser can call CHECKPOINT.
3. Data Directory Structure / Data Files
PostgreSQL consists of multiple databases; this is called a database cluster. When we initialize a PostgreSQL cluster, the template0, template1 and postgres databases are created.
template0 and template1 are template databases for the creation of new user databases; they contain the system catalog tables.
The user database is created by cloning the template1 database.
The PGDATA directory contains several subdirectories and control files.
Conclusion
The PostgreSQL architecture is mainly divided into two parts, client and server. The client sends a request to the server; the PostgreSQL server processes the data using shared buffers and background processes and sends a response back to the client. The data directory contains the physical files of the PostgreSQL database server.
5. Features of POSTGRESQL
PostgreSQL supports tremendous features that make things easier, some of them widely appreciated as incredibly useful. The following section contains some of the well-known features of PostgreSQL that you should look at, as they will help you improve performance in production and make operations easier.
Features of PostgreSQL
PostgreSQL runs on various operating systems, like Windows, UNIX (AIX, BSD, HP-UX,
SGI IRIX, Mac OS X, Solaris, Tru64), and Linux, etc. PostgreSQL is available for almost
every operating system with the latest stable release.
PostgreSQL supports different storage like text, images, sounds, and videos, etc.
Users can run different web apps and dynamic websites as a LAMP stack option using PostgreSQL.
PostgreSQL is a highly fault-tolerant database thanks to its write-ahead logging mechanism.
Since PostgreSQL is available under an open-source license, its source code is freely available, which allows users to use, modify, and implement it as their business needs require.
For geospatial data stores used by location-based services and geographic information systems, we can use PostgreSQL, as it supports geographic objects.
PostgreSQL is very easy to learn; users don't need rigorous training in order to use it.
PostgreSQL offers low-maintenance administration for both embedded and enterprise use.
Users can write and execute their own code on the PostgreSQL database server using standard procedural languages. PostgreSQL supports the following non-standard procedural languages as well.
o PL/PHP
o PL/V8
o PL/Ruby
o PL/Java
PostgreSQL can be extended in various ways. Users can add the following new features.
o Data types
o Functions
o Operators
o Aggregate functions
o Index methods
PostgreSQL supports a huge set of the SQL standard and offers various modern features. Similar to various enterprise database management systems, PostgreSQL offers advanced features, such as:
o User-defined types
o Table inheritance
o Sophisticated locking mechanism
o Foreign key referential integrity
o Views, rules, subquery
o Nested transactions (savepoints)
o Asynchronous replication
o Complex SQL queries
o SQL Sub-selects
o Transactions
o Multiversion concurrency control (MVCC)
o Streaming Replication (as of 9.0)
o Hot Standby (as of 9.0)
o PostgreSQL is compatible with many platforms with the help of various programming languages and middleware.
o It offers a most sophisticated locking mechanism
o Support for multi-version concurrency control
o Mature Server-Side Programming Functionality
o Compliant with the ANSI SQL standard
o Full support for client-server network architecture
6. Migration
(To do as a presentation)
7. Objects in a POSTGRESQL database
(To do as a presentation)
8. Table types
a. Unlogged tables
b. Temporary tables
c. Inheritance tables
d. Partitioned tables
9. Indexes
Indexes are special lookup tables that the database search engine can use to speed up data retrieval.
Simply put, an index is a pointer to data in a table. An index in a database is very similar to an index
on the back of a book.
For example, if you want to reference all the pages of a book dealing with a certain topic, you must
first refer to the index, which lists all the topics in alphabetical order, and then refer to one or more
specific page numbers.
An index is used to speed up SELECT queries and WHERE clauses; however, it slows down data
entry, with UPDATE and INSERT statements. Indexes can be created or deleted without affecting
the data.
Creating an index involves the CREATE INDEX statement, which allows you to name the index, specify the table and column(s) to index, and indicate whether the index is in ascending or descending order.
Indexes can also be unique, similar to the UNIQUE constraint, in that the index prevents duplicate
entries in the column or combination of columns on which there is an index.
a) Types of index
PostgreSQL provides several types of indexes: B-tree, Hash, GiST, SP-GiST and GIN. Each type of index uses a different algorithm best suited to different types of queries. By default, the CREATE INDEX command creates B-tree indexes, which fit the most common situations.
A single-column index is created based on a single table column. The basic syntax is as follows:
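CREATE INDEX index_name ON table_name (column_name);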
Multicolumn index
A multicolumn index is defined on multiple columns of a table. The basic syntax is as follows:
CREATE INDEX index_name ON table_name (column1_name, column2_name);
Whether you want to create a single-column index or a multi-column index, consider the column or
columns that you can use very frequently in the WHERE clause of a query as filter conditions.
If only one column is used, a single-column index must be the choice. If two or more columns are
frequently used in the WHERE clause as filters, the multicolumn index would be the best choice.
Unique indexes
Unique indexes are used not only for performance, but also for data integrity. A unique index does not allow duplicate values to be inserted into the table. The basic syntax is as follows:
CREATE UNIQUE INDEX index_name ON table_name (column_name);
• Partial indexes
A partial index is an index built on a subset of a table; the subset is defined by a conditional expression (called the partial index predicate). The index contains entries only for rows of the table that satisfy the predicate. The basic syntax is as follows:
CREATE INDEX index_name ON table_name (column_name) WHERE (conditional_expression);
Implicit indexes
Implicit indexes are indexes that are automatically created by the database server when an object is
created. Indexes are automatically created for primary key constraints and unique constraints.
Here is an example where we create an index in our public schema, on the company table, for the salary column:
CREATE INDEX salary_index ON company (salary);
The table description then shows:
Indexes:
"company_pkey" PRIMARY KEY, btree (id)
"salary_index" btree (salary)
You can list all the indexes in the database using the \di command.
In the pgAdmin 4 graphical interface, the query to display the list of indexes for a specific table is:
SELECT tablename,indexname,indexdef
FROM pg_indexes
WHERE tablename='company'
order by tablename,indexname;
select table_name,column_name,data_type
from information_schema.columns
where table_name='company';
An index can be deleted using the PostgreSQL DROP INDEX command. Care should be taken when deleting an index, as performance may be slowed down or improved. The syntax is as follows:
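DROP INDEX index_name;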
Although indexes are intended to improve the performance of a database, there are times when they should be avoided. The following guidelines indicate when the use of an index should be reconsidered:
• Indexes should not be used on small tables.
• Indexes should not be used on tables that have frequent, large batch update or insert operations.
• Indexes should not be used on columns with a high number of NULL values.
• Frequently manipulated columns should not be indexed.
The purpose of indexes
The purpose of indexes is to increase the speed of queries.
An index can be rebuilt (REINDEX) if a table changes.
Exclusion Constraint
Exclusion constraints ensure that if two rows are compared on the specified columns or expressions
using the specified operators, at least one of these operator comparisons will return false or null.
For example, the following PostgreSQL statement creates a new table called COMPANY1 with five columns and adds an EXCLUDE constraint:
CREATE TABLE company1(
ID int PRIMARY KEY NOT NULL,
NAME text,
AGE int,
ADDRESS char(50),
SALARY real,
EXCLUDE USING gist
(NAME WITH =,
AGE WITH <>)
);
Here, USING gist is the type of index to build and use to enforce the constraint.
You must run the CREATE EXTENSION btree_gist command once per database. This installs the btree_gist extension, which defines exclusion constraints on plain scalar data types. As the constraint imposes, for the same name the age must be the same; let's see this by inserting records into the table:
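For illustration (the values are hypothetical), two rows with the same name and the same age are accepted, while a row with the same name but a different age violates the constraint:
INSERT INTO company1 VALUES (1, 'Paul', 32, 'California', 20000.00);
INSERT INTO company1 VALUES (2, 'Paul', 32, 'Texas', 20000.00);
INSERT INTO company1 VALUES (3, 'Paul', 42, 'Texas', 20000.00); -- violates the EXCLUDE constraint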
For the first two INSERT statements, the records are added to the company1 table. For the third INSERT statement, an error is displayed, because the exclusion constraint is violated.
11. PostgreSQL – TRANSACTIONS
In practice, you will group many PostgreSQL queries into one group and execute
them all together as part of a transaction.
Transaction properties
Transactions have the following four standard properties, usually referred to by the acronym ACID:
Atomicity- Ensures that all operations within the work unit are successfully completed; oth-
erwise, the transaction is aborted at the point of failure and previous operations are restored
to their previous state.
Consistency - Ensures that the database correctly changes state after a successfully commit-
ted transaction.
Isolation - Allows transactions to operate independently and seamlessly from each other.
Durability - Ensures that the result or effect of a committed transaction persists in the event
of a system failure.
Transaction control
The following commands are used to control transactions:
• COMMIT - to save changes (you can also use the END TRANSACTION command).
• ROLLBACK - to cancel changes.
Transactional control commands are used only with the DML commands INSERT, UPDATE and DELETE. They cannot be used when creating or deleting tables, because those operations are automatically committed to the database.
Transactions can be initiated using BEGIN TRANSACTION or simply the BEGIN command. They typically persist until the next COMMIT or ROLLBACK command is encountered, but a transaction will also be rolled back if the database is closed or an error occurs.
BEGIN;
or
BEGIN TRANSACTION;
The COMMIT command
The COMMIT command is the transactional command used to record changes invoked by a transaction in the database.
The COMMIT command records all transactions in the database since the last COMMIT or ROLLBACK command.
The syntax for the COMMIT command is:
COMMIT;
or
END TRANSACTION;
The ROLLBACK command
The ROLLBACK command is the transactional command used to roll back transactions that have not yet been saved to the database.
The ROLLBACK command can only roll back transactions since the last COMMIT or ROLLBACK command was issued.
The syntax for the ROLLBACK command is:
ROLLBACK;
Example
Consider the COMPANY table containing the following records.
If you verify the COMPANY table, it still contains the same records.
Now let's start another transaction, delete the records from the table having age = 25, and finally use the COMMIT command to commit all the changes.
postgres=# BEGIN;
DELETE FROM company WHERE AGE = 25;
COMMIT;
If you verify the COMPANY table, it no longer contains the deleted rows.
12. Views
Views are pseudo-tables. In other words, they are not real tables, but they appear as regular tables to SELECT. A view can represent a subset of an actual table, selecting certain columns or rows from a regular table. A view can even represent joined tables. Because views are assigned separate permissions, you can use them to restrict access to tables so that users see only specific rows or columns of a table.
Views, which are a kind of virtual tables, allow users to do the following:
Structure data in a way that users or user classes find natural or intuitive.
Restrict access to data so that a user can only see limited data instead of a complete table.
Summarize data from various tables, which can be used to generate reports.
Because views are not ordinary tables, you may not be able to execute a DELETE, INSERT, or UPDATE statement on a view. However, you can create a rule to work around this problem and use DELETE, INSERT, or UPDATE on a view.
Create views
PostgreSQL views are created using the CREATE VIEW declaration. PostgreSQL views can be created from a single table, multiple tables, or another view.
The basic syntax of CREATE VIEW is as follows:
CREATE [TEMP | TEMPORARY] VIEW view_name AS
SELECT column1, column2.....
FROM table_name
WHERE [condition];
You can include multiple tables in your SELECT statement in the same way that you use them in a
normal PostgreSQL SELECT query. If the optional keyword TEMP or TEMPORARY is present,
the view will be created in the temporary space. Temporary views are automatically deleted at the
end of the current session.
Example
Here is an example that creates a view from the COMPANY table. This view is used to expose only a few columns of the COMPANY table -
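A minimal sketch, assuming the COMPANY table has at least the columns id, name, and age:
postgres=# CREATE VIEW company_view AS
SELECT id, name, age
FROM company;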
You can now query COMPANY_VIEW the same way you query a real table. Here is the example -
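postgres=# SELECT * FROM company_view;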
 id |  name  | age
----+--------+-----
  1 | Roben  |  30
  2 | Kluger |  25
  3 | Ully   |  23
  4 | Thanos |  25
  5 | Joy    |  27
  6 | Kim    |  22
  7 | Leroy  |  24
(7 rows)
Deleting views
To delete a view, simply use the DROP VIEW statement with the view_name. The basic syntax of
DROP VIEW is as follows -
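DROP VIEW view_name;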
The following command will delete the COMPANY_VIEW view, which we created in the last sec-
tion -
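DROP VIEW company_view;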
13.Locks
Exclusive locks, also called write locks, prevent other users from modifying a row or an entire table. Rows modified by UPDATE and DELETE are automatically locked for the duration of the transaction. This prevents other users from changing the row until the transaction is committed or rolled back.
The only time users have to wait for other users is when they try to edit the same row. If they mod-
ify different rows, no waiting is necessary. SELECT queries never have to wait.
The database performs the lock automatically. In some cases, however, the lock must be controlled
manually. Manual locking can be done using the LOCK command. It allows you to specify the type
and scope of the lock for a transaction.
LOCK [ TABLE ] [ ONLY ] name IN lock_mode MODE;
name- The name (optionally schema-qualified) of an existing table to lock. If ONLY is specified before the table name, only that table is locked. If ONLY is not specified, the table and all of its descendant tables (if any) are locked.
lock_mode- The lock mode specifies which locks this lock conflicts with. If no lock mode is specified, then ACCESS EXCLUSIVE, the most restrictive mode, is used. Possible values are: ACCESS SHARE, ROW SHARE, ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE.
Once obtained, the lock is held for the remainder of the current transaction. There is no UNLOCK TABLE command; locks are always released at the end of the transaction.
Deadlocks
Deadlocks can occur when two transactions each wait for the other to finish its operations. Although PostgreSQL can detect them and terminate them with a ROLLBACK, deadlocks are still inconvenient. To prevent your applications from running into this issue, make sure to design them so that they lock objects in the same order.
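A minimal sketch of the pattern to avoid, assuming two concurrent sessions updating the COMPANY table:
-- Session A:
BEGIN;
UPDATE company SET age = 31 WHERE id = 1;  -- locks row 1
-- Session B:
BEGIN;
UPDATE company SET age = 26 WHERE id = 2;  -- locks row 2
-- Session A:
UPDATE company SET age = 26 WHERE id = 2;  -- blocks, waiting for session B
-- Session B:
UPDATE company SET age = 31 WHERE id = 1;  -- deadlock: PostgreSQL aborts one of the transactions with a ROLLBACK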
Advisory locks
PostgreSQL provides ways to create locks that have application-defined meanings. These are called advisory locks. Since the system does not enforce their use, it is up to the application to use them correctly. Advisory locks can be useful for locking strategies that are an awkward fit for the MVCC model.
For example, a common use of advisory locks is to emulate the pessimistic locking strategies typical of so-called "flat file" data management systems. While a flag stored in a table could be used for the same purpose, advisory locks are faster, avoid table bloat, and are automatically cleaned up by the server at the end of the session.
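A minimal sketch; the key 42 is an arbitrary, application-defined identifier:
SELECT pg_advisory_lock(42);    -- blocks until the lock on key 42 is available
-- ... do the work that the application protects with key 42 ...
SELECT pg_advisory_unlock(42);  -- releases the session-level advisory lock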
Example
The following example locks the COMPANY table in the testdb database in ACCESS EXCLUSIVE mode. The LOCK statement only works inside a transaction -
testdb=#BEGIN;
LOCK TABLE company IN ACCESS EXCLUSIVE MODE;
LOCK TABLE
The message above states that the table is locked until the transaction completes, and that to complete the transaction you will have to commit or roll it back.
14.Sequences
By definition, a sequence is an ordered list of integers. The order of the numbers in a sequence matters. For example, {1,2,3,4,5} and {5,4,3,2,1} are entirely different sequences.
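The parameters described below correspond to the CREATE SEQUENCE statement, sketched here in condensed form:
CREATE SEQUENCE [ IF NOT EXISTS ] sequence_name
    [ AS data_type ]
    [ INCREMENT [ BY ] increment ]
    [ MINVALUE minvalue | NO MINVALUE ]
    [ MAXVALUE maxvalue | NO MAXVALUE ]
    [ START [ WITH ] start ]
    [ CACHE cache ]
    [ CYCLE | NO CYCLE ]
    [ OWNED BY { table_name.column_name | NONE } ];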
Sequence_name
Specify the name of the sequence after the CREATE SEQUENCE clause. The IF NOT EX-
ISTS conditionally creates a new sequence only if it does not exist.
The sequence name must be distinct from any other sequences, tables, indexes, views, or foreign
tables in the same schema.
[ AS data_type ]
Specify the data type of the sequence. The valid data types are SMALLINT, INT, and BIGINT. The default data type is BIGINT if you omit it.
The data type of the sequence determines the sequence's default minimum and maximum values.
[ INCREMENT [ BY ] increment ]
The increment specifies the value to be added to the current sequence value to create the new value.
A positive number will make an ascending sequence while a negative number will form a descend-
ing sequence.
[ MINVALUE minvalue | NO MINVALUE ]
[ MAXVALUE maxvalue | NO MAXVALUE ]
Define the minimum value and maximum value of the sequence. If you use NO MINVALUE and NO MAXVALUE, the sequence will use the default values.
For an ascending sequence, the default maximum value is the maximum value of the data type of
the sequence and the default minimum value is 1.
In case of a descending sequence, the default maximum value is -1 and the default minimum value
is the minimum value of the data type of the sequence.
[ START [ WITH ] start ]
The default starting value is minvalue for ascending sequences and maxvalue for descending ones.
cache
The CACHE option determines how many sequence numbers are preallocated and stored in memory for faster access. The minimum and default value is 1 (only one value is generated at a time).
CYCLE | NO CYCLE
The CYCLE option allows the sequence to restart when the limit is reached. The next number will be the minimum value for an ascending sequence and the maximum value for a descending sequence.
If you use NO CYCLE, when the limit is reached, attempting to get the next value will result in an
error.
The NO CYCLE is the default if you don’t explicitly specify CYCLE or NO CYCLE.
OWNED BY table_name.column_name
The OWNED BY clause allows you to associate the table column with the sequence so that when
you drop the column or table, PostgreSQL will automatically drop the associated sequence.
Note that when you use the SERIAL pseudo-type for a column of a table, behind the scenes, PostgreSQL automatically creates a sequence associated with the column.
This statement uses the CREATE SEQUENCE statement to create a new ascending sequence start-
ing from 100 with an increment of 5:
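CREATE SEQUENCE mysequence
INCREMENT 5
START 100;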
To get the next value from the sequence to you use the nextval() function:
SELECT nextval('mysequence');
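 nextval
---------
     100
(1 row)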
If you execute the statement again, you will get the next value from the sequence:
SELECT nextval('mysequence');
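 nextval
---------
     105
(1 row)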
The following statement creates a descending sequence from 3 to 1 with the cycle option:
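CREATE SEQUENCE three
INCREMENT -1
MINVALUE 1
MAXVALUE 3
START 3
CYCLE;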
When you execute the following statement multiple times, you will see the number starting from 3,
2, 1 and back to 3, 2, 1 and so on:
SELECT nextval('three');
First, create an order_details table; second, create a new sequence associated with the item_id column of the order_details table (both steps are sketched below):
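A minimal sketch; the column list of order_details is inferred from the INSERT statement below, and the sequence is reduced to the OWNED BY association described above:
CREATE TABLE order_details(
    order_id INT,
    item_id INT NOT NULL,
    item_text VARCHAR NOT NULL,
    price DEC(10,2) NOT NULL,
    PRIMARY KEY(order_id, item_id)
);

CREATE SEQUENCE order_item_id
OWNED BY order_details.item_id;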
Third, insert three order line items into the order_details table:
INSERT INTO
order_details(order_id, item_id, item_text, price)
VALUES
(100, nextval('order_item_id'),'DVD Player',100),
(100, nextval('order_item_id'),'Android TV',550),
(100, nextval('order_item_id'),'Speaker',250);
In this statement, we used the nextval() function to fetch the item id value from the order_item_id sequence.
To list all sequences in the current database, you use the following query:
SELECT relname AS sequence_name
FROM pg_class
WHERE relkind = 'S';
Deleting sequences
If a sequence is associated with a table column, it will be automatically dropped once the table col-
umn is removed or the table is dropped.
You can also remove a sequence manually using the DROP SEQUENCE statement:
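DROP SEQUENCE [ IF EXISTS ] sequence_name [, ...]
[ CASCADE | RESTRICT ];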
In this syntax:
First, specify the name of the sequence you want to drop. The IF EXISTS option conditionally deletes the sequence only if it exists. In case you want to drop multiple sequences at once, you can use a comma-separated list of sequence names.
Then, use the CASCADE option if you want to recursively drop objects that depend on the sequence, and objects that depend on those dependent objects, and so on.
This statement drops the table order_details. Since the sequence order_item_id associ-
ates with the item_id of the order_details, it is also dropped automatically:
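DROP TABLE order_details;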
In this section, you have learned about PostgreSQL sequences and how to use a sequence object to generate a list of sequential numbers.
15.Privileges
Each time an object is created in a database, it is assigned an owner. The owner is usually the role that executed the creation statement. For most object types, the initial state is that only the owner (or a superuser) can modify or delete the object. To allow other roles or users to use the object, privileges or permissions must be granted. PostgreSQL defines the following privileges:
SELECT,
INSERT,
UPDATE,
DELETE,
TRUNCATE,
REFERENCES,
TRIGGER,
CREATE,
CONNECT,
TEMPORARY,
EXECUTE,
USAGE
Depending on the type of object (table, function, etc.), privileges are applied to the object. To assign
privileges to users, the GRANT command is used.
GRANT syntax
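The basic form is as follows (a condensed sketch of the full PostgreSQL syntax):
GRANT privilege [, ...]
ON object [, ...]
TO { PUBLIC | GROUP group | username };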
privilege - values can be: SELECT, INSERT, UPDATE, DELETE, RULE, ALL.
object- The name of an object to which access is granted. Possible objects are: table, view,
sequence
PUBLIC - A short form representing all users.
GROUP group - A group to which privileges are granted.
username- The name of a user to whom privileges are granted. PUBLIC is a short form rep-
resenting all users.
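Example
First, let us create a user and grant them all privileges; a minimal sketch, assuming the COMPANY table in the testdb database:
testdb=# CREATE USER thanos WITH PASSWORD 'secret';
CREATE ROLE
testdb=# GRANT ALL ON company TO thanos;
GRANT
testdb=# REVOKE ALL ON company FROM thanos;
REVOKE
testdb=# DROP USER thanos;
DROP ROLE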
The message CREATE ROLE indicates that the user thanos is created.
The message GRANT indicates that all privileges are granted to the user.
Then, let us revoke the privileges from the user: the message REVOKE indicates that all privileges are revoked from the user.
You can even drop the user: the message DROP ROLE indicates that the user thanos is dropped from the database.
16.Functions
PostgreSQL functions, also known as stored procedures, allow you to perform operations that would normally require multiple queries and round trips in a single database function. Functions also encourage reuse, because other applications can interact directly with your stored procedures instead of going through middle-tier or duplicated code.
Functions can be created in a language of your choice such as SQL, PL/pgSQL, C, Python, etc.
Syntax
The basic syntax for creating a function is as follows (a condensed form) -
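CREATE [ OR REPLACE ] FUNCTION function_name (arguments)
RETURNS return_type AS $$
    -- function body
$$ LANGUAGE plpgsql;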
where,
• Plpgsql is the name of the language in which the function is implemented. Here we use this
option for PostgreSQL, it can be SQL, C, internal or the name of a user-defined procedural
language. For backward compatibility, the name can be enclosed in single quotation marks.
Example
The following example illustrates creating and calling a standalone function. This function returns the total number of records in the COMPANY table. We will use the COMPANY table, which contains the records listed earlier -
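A minimal sketch of such a function; the name totalRecords matches the output shown below:
CREATE OR REPLACE FUNCTION totalRecords()
RETURNS integer AS $total$
DECLARE
    total integer;
BEGIN
    SELECT count(*) INTO total FROM company;
    RETURN total;
END;
$total$ LANGUAGE plpgsql;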
Now, let us execute a call to this function and check the records in the COMPANY table -
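testdb=# SELECT totalRecords();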
totalrecords
--------------
7
(1 row)
17.Triggers
A PostgreSQL trigger is a function that is invoked automatically whenever an event occurs on its associated table. A trigger can be specified to fire:
• Before attempting any operation on a row (before constraints are checked and the INSERT, UPDATE, or DELETE is attempted).
• After an operation has been completed (after constraints are checked and the INSERT, UPDATE, or DELETE has been completed).
• Instead of the operation (in the case of an INSERT, UPDATE, or DELETE on a view).
For creating a new PostgreSQL trigger, you need to follow these steps:
• First, create a trigger function using the CREATE FUNCTION statement.
• Then, bind the trigger function to a table by using the CREATE TRIGGER statement.
You first need to create a trigger function: a user-defined function that doesn't take any arguments and returns a value of type trigger.
It has to be noted that you can create a Trigger Function using any of the languages supported by
PostgreSQL. In this case, PL/pgSQL is used.
The CREATE TRIGGER statement is used for creating any PostgreSQL trigger. The event that fires the trigger can be one of:
• INSERT
• UPDATE [ OF column_name [, … ] ]
• DELETE
• TRUNCATE
A more simplified syntax of the CREATE TRIGGER statement would be:
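CREATE TRIGGER trigger_name
{ BEFORE | AFTER | INSTEAD OF } event_name
ON table_name
[ FOR [ EACH ] { ROW | STATEMENT } ]
EXECUTE FUNCTION trigger_function_name();
(In PostgreSQL versions before 11, use EXECUTE PROCEDURE instead of EXECUTE FUNCTION.)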
trigger_name is the name of the trigger. You need to specify the name of the trigger after
the TRIGGER keyword.
{BEFORE | AFTER | INSTEAD OF} specifies the timing when the trigger is to be fired.
event_name specifies the name of the event that invokes the trigger. The event can be INSERT, DELETE, UPDATE, or TRUNCATE.
table_name is the name of the table that is associated with the trigger. You need to specify
the name of the table after the ON keyword.
Then you need to specify the type of triggers that can be:
o The FOR EACH ROW clause specifies a row-level trigger.
o The FOR EACH STATEMENT clause specifies a statement-level trigger.
For example, the statements below sketch the creation of a trigger function and a PostgreSQL trigger that uses it; the names log_new_row and company_insert_trigger are illustrative:
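CREATE OR REPLACE FUNCTION log_new_row()
RETURNS trigger
LANGUAGE plpgsql
AS $body$
BEGIN
    -- a trivial body: a row-level trigger function returns the (possibly modified) new row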
RETURN NEW;
END;
$body$;
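CREATE TRIGGER company_insert_trigger
BEFORE INSERT ON company
FOR EACH ROW
EXECUTE FUNCTION log_new_row();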
For removing PostgreSQL triggers, you can use the DROP TRIGGER statement. The DROP
TRIGGER command helps to remove an existing trigger definition. The current user must be the
owner of the table for which the PostgreSQL trigger is specified in order to run this command.
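DROP TRIGGER [ IF EXISTS ] name ON table_name [ CASCADE | RESTRICT ];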
IF EXISTS specifies not to throw an error if the trigger does not exist. A notice is issued in
this case.
name is the name of the trigger you want to remove. You need to specify the name of the
trigger after the TRIGGER keyword.
table_name is the name of the table for which the trigger is defined. The name of the table
has to be mentioned after the ON keyword.
If you want to automatically drop items that are dependent on the trigger, you can use the
CASCADE option. CASCADE will also delete objects that are dependent on the trigger
that is being removed. You can use the RESTRICT option to prevent the trigger from being
dropped if any objects rely on it. The DROP TRIGGER command uses RESTRICT by default.
For example,
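DROP TRIGGER IF EXISTS company_insert_trigger ON company;  -- the trigger created in the example above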
For changing the definition of PostgreSQL triggers, you can use the ALTER TRIGGER statement. You can use it for two purposes: renaming a trigger, and marking a trigger as dependent on an extension. In both cases, you must be the owner of the table on which the trigger to be modified is defined.
1) Renaming a Trigger
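ALTER TRIGGER name ON table_name RENAME TO new_name;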
name is the name of the trigger you want to alter. You need to specify the name of the trig-
ger after the ALTER TRIGGER keyword.
table_name is the name of the table for which the trigger is defined. The name of the table
has to be mentioned after the ON keyword.
new_name is the new name that you want to give the trigger. You need to specify the name
of the trigger after the RENAME TO keyword.
2) Marking a Trigger as Dependent on an Extension
extension_name is the name of the extension on which you want the trigger to depend; when the extension is dropped, the dependent trigger is automatically dropped as well. The form of the statement is sketched below:
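ALTER TRIGGER name ON table_name DEPENDS ON EXTENSION extension_name;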
For example,
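ALTER TRIGGER company_insert_trigger ON company RENAME TO company_before_insert;
ALTER TRIGGER company_before_insert ON company DEPENDS ON EXTENSION my_extension;  -- my_extension is a hypothetical extension name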
For disabling PostgreSQL triggers, you can use the ALTER TABLE ... DISABLE TRIGGER statement. Even after being disabled, a trigger still remains in the database; however, when an event associated with the trigger occurs, the disabled trigger will not fire.
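ALTER TABLE table_name DISABLE TRIGGER { trigger_name | ALL };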
table_name is the name of the table with which the trigger is associated. The name of the
table has to be mentioned after the ALTER TABLE keyword.
trigger_name is the name of the trigger you want to disable. You need to specify the name
of the trigger after the DISABLE TRIGGER keyword. You can also use the ALL keyword
to disable all the triggers associated with the table.
For example,
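ALTER TABLE company DISABLE TRIGGER company_before_insert;  -- or: DISABLE TRIGGER ALL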
For enabling one or more PostgreSQL triggers, you can use the ALTER TABLE ... ENABLE TRIGGER statement. A disabled trigger does not fire when its associated event occurs; to make it fire again, you need to enable it.
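ALTER TABLE table_name ENABLE TRIGGER { trigger_name | ALL };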
table_name is the name of the table with which the trigger is associated. The name of the
table has to be mentioned after the ALTER TABLE keyword.
trigger_name is the name of the trigger you want to enable. You need to specify the name
of the trigger after the ENABLE TRIGGER keyword. You can also use the ALL keyword
to enable all the triggers associated with the table.
For example,
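ALTER TABLE company ENABLE TRIGGER company_before_insert;  -- or: ENABLE TRIGGER ALL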
Conclusion
In this section, you learned about PostgreSQL triggers: how to create, drop, alter, disable, and enable them, their syntax, and example use cases.