PostgreSQL 17.4 Documentation

The PostgreSQL Global Development Group
Legal Notice
PostgreSQL is Copyright © 1996–2025 by the PostgreSQL Global Development Group.
Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without a written agreement
is hereby granted, provided that the above copyright notice and this paragraph and the following two paragraphs appear in all copies.
IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN “AS-IS” BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
Table of Contents
Preface
    1. What Is PostgreSQL?
    2. A Brief History of PostgreSQL
        2.1. The Berkeley POSTGRES Project
        2.2. Postgres95
        2.3. PostgreSQL
    3. Conventions
    4. Further Information
    5. Bug Reporting Guidelines
        5.1. Identifying Bugs
        5.2. What to Report
        5.3. Where to Report Bugs
I. Tutorial
    1. Getting Started
        1.1. Installation
        1.2. Architectural Fundamentals
        1.3. Creating a Database
        1.4. Accessing a Database
    2. The SQL Language
        2.1. Introduction
        2.2. Concepts
        2.3. Creating a New Table
        2.4. Populating a Table With Rows
        2.5. Querying a Table
        2.6. Joins Between Tables
        2.7. Aggregate Functions
        2.8. Updates
        2.9. Deletions
    3. Advanced Features
        3.1. Introduction
        3.2. Views
        3.3. Foreign Keys
        3.4. Transactions
        3.5. Window Functions
        3.6. Inheritance
        3.7. Conclusion
II. The SQL Language
    4. SQL Syntax
        4.1. Lexical Structure
        4.2. Value Expressions
        4.3. Calling Functions
    5. Data Definition
        5.1. Table Basics
        5.2. Default Values
        5.3. Identity Columns
        5.4. Generated Columns
        5.5. Constraints
        5.6. System Columns
        5.7. Modifying Tables
        5.8. Privileges
        5.9. Row Security Policies
List of Figures
60.1. Structure of a Genetic Algorithm
64.1. GIN Internals
65.1. Page Layout
List of Tables
4.1. Backslash Escape Sequences
4.2. Operator Precedence (highest to lowest)
5.1. ACL Privilege Abbreviations
5.2. Summary of Access Privileges
8.1. Data Types
8.2. Numeric Types
8.3. Monetary Types
8.4. Character Types
8.5. Special Character Types
8.6. Binary Data Types
8.7. bytea Literal Escaped Octets
8.8. bytea Output Escaped Octets
8.9. Date/Time Types
8.10. Date Input
8.11. Time Input
8.12. Time Zone Input
8.13. Special Date/Time Inputs
8.14. Date/Time Output Styles
8.15. Date Order Conventions
8.16. ISO 8601 Interval Unit Abbreviations
8.17. Interval Input
8.18. Interval Output Style Examples
8.19. Boolean Data Type
8.20. Geometric Types
8.21. Network Address Types
8.22. cidr Type Input Examples
8.23. JSON Primitive Types and Corresponding PostgreSQL Types
8.24. jsonpath Variables
8.25. jsonpath Accessors
8.26. Object Identifier Types
8.27. Pseudo-Types
9.1. Comparison Operators
9.2. Comparison Predicates
9.3. Comparison Functions
9.4. Mathematical Operators
9.5. Mathematical Functions
9.6. Random Functions
9.7. Trigonometric Functions
9.8. Hyperbolic Functions
9.9. SQL String Functions and Operators
9.10. Other String Functions and Operators
9.11. SQL Binary String Functions and Operators
9.12. Other Binary String Functions
9.13. Text/Binary String Conversion Functions
9.14. Bit String Operators
9.15. Bit String Functions
9.16. Regular Expression Match Operators
9.17. Regular Expression Atoms
9.18. Regular Expression Quantifiers
9.19. Regular Expression Constraints
9.20. Regular Expression Character-Entry Escapes
List of Examples
8.1. Using the Character Types
8.2. Using the boolean Type
8.3. Using the Bit String Types
9.1. XSLT Stylesheet for Converting SQL/XML Output to HTML
10.1. Square Root Operator Type Resolution
10.2. String Concatenation Operator Type Resolution
10.3. Absolute-Value and Negation Operator Type Resolution
10.4. Array Inclusion Operator Type Resolution
10.5. Custom Operator on a Domain Type
10.6. Rounding Function Argument Type Resolution
10.7. Variadic Function Resolution
10.8. Substring Function Type Resolution
10.9. character Storage Type Conversion
10.10. Type Resolution with Underspecified Types in a Union
10.11. Type Resolution in a Simple Union
10.12. Type Resolution in a Transposed Union
10.13. Type Resolution in a Nested Union
11.1. Setting up a Partial Index to Exclude Common Values
11.2. Setting up a Partial Index to Exclude Uninteresting Values
11.3. Setting up a Partial Unique Index
11.4. Do Not Use Partial Indexes as a Substitute for Partitioning
20.1. Example pg_hba.conf Entries
20.2. An Example pg_ident.conf File
32.1. libpq Example Program 1
32.2. libpq Example Program 2
32.3. libpq Example Program 3
33.1. Large Objects with libpq Example Program
34.1. Example SQLDA Program
34.2. ECPG Program Accessing Large Objects
40.1. Manual Installation of PL/Perl
41.1. Quoting Values in Dynamic Queries
41.2. Exceptions with UPDATE/INSERT
41.3. A PL/pgSQL Trigger Function
41.4. A PL/pgSQL Trigger Function for Auditing
41.5. A PL/pgSQL View Trigger Function for Auditing
41.6. A PL/pgSQL Trigger Function for Maintaining a Summary Table
41.7. Auditing with Transition Tables
41.8. A PL/pgSQL Event Trigger Function
41.9. Porting a Simple Function from PL/SQL to PL/pgSQL
41.10. Porting a Function that Creates Another Function from PL/SQL to PL/pgSQL
41.11. Porting a Procedure With String Manipulation and OUT Parameters from PL/SQL to PL/pgSQL
41.12. Porting a Procedure from PL/SQL to PL/pgSQL
F.1. Create a Foreign Table for PostgreSQL CSV Logs
F.2. Create a Foreign Table with an Option on a Column
Preface
This book is the official documentation of PostgreSQL. It has been written by the PostgreSQL developers and other volunteers in parallel to the development of the PostgreSQL software. It describes all the functionality that the current version of PostgreSQL officially supports.
To make the large amount of information about PostgreSQL manageable, this book has been organized
in several parts. Each part is targeted at a different class of users, or at users in different stages of their
PostgreSQL experience:
• Part I is an informal introduction for new users.
• Part II documents the SQL query language environment, including data types and functions, as well as user-level performance tuning. Every PostgreSQL user should read this.
• Part III describes the installation and administration of the server. Everyone who runs a PostgreSQL server, be it for private use or for others, should read this part.
• Part IV describes the programming interfaces for PostgreSQL client programs.
• Part V contains information for advanced users about the extensibility capabilities of the server. Topics include user-defined data types and functions.
• Part VI contains reference information about SQL commands, client and server programs. This part supports the other parts with structured information sorted by command or program.
• Part VII contains assorted information that might be of use to PostgreSQL developers.
1. What Is PostgreSQL?
PostgreSQL is an object-relational database management system (ORDBMS) based on POSTGRES, Version 4.2 (https://dsf.berkeley.edu/postgres.html), developed at the University of California at Berkeley Computer Science Department. POSTGRES pioneered many concepts that only became available in some commercial database systems much later.
PostgreSQL is an open-source descendant of this original Berkeley code. It supports a large part of the
SQL standard and offers many modern features:
• complex queries
• foreign keys
• triggers
• updatable views
• transactional integrity
• multiversion concurrency control
Also, PostgreSQL can be extended by the user in many ways, for example by adding new
• data types
• functions
• operators
• aggregate functions
• index methods
• procedural languages
And because of the liberal license, PostgreSQL can be used, modified, and distributed by anyone free of
charge for any purpose, be it private, commercial, or academic.
2. A Brief History of PostgreSQL

2.1. The Berkeley POSTGRES Project

The POSTGRES project, led by Professor Michael Stonebraker at Berkeley, began in 1986. POSTGRES has undergone several major releases since then. The first “demoware” system became operational in 1987 and was shown at the 1988 ACM-SIGMOD Conference. Version 1, described in [ston90a], was released to a few external users in June 1989. In response to a critique of the first rule system ([ston89]), the rule system was redesigned ([ston90b]), and Version 2 was released in June 1990 with the new rule system. Version 3 appeared in 1991 and added support for multiple storage managers, an improved query executor, and a rewritten rule system. For the most part, subsequent releases until Postgres95 (see below) focused on portability and reliability.
POSTGRES has been used to implement many different research and production applications. These include: a financial data analysis system, a jet engine performance monitoring package, an asteroid tracking database, a medical information database, and several geographic information systems. POSTGRES has also been used as an educational tool at several universities. Finally, Illustra Information Technologies (later merged into Informix (https://www.ibm.com/analytics/informix), which is now owned by IBM (https://www.ibm.com/)) picked up the code and commercialized it. In late 1992, POSTGRES became the primary data manager for the Sequoia 2000 scientific computing project described in [ston92].
The size of the external user community nearly doubled during 1993. It became increasingly obvious that
maintenance of the prototype code and support was taking up large amounts of time that should have been
devoted to database research. In an effort to reduce this support burden, the Berkeley POSTGRES project
officially ended with Version 4.2.
2.2. Postgres95
In 1994, Andrew Yu and Jolly Chen added an SQL language interpreter to POSTGRES. Under a new
name, Postgres95 was subsequently released to the web to find its own way in the world as an open-source
descendant of the original POSTGRES Berkeley code.
Postgres95 code was completely ANSI C and trimmed in size by 25%. Many internal changes improved
performance and maintainability. Postgres95 release 1.0.x ran about 30–50% faster on the Wisconsin
Benchmark compared to POSTGRES, Version 4.2. Apart from bug fixes, the following were the major
enhancements:
• The query language PostQUEL was replaced with SQL (implemented in the server). (Interface library libpq was named after PostQUEL.) Subqueries were not supported until PostgreSQL (see below), but they could be imitated in Postgres95 with user-defined SQL functions. Aggregate functions were re-implemented. Support for the GROUP BY query clause was also added.
• A new program (psql) was provided for interactive SQL queries, which used GNU Readline. This largely
superseded the old monitor program.
• A new front-end library, libpgtcl, supported Tcl-based clients. A sample shell, pgtclsh, provided
new Tcl commands to interface Tcl programs with the Postgres95 server.
• The large-object interface was overhauled. The inversion large objects were the only mechanism for
storing large objects. (The inversion file system was removed.)
• The instance-level rule system was removed. Rules were still available as rewrite rules.
• A short tutorial introducing regular SQL features as well as those of Postgres95 was distributed with the source code.
• GNU make (instead of BSD make) was used for the build. Also, Postgres95 could be compiled with an
unpatched GCC (data alignment of doubles was fixed).
2.3. PostgreSQL
By 1996, it became clear that the name “Postgres95” would not stand the test of time. We chose a new name, PostgreSQL, to reflect the relationship between the original POSTGRES and the more recent versions with SQL capability. At the same time, we set the version numbering to start at 6.0, putting the numbers back into the sequence originally begun by the Berkeley POSTGRES project.
Many people continue to refer to PostgreSQL as “Postgres” (now rarely in all capital letters) because of
tradition or because it is easier to pronounce. This usage is widely accepted as a nickname or alias.
The emphasis during development of Postgres95 was on identifying and understanding existing problems
in the server code. With PostgreSQL, the emphasis has shifted to augmenting features and capabilities,
although work continues in all areas.
Details about what has happened in PostgreSQL since then can be found in Appendix E.
3. Conventions
The following conventions are used in the synopsis of a command: brackets ([ and ]) indicate optional
parts. Braces ({ and }) and vertical lines (|) indicate that you must choose one alternative. Dots (...)
mean that the preceding element can be repeated. All other symbols, including parentheses, should be
taken literally.
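For example, the synopsis of the DROP TABLE command combines several of these conventions:

DROP TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]

Here the IF EXISTS clause is optional, more than one table name can be given, and at most one of CASCADE and RESTRICT can be chosen.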
Where it enhances the clarity, SQL commands are preceded by the prompt =>, and shell commands are
preceded by the prompt $. Normally, prompts are not shown, though.
An administrator is generally a person who is in charge of installing and running the server. A user could be anyone who is using, or wants to use, any part of the PostgreSQL system. These terms should not be interpreted too narrowly; this book does not have fixed presumptions about system administration procedures.
xxxvi
Preface
4. Further Information
Besides the documentation, that is, this book, there are other resources about PostgreSQL:
Wiki

The PostgreSQL wiki contains the project's FAQ (Frequently Asked Questions) list, TODO list, and detailed information about many more topics.

Web Site

The PostgreSQL web site (https://www.postgresql.org/) carries details on the latest release and other information to make your work or play with PostgreSQL more productive.
Mailing Lists
The mailing lists are a good place to have your questions answered, to share experiences with other
users, and to contact the developers. Consult the PostgreSQL web site for details.
Yourself!
PostgreSQL is an open-source project. As such, it depends on the user community for ongoing support.
As you begin to use PostgreSQL, you will rely on others for help, either through the documentation
or through the mailing lists. Consider contributing your knowledge back. Read the mailing lists and
answer questions. If you learn something which is not in the documentation, write it up and contribute
it. If you add features to the code, contribute them.
5. Bug Reporting Guidelines

The following suggestions are intended to assist you in forming bug reports that can be handled in an effective fashion. No one is required to follow them but doing so tends to be to everyone's advantage.
We cannot promise to fix every bug right away. If the bug is obvious, critical, or affects a lot of users,
chances are good that someone will look into it. It could also happen that we tell you to update to a newer
version to see if the bug happens there. Or we might decide that the bug cannot be fixed before some major
rewrite we might be planning is done. Or perhaps it is simply too hard and there are more important things
on the agenda. If you need help immediately, consider obtaining a commercial support contract.
xxxvii
Preface
5.1. Identifying Bugs

If a program does something different from what the documentation says, that is a bug. It might include, but is not limited to, the following circumstances:

• A program terminates with a fatal signal or an operating system error message that would point to a problem in the program. (A counterexample might be a “disk full” message, since you have to fix that yourself.)
• A program accepts invalid input without a notice or error message. But keep in mind that your idea of
invalid input might be our idea of an extension or compatibility with traditional practice.
• PostgreSQL fails to compile, build, or install according to the instructions on supported platforms.
Here “program” refers to any executable, not only the backend process.
Being slow or resource-hogging is not necessarily a bug. Read the documentation or ask on one of the mailing lists for help in tuning your applications. Failing to comply with the SQL standard is not necessarily a bug either, unless compliance for the specific feature is explicitly claimed.
Before you continue, check on the TODO list and in the FAQ to see if your bug is already known. If you
cannot decode the information on the TODO list, report your problem. The least we can do is make the
TODO list clearer.
5.2. What to Report

The most important thing to remember about bug reporting is to state all the facts. The following items should be contained in every bug report:

• The exact sequence of steps from program start-up necessary to reproduce the problem. This should be self-contained; it is not enough to send in a bare SELECT statement without the preceding CREATE TABLE and INSERT statements, if the output should depend on the data in the tables. We do not have the time to reverse-engineer your database schema, and if we are supposed to make up our own data we would probably miss the problem.
The best format for a test case for SQL-related problems is a file that can be run through the psql frontend
that shows the problem. (Be sure to not have anything in your ~/.psqlrc start-up file.) An easy way
to create this file is to use pg_dump to dump out the table declarations and data needed to set the scene,
then add the problem query. You are encouraged to minimize the size of your example, but this is not
absolutely necessary. If the bug is reproducible, we will find it either way.
If your application uses some other client interface, such as PHP, then please try to isolate the offending
queries. We will probably not set up a web server to reproduce your problem. In any case remember
to provide the exact input files; do not guess that the problem happens for “large files” or “midsize
databases”, etc. since this information is too inexact to be of use.
• The output you got. Please do not say that it “didn't work” or “crashed”. If there is an error message,
show it, even if you do not understand it. If the program terminates with an operating system error,
say which. If nothing at all happens, say so. Even if the result of your test case is a program crash or
otherwise obvious it might not happen on our platform. The easiest thing is to copy the output from
the terminal, if possible.
xxxviii
Preface
Note
If you are reporting an error message, please obtain the most verbose form of the message. In
psql, say \set VERBOSITY verbose beforehand. If you are extracting the message from
the server log, set the run-time parameter log_error_verbosity to verbose so that all details
are logged.
Note

In case of fatal errors, the error message reported by the client might not contain all the information available. Please also look at the log output of the database server. If you do not keep your server's log output, this would be a good time to start doing so.
• The output you expected is very important to state. If you just write “This command gives me that
output.” or “This is not what I expected.”, we might run it ourselves, scan the output, and think it looks
OK and is exactly what we expected. We should not have to spend the time to decode the exact semantics
behind your commands. Especially refrain from merely saying that “This is not what SQL says/Oracle
does.” Digging out the correct behavior from SQL is not a fun undertaking, nor do we all know how all
the other relational databases out there behave. (If your problem is a program crash, you can obviously
omit this item.)
• Any command line options and other start-up options, including any relevant environment variables or
configuration files that you changed from the default. Again, please provide exact information. If you
are using a prepackaged distribution that starts the database server at boot time, you should try to find
out how that is done.
• The PostgreSQL version. You can run the command SELECT version(); to find out the version
of the server you are connected to. Most executable programs also support a --version option; at
least postgres --version and psql --version should work. If the function or the options
do not exist then your version is more than old enough to warrant an upgrade. If you run a prepackaged
version, such as RPMs, say so, including any subversion the package might have. If you are talking
about a Git snapshot, mention that, including the commit hash.
If your version is older than 17.4 we will almost certainly tell you to upgrade. There are many bug fixes
and improvements in each new release, so it is quite possible that a bug you have encountered in an
older release of PostgreSQL has already been fixed. We can only provide limited support for sites using
older releases of PostgreSQL; if you require more than we can provide, consider acquiring a commercial
support contract.
• Platform information. This includes the kernel name and version, C library, processor, memory information, and so on. In most cases it is sufficient to report the vendor and version, but do not assume everyone knows what exactly “Debian” contains or that everyone runs on x86_64. If you have installation problems then information about the toolchain on your machine (compiler, make, and so on) is also necessary.
Do not be afraid if your bug report becomes rather lengthy. That is a fact of life. It is better to report
everything the first time than us having to squeeze the facts out of you. On the other hand, if your input
files are huge, it is fair to ask first whether somebody is interested in looking into it. There is an article (https://www.chiark.greenend.org.uk/~sgtatham/bugs.html) that outlines some more tips on reporting bugs.
Do not spend all your time trying to figure out which changes in the input make the problem go away. This will probably not help solve it. If it turns out that the bug cannot be fixed right away, you will still have time to find and share your work-around. Also, once again, do not waste your time guessing why the bug exists. We will find that out soon enough.
When writing a bug report, please avoid confusing terminology. The software package in total is called “PostgreSQL”, sometimes “Postgres” for short. If you are specifically talking about the backend process, mention that; do not just say “PostgreSQL crashes”. A crash of a single backend process is quite different from a crash of the parent “postgres” process; please don't say “the server crashed” when you mean a single backend process went down, nor vice versa. Also, client programs such as the interactive frontend “psql” are completely separate from the backend. Please try to be specific about whether the problem is on the client or server side.
5.3. Where to Report Bugs

In general, send bug reports to the bug report mailing list at <[email protected]>. You are requested to use a descriptive subject for your email message, perhaps parts of the error message.

Another method is to fill in the bug report web-form available at the project's web site (https://www.postgresql.org/). Entering a bug report this way causes it to be mailed to the <[email protected]> mailing list.
If your bug report has security implications and you'd prefer that it not become immediately visible in public archives, don't send it to pgsql-bugs. Security issues can be reported privately to <[email protected]>.
Do not send bug reports to any of the user mailing lists, such as <[email protected]> or <[email protected]>. These mailing lists are for answering user questions, and their subscribers normally do not wish to receive bug reports. More importantly, they are unlikely to fix them.
Also, please do not send reports to the developers' mailing list <[email protected]>. This list is for discussing the development of PostgreSQL, and it would be nice if we could keep the bug reports separate. We might choose to take up a discussion about your bug report on pgsql-hackers, if the problem needs more review.
If you have a problem with the documentation, the best place to report it is the documentation mailing list
<[email protected]>. Please be specific about what part of the documentation
you are unhappy with.
Note
Due to the unfortunate amount of spam going around, all of the above lists will be moderated
unless you are subscribed. That means there will be some delay before the email is delivered. If
you wish to subscribe to the lists, please visit https://lists.postgresql.org/ for instructions.
Part I. Tutorial
Welcome to the PostgreSQL Tutorial. The tutorial is intended to give an introduction to PostgreSQL, relational database concepts, and the SQL language. We assume some general knowledge about how to use computers, and no particular Unix or programming experience is required. This tutorial is intended to provide hands-on experience with important aspects of the PostgreSQL system. It makes no attempt to be a comprehensive treatment of the topics it covers.
After you have successfully completed this tutorial you will want to read Part II to gain a better understanding of the SQL language, or Part IV for information about developing applications with PostgreSQL. Those who provision and manage their own PostgreSQL installation should also read Part III.
Table of Contents
1. Getting Started
    1.1. Installation
    1.2. Architectural Fundamentals
    1.3. Creating a Database
    1.4. Accessing a Database
2. The SQL Language
    2.1. Introduction
    2.2. Concepts
    2.3. Creating a New Table
    2.4. Populating a Table With Rows
    2.5. Querying a Table
    2.6. Joins Between Tables
    2.7. Aggregate Functions
    2.8. Updates
    2.9. Deletions
3. Advanced Features
    3.1. Introduction
    3.2. Views
    3.3. Foreign Keys
    3.4. Transactions
    3.5. Window Functions
    3.6. Inheritance
    3.7. Conclusion
Chapter 1. Getting Started
1.1. Installation
Before you can use PostgreSQL you need to install it, of course. It is possible that PostgreSQL is already installed at your site, either because it was included in your operating system distribution or because the system administrator already installed it. If that is the case, you should obtain information from the operating system documentation or your system administrator about how to access PostgreSQL.
If you are not sure whether PostgreSQL is already available or whether you can use it for your experimentation then you can install it yourself. Doing so is not hard and it can be a good exercise. PostgreSQL can be installed by any unprivileged user; no superuser (root) access is required.
If you are installing PostgreSQL yourself, then refer to Chapter 17 for instructions on installation, and
return to this guide when the installation is complete. Be sure to follow closely the section about setting
up the appropriate environment variables.
If your site administrator has not set things up in the default way, you might have some more work to do. For
example, if the database server machine is a remote machine, you will need to set the PGHOST environment
variable to the name of the database server machine. The environment variable PGPORT might also have
to be set. The bottom line is this: if you try to start an application program and it complains that it cannot
connect to the database, you should consult your site administrator or, if that is you, the documentation
to make sure that your environment is properly set up. If you did not understand the preceding paragraph
then read the next section.
1.2. Architectural Fundamentals

Before we proceed, you should understand the basic PostgreSQL system architecture. Understanding how the parts of PostgreSQL interact will make this chapter somewhat clearer.
In database jargon, PostgreSQL uses a client/server model. A PostgreSQL session consists of the following cooperating processes (programs):
• A server process, which manages the database files, accepts connections to the database from client
applications, and performs database actions on behalf of the clients. The database server program is
called postgres.
• The user's client (frontend) application that wants to perform database operations. Client applications
can be very diverse in nature: a client could be a text-oriented tool, a graphical application, a web server
that accesses the database to display web pages, or a specialized database maintenance tool. Some client
applications are supplied with the PostgreSQL distribution; most are developed by users.
As is typical of client/server applications, the client and the server can be on different hosts. In that case
they communicate over a TCP/IP network connection. You should keep this in mind, because the files that
can be accessed on a client machine might not be accessible (or might only be accessible using a different
file name) on the database server machine.
The PostgreSQL server can handle multiple concurrent connections from clients. To achieve this it starts
(“forks”) a new process for each connection. From that point on, the client and the new server process
communicate without intervention by the original postgres process. Thus, the supervisor server process
is always running, waiting for client connections, whereas client and associated server processes come and
go. (All of this is of course invisible to the user. We only mention it here for completeness.)
1.3. Creating a Database

The first test to see whether you can access the database server is to try to create a database. A running PostgreSQL server can manage many databases. Typically, a separate database is used for each project or for each user.
Possibly, your site administrator has already created a database for your use. In that case you can omit this step and skip ahead to the next section.
To create a new database, in this example named mydb, you use the following command:
$ createdb mydb
If this produces no response then this step was successful and you can skip over the remainder of this
section.
If instead you see something like:

createdb: command not found

then PostgreSQL was not installed properly. Either it was not installed at all or your shell's search path was not set to include it. Try calling the command with an absolute path instead:
$ /usr/local/pgsql/bin/createdb mydb
The path at your site might be different. Contact your site administrator or check the installation instruc-
tions to correct the situation.
Another response could be an error message reporting that the connection to the server failed. This means that the server was not started, or it is not listening where createdb expects to contact it. Again, check the installation instructions or consult the administrator.
Yet another possibility is an error message in which your own login name is mentioned, saying that the role does not exist. This will happen if the administrator has not created a PostgreSQL user account for you. (PostgreSQL user accounts are distinct from operating system user accounts.) If you are the administrator, see Chapter 21 for help creating accounts. You will need to become the operating system user under which PostgreSQL was installed (usually postgres) to create the first user account. It could also be that you were assigned a PostgreSQL user name that is different from your operating system user name; in that case you need to use the -U switch or set the PGUSER environment variable to specify your PostgreSQL user name.
If you have a user account but it does not have the privileges required to create a database, you will see an error message reporting that permission to create the database was denied. Not every user has authorization to create new databases. If PostgreSQL refuses to create databases for you then the site administrator needs to grant you permission to create databases. Consult your site administrator if this occurs. If you installed PostgreSQL yourself then you should log in for the purposes of this tutorial under the user account that you started the server as.

As an explanation for why this works: PostgreSQL user names are separate from operating system user accounts. When you connect to a database, you can choose what PostgreSQL user name to connect as; if you don't, it will default to the same name as your current operating system account. As it happens, there will always be a PostgreSQL user account that has the same name as the operating system user that started the server, and it also happens that that user always has permission to create databases. Instead of logging in as that user you can also specify the -U option everywhere to select a PostgreSQL user name to connect as.
You can also create databases with other names. PostgreSQL allows you to create any number of databases
at a given site. Database names must have an alphabetic first character and are limited to 63 bytes in length.
A convenient choice is to create a database with the same name as your current user name. Many tools
assume that database name as the default, so it can save you some typing. To create that database, simply
type:
$ createdb
If you do not want to use your database anymore you can remove it. For example, if you are the owner
(creator) of the database mydb, you can destroy it using the following command:
$ dropdb mydb
(For this command, the database name does not default to the user account name. You always need to
specify it.) This action physically removes all files associated with the database and cannot be undone, so
this should only be done with a great deal of forethought.
More about createdb and dropdb can be found in createdb and dropdb respectively.
1.4. Accessing a Database

Once you have created a database, you can access it by:

• Running the PostgreSQL interactive terminal program, called psql, which allows you to interactively enter, edit, and execute SQL commands.
• Using an existing graphical frontend tool like pgAdmin or an office suite with ODBC or JDBC support
to create and manipulate a database. These possibilities are not covered in this tutorial.
• Writing a custom application, using one of the several available language bindings. These possibilities
are discussed further in Part IV.
You probably want to start up psql to try the examples in this tutorial. It can be activated for the mydb
database by typing the command:
$ psql mydb
If you do not supply the database name then it will default to your user account name. You already discovered this scheme in the previous section using createdb.
In psql, you will be greeted with the following message:
psql (17.4)
Type "help" for help.
mydb=>
The last line could also be:

mydb=#

That would mean you are a database superuser, which is most likely the case if you installed the PostgreSQL instance yourself. Being a superuser means that you are not subject to access controls. For the purposes of this tutorial that is not important.
If you encounter problems starting psql then go back to the previous section. The diagnostics of createdb and psql are similar, and if the former worked the latter should work as well.
The last line printed out by psql is the prompt, and it indicates that psql is listening to you and that you can type SQL queries into a work space maintained by psql. Try out this command:
mydb=> SELECT 2 + 2;
?column?
----------
4
(1 row)
The psql program has a number of internal commands that are not SQL commands. They begin with
the backslash character, “\”. For example, you can get help on the syntax of various PostgreSQL SQL
commands by typing:
mydb=> \h
To get out of psql, type:

mydb=> \q
and psql will quit and return you to your command shell. (For more internal commands, type \? at the
psql prompt.) The full capabilities of psql are documented in psql. In this tutorial we will not use these
features explicitly, but you can use them yourself when it is helpful.
Chapter 2. The SQL Language
2.1. Introduction
This chapter provides an overview of how to use SQL to perform simple operations. This tutorial is only intended to give you an introduction and is in no way a complete tutorial on SQL. Numerous books have been written on SQL, including [melt93] and [date97]. You should be aware that some PostgreSQL language features are extensions to the standard.
In the examples that follow, we assume that you have created a database named mydb, as described in the
previous chapter, and have been able to start psql.
Examples in this manual can also be found in the PostgreSQL source distribution in the directory src/tutorial/. (Binary distributions of PostgreSQL might not provide those files.) To use those files, first change to that directory and run make:
$ cd .../src/tutorial
$ make
This creates the scripts and compiles the C files containing user-defined functions and types. Then, to start
the tutorial, do the following:
$ psql -s mydb
...
mydb=> \i basics.sql
The \i command reads in commands from the specified file. psql's -s option puts you in single step
mode which pauses before sending each statement to the server. The commands used in this section are
in the file basics.sql.
2.2. Concepts
PostgreSQL is a relational database management system (RDBMS). That means it is a system for
managing data stored in relations. Relation is essentially a mathematical term for table. The notion of
storing data in tables is so commonplace today that it might seem inherently obvious, but there are a
number of other ways of organizing databases. Files and directories on Unix-like operating systems form
an example of a hierarchical database. A more modern development is the object-oriented database.
Each table is a named collection of rows. Each row of a given table has the same set of named columns,
and each column is of a specific data type. Whereas columns have a fixed order in each row, it is important
to remember that SQL does not guarantee the order of the rows within the table in any way (although they
can be explicitly sorted for display).
Tables are grouped into databases, and a collection of databases managed by a single PostgreSQL server
instance constitutes a database cluster.
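2.3. Creating a New Table

You can create a new table by specifying the table name, along with all column names and their types:

CREATE TABLE weather (
    city            varchar(80),
    temp_lo         int,           -- low temperature
    temp_hi         int,           -- high temperature
    prcp            real,          -- precipitation
    date            date
);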
You can enter this into psql with the line breaks. psql will recognize that the command is not terminated
until the semicolon.
White space (i.e., spaces, tabs, and newlines) can be used freely in SQL commands. That means you can
type the command aligned differently than above, or even all on one line. Two dashes (“--”) introduce
comments. Whatever follows them is ignored up to the end of the line. SQL is case-insensitive about key
words and identifiers, except when identifiers are double-quoted to preserve the case (not done above).
varchar(80) specifies a data type that can store arbitrary character strings up to 80 characters in length.
int is the normal integer type. real is a type for storing single precision floating-point numbers. date
should be self-explanatory. (Yes, the column of type date is also named date. This might be convenient
or confusing — you choose.)
PostgreSQL supports the standard SQL types int, smallint, real, double precision,
char(N), varchar(N), date, time, timestamp, and interval, as well as other types of general
utility and a rich set of geometric types. PostgreSQL can be customized with an arbitrary number of user-
defined data types. Consequently, type names are not key words in the syntax, except where required to
support special cases in the SQL standard.
The second example will store cities and their associated geographical location:
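CREATE TABLE cities (
    name            varchar(80),
    location        point
);

The point type is an example of a PostgreSQL-specific data type.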
Finally, it should be mentioned that if you don't need a table any longer or want to recreate it differently
you can remove it using the following command:
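DROP TABLE tablename;

2.4. Populating a Table With Rows

The INSERT statement is used to populate a table with rows:

INSERT INTO weather VALUES ('San Francisco', 46, 50, 0.25, '1994-11-27');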
Note that all data types use rather obvious input formats. Constants that are not simple numeric values
usually must be surrounded by single quotes ('), as in the example. The date type is actually quite flexible
in what it accepts, but for this tutorial we will stick to the unambiguous format shown here.
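The point type requires a coordinate pair as input, as shown here:

INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)');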
The syntax used so far requires you to remember the order of the columns. An alternative syntax allows
you to list the columns explicitly:
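INSERT INTO weather (city, temp_lo, temp_hi, prcp, date)
    VALUES ('San Francisco', 43, 57, 0.0, '1994-11-29');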
You can list the columns in a different order if you wish or even omit some columns, e.g., if the precipitation
is unknown:
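INSERT INTO weather (date, city, temp_hi, temp_lo)
    VALUES ('1994-11-29', 'Hayward', 54, 37);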
Many developers consider explicitly listing the columns better style than relying on the order implicitly.
Please enter all the commands shown above so you have some data to work with in the following sections.
You could also have used COPY to load large amounts of data from flat-text files. This is usually faster
because the COPY command is optimized for this application while allowing less flexibility than INSERT.
An example would be:
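COPY weather FROM '/home/user/weather.txt';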
where the file name for the source file must be available on the machine running the backend process, not
the client, since the backend process reads the file directly. You can read more about the COPY command
in COPY.
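2.5. Querying a Table

To retrieve data from a table, the table is queried. An SQL SELECT statement is used to do this. For example, to retrieve all the rows of table weather, type:

SELECT * FROM weather;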
Here * is a shorthand for “all columns”. So the same result would be had with:
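SELECT city, temp_lo, temp_hi, prcp, date FROM weather;

The output should be: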
     city      | temp_lo | temp_hi | prcp |    date
---------------+---------+---------+------+------------
 San Francisco |      46 |      50 | 0.25 | 1994-11-27
 San Francisco |      43 |      57 |    0 | 1994-11-29
 Hayward       |      37 |      54 |      | 1994-11-29
(3 rows)
You can write expressions, not just simple column references, in the select list. For example, you can do:
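SELECT city, (temp_hi+temp_lo)/2 AS temp_avg, date FROM weather;

     city      | temp_avg |    date
---------------+----------+------------
 San Francisco |       48 | 1994-11-27
 San Francisco |       50 | 1994-11-29
 Hayward       |       45 | 1994-11-29
(3 rows)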
Notice how the AS clause is used to relabel the output column. (The AS clause is optional.)
A query can be “qualified” by adding a WHERE clause that specifies which rows are wanted. The WHERE
clause contains a Boolean (truth value) expression, and only rows for which the Boolean expression is
true are returned. The usual Boolean operators (AND, OR, and NOT) are allowed in the qualification. For
example, the following retrieves the weather of San Francisco on rainy days:
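SELECT * FROM weather
    WHERE city = 'San Francisco' AND prcp > 0.0;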
Result:
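     city      | temp_lo | temp_hi | prcp |    date
---------------+---------+---------+------+------------
 San Francisco |      46 |      50 | 0.25 | 1994-11-27
(1 row)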
You can request that the results of a query be returned in sorted order:
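SELECT * FROM weather
    ORDER BY city;

     city      | temp_lo | temp_hi | prcp |    date
---------------+---------+---------+------+------------
 Hayward       |      37 |      54 |      | 1994-11-29
 San Francisco |      43 |      57 |    0 | 1994-11-29
 San Francisco |      46 |      50 | 0.25 | 1994-11-27
(3 rows)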
In this example, the sort order isn't fully specified, and so you might get the San Francisco rows in either
order. But you'd always get the results shown above if you do:
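SELECT * FROM weather
    ORDER BY city, temp_lo;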
You can request that duplicate rows be removed from the result of a query:
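SELECT DISTINCT city
    FROM weather;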
city
---------------
Hayward
San Francisco
(2 rows)
Here again, the result row ordering might vary. You can ensure consistent results by using DISTINCT
and ORDER BY together:
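SELECT DISTINCT city
    FROM weather
    ORDER BY city;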
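2.6. Joins Between Tables

Queries can access multiple tables at once, or access the same table in such a way that multiple rows
of the table are being processed at the same time. Queries that access multiple tables (or multiple
instances of the same table) at one time are called join queries. As an example, say you wish to list
all the weather records together with the location of the associated city:

SELECT * FROM weather JOIN cities ON city = name;

     city      | temp_lo | temp_hi | prcp |    date    |     name      | location
---------------+---------+---------+------+------------+---------------+-----------
 San Francisco |      46 |      50 | 0.25 | 1994-11-27 | San Francisco | (-194,53)
 San Francisco |      43 |      57 |    0 | 1994-11-29 | San Francisco | (-194,53)
(2 rows)

(The location values shown assume the coordinate pair loaded for San Francisco in the sample data.)
Observe two things about the result set: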
• There is no result row for the city of Hayward. This is because there is no matching entry in the cities
table for Hayward, so the join ignores the unmatched rows in the weather table. We will see shortly
how this can be fixed.
• There are two columns containing the city name. This is correct because the lists of columns from the
weather and cities tables are concatenated. In practice this is undesirable, though, so you will
probably want to list the output columns explicitly rather than using *:
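SELECT city, temp_lo, temp_hi, prcp, date, location
    FROM weather JOIN cities ON city = name;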
Since the columns all had different names, the parser automatically found which table they belong to. If
there were duplicate column names in the two tables you'd need to qualify the column names to show
which one you meant, as in:
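SELECT weather.city, weather.temp_lo, weather.temp_hi,
       weather.prcp, weather.date, cities.location
    FROM weather JOIN cities ON weather.city = cities.name;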
It is widely considered good style to qualify all column names in a join query, so that the query won't fail
if a duplicate column name is later added to one of the tables.
Join queries of the kind seen thus far can also be written in this form:
SELECT *
FROM weather, cities
WHERE city = name;
This syntax pre-dates the JOIN/ON syntax, which was introduced in SQL-92. The tables are simply listed
in the FROM clause, and the comparison expression is added to the WHERE clause. The results from this
older implicit syntax and the newer explicit JOIN/ON syntax are identical. But for a reader of the query,
the explicit syntax makes its meaning easier to understand: The join condition is introduced by its own key
word whereas previously the condition was mixed into the WHERE clause together with other conditions.
Now we will figure out how we can get the Hayward records back in. What we want the query to do is
to scan the weather table and for each row to find the matching cities row(s). If no matching row is
found we want some “empty values” to be substituted for the cities table's columns. This kind of query
is called an outer join. (The joins we have seen so far are inner joins.) The command looks like this:
SELECT *
FROM weather LEFT OUTER JOIN cities ON weather.city = cities.name;
This query is called a left outer join because the table mentioned on the left of the join operator will have
each of its rows in the output at least once, whereas the table on the right will only have those rows output
that match some row of the left table. When outputting a left-table row for which there is no right-table
match, empty (null) values are substituted for the right-table columns.
Exercise: There are also right outer joins and full outer joins. Try to find out what those do.
We can also join a table against itself. This is called a self join. As an example, suppose we wish to find
all the weather records that are in the temperature range of other weather records. So we need to compare
the temp_lo and temp_hi columns of each weather row to the temp_lo and temp_hi columns
of all other weather rows. We can do this with the following query:
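SELECT w1.city, w1.temp_lo AS low, w1.temp_hi AS high,
       w2.city, w2.temp_lo AS low, w2.temp_hi AS high
    FROM weather w1 JOIN weather w2
        ON w1.temp_lo < w2.temp_lo AND w1.temp_hi > w2.temp_hi;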
Here we have relabeled the weather table as w1 and w2 to be able to distinguish the left and right side of
the join. You can also use these kinds of aliases in other queries to save some typing, e.g.:
SELECT *
FROM weather w JOIN cities c ON w.city = c.name;
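2.7. Aggregate Functions

Like most other relational database products, PostgreSQL supports aggregate functions. An aggregate
function computes a single result from multiple input rows. For example, there are aggregates to
compute the count, sum, avg (average), max (maximum) and min (minimum) over a set of rows. As an
example, we can find the highest low-temperature reading anywhere with:

SELECT max(temp_lo) FROM weather;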
max
-----
46
(1 row)
If we wanted to know what city (or cities) that reading occurred in, we might try:
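SELECT city FROM weather WHERE temp_lo = max(temp_lo);     -- WRONG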
but this will not work since the aggregate max cannot be used in the WHERE clause. (This restriction
exists because the WHERE clause determines which rows will be included in the aggregate calculation; so
obviously it has to be evaluated before aggregate functions are computed.) However, as is often the case,
the query can be restated to accomplish the desired result, here by using a subquery:
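SELECT city FROM weather
    WHERE temp_lo = (SELECT max(temp_lo) FROM weather);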
city
---------------
San Francisco
(1 row)
This is OK because the subquery is an independent computation that computes its own aggregate separately
from what is happening in the outer query.
Aggregates are also very useful in combination with GROUP BY clauses. For example, we can get the
number of readings and the maximum low temperature observed in each city with:
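SELECT city, count(*), max(temp_lo)
    FROM weather
    GROUP BY city;

     city      | count | max
---------------+-------+-----
 Hayward       |     1 |  37
 San Francisco |     2 |  46
(2 rows)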
which gives us one output row per city. Each aggregate result is computed over the table rows matching
that city. We can filter these grouped rows using HAVING:
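SELECT city, count(*), max(temp_lo)
    FROM weather
    GROUP BY city
    HAVING max(temp_lo) < 40;

  city   | count | max
---------+-------+-----
 Hayward |     1 |  37
(1 row)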
which gives us the same results for only the cities that have all temp_lo values below 40. Finally, if we
only care about cities whose names begin with “S”, we might do:
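SELECT city, count(*), max(temp_lo)
    FROM weather
    WHERE city LIKE 'S%'
    GROUP BY city;

     city      | count | max
---------------+-------+-----
 San Francisco |     2 |  46
(1 row)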
(The LIKE operator does pattern matching and is explained in Section 9.7.)
It is important to understand the interaction between aggregates and SQL's WHERE and HAVING clauses.
The fundamental difference between WHERE and HAVING is this: WHERE selects input rows before groups
and aggregates are computed (thus, it controls which rows go into the aggregate computation), whereas
HAVING selects group rows after groups and aggregates are computed. Thus, the WHERE clause must not
contain aggregate functions; it makes no sense to try to use an aggregate to determine which rows will
be inputs to the aggregates. On the other hand, the HAVING clause always contains aggregate functions.
(Strictly speaking, you are allowed to write a HAVING clause that doesn't use aggregates, but it's seldom
useful. The same condition could be used more efficiently at the WHERE stage.)
In the previous example, we can apply the city name restriction in WHERE, since it needs no aggregate.
This is more efficient than adding the restriction to HAVING, because we avoid doing the grouping and
aggregate calculations for all rows that fail the WHERE check.
Another way to select the rows that go into an aggregate computation is to use FILTER, which is a per-
aggregate option:
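SELECT city, count(*) FILTER (WHERE temp_lo < 45), max(temp_lo)
    FROM weather
    GROUP BY city;

     city      | count | max
---------------+-------+-----
 Hayward       |     1 |  37
 San Francisco |     1 |  46
(2 rows)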
FILTER is much like WHERE, except that it removes rows only from the input of the particular aggregate
function that it is attached to. Here, the count aggregate counts only rows with temp_lo below 45; but
the max aggregate is still applied to all rows, so it still finds the reading of 46.
2.8. Updates
You can update existing rows using the UPDATE command. Suppose you discover the temperature read-
ings are all off by 2 degrees after November 28. You can correct the data as follows:
UPDATE weather
SET temp_hi = temp_hi - 2, temp_lo = temp_lo - 2
WHERE date > '1994-11-28';
2.9. Deletions
Rows can be removed from a table using the DELETE command. Suppose you are no longer interested in
the weather of Hayward. Then you can do the following to delete those rows from the table:
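DELETE FROM weather WHERE city = 'Hayward';

All weather records belonging to Hayward are removed. One should be wary of statements of the form:

DELETE FROM tablename;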
Without a qualification, DELETE will remove all rows from the given table, leaving it empty. The system
will not request confirmation before doing this!
Chapter 3. Advanced Features
3.1. Introduction
In the previous chapter we have covered the basics of using SQL to store and access your data in Post-
greSQL. We will now discuss some more advanced features of SQL that simplify management and prevent
loss or corruption of your data. Finally, we will look at some PostgreSQL extensions.
This chapter will on occasion refer to examples found in Chapter 2 to change or improve them, so it will be
useful to have read that chapter. Some examples from this chapter can also be found in advanced.sql
in the tutorial directory. This file also contains some sample data to load, which is not repeated here. (Refer
to Section 2.1 for how to use the file.)
3.2. Views
Refer back to the queries in Section 2.6. Suppose the combined listing of weather records and city location
is of particular interest to your application, but you do not want to type the query each time you need
it. You can create a view over the query, which gives a name to the query that you can refer to like an
ordinary table:
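CREATE VIEW myview AS
    SELECT name, temp_lo, temp_hi, prcp, date, location
        FROM weather, cities
        WHERE city = name;

SELECT * FROM myview;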
Making liberal use of views is a key aspect of good SQL database design. Views allow you to encapsu-
late the details of the structure of your tables, which might change as your application evolves, behind
consistent interfaces.
Views can be used in almost any place a real table can be used. Building views upon other views is not
uncommon.
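3.3. Foreign Keys

Suppose you want to make sure that no one can insert rows in the weather table that do not have a
matching entry in the cities table. This is called maintaining the referential integrity of your data.
The new declaration of the tables would look like this:

CREATE TABLE cities (
    name     varchar(80) primary key,
    location point
);

CREATE TABLE weather (
    city     varchar(80) references cities(name),
    temp_lo  int,
    temp_hi  int,
    prcp     real,
    date     date
);

Now try inserting an invalid record:

INSERT INTO weather VALUES ('Berkeley', 45, 53, 0.0, '1994-11-28');

ERROR:  insert or update on table "weather" violates foreign key constraint "weather_city_fkey"
DETAIL:  Key (city)=(Berkeley) is not present in table "cities".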
The behavior of foreign keys can be finely tuned to your application. We will not go beyond this simple
example in this tutorial, but just refer you to Chapter 5 for more information. Making correct use of foreign
keys will definitely improve the quality of your database applications, so you are strongly encouraged to
learn about them.
3.4. Transactions
Transactions are a fundamental concept of all database systems. The essential point of a transaction is that
it bundles multiple steps into a single, all-or-nothing operation. The intermediate states between the steps
are not visible to other concurrent transactions, and if some failure occurs that prevents the transaction
from completing, then none of the steps affect the database at all.
For example, consider a bank database that contains balances for various customer accounts, as well as
total deposit balances for branches. Suppose that we want to record a payment of $100.00 from Alice's
account to Bob's account. Simplifying outrageously, the SQL commands for this might look like:
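UPDATE accounts SET balance = balance - 100.00
    WHERE name = 'Alice';
UPDATE branches SET balance = balance - 100.00
    WHERE name = (SELECT branch_name FROM accounts WHERE name = 'Alice');
UPDATE accounts SET balance = balance + 100.00
    WHERE name = 'Bob';
UPDATE branches SET balance = balance + 100.00
    WHERE name = (SELECT branch_name FROM accounts WHERE name = 'Bob');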
The details of these commands are not important here; the important point is that there are several separate
updates involved to accomplish this rather simple operation. Our bank's officers will want to be assured
that either all these updates happen, or none of them happen. It would certainly not do for a system failure
to result in Bob receiving $100.00 that was not debited from Alice. Nor would Alice long remain a happy
customer if she was debited without Bob being credited. We need a guarantee that if something goes wrong
partway through the operation, none of the steps executed so far will take effect. Grouping the updates
into a transaction gives us this guarantee. A transaction is said to be atomic: from the point of view of
other transactions, it either happens completely or not at all.
We also want a guarantee that once a transaction is completed and acknowledged by the database system,
it has indeed been permanently recorded and won't be lost even if a crash ensues shortly thereafter. For
example, if we are recording a cash withdrawal by Bob, we do not want any chance that the debit to his
account will disappear in a crash just after he walks out the bank door. A transactional database guaran-
tees that all the updates made by a transaction are logged in permanent storage (i.e., on disk) before the
transaction is reported complete.
Another important property of transactional databases is closely related to the notion of atomic updates:
when multiple transactions are running concurrently, each one should not be able to see the incomplete
changes made by others. For example, if one transaction is busy totalling all the branch balances, it would
not do for it to include the debit from Alice's branch but not the credit to Bob's branch, nor vice versa. So
transactions must be all-or-nothing not only in terms of their permanent effect on the database, but also in
terms of their visibility as they happen. The updates made so far by an open transaction are invisible to other
transactions until the transaction completes, whereupon all the updates become visible simultaneously.
In PostgreSQL, a transaction is set up by surrounding the SQL commands of the transaction with BEGIN
and COMMIT commands. So our banking transaction would actually look like:
BEGIN;
UPDATE accounts SET balance = balance - 100.00
WHERE name = 'Alice';
-- etc etc
COMMIT;
If, partway through the transaction, we decide we do not want to commit (perhaps we just noticed that
Alice's balance went negative), we can issue the command ROLLBACK instead of COMMIT, and all our
updates so far will be canceled.
PostgreSQL actually treats every SQL statement as being executed within a transaction. If you do not issue
a BEGIN command, then each individual statement has an implicit BEGIN and (if successful) COMMIT
wrapped around it. A group of statements surrounded by BEGIN and COMMIT is sometimes called a
transaction block.
Note
Some client libraries issue BEGIN and COMMIT commands automatically, so that you might get
the effect of transaction blocks without asking. Check the documentation for the interface you are
using.
It's possible to control the statements in a transaction in a more granular fashion through the use of save-
points. Savepoints allow you to selectively discard parts of the transaction, while committing the rest. After
defining a savepoint with SAVEPOINT, you can if needed roll back to the savepoint with ROLLBACK TO.
All the transaction's database changes between defining the savepoint and rolling back to it are discarded,
but changes earlier than the savepoint are kept.
After rolling back to a savepoint, it continues to be defined, so you can roll back to it several times.
Conversely, if you are sure you won't need to roll back to a particular savepoint again, it can be released,
so the system can free some resources. Keep in mind that either releasing or rolling back to a savepoint
will automatically release all savepoints that were defined after it.
All this is happening within the transaction block, so none of it is visible to other database sessions. When
and if you commit the transaction block, the committed actions become visible as a unit to other sessions,
while the rolled-back actions never become visible at all.
Remembering the bank database, suppose we debit $100.00 from Alice's account, and credit Bob's account,
only to find later that we should have credited Wally's account. We could do it using savepoints like this:
BEGIN;
UPDATE accounts SET balance = balance - 100.00
WHERE name = 'Alice';
SAVEPOINT my_savepoint;
UPDATE accounts SET balance = balance + 100.00
WHERE name = 'Bob';
-- oops ... forget that and use Wally's account
ROLLBACK TO my_savepoint;
UPDATE accounts SET balance = balance + 100.00
WHERE name = 'Wally';
COMMIT;
This example is, of course, oversimplified, but there's a lot of control possible in a transaction block through
the use of savepoints. Moreover, ROLLBACK TO is the only way to regain control of a transaction block
that was put in aborted state by the system due to an error, short of rolling it back completely and starting
again.
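3.5. Window Functions

A window function performs a calculation across a set of table rows that are somehow related to the
current row. This is comparable to the type of calculation that can be done with an aggregate function.
However, window functions do not cause rows to become grouped into a single output row like non-window
aggregate calls would. Instead, the rows retain their separate identities. Behind the scenes, the window
function is able to access more than just the current row of the query result.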
Here is an example that shows how to compare each employee's salary with the average salary in his or
her department:
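SELECT depname, empno, salary, avg(salary) OVER (PARTITION BY depname)
  FROM empsalary;

  depname  | empno | salary |          avg
-----------+-------+--------+-----------------------
 develop   |    11 |   5200 | 5020.0000000000000000
 develop   |     7 |   4200 | 5020.0000000000000000
 develop   |     9 |   4500 | 5020.0000000000000000
 develop   |     8 |   6000 | 5020.0000000000000000
 develop   |    10 |   5200 | 5020.0000000000000000
 personnel |     5 |   3500 | 3700.0000000000000000
 personnel |     2 |   3900 | 3700.0000000000000000
 sales     |     3 |   4800 | 4866.6666666666666667
 sales     |     1 |   5000 | 4866.6666666666666667
 sales     |     4 |   4800 | 4866.6666666666666667
(10 rows)

(The empsalary table and its contents are the sample data assumed throughout this section.)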
The first three output columns come directly from the table empsalary, and there is one output row
for each row in the table. The fourth column represents an average taken across all the table rows that
have the same depname value as the current row. (This actually is the same function as the non-window
avg aggregate, but the OVER clause causes it to be treated as a window function and computed across
the window frame.)
A window function call always contains an OVER clause directly following the window function's name
and argument(s). This is what syntactically distinguishes it from a normal function or non-window aggre-
gate. The OVER clause determines exactly how the rows of the query are split up for processing by the
window function. The PARTITION BY clause within OVER divides the rows into groups, or partitions,
that share the same values of the PARTITION BY expression(s). For each row, the window function is
computed across the rows that fall into the same partition as the current row.
You can also control the order in which rows are processed by window functions using ORDER BY within
OVER. (The window ORDER BY does not even have to match the order in which the rows are output.)
Here is an example:
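SELECT depname, empno, salary,
       rank() OVER (PARTITION BY depname ORDER BY salary DESC)
  FROM empsalary;

  depname  | empno | salary | rank
-----------+-------+--------+------
 develop   |     8 |   6000 |    1
 develop   |    10 |   5200 |    2
 develop   |    11 |   5200 |    2
 develop   |     9 |   4500 |    4
 develop   |     7 |   4200 |    5
 personnel |     2 |   3900 |    1
 personnel |     5 |   3500 |    2
 sales     |     1 |   5000 |    1
 sales     |     4 |   4800 |    2
 sales     |     3 |   4800 |    2
(10 rows)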
As shown here, the rank function produces a numerical rank for each distinct ORDER BY value in
the current row's partition, using the order defined by the ORDER BY clause. rank needs no explicit
parameter, because its behavior is entirely determined by the OVER clause.
The rows considered by a window function are those of the “virtual table” produced by the query's FROM
clause as filtered by its WHERE, GROUP BY, and HAVING clauses if any. For example, a row removed
because it does not meet the WHERE condition is not seen by any window function. A query can contain
multiple window functions that slice up the data in different ways using different OVER clauses, but they
all act on the same collection of rows defined by this virtual table.
We already saw that ORDER BY can be omitted if the ordering of rows is not important. It is also possible
to omit PARTITION BY, in which case there is a single partition containing all rows.
There is another important concept associated with window functions: for each row, there is a set of rows
within its partition called its window frame. Some window functions act only on the rows of the window
frame, rather than of the whole partition. By default, if ORDER BY is supplied then the frame consists of
all rows from the start of the partition up through the current row, plus any following rows that are equal
to the current row according to the ORDER BY clause. When ORDER BY is omitted the default frame
consists of all rows in the partition. (There are options to define the window frame in other ways, but
this tutorial does not cover them; see Section 4.2.8 for details.) Here is an example using sum:
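SELECT salary, sum(salary) OVER () FROM empsalary;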
salary | sum
--------+-------
5200 | 47100
5000 | 47100
3500 | 47100
4800 | 47100
3900 | 47100
4200 | 47100
4500 | 47100
4800 | 47100
6000 | 47100
5200 | 47100
(10 rows)
Above, since there is no ORDER BY in the OVER clause, the window frame is the same as the partition,
which for lack of PARTITION BY is the whole table; in other words each sum is taken over the whole
table and so we get the same result for each output row. But if we add an ORDER BY clause, we get very
different results:
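SELECT salary, sum(salary) OVER (ORDER BY salary) FROM empsalary;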
salary | sum
--------+-------
3500 | 3500
3900 | 7400
4200 | 11600
4500 | 16100
4800 | 25700
4800 | 25700
5000 | 30700
5200 | 41100
5200 | 41100
6000 | 47100
(10 rows)
Here the sum is taken from the first (lowest) salary up through the current one, including any duplicates
of the current one (notice the results for the duplicated salaries).
Window functions are permitted only in the SELECT list and the ORDER BY clause of the query. They are
forbidden elsewhere, such as in GROUP BY, HAVING and WHERE clauses. This is because they logically
execute after the processing of those clauses. Also, window functions execute after non-window aggre-
gate functions. This means it is valid to include an aggregate function call in the arguments of a window
function, but not vice versa.
If there is a need to filter or group rows after the window calculations are performed, you can use a sub-
select. For example:
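SELECT depname, empno, salary, enroll_date
FROM
  (SELECT depname, empno, salary, enroll_date,
          rank() OVER (PARTITION BY depname ORDER BY salary DESC, empno) AS pos
     FROM empsalary
  ) AS ss
WHERE pos < 3;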
The above query only shows the rows from the inner query having rank less than 3.
When a query involves multiple window functions, it is possible to write out each one with a separate
OVER clause, but this is duplicative and error-prone if the same windowing behavior is wanted for several
functions. Instead, each windowing behavior can be named in a WINDOW clause and then referenced in
OVER. For example:
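SELECT sum(salary) OVER w, avg(salary) OVER w
  FROM empsalary
  WINDOW w AS (PARTITION BY depname ORDER BY salary DESC);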
More details about window functions can be found in Section 4.2.8, Section 9.22, Section 7.2.5, and the
SELECT reference page.
3.6. Inheritance
Inheritance is a concept from object-oriented databases. It opens up interesting new possibilities of data-
base design.
Let's create two tables: A table cities and a table capitals. Naturally, capitals are also cities, so you
want some way to show the capitals implicitly when you list all cities. If you're really clever you might
invent some scheme like this:
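CREATE TABLE capitals (
    name       text,
    population real,
    elevation  int,    -- (in ft)
    state      char(2)
);

CREATE TABLE non_capitals (
    name       text,
    population real,
    elevation  int     -- (in ft)
);

CREATE VIEW cities AS
    SELECT name, population, elevation FROM capitals
        UNION
    SELECT name, population, elevation FROM non_capitals;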
This works OK as far as querying goes, but it gets ugly when you need to update several rows, for one thing.
A better solution is this:

CREATE TABLE cities (
    name       text,
    population real,
    elevation  int     -- (in ft)
);
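CREATE TABLE capitals (
    state      char(2) UNIQUE NOT NULL
) INHERITS (cities);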
In this case, a row of capitals inherits all columns (name, population, and elevation) from its
parent, cities. The type of the column name is text, a native PostgreSQL type for variable length
character strings. The capitals table has an additional column, state, which shows its state abbrevi-
ation. In PostgreSQL, a table can inherit from zero or more other tables.
For example, the following query finds the names of all cities, including state capitals, that are located
at an elevation over 500 feet:
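SELECT name, elevation
    FROM cities
    WHERE elevation > 500;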
which returns:
name | elevation
-----------+-----------
Las Vegas | 2174
Mariposa | 1953
Madison | 845
(3 rows)
On the other hand, the following query finds all the cities that are not state capitals and are situated at an
elevation over 500 feet:
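SELECT name, elevation
    FROM ONLY cities
    WHERE elevation > 500;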
name | elevation
-----------+-----------
Las Vegas | 2174
Mariposa | 1953
(2 rows)
Here the ONLY before cities indicates that the query should be run over only the cities table, and not
tables below cities in the inheritance hierarchy. Many of the commands that we have already discussed
— SELECT, UPDATE, and DELETE — support this ONLY notation.
Note
Although inheritance is frequently useful, it has not been integrated with unique constraints or
foreign keys, which limits its usefulness. See Section 5.11 for more detail.
3.7. Conclusion
PostgreSQL has many features not touched upon in this tutorial introduction, which has been oriented
toward newer users of SQL. These features are discussed in more detail in the remainder of this book.
If you feel you need more introductory material, please visit the PostgreSQL web site
(https://www.postgresql.org) for links to more resources.
Part II. The SQL Language
This part describes the use of the SQL language in PostgreSQL. We start with describing the general syntax of SQL,
then how to create tables, how to populate the database, and how to query it. The middle part lists the available data
types and functions for use in SQL commands. Lastly, we address several aspects of importance for tuning a database.
The information is arranged so that a novice user can follow it from start to end and gain a full understanding of the
topics without having to refer forward too many times. The chapters are intended to be self-contained, so that advanced
users can read the chapters individually as they choose. The information is presented in narrative form with topical
units. Readers looking for a complete description of a particular command are encouraged to review Part VI.
Readers should know how to connect to a PostgreSQL database and issue SQL commands. Readers that are unfamiliar
with these issues are encouraged to read Part I first. SQL commands are typically entered using the PostgreSQL
interactive terminal psql, but other programs that have similar functionality can be used as well.
Table of Contents
4. SQL Syntax .................................................................................................................. 36
4.1. Lexical Structure ................................................................................................. 36
4.1.1. Identifiers and Key Words ......................................................................... 36
4.1.2. Constants ................................................................................................ 38
4.1.3. Operators ................................................................................................ 43
4.1.4. Special Characters .................................................................................... 44
4.1.5. Comments ............................................................................................... 44
4.1.6. Operator Precedence ................................................................................. 45
4.2. Value Expressions ............................................................................................... 46
4.2.1. Column References ................................................................................... 47
4.2.2. Positional Parameters ................................................................................ 47
4.2.3. Subscripts ................................................................................................ 47
4.2.4. Field Selection ......................................................................................... 48
4.2.5. Operator Invocations ................................................................................. 48
4.2.6. Function Calls .......................................................................................... 49
4.2.7. Aggregate Expressions .............................................................................. 49
4.2.8. Window Function Calls ............................................................................. 52
4.2.9. Type Casts .............................................................................................. 54
4.2.10. Collation Expressions .............................................................................. 55
4.2.11. Scalar Subqueries .................................................................................... 56
4.2.12. Array Constructors .................................................................................. 56
4.2.13. Row Constructors ................................................................................... 58
4.2.14. Expression Evaluation Rules ..................................................................... 59
4.3. Calling Functions ................................................................................................ 60
4.3.1. Using Positional Notation .......................................................................... 61
4.3.2. Using Named Notation .............................................................................. 62
4.3.3. Using Mixed Notation ............................................................................... 62
5. Data Definition ............................................................................................................. 64
5.1. Table Basics ....................................................................................................... 64
5.2. Default Values .................................................................................................... 65
5.3. Identity Columns ................................................................................................ 66
5.4. Generated Columns ............................................................................................. 67
5.5. Constraints ......................................................................................................... 69
5.5.1. Check Constraints ..................................................................................... 69
5.5.2. Not-Null Constraints ................................................................................. 71
5.5.3. Unique Constraints ................................................................................... 72
5.5.4. Primary Keys ........................................................................................... 74
5.5.5. Foreign Keys ........................................................................................... 75
5.5.6. Exclusion Constraints ................................................................................ 78
5.6. System Columns ................................................................................................. 79
5.7. Modifying Tables ................................................................................................ 80
5.7.1. Adding a Column ..................................................................................... 80
5.7.2. Removing a Column ................................................................................. 81
5.7.3. Adding a Constraint .................................................................................. 81
5.7.4. Removing a Constraint .............................................................................. 81
5.7.5. Changing a Column's Default Value ............................................................ 82
5.7.6. Changing a Column's Data Type ................................................................. 82
5.7.7. Renaming a Column ................................................................................. 82
5.7.8. Renaming a Table .................................................................................... 82
5.8. Privileges ........................................................................................................... 83
5.9. Row Security Policies .......................................................................................... 88
Chapter 4. SQL Syntax
This chapter describes the syntax of SQL. It forms the foundation for understanding the following chapters
which will go into detail about how SQL commands are applied to define and modify data.
We also advise users who are already familiar with SQL to read this chapter carefully because it contains
several rules and concepts that are implemented inconsistently among SQL databases or that are specific
to PostgreSQL.
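4.1. Lexical Structure

SQL input consists of a sequence of commands. A command is composed of a sequence of tokens,
terminated by a semicolon (“;”). The end of the input stream also terminates a command. Which tokens
are valid depends on the syntax of the particular command.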
A token can be a key word, an identifier, a quoted identifier, a literal (or constant), or a special character
symbol. Tokens are normally separated by whitespace (space, tab, newline), but need not be if there is no
ambiguity (which is generally only the case if a special character is adjacent to some other token type).
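For example, the following is (syntactically) valid SQL input:

SELECT * FROM MY_TABLE;
UPDATE MY_TABLE SET A = 5;
INSERT INTO MY_TABLE VALUES (3, 'hi there');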
This is a sequence of three commands, one per line (although this is not required; more than one command
can be on a line, and commands can usefully be split across lines).
Additionally, comments can occur in SQL input. They are not tokens, they are effectively equivalent to
whitespace.
The SQL syntax is not very consistent regarding what tokens identify commands and which are operands
or parameters. The first few tokens are generally the command name, so in the above example we would
usually speak of a “SELECT”, an “UPDATE”, and an “INSERT” command. But for instance the UPDATE
command always requires a SET token to appear in a certain position, and this particular variation of
INSERT also requires a VALUES in order to be complete. The precise syntax rules for each command
are described in Part VI.
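4.1.1. Identifiers and Key Words

Tokens such as SELECT, UPDATE, or VALUES in the example above are examples of key words, that is,
words that have a fixed meaning in the SQL language. The tokens MY_TABLE and A are examples of
identifiers. They identify names of tables, columns, or other database objects, depending on the
command they are used in. Key words and identifiers have the same lexical structure, meaning that one
cannot know whether a token is an identifier or a key word without knowing the language.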
SQL identifiers and key words must begin with a letter (a-z, but also letters with diacritical marks and
non-Latin letters) or an underscore (_). Subsequent characters in an identifier or key word can be letters,
underscores, digits (0-9), or dollar signs ($). Note that dollar signs are not allowed in identifiers according
to the letter of the SQL standard, so their use might render applications less portable. The SQL standard
will not define a key word that contains digits or starts or ends with an underscore, so identifiers of this
form are safe against possible conflict with future extensions of the standard.
The system uses no more than NAMEDATALEN-1 bytes of an identifier; longer names can be written in
commands, but they will be truncated. By default, NAMEDATALEN is 64 so the maximum identifier length
is 63 bytes. If this limit is problematic, it can be raised by changing the NAMEDATALEN constant in src/
include/pg_config_manual.h.
A convention often used is to write key words in upper case and names in lower case, e.g.:
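UPDATE my_table SET a = 5;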
There is a second kind of identifier: the delimited identifier or quoted identifier. It is formed by enclosing
an arbitrary sequence of characters in double-quotes ("). A delimited identifier is always an identifier,
never a key word. So "select" could be used to refer to a column or table named “select”, whereas an
unquoted select would be taken as a key word and would therefore provoke a parse error when used
where a table or column name is expected. The example can be written with quoted identifiers like this:
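UPDATE "my_table" SET "a" = 5;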
Quoted identifiers can contain any character, except the character with code zero. (To include a double
quote, write two double quotes.) This allows constructing table or column names that would otherwise not
be possible, such as ones containing spaces or ampersands. The length limitation still applies.
Quoting an identifier also makes it case-sensitive, whereas unquoted names are always folded to lower
case. For example, the identifiers FOO, foo, and "foo" are considered the same by PostgreSQL, but
"Foo" and "FOO" are different from these three and each other. (The folding of unquoted names to lower
case in PostgreSQL is incompatible with the SQL standard, which says that unquoted names should be
folded to upper case. Thus, foo should be equivalent to "FOO" not "foo" according to the standard. If
you want to write portable applications you are advised to always quote a particular name or never quote it.)
A variant of quoted identifiers allows including escaped Unicode characters identified by their code points.
This variant starts with U& (upper or lower case U followed by ampersand) immediately before the opening
double quote, without any spaces in between, for example U&"foo". (Note that this creates an ambiguity
with the operator &. Use spaces around the operator to avoid this problem.) Inside the quotes, Unicode
characters can be specified in escaped form by writing a backslash followed by the four-digit hexadecimal
code point number or alternatively a backslash followed by a plus sign followed by a six-digit hexadecimal
code point number. For example, the identifier "data" could be written as
U&"d\0061t\+000061"
The following less trivial example writes the Russian word “slon” (elephant) in Cyrillic letters:
U&"\0441\043B\043E\043D"
If a different escape character than backslash is desired, it can be specified using the UESCAPE clause
after the string, for example:
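U&"d!0061t!+000061" UESCAPE '!'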
The escape character can be any single character other than a hexadecimal digit, the plus sign, a single
quote, a double quote, or a whitespace character. Note that the escape character is written in single quotes,
not double quotes, after UESCAPE.
Either the 4-digit or the 6-digit escape form can be used to specify UTF-16 surrogate pairs to compose
characters with code points larger than U+FFFF, although the availability of the 6-digit form technically
makes this unnecessary. (Surrogate pairs are not stored directly, but are combined into a single code point.)
If the server encoding is not UTF-8, the Unicode code point identified by one of these escape sequences
is converted to the actual server encoding; an error is reported if that's not possible.
4.1.2. Constants
There are three kinds of implicitly-typed constants in PostgreSQL: strings, bit strings, and numbers. Con-
stants can also be specified with explicit types, which can enable more accurate representation and more
efficient handling by the system. These alternatives are discussed in the following subsections.
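4.1.2.1. String Constants

A string constant in SQL is an arbitrary sequence of characters bounded by single quotes ('), for
example 'This is a string'. To include a single-quote character within a string constant, write two
adjacent single quotes, e.g., 'Dianne''s horse'. Note that this is not the same as a double-quote
character (").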
Two string constants that are only separated by whitespace with at least one newline are concatenated and
effectively treated as if the string had been written as one constant. For example:
SELECT 'foo'
'bar';
is equivalent to:
SELECT 'foobar';
but:
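SELECT 'foo'      'bar';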
is not valid syntax. (This slightly bizarre behavior is specified by SQL; PostgreSQL is following the stan-
dard.)
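4.1.2.2. String Constants with C-Style Escapes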
PostgreSQL also accepts “escape” string constants, which are an extension to the SQL standard. An escape
string constant is specified by writing the letter E (upper or lower case) just before the opening single
quote, e.g., E'foo'. (When continuing an escape string constant across lines, write E only before the
first opening quote.) Within an escape string, a backslash character (\) begins a C-like backslash escape
sequence, in which the combination of backslash and following character(s) represent a special byte value,
as shown in Table 4.1.
Any other character following a backslash is taken literally. Thus, to include a backslash character, write
two backslashes (\\). Also, a single quote can be included in an escape string by writing \', in addition
to the normal way of ''.
It is your responsibility that the byte sequences you create, especially when using the octal or hexadecimal
escapes, compose valid characters in the server character set encoding. A useful alternative is to use Uni-
code escapes or the alternative Unicode escape syntax, explained in Section 4.1.2.3; then the server will
check that the character conversion is possible.
Caution
If the configuration parameter standard_conforming_strings is off, then PostgreSQL recognizes
backslash escapes in both regular and escape string constants. However, as of PostgreSQL 9.1, the
default is on, meaning that backslash escapes are recognized only in escape string constants. This
behavior is more standards-compliant, but might break applications which rely on the historical
behavior, where backslash escapes were always recognized. As a workaround, you can set this
parameter to off, but it is better to migrate away from using backslash escapes. If you need to use
a backslash escape to represent a special character, write the string constant with an E.
4.1.2.3. String Constants with Unicode Escapes

PostgreSQL also supports another type of escape syntax for strings that allows specifying arbitrary
Unicode characters by code point. A Unicode escape string constant starts with U& (upper or lower case
letter U followed by ampersand) immediately before the opening quote, without any spaces in between,
for example U&'foo'. (Note that this creates an ambiguity with the operator &. Use spaces around the operator
to avoid this problem.) Inside the quotes, Unicode characters can be specified in escaped form by writing a
backslash followed by the four-digit hexadecimal code point number or alternatively a backslash followed
by a plus sign followed by a six-digit hexadecimal code point number. For example, the string 'data'
could be written as
U&'d\0061t\+000061'
The following less trivial example writes the Russian word “slon” (elephant) in Cyrillic letters:
U&'\0441\043B\043E\043D'
If a different escape character than backslash is desired, it can be specified using the UESCAPE clause
after the string, for example:
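U&'d!0061t!+000061' UESCAPE '!'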
The escape character can be any single character other than a hexadecimal digit, the plus sign, a single
quote, a double quote, or a whitespace character.
Either the 4-digit or the 6-digit escape form can be used to specify UTF-16 surrogate pairs to compose
characters with code points larger than U+FFFF, although the availability of the 6-digit form technically
makes this unnecessary. (Surrogate pairs are not stored directly, but are combined into a single code point.)
If the server encoding is not UTF-8, the Unicode code point identified by one of these escape sequences
is converted to the actual server encoding; an error is reported if that's not possible.
Also, the Unicode escape syntax for string constants only works when the configuration parameter stan-
dard_conforming_strings is turned on. This is because otherwise this syntax could confuse clients that
parse the SQL statements to the point that it could lead to SQL injections and similar security issues. If
the parameter is set to off, this syntax will be rejected with an error message.
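4.1.2.4. Dollar-Quoted String Constants

While the standard syntax for specifying string constants is usually convenient, it can be difficult to
understand when the desired string contains many single quotes, since each of those must be doubled. To
allow more readable queries in such situations, PostgreSQL provides another way, called “dollar quoting”,
to write string constants. A dollar-quoted string constant consists of a dollar sign ($), an optional
“tag” of zero or more characters, another dollar sign, an arbitrary sequence of characters that makes up
the string content, a dollar sign, the same tag that began this dollar quote, and a dollar sign. For
example, here are two different ways to specify the string “Dianne's horse” using dollar quoting: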
$$Dianne's horse$$
$SomeTag$Dianne's horse$SomeTag$
Notice that inside the dollar-quoted string, single quotes can be used without needing to be escaped. Indeed,
no characters inside a dollar-quoted string are ever escaped: the string content is always written literally.
Backslashes are not special, and neither are dollar signs, unless they are part of a sequence matching the
opening tag.
It is possible to nest dollar-quoted string constants by choosing different tags at each nesting level. This
is most commonly used in writing function definitions. For example:
$function$
BEGIN
RETURN ($1 ~ $q$[\t\r\n\v\\]$q$);
END;
$function$
The tag, if any, of a dollar-quoted string follows the same rules as an unquoted identifier, except that it
cannot contain a dollar sign. Tags are case sensitive, so $tag$String content$tag$ is correct, but
$TAG$String content$tag$ is not.
A dollar-quoted string that follows a keyword or identifier must be separated from it by whitespace; oth-
erwise the dollar quoting delimiter would be taken as part of the preceding identifier.
Dollar quoting is not part of the SQL standard, but it is often a more convenient way to write compli-
cated string literals than the standard-compliant single quote syntax. It is particularly useful when repre-
senting string constants inside other constants, as is often needed in procedural function definitions. With
single-quote syntax, each backslash in the above example would have to be written as four backslashes,
which would be reduced to two backslashes in parsing the original string constant, and then to one when
the inner string constant is re-parsed during function execution.
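4.1.2.5. Bit-String Constants

Bit-string constants look like regular string constants with a B (upper or lower case) immediately
before the opening quote (no intervening whitespace), e.g., B'1001'. The only characters allowed within
bit-string constants are 0 and 1.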
Alternatively, bit-string constants can be specified in hexadecimal notation, using a leading X (upper or
lower case), e.g., X'1FF'. This notation is equivalent to a bit-string constant with four binary digits for
each hexadecimal digit.
Both forms of bit-string constant can be continued across lines in the same way as regular string constants.
Dollar quoting cannot be used in a bit-string constant.
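4.1.2.6. Numeric Constants

Numeric constants are accepted in these general forms: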
digits
digits.[digits][e[+-]digits]
[digits].digits[e[+-]digits]
digitse[+-]digits
where digits is one or more decimal digits (0 through 9). At least one digit must be before or after the
decimal point, if one is used. At least one digit must follow the exponent marker (e), if one is present.
There cannot be any spaces or other characters embedded in the constant, except for underscores, which
can be used for visual grouping as described below. Note that any leading plus or minus sign is not actually
considered part of the constant; it is an operator applied to the constant.
These are some examples of valid numeric constants:

42
3.5
4.
.001
5e2
1.925e-3
Additionally, non-decimal integer constants are accepted in these forms:

0xhexdigits
0ooctdigits
0bbindigits
where hexdigits is one or more hexadecimal digits (0-9, A-F), octdigits is one or more octal digits
(0-7), and bindigits is one or more binary digits (0 or 1). Hexadecimal digits and the radix prefixes
can be in upper or lower case. Note that only integers can have non-decimal forms, not numbers with
fractional parts.
These are some examples of valid non-decimal integer constants:

0b100101
0B10011001
0o273
0O755
0x42f
0XFFFF
For visual grouping, underscores can be inserted between digits. These have no further effect on the value
of the constant. For example:
1_500_000_000
0b10001000_00000000
0o_1_755
0xFFFF_FFFF
1.618_034
Underscores are not allowed at the start or end of a numeric constant or a group of digits (that is, imme-
diately before or after the decimal point or the exponent marker), and more than one underscore in a row
is not allowed.
A numeric constant that contains neither a decimal point nor an exponent is initially presumed to be
type integer if its value fits in type integer (32 bits); otherwise it is presumed to be type bigint if
its value fits in type bigint (64 bits); otherwise it is taken to be type numeric. Constants that contain
decimal points and/or exponents are always initially presumed to be type numeric.
The initially assigned data type of a numeric constant is just a starting point for the type resolution algo-
rithms. In most cases the constant will be automatically coerced to the most appropriate type depending
on context. When necessary, you can force a numeric value to be interpreted as a specific data type by
casting it. For example, you can force a numeric value to be treated as type real (float4) by writing:
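REAL '1.23'  -- string style
1.23::REAL   -- PostgreSQL (historical) style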
These are actually just special cases of the general casting notations discussed next.
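4.1.2.7. Constants of Other Types

A constant of an arbitrary type can be entered using any one of the following notations: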
type 'string'
'string'::type
CAST ( 'string' AS type )
The string constant's text is passed to the input conversion routine for the type called type. The result is
a constant of the indicated type. The explicit type cast can be omitted if there is no ambiguity as to the
type the constant must be (for example, when it is assigned directly to a table column), in which case it
is automatically coerced.
The string constant can be written using either regular SQL notation or dollar-quoting.
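It is also possible to specify a type coercion using a function-like syntax: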
typename ( 'string' )
but not all type names can be used in this way; see Section 4.2.9 for details.
The ::, CAST(), and function-call syntaxes can also be used to specify run-time type conversions of
arbitrary expressions, as discussed in Section 4.2.9. To avoid syntactic ambiguity, the type 'string'
syntax can only be used to specify the type of a simple literal constant. Another restriction on the type
'string' syntax is that it does not work for array types; use :: or CAST() to specify the type of an
array constant.
The CAST() syntax conforms to SQL. The type 'string' syntax is a generalization of the standard:
SQL specifies this syntax only for a few data types, but PostgreSQL allows it for all types. The syntax
with :: is historical PostgreSQL usage, as is the function-call syntax.
4.1.3. Operators
An operator name is a sequence of up to NAMEDATALEN-1 (63 by default) characters from the following
list:
+-*/<>=~!@#%^&|`?
There are a few restrictions on operator names, however:
• -- and /* cannot appear anywhere in an operator name, since they will be taken as the start of a
comment.
• A multiple-character operator name cannot end in + or -, unless the name also contains at least one
of these characters:
~!@#%^&|`?
For example, @- is an allowed operator name, but *- is not. This restriction allows PostgreSQL to parse
SQL-compliant queries without requiring spaces between tokens.
When working with non-SQL-standard operator names, you will usually need to separate adjacent opera-
tors with spaces to avoid ambiguity. For example, if you have defined a prefix operator named @, you can-
not write X*@Y; you must write X* @Y to ensure that PostgreSQL reads it as two operator names not one.
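4.1.4. Special Characters

Some characters that are not alphanumeric have a special meaning that is different from being an
operator. Details on the usage can be found at the location where the respective syntax element is
described. This section only exists to advise the existence and summarize the purposes of these
characters.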
• A dollar sign ($) followed by digits is used to represent a positional parameter in the body of a function
definition or a prepared statement. In other contexts the dollar sign can be part of an identifier or a
dollar-quoted string constant.
• Parentheses (()) have their usual meaning to group expressions and enforce precedence. In some cases
parentheses are required as part of the fixed syntax of a particular SQL command.
• Brackets ([]) are used to select the elements of an array. See Section 8.15 for more information on
arrays.
• Commas (,) are used in some syntactical constructs to separate the elements of a list.
• The semicolon (;) terminates an SQL command. It cannot appear anywhere within a command, except
within a string constant or quoted identifier.
• The colon (:) is used to select “slices” from arrays. (See Section 8.15.) In certain SQL dialects (such
as Embedded SQL), the colon is used to prefix variable names.
• The asterisk (*) is used in some contexts to denote all the fields of a table row or composite value. It also
has a special meaning when used as the argument of an aggregate function, namely that the aggregate
does not require any explicit parameter.
• The period (.) is used in numeric constants, and to separate schema, table, and column names.
4.1.5. Comments
A comment is a sequence of characters beginning with double dashes and extending to the end of the
line, e.g.:
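-- This is a standard SQL comment

Alternatively, C-style block comments can be used: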
/* multiline comment
* with nesting: /* nested block comment */
*/
where the comment begins with /* and extends to the matching occurrence of */. These block comments
nest, as specified in the SQL standard but unlike C, so that one can comment out larger blocks of code
that might contain existing block comments.
A comment is removed from the input stream before further syntax analysis and is effectively replaced
by whitespace.
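4.1.6. Operator Precedence

Table 4.2 shows the precedence and associativity of the operators in PostgreSQL. Most operators have
the same precedence and are left-associative. The precedence and associativity of the operators is
hard-wired into the parser.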
Note that the operator precedence rules also apply to user-defined operators that have the same names as
the built-in operators mentioned above. For example, if you define a “+” operator for some custom data
type it will have the same precedence as the built-in “+” operator, no matter what yours does.
When a schema-qualified operator name is used in the OPERATOR syntax, as for example in:
SELECT 3 OPERATOR(pg_catalog.+) 4;
the OPERATOR construct is taken to have the default precedence shown in Table 4.2 for “any other oper-
ator”. This is true no matter which specific operator appears inside OPERATOR().
Note
PostgreSQL versions before 9.5 used slightly different operator precedence rules. In particular,
<= >= and <> used to be treated as generic operators; IS tests used to have higher priority; and
NOT BETWEEN and related constructs acted inconsistently, being taken in some cases as having
the precedence of NOT rather than BETWEEN. These rules were changed for better compliance
with the SQL standard and to reduce confusion from inconsistent treatment of logically equivalent
constructs. In most cases, these changes will result in no behavioral change, or perhaps in “no such
operator” failures which can be resolved by adding parentheses. However there are corner cases
in which a query might change behavior without any parsing error being reported.
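4.2. Value Expressions

Value expressions are used in a variety of contexts, such as in the target list of the SELECT command,
as new column values in INSERT or UPDATE, or in search conditions in a number of commands. The result
of a value expression is sometimes called a scalar, to distinguish it from the result of a table
expression (which is a table). Value expressions are therefore also called scalar expressions (or even
simply expressions). The expression syntax allows the calculation of values from primitive parts using
arithmetic, logical, set, and other operations.

A value expression is one of the following:

• A constant or literal value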
• A column reference
• A positional parameter reference, in the body of a function definition or prepared statement
• A subscripted expression
• A field selection expression
• An operator invocation
• A function call
• An aggregate expression
• A window function call
• A type cast
• A collation expression
• A scalar subquery
• An array constructor
• A row constructor
• Another value expression in parentheses (used to group subexpressions and override precedence)
In addition to this list, there are a number of constructs that can be classified as an expression but do not
follow any general syntax rules. These generally have the semantics of a function or operator and are
explained in the appropriate location in Chapter 9. An example is the IS NULL clause.
We have already discussed constants in Section 4.1.2. The following sections discuss the remaining op-
tions.
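4.2.1. Column References

A column can be referenced in the form: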
correlation.columnname
correlation is the name of a table (possibly qualified with a schema name), or an alias for a table
defined by means of a FROM clause. The correlation name and separating dot can be omitted if the column
name is unique across all the tables being used in the current query. (See also Chapter 7.)
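4.2.2. Positional Parameters

A positional parameter reference is used to indicate a value that is supplied externally to an SQL
statement. Parameters are used in SQL function definitions and in prepared queries. Some client
libraries also support specifying data values separately from the SQL command string, in which case
parameters are used to refer to the out-of-line data values. The form of a parameter reference is: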
$number
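For example, consider the definition of a function, dept, as:

CREATE FUNCTION dept(text) RETURNS dept
    AS $$ SELECT * FROM dept WHERE name = $1 $$
    LANGUAGE SQL;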
Here the $1 references the value of the first function argument whenever the function is invoked.
4.2.3. Subscripts
If an expression yields a value of an array type, then a specific element of the array value can be extracted
by writing
expression[subscript]
expression[lower_subscript:upper_subscript]
(Here, the brackets [ ] are meant to appear literally.) Each subscript is itself an expression, which
will be rounded to the nearest integer value.
In general the array expression must be parenthesized, but the parentheses can be omitted when the
expression to be subscripted is just a column reference or positional parameter. Also, multiple subscripts
can be concatenated when the original array is multidimensional. For example:
mytable.arraycolumn[4]
mytable.two_d_column[17][34]
$1[10:42]
(arrayfunction(a,b))[42]
The parentheses in the last example are required. See Section 8.15 for more about arrays.
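4.2.4. Field Selection

If an expression yields a value of a composite type (row type), then a specific field of the row can be
extracted by writing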
expression.fieldname
In general the row expression must be parenthesized, but the parentheses can be omitted when the
expression to be selected from is just a table reference or positional parameter. For example:
mytable.mycolumn
$1.somecolumn
(rowfunction(a,b)).col3
(Thus, a qualified column reference is actually just a special case of the field selection syntax.) An impor-
tant special case is extracting a field from a table column that is of a composite type:
(compositecol).somefield
(mytable.compositecol).somefield
The parentheses are required here to show that compositecol is a column name not a table name, or
that mytable is a table name not a schema name in the second case.
You can ask for all fields of a composite value by writing .*:
(compositecol).*
This notation behaves differently depending on context; see Section 8.16.5 for details.
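4.2.5. Operator Invocations

There are two possible syntaxes for an operator invocation:

expression operator expression (binary infix operator)
operator expression (unary prefix operator)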
where the operator token follows the syntax rules of Section 4.1.3, or is one of the key words AND,
OR, and NOT, or is a qualified operator name in the form:
OPERATOR(schema.operatorname)
Which particular operators exist and whether they are unary or binary depends on what operators have
been defined by the system or the user. Chapter 9 describes the built-in operators.
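4.2.6. Function Calls

The syntax for a function call is the name of a function (possibly qualified with a schema name),
followed by its argument list enclosed in parentheses:

function_name ([expression [, expression ... ]] )

For example, the following computes the square root of 2: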
sqrt(2)
The list of built-in functions is in Chapter 9. Other functions can be added by the user.
When issuing queries in a database where some users mistrust other users, observe security precautions
from Section 10.3 when writing function calls.
The arguments can optionally have names attached. See Section 4.3 for details.
Note
A function that takes a single argument of composite type can optionally be called using field-
selection syntax, and conversely field selection can be written in functional style. That is, the
notations col(table) and table.col are interchangeable. This behavior is not SQL-standard
but is provided in PostgreSQL because it allows use of functions to emulate “computed fields”.
For more information see Section 8.16.5.
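4.2.7. Aggregate Expressions

An aggregate expression represents the application of an aggregate function across the rows selected by
a query. An aggregate function reduces multiple inputs to a single output value, such as the sum or
average of the inputs. The syntax of an aggregate expression is one of the following:

aggregate_name (expression [ , ... ] [ order_by_clause ] ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name (ALL expression [ , ... ] [ order_by_clause ] ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name (DISTINCT expression [ , ... ] [ order_by_clause ] ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name ( * ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name ( [ expression [ , ... ] ] ) WITHIN GROUP ( order_by_clause ) [ FILTER ( WHERE filter_clause ) ]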
where aggregate_name is a previously defined aggregate (possibly qualified with a schema name) and
expression is any value expression that does not itself contain an aggregate expression or a window
function call. The optional order_by_clause and filter_clause are described below.
The first form of aggregate expression invokes the aggregate once for each input row. The second form is
the same as the first, since ALL is the default. The third form invokes the aggregate once for each distinct
value of the expression (or distinct set of values, for multiple expressions) found in the input rows. The
fourth form invokes the aggregate once for each input row; since no particular input value is specified,
it is generally only useful for the count(*) aggregate function. The last form is used with ordered-set
aggregate functions, which are described below.
Most aggregate functions ignore null inputs, so that rows in which one or more of the expression(s) yield
null are discarded. This can be assumed to be true, unless otherwise specified, for all built-in aggregates.
For example, count(*) yields the total number of input rows; count(f1) yields the number of input
rows in which f1 is non-null, since count ignores nulls; and count(distinct f1) yields the number
of distinct non-null values of f1.
Ordinarily, the input rows are fed to the aggregate function in an unspecified order. In many cases this does
not matter; for example, min produces the same result no matter what order it receives the inputs in. How-
ever, some aggregate functions (such as array_agg and string_agg) produce results that depend on
the ordering of the input rows. When using such an aggregate, the optional order_by_clause can be
used to specify the desired ordering. The order_by_clause has the same syntax as for a query-level
ORDER BY clause, as described in Section 7.5, except that its expressions are always just expressions and
cannot be output-column names or numbers. For example:
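SELECT array_agg(a ORDER BY b DESC) FROM table;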
Since jsonb only keeps the last matching key, ordering of its keys can be significant:
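-- illustrative sample data supplied inline via VALUES
SELECT jsonb_object_agg(k, v ORDER BY v)
    FROM (VALUES ('a', 1), ('b', 2), ('a', 3), ('c', 4)) AS t(k, v);

     jsonb_object_agg
--------------------------
 {"a": 3, "b": 2, "c": 4}
(1 row)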
When dealing with multiple-argument aggregate functions, note that the ORDER BY clause goes after all
the aggregate arguments. For example, write this:
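SELECT string_agg(a, ',' ORDER BY a) FROM table;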
not this:
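SELECT string_agg(a ORDER BY a, ',') FROM table;  -- incorrect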
The latter is syntactically valid, but it represents a call of a single-argument aggregate function with two
ORDER BY keys (the second one being rather useless since it's a constant).
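For example, with an aggregate ORDER BY the input rows can be fed to the aggregate in descending order
(the inline VALUES list here is illustrative sample data):

SELECT array_agg(v ORDER BY v DESC) FROM (VALUES (1), (2), (3), (4)) AS t(v);

 array_agg
-----------
 {4,3,2,1}
(1 row)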
Placing ORDER BY within the aggregate's regular argument list, as described so far, is used when order-
ing the input rows for general-purpose and statistical aggregates, for which ordering is optional. There
is a subclass of aggregate functions called ordered-set aggregates for which an order_by_clause is
required, usually because the aggregate's computation is only sensible in terms of a specific ordering of its
input rows. Typical examples of ordered-set aggregates include rank and percentile calculations. For an
ordered-set aggregate, the order_by_clause is written inside WITHIN GROUP (...), as shown
in the final syntax alternative above. The expressions in the order_by_clause are evaluated once per
input row just like regular aggregate arguments, sorted as per the order_by_clause's requirements,
and fed to the aggregate function as input arguments. (This is unlike the case for a non-WITHIN GROUP
order_by_clause, which is not treated as argument(s) to the aggregate function.) The argument ex-
pressions preceding WITHIN GROUP, if any, are called direct arguments to distinguish them from the
aggregated arguments listed in the order_by_clause. Unlike regular aggregate arguments, direct ar-
guments are evaluated only once per aggregate call, not once per input row. This means that they can
contain variables only if those variables are grouped by GROUP BY; this restriction is the same as if the
direct arguments were not inside an aggregate expression at all. Direct arguments are typically used for
things like percentile fractions, which only make sense as a single value per aggregation calculation. The
direct argument list can be empty; in this case, write just () not (*). (PostgreSQL will actually accept
either spelling, but only the first way conforms to the SQL standard.)
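An example of an ordered-set aggregate call is:

SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY income) FROM households;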
which obtains the 50th percentile, or median, value of the income column from table households.
Here, 0.5 is a direct argument; it would make no sense for the percentile fraction to be a value varying
across rows.
If FILTER is specified, then only the input rows for which the filter_clause evaluates to true are
fed to the aggregate function; other rows are discarded. For example:
SELECT
    count(*) AS unfiltered,
    count(*) FILTER (WHERE i < 5) AS filtered
FROM generate_series(1,10) AS s(i);

 unfiltered | filtered
------------+----------
         10 |        4
(1 row)
The predefined aggregate functions are described in Section 9.21. Other aggregate functions can be added
by the user.
An aggregate expression can only appear in the result list or HAVING clause of a SELECT command.
It is forbidden in other clauses, such as WHERE, because those clauses are logically evaluated before the
results of aggregates are formed.
When an aggregate expression appears in a subquery (see Section 4.2.11 and Section 9.24), the aggregate
is normally evaluated over the rows of the subquery. But an exception occurs if the aggregate's arguments
(and filter_clause if any) contain only outer-level variables: the aggregate then belongs to the near-
est such outer level, and is evaluated over the rows of that query. The aggregate expression as a whole is
then an outer reference for the subquery it appears in, and acts as a constant over any one evaluation of that
subquery. The restriction about appearing only in the result list or HAVING clause applies with respect to
the query level that the aggregate belongs to.
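A window function call represents the application of an aggregate-like function over some portion of
the rows selected by a query. The syntax of a window function call is one of the following:

function_name ([expression [, expression ... ]]) [ FILTER ( WHERE filter_clause ) ] OVER window_name
function_name ([expression [, expression ... ]]) [ FILTER ( WHERE filter_clause ) ] OVER ( window_definition )
function_name ( * ) [ FILTER ( WHERE filter_clause ) ] OVER window_name
function_name ( * ) [ FILTER ( WHERE filter_clause ) ] OVER ( window_definition )

where window_definition has the syntax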
[ existing_window_name ]
[ PARTITION BY expression [, ...] ]
[ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST
| LAST } ] [, ...] ]
[ frame_clause ]
The optional frame_clause can be one of

{ RANGE | ROWS | GROUPS } frame_start [ frame_exclusion ]
{ RANGE | ROWS | GROUPS } BETWEEN frame_start AND frame_end [ frame_exclusion ]

where frame_start and frame_end can be one of

UNBOUNDED PRECEDING
offset PRECEDING
CURRENT ROW
offset FOLLOWING
UNBOUNDED FOLLOWING

and frame_exclusion can be one of

EXCLUDE CURRENT ROW
EXCLUDE GROUP
EXCLUDE TIES
EXCLUDE NO OTHERS
Here, expression represents any value expression that does not itself contain window function calls.
window_name is a reference to a named window specification defined in the query's WINDOW clause.
Alternatively, a full window_definition can be given within parentheses, using the same syntax as
for defining a named window in the WINDOW clause; see the SELECT reference page for details. It's worth
pointing out that OVER wname is not exactly equivalent to OVER (wname ...); the latter implies
copying and modifying the window definition, and will be rejected if the referenced window specification
includes a frame clause.
The PARTITION BY clause groups the rows of the query into partitions, which are processed separately
by the window function. PARTITION BY works similarly to a query-level GROUP BY clause, except
that its expressions are always just expressions and cannot be output-column names or numbers. Without
PARTITION BY, all rows produced by the query are treated as a single partition. The ORDER BY clause
determines the order in which the rows of a partition are processed by the window function. It works
similarly to a query-level ORDER BY clause, but likewise cannot use output-column names or numbers.
Without ORDER BY, rows are processed in an unspecified order.
The frame_clause specifies the set of rows constituting the window frame, which is a subset of the
current partition, for those window functions that act on the frame instead of the whole partition. The set
of rows in the frame can vary depending on which row is the current row. The frame can be specified in
RANGE, ROWS or GROUPS mode; in each case, it runs from the frame_start to the frame_end. If
frame_end is omitted, the end defaults to CURRENT ROW.
A frame_start of UNBOUNDED PRECEDING means that the frame starts with the first row of the
partition, and similarly a frame_end of UNBOUNDED FOLLOWING means that the frame ends with
the last row of the partition.
In RANGE or GROUPS mode, a frame_start of CURRENT ROW means the frame starts with the current
row's first peer row (a row that the window's ORDER BY clause sorts as equivalent to the current row),
while a frame_end of CURRENT ROW means the frame ends with the current row's last peer row. In
ROWS mode, CURRENT ROW simply means the current row.
In the offset PRECEDING and offset FOLLOWING frame options, the offset must be an expres-
sion not containing any variables, aggregate functions, or window functions. The meaning of the offset
depends on the frame mode:
• In ROWS mode, the offset must yield a non-null, non-negative integer, and the option means that the
frame starts or ends the specified number of rows before or after the current row.
• In GROUPS mode, the offset again must yield a non-null, non-negative integer, and the option means
that the frame starts or ends the specified number of peer groups before or after the current row's peer
group, where a peer group is a set of rows that are equivalent in the ORDER BY ordering. (There must
be an ORDER BY clause in the window definition to use GROUPS mode.)
• In RANGE mode, these options require that the ORDER BY clause specify exactly one column. The
offset specifies the maximum difference between the value of that column in the current row and
its value in preceding or following rows of the frame. The data type of the offset expression varies
depending on the data type of the ordering column. For numeric ordering columns it is typically of the
same type as the ordering column, but for datetime ordering columns it is an interval. For example,
if the ordering column is of type date or timestamp, one could write RANGE BETWEEN '1 day'
PRECEDING AND '10 days' FOLLOWING. The offset is still required to be non-null and non-
negative, though the meaning of “non-negative” depends on its data type.
In any case, the distance to the end of the frame is limited by the distance to the end of the partition, so
that for rows near the partition ends the frame might contain fewer rows than elsewhere.
Notice that in both ROWS and GROUPS mode, 0 PRECEDING and 0 FOLLOWING are equivalent to
CURRENT ROW. This normally holds in RANGE mode as well, for an appropriate data-type-specific mean-
ing of “zero”.
The frame_exclusion option allows rows around the current row to be excluded from the frame, even
if they would be included according to the frame start and frame end options. EXCLUDE CURRENT ROW
excludes the current row from the frame. EXCLUDE GROUP excludes the current row and its ordering
peers from the frame. EXCLUDE TIES excludes any peers of the current row from the frame, but not
the current row itself. EXCLUDE NO OTHERS simply specifies explicitly the default behavior of not
excluding the current row or its peers.
The default framing option is RANGE UNBOUNDED PRECEDING, which is the same as RANGE
BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. With ORDER BY, this sets the frame to
be all rows from the partition start up through the current row's last ORDER BY peer. Without ORDER
BY, this means all rows of the partition are included in the window frame, since all rows become peers
of the current row.
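As an illustration, the following query sums each value of x with the immediately preceding one (the
column name x and the use of generate_series here are only illustrative):

SELECT x,
       sum(x) OVER (ORDER BY x ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS s
FROM generate_series(1, 4) AS t(x);

 x | s
---+---
 1 | 1
 2 | 3
 3 | 5
 4 | 7
(4 rows)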
If FILTER is specified, then only the input rows for which the filter_clause evaluates to true are
fed to the window function; other rows are discarded. Only window functions that are aggregates accept
a FILTER clause.
The built-in window functions are described in Table 9.65. Other window functions can be added by the
user. Also, any built-in or user-defined general-purpose or statistical aggregate can be used as a window
function. (Ordered-set and hypothetical-set aggregates cannot presently be used as window functions.)
The syntaxes using * are used for calling parameter-less aggregate functions as window functions, for
example count(*) OVER (PARTITION BY x ORDER BY y). The asterisk (*) is customarily
not used for window-specific functions. Window-specific functions do not allow DISTINCT or ORDER
BY to be used within the function argument list.
Window function calls are permitted only in the SELECT list and the ORDER BY clause of the query.
More information about window functions can be found in Section 3.5, Section 9.22, and Section 7.2.5.
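A type cast specifies a conversion from one data type to another. PostgreSQL accepts two equivalent
syntaxes for casts:

CAST ( expression AS type )
expression::type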
The CAST syntax conforms to SQL; the syntax with :: is historical PostgreSQL usage.
When a cast is applied to a value expression of a known type, it represents a run-time type conversion. The
cast will succeed only if a suitable type conversion operation has been defined. Notice that this is subtly
different from the use of casts with constants, as shown in Section 4.1.2.7. A cast applied to an unadorned
string literal represents the initial assignment of a type to a literal constant value, and so it will succeed for
any type (if the contents of the string literal are acceptable input syntax for the data type).
An explicit type cast can usually be omitted if there is no ambiguity as to the type that a value expression
must produce (for example, when it is assigned to a table column); the system will automatically apply a
type cast in such cases. However, automatic casting is only done for casts that are marked “OK to apply
implicitly” in the system catalogs. Other casts must be invoked with explicit casting syntax. This restriction
is intended to prevent surprising conversions from being applied silently.
It is also possible to specify a type cast using a function-like syntax:

typename ( expression )
However, this only works for types whose names are also valid as function names. For example, double
precision cannot be used this way, but the equivalent float8 can. Also, the names interval,
time, and timestamp can only be used in this fashion if they are double-quoted, because of syntactic
conflicts. Therefore, the use of the function-like cast syntax leads to inconsistencies and should probably
be avoided.
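For example, the following call succeeds because float8 is a valid function name, while the
commented-out spelling is rejected by the parser:

SELECT float8(42);               -- works, equivalent to 42::float8
-- SELECT double precision(42);  -- syntax error: the type name is two words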
Note
The function-like syntax is in fact just a function call. When one of the two standard cast syntaxes
is used to do a run-time conversion, it will internally invoke a registered function to perform the
conversion. By convention, these conversion functions have the same name as their output type,
and thus the “function-like syntax” is nothing more than a direct invocation of the underlying
conversion function. Obviously, this is not something that a portable application should rely on.
For further details see CREATE CAST.
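A COLLATE clause overrides the collation of an expression. It is appended to the expression it applies
to:

expr COLLATE collation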
where collation is a possibly schema-qualified identifier. The COLLATE clause binds tighter than
operators; parentheses can be used when necessary.
If no collation is explicitly specified, the database system either derives a collation from the columns
involved in the expression, or it defaults to the default collation of the database if no column is involved
in the expression.
The two common uses of the COLLATE clause are overriding the sort order in an ORDER BY clause,
for example:
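SELECT a, b, c FROM tbl WHERE ... ORDER BY a COLLATE "C";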
and overriding the collation of a function or operator call that has locale-sensitive results, for example:
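SELECT * FROM tbl WHERE a > 'foo' COLLATE "C";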
Note that in the latter case the COLLATE clause is attached to an input argument of the operator we wish to
affect. It doesn't matter which argument of the operator or function call the COLLATE clause is attached to,
because the collation that is applied by the operator or function is derived by considering all arguments, and
an explicit COLLATE clause will override the collations of all other arguments. (Attaching non-matching
COLLATE clauses to more than one argument, however, is an error. For more details see Section 23.2.)
Thus, this gives the same result as the previous example:
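SELECT * FROM tbl WHERE a COLLATE "C" > 'foo';

But it is an error to write:

SELECT * FROM tbl WHERE (a > 'foo') COLLATE "C";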
because it attempts to apply a collation to the result of the > operator, which is of the non-collatable data
type boolean.
For example, the following finds the largest city population in each state:
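SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name)
    FROM states;

An array constructor is an expression that builds an array value using values for its member elements.
A simple array constructor consists of the key word ARRAY, a left square bracket [, a list of expressions
(separated by commas) for the array element values, and finally a right square bracket ]. For example: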
SELECT ARRAY[1,2,3+4];
array
---------
{1,2,7}
(1 row)
By default, the array element type is the common type of the member expressions, determined using the
same rules as for UNION or CASE constructs (see Section 10.5). You can override this by explicitly casting
the array constructor to the desired type, for example:
SELECT ARRAY[1,2,22.7]::integer[];
array
----------
{1,2,23}
(1 row)
This has the same effect as casting each expression to the array element type individually. For more on
casting, see Section 4.2.9.
Multidimensional array values can be built by nesting array constructors. In the inner constructors, the key
word ARRAY can be omitted. For example, these produce the same result:
SELECT ARRAY[[1,2],[3,4]];
array
---------------
{{1,2},{3,4}}
(1 row)
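SELECT ARRAY[ARRAY[1,2], ARRAY[3,4]];
     array
---------------
 {{1,2},{3,4}}
(1 row)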
Since multidimensional arrays must be rectangular, inner constructors at the same level must produce sub-
arrays of identical dimensions. Any cast applied to the outer ARRAY constructor propagates automatically
to all the inner constructors.
Multidimensional array constructor elements can be anything yielding an array of the proper kind, not
only a sub-ARRAY construct. For example:
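CREATE TABLE arr(f1 int[], f2 int[]);

INSERT INTO arr VALUES (ARRAY[[1,2],[3,4]], ARRAY[[5,6],[7,8]]);

SELECT ARRAY[f1, f2, '{{9,10},{11,12}}'::int[]] FROM arr;
                     array
------------------------------------------------
 {{{1,2},{3,4}},{{5,6},{7,8}},{{9,10},{11,12}}}
(1 row)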
You can construct an empty array, but since it's impossible to have an array with no type, you must ex-
plicitly cast your empty array to the desired type. For example:
SELECT ARRAY[]::integer[];
array
-------
{}
(1 row)
It is also possible to construct an array from the results of a subquery. In this form, the array constructor
is written with the key word ARRAY followed by a parenthesized (not bracketed) subquery. For example:
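-- (illustrative queries using generate_series)
SELECT ARRAY(SELECT i * 2 FROM generate_series(1, 5) AS s(i));
    array
--------------
 {2,4,6,8,10}
(1 row)

SELECT ARRAY(SELECT ARRAY[i, i * 2] FROM generate_series(1, 5) AS s(i));
              array
----------------------------------
 {{1,2},{2,4},{3,6},{4,8},{5,10}}
(1 row)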
The subquery must return a single column. If the subquery's output column is of a non-array type, the
resulting one-dimensional array will have an element for each row in the subquery result, with an element
type matching that of the subquery's output column. If the subquery's output column is of an array type,
the result will be an array of the same type but one higher dimension; in this case all the subquery rows
must yield arrays of identical dimensionality, else the result would not be rectangular.
The subscripts of an array value built with ARRAY always begin with one. For more information about
arrays, see Section 8.15.
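A row constructor is an expression that builds a row value (also called a composite value) using values
for its member fields. For example:

SELECT ROW(1, 2.5, 'this is a test');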
The key word ROW is optional when there is more than one expression in the list.
A row constructor can include the syntax rowvalue.*, which will be expanded to a list of the elements
of the row value, just as occurs when the .* syntax is used at the top level of a SELECT list (see Sec-
tion 8.16.5). For example, if table t has columns f1 and f2, these are the same:
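SELECT ROW(t.*, 42) FROM t;
SELECT ROW(t.f1, t.f2, 42) FROM t;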
Note
Before PostgreSQL 8.2, the .* syntax was not expanded in row constructors, so that writing
ROW(t.*, 42) created a two-field row whose first field was another row value. The new be-
havior is usually more useful. If you need the old behavior of nested row values, write the inner
row value without .*, for instance ROW(t, 42).
By default, the value created by a ROW expression is of an anonymous record type. If necessary, it can be
cast to a named composite type — either the row type of a table, or a composite type created with CREATE
TYPE AS. An explicit cast might be needed to avoid ambiguity. For example:
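For instance (the type name and values here are illustrative):

CREATE TYPE complex AS (r double precision, i double precision);

SELECT ROW(1.1, 2.2)::complex;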
Row constructors can be used to build composite values to be stored in a composite-type table column,
or to be passed to a function that accepts a composite parameter. Also, it is possible to test rows using
the standard comparison operators as described in Section 9.2, to compare one row against another as
described in Section 9.25, and to use them in connection with subqueries, as discussed in Section 9.24.

The order of evaluation of subexpressions is not defined. In particular, the inputs of an operator or
function are not necessarily evaluated left-to-right or in any other fixed order.
Furthermore, if the result of an expression can be determined by evaluating only some parts of it, then
other subexpressions might not be evaluated at all. For instance, if one wrote:
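SELECT true OR somefunc();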
then somefunc() would (probably) not be called at all. The same would be the case if one wrote:
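SELECT somefunc() OR true;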
Note that this is not the same as the left-to-right “short-circuiting” of Boolean operators that is found in
some programming languages.
As a consequence, it is unwise to use functions with side effects as part of complex expressions. It is
particularly dangerous to rely on side effects or evaluation order in WHERE and HAVING clauses, since
those clauses are extensively reprocessed as part of developing an execution plan. Boolean expressions
(AND/OR/NOT combinations) in those clauses can be reorganized in any manner allowed by the laws of
Boolean algebra.
When it is essential to force evaluation order, a CASE construct (see Section 9.18) can be used. For exam-
ple, this is an untrustworthy way of trying to avoid division by zero in a WHERE clause:
SELECT ... WHERE CASE WHEN x > 0 THEN y/x > 1.5 ELSE false END;
A CASE construct used in this fashion will defeat optimization attempts, so it should only be done when
necessary. (In this particular example, it would be better to sidestep the problem by writing y > 1.5*x
instead.)
CASE is not a cure-all for such issues, however. One limitation of the technique illustrated above is that
it does not prevent early evaluation of constant subexpressions. As described in Section 36.7, functions
and operators marked IMMUTABLE can be evaluated when the query is planned rather than when it is
executed. Thus for example
SELECT CASE WHEN x > 0 THEN x ELSE 1/0 END FROM tab;
is likely to result in a division-by-zero failure due to the planner trying to simplify the constant subexpres-
sion, even if every row in the table has x > 0 so that the ELSE arm would never be entered at run time.
While that particular example might seem silly, related cases that don't obviously involve constants can
occur in queries executed within functions, since the values of function arguments and local variables
can be inserted into queries as constants for planning purposes. Within PL/pgSQL functions, for example,
using an IF-THEN-ELSE statement to protect a risky computation is much safer than just nesting it in a
CASE expression.
Another limitation of the same kind is that a CASE cannot prevent evaluation of an aggregate expression
contained within it, because aggregate expressions are computed before other expressions in a SELECT
list or HAVING clause are considered. For example, the following query can cause a division-by-zero error
despite seemingly having protected against it:
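SELECT CASE WHEN min(employees) > 0
            THEN avg(expenses / employees)
       END
    FROM departments;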
The min() and avg() aggregates are computed concurrently over all the input rows, so if any row has
employees equal to zero, the division-by-zero error will occur before there is any opportunity to test
the result of min(). Instead, use a WHERE or FILTER clause to prevent problematic input rows from
reaching an aggregate function in the first place.
PostgreSQL allows functions that have named parameters to be called using either positional or named
notation. Named notation is especially useful for functions that have a large number of parameters, since it
makes the associations between parameters and actual arguments more explicit and reliable. In positional
notation, a function call is written with its argument values in the same order as they are defined in the
function declaration. In named notation, the arguments are matched to the function parameters by name
and can be written in any order. For each notation, also consider the effect of function argument types,
documented in Section 10.3.
In either notation, parameters that have default values given in the function declaration need not be written
in the call at all. But this is particularly useful in named notation, since any combination of parameters can
be omitted; while in positional notation parameters can only be omitted from right to left.
PostgreSQL also supports mixed notation, which combines positional and named notation. In this case,
positional parameters are written first and named parameters appear after them.
The following examples will illustrate the usage of all three notations, using the following function def-
inition:
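CREATE FUNCTION concat_lower_or_upper(a text, b text, uppercase boolean DEFAULT false)
RETURNS text
AS
$$
 SELECT CASE
        WHEN $3 THEN UPPER($1 || ' ' || $2)
        ELSE LOWER($1 || ' ' || $2)
        END;
$$
LANGUAGE SQL IMMUTABLE STRICT;

Function concat_lower_or_upper has two mandatory parameters, a and b. Additionally there is
one optional parameter uppercase which defaults to false. The a and b inputs will be concatenated,
and forced to either upper or lower case depending on the uppercase parameter.

An example of positional notation:

SELECT concat_lower_or_upper('Hello', 'World', true);
 concat_lower_or_upper
-----------------------
 HELLO WORLD
(1 row)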
All arguments are specified in order. The result is upper case since uppercase is specified as true.
Another example is:
SELECT concat_lower_or_upper('Hello', 'World');
 concat_lower_or_upper
-----------------------
 hello world
(1 row)
Here, the uppercase parameter is omitted, so it receives its default value of false, resulting in lower
case output. In positional notation, arguments can be omitted from right to left so long as they have defaults.
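In named notation, each argument's name is specified using => to separate it from the argument
expression. For example:

SELECT concat_lower_or_upper(a => 'Hello', b => 'World');
 concat_lower_or_upper
-----------------------
 hello world
(1 row)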
Again, the argument uppercase was omitted so it is set to false implicitly. One advantage of using
named notation is that the arguments may be specified in any order, for example:
SELECT concat_lower_or_upper(a => 'Hello', uppercase => true, b => 'World');
 concat_lower_or_upper
-----------------------
 HELLO WORLD
(1 row)

An example of mixed notation:

SELECT concat_lower_or_upper('Hello', 'World', uppercase => true);
 concat_lower_or_upper
-----------------------
 HELLO WORLD
(1 row)
In the above query, the arguments a and b are specified positionally, while uppercase is specified
by name. In this example, that adds little except documentation. With a more complex function having
numerous parameters that have default values, named or mixed notation can save a great deal of writing
and reduce chances for error.
Note
Named and mixed call notations currently cannot be used when calling an aggregate function (but
they do work when an aggregate function is used as a window function).
Chapter 5. Data Definition
This chapter covers how one creates the database structures that will hold one's data. In a relational data-
base, the raw data is stored in tables, so the majority of this chapter is devoted to explaining how tables
are created and modified and what features are available to control what data is stored in the tables. Sub-
sequently, we discuss how tables can be organized into schemas, and how privileges can be assigned to
tables. Finally, we will briefly look at other features that affect the data storage, such as inheritance, table
partitioning, views, functions, and triggers.
Each column has a data type. The data type constrains the set of possible values that can be assigned to a
column and assigns semantics to the data stored in the column so that it can be used for computations. For
instance, a column declared to be of a numerical type will not accept arbitrary text strings, and the data
stored in such a column can be used for mathematical computations. By contrast, a column declared to be
of a character string type will accept almost any kind of data but it does not lend itself to mathematical
calculations, although other operations such as string concatenation are available.
PostgreSQL includes a sizable set of built-in data types that fit many applications. Users can also define
their own data types. Most built-in data types have obvious names and semantics, so we defer a detailed
explanation to Chapter 8. Some of the frequently used data types are integer for whole numbers, nu-
meric for possibly fractional numbers, text for character strings, date for dates, time for time-of-
day values, and timestamp for values containing both date and time.
To create a table, you use the aptly named CREATE TABLE command. In this command you specify at
least a name for the new table, the names of the columns and the data type of each column. For example:
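CREATE TABLE my_first_table (
    first_column text,
    second_column integer
);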
This creates a table named my_first_table with two columns. The first column is named
first_column and has a data type of text; the second column has the name second_column and
the type integer. The table and column names follow the identifier syntax explained in Section 4.1.1.
The type names are usually also identifiers, but there are some exceptions. Note that the column list is
comma-separated and surrounded by parentheses.
Of course, the previous example was heavily contrived. Normally, you would give names to your tables
and columns that convey what kind of data they store. So let's look at a more realistic example:
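CREATE TABLE products (
    product_no integer,
    name text,
    price numeric
);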
(The numeric type can store fractional components, as would be typical of monetary amounts.)
Tip
When you create many interrelated tables it is wise to choose a consistent naming pattern for the
tables and columns. For instance, there is a choice of using singular or plural nouns for table names,
both of which are favored by some theorist or other.
There is a limit on how many columns a table can contain. Depending on the column types, it is between
250 and 1600. However, defining a table with anywhere near this many columns is highly unusual and
often a questionable design.
If you no longer need a table, you can remove it using the DROP TABLE command. For example:
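DROP TABLE my_first_table;
DROP TABLE products;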
Attempting to drop a table that does not exist is an error. Nevertheless, it is common in SQL script files
to unconditionally try to drop each table before creating it, ignoring any error messages, so that the script
works whether or not the table exists. (If you like, you can use the DROP TABLE IF EXISTS variant
to avoid the error messages, but this is not standard SQL.)
If you need to modify a table that already exists, see Section 5.7 later in this chapter.
With the tools discussed so far you can create fully functional tables. The remainder of this chapter is
concerned with adding features to the table definition to ensure data integrity, security, or convenience.
If you are eager to fill your tables with data now you can skip ahead to Chapter 6 and read the rest of
this chapter later.
If no default value is declared explicitly, the default value is the null value. This usually makes sense
because a null value can be considered to represent unknown data.
In a table definition, default values are listed after the column data type. For example:
CREATE TABLE products (
    product_no integer,
    name text,
    price numeric DEFAULT 9.99
);
The default value can be an expression, which will be evaluated whenever the default value is inserted
(not when the table is created). A common example is for a timestamp column to have a default of
CURRENT_TIMESTAMP, so that it gets set to the time of row insertion. Another common example is
generating a “serial number” for each row. In PostgreSQL this is typically done by something like:
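CREATE TABLE products (
    product_no integer DEFAULT nextval('products_product_no_seq'),
    ...
);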
where the nextval() function supplies successive values from a sequence object (see Section 9.17).
This arrangement is sufficiently common that there's a special shorthand for it:
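CREATE TABLE products (
    product_no SERIAL,
    ...
);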
To create an identity column, use the GENERATED ... AS IDENTITY clause in CREATE TABLE,
for example:
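CREATE TABLE people (
    id bigint GENERATED ALWAYS AS IDENTITY,
    ...,
);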
or alternatively
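CREATE TABLE people (
    id bigint GENERATED BY DEFAULT AS IDENTITY,
    ...,
);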
If an INSERT command is executed on the table with the identity column and no value is explicitly
specified for the identity column, then a value generated by the implicit sequence is inserted. For example,
with the above definitions and assuming additional appropriate columns, writing
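INSERT INTO people (name, address) VALUES ('A', 'foo');
INSERT INTO people (name, address) VALUES ('B', 'bar');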
would generate values for the id column starting at 1 and result in the following table data:
id | name | address
----+------+---------
1 | A | foo
2 | B | bar
Alternatively, the keyword DEFAULT can be specified in place of a value to explicitly request the se-
quence-generated value, like
INSERT INTO people (id, name, address) VALUES (DEFAULT, 'C', 'baz');
Thus, in many ways, an identity column behaves like a column with a default value.
The clauses ALWAYS and BY DEFAULT in the column definition determine how explicitly user-specified
values are handled in INSERT and UPDATE commands. In an INSERT command, if ALWAYS is selected, a
user-specified value is only accepted if the INSERT statement specifies OVERRIDING SYSTEM VALUE.
If BY DEFAULT is selected, then the user-specified value takes precedence. Thus, using BY DEFAULT
results in a behavior more similar to default values, where the default value can be overridden by an explicit
value, whereas ALWAYS provides some more protection against accidentally inserting an explicit value.
The data type of an identity column must be one of the data types supported by sequences. (See CREATE
SEQUENCE.) The properties of the associated sequence may be specified when creating an identity col-
umn (see CREATE TABLE) or changed afterwards (see ALTER TABLE).
An identity column is automatically marked as NOT NULL. An identity column, however, does not guar-
antee uniqueness. (A sequence normally returns unique values, but a sequence could be reset, or values
could be inserted manually into the identity column, as discussed above.) Uniqueness would need to be
enforced using a PRIMARY KEY or UNIQUE constraint.
In table inheritance hierarchies, identity columns and their properties in a child table are independent of
those in its parent tables. A child table does not inherit identity columns or their properties automatically
from the parent. During INSERT or UPDATE, a column is treated as an identity column if that column is an
identity column in the table named in the statement, and the corresponding identity properties are applied.
Partitions inherit identity columns from the partitioned table. They cannot have their own identity columns.
The properties of a given identity column are consistent across all the partitions in the partition hierarchy.
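A generated column is a special column that is always computed from other columns. Thus, it is for
columns what a view is for tables. PostgreSQL currently implements only stored generated columns,
which are computed when they are written and occupy storage as if they were normal columns.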
To create a generated column, use the GENERATED ALWAYS AS clause in CREATE TABLE, for example:
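CREATE TABLE people (
    ...,
    height_cm numeric,
    height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
);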
The keyword STORED must be specified to choose the stored kind of generated column. See CREATE
TABLE for more details.
A generated column cannot be written to directly. In INSERT or UPDATE commands, a value cannot be
specified for a generated column, but the keyword DEFAULT may be specified.
Consider the differences between a column with a default and a generated column. The column default is
evaluated once when the row is first inserted if no other value was provided; a generated column is updated
whenever the row changes and cannot be overridden. A column default may not refer to other columns of
the table; a generation expression would normally do so. A column default can use volatile functions, for
example random() or functions referring to the current time; this is not allowed for generated columns.
Several restrictions apply to the definition of generated columns and tables involving generated columns:
• The generation expression can only use immutable functions and cannot use subqueries or reference
anything other than the current row in any way.
• Foreign tables can have generated columns. See CREATE FOREIGN TABLE for details.
• If a parent column is a generated column, its child column must also be a generated column; however,
the child column can have a different generation expression. The generation expression that is actually
applied during insert or update of a row is the one associated with the table that the row is physically
in. (This is unlike the behavior for column defaults: for those, the default value associated with the
table named in the query applies.)
• If a parent column is not a generated column, its child column must not be generated either.
• For inherited tables, if you write a child column definition without any GENERATED clause in
CREATE TABLE ... INHERITS, then its GENERATED clause will automatically be copied from the
parent. ALTER TABLE ... INHERIT will insist that parent and child columns already match as
to generation status, but it will not require their generation expressions to match.
• Similarly for partitioned tables, if you write a child column definition without any GENERATED
clause in CREATE TABLE ... PARTITION OF, then its GENERATED clause will automatically
be copied from the parent. ALTER TABLE ... ATTACH PARTITION will insist that parent and
child columns already match as to generation status, but it will not require their generation expressions
to match.
• In case of multiple inheritance, if one parent column is a generated column, then all parent columns
must be generated columns. If they do not all have the same generation expression, then the desired
expression for the child must be specified explicitly.
• Generated columns maintain access privileges separately from their underlying base columns. So, it
is possible to arrange it so that a particular role can read from a generated column but not from the
underlying base columns.
• Generated columns are, conceptually, updated after BEFORE triggers have run. Therefore, changes
made to base columns in a BEFORE trigger will be reflected in generated columns. But conversely, it
is not allowed to access generated columns in BEFORE triggers.
• Generated columns are skipped for logical replication and cannot be specified in a CREATE
PUBLICATION column list.
5.5. Constraints
Data types are a way to limit the kind of data that can be stored in a table. For many applications, however,
the constraint they provide is too coarse. For example, a column containing a product price should probably
only accept positive values. But there is no standard data type that accepts only positive numbers. Another
issue is that you might want to constrain column data with respect to other columns or rows. For example,
in a table containing product information, there should be only one row for each product number.
To that end, SQL allows you to define constraints on columns and tables. Constraints give you as much
control over the data in your tables as you wish. If a user attempts to store data in a column that would
violate a constraint, an error is raised. This applies even if the value came from the default value definition.
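For example, to require positive product prices, you could use:

CREATE TABLE products (
    product_no integer,
    name text,
    price numeric CHECK (price > 0)
);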
As you see, the constraint definition comes after the data type, just like default value definitions. Default
values and constraints can be listed in any order. A check constraint consists of the key word CHECK
followed by an expression in parentheses. The check constraint expression should involve the column thus
constrained, otherwise the constraint would not make too much sense.
You can also give the constraint a separate name. This clarifies error messages and allows you to refer to
the constraint when you need to change it. The syntax is:
CREATE TABLE products (
    product_no integer,
    name text,
    price numeric CONSTRAINT positive_price CHECK (price > 0)
);
So, to specify a named constraint, use the key word CONSTRAINT followed by an identifier followed by
the constraint definition. (If you don't specify a constraint name in this way, the system chooses a name
for you.)
A check constraint can also refer to several columns. Say you store a regular price and a discounted price,
and you want to ensure that the discounted price is lower than the regular price:
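CREATE TABLE products (
    product_no integer,
    name text,
    price numeric CHECK (price > 0),
    discounted_price numeric CHECK (discounted_price > 0),
    CHECK (price > discounted_price)
);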
The first two constraints should look familiar. The third one uses a new syntax. It is not attached to a
particular column, instead it appears as a separate item in the comma-separated column list. Column def-
initions and these constraint definitions can be listed in mixed order.
We say that the first two constraints are column constraints, whereas the third one is a table constraint
because it is written separately from any one column definition. Column constraints can also be written
as table constraints, while the reverse is not necessarily possible, since a column constraint is supposed
to refer to only the column it is attached to. (PostgreSQL doesn't enforce that rule, but you should follow
it if you want your table definitions to work with other database systems.) The above example could also
be written as:
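CREATE TABLE products (
    product_no integer,
    name text,
    price numeric,
    CHECK (price > 0),
    discounted_price numeric,
    CHECK (discounted_price > 0),
    CHECK (price > discounted_price)
);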
or even:
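CREATE TABLE products (
    product_no integer,
    name text,
    price numeric CHECK (price > 0),
    discounted_price numeric,
    CHECK (discounted_price > 0 AND price > discounted_price)
);

It's a matter of taste.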
Names can be assigned to table constraints in the same way as column constraints:
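CREATE TABLE products (
    product_no integer,
    name text,
    price numeric,
    CHECK (price > 0),
    discounted_price numeric,
    CHECK (discounted_price > 0),
    CONSTRAINT valid_discount CHECK (price > discounted_price)
);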
It should be noted that a check constraint is satisfied if the check expression evaluates to true or the null
value. Since most expressions will evaluate to the null value if any operand is null, they will not prevent
null values in the constrained columns. To ensure that a column does not contain null values, the not-null
constraint described in the next section can be used.
Note
PostgreSQL does not support CHECK constraints that reference table data other than the new or
updated row being checked. While a CHECK constraint that violates this rule may appear to work
in simple tests, it cannot guarantee that the database will not reach a state in which the constraint
condition is false (due to subsequent changes of the other row(s) involved). This would cause a
database dump and restore to fail. The restore could fail even when the complete database state
is consistent with the constraint, due to rows not being loaded in an order that will satisfy the
constraint. If possible, use UNIQUE, EXCLUDE, or FOREIGN KEY constraints to express cross-
row and cross-table restrictions.
If what you desire is a one-time check against other rows at row insertion, rather than a contin-
uously-maintained consistency guarantee, a custom trigger can be used to implement that. (This
approach avoids the dump/restore problem because pg_dump does not reinstall triggers until after
restoring data, so that the check will not be enforced during a dump/restore.)
Note
PostgreSQL assumes that CHECK constraints' conditions are immutable, that is, they will always
give the same result for the same input row. This assumption is what justifies examining CHECK
constraints only when rows are inserted or updated, and not at other times. (The warning above
about not referencing other table data is really a special case of this restriction.)
A not-null constraint simply specifies that a column must not assume the null value. A syntax example:
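CREATE TABLE products (
    product_no integer NOT NULL,
    name text,
    price numeric
);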
A not-null constraint is always written as a column constraint. A not-null constraint is functionally equiv-
alent to creating a check constraint CHECK (column_name IS NOT NULL), but in PostgreSQL
creating an explicit not-null constraint is more efficient. The drawback is that you cannot give explicit
names to not-null constraints created this way.
Of course, a column can have more than one constraint. Just write the constraints one after another:
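CREATE TABLE products (
    product_no integer NOT NULL,
    name text NOT NULL,
    price numeric NOT NULL CHECK (price > 0)
);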
The order doesn't matter. It does not necessarily determine in which order the constraints are checked.
The NOT NULL constraint has an inverse: the NULL constraint. This does not mean that the column must
be null, which would surely be useless. Instead, this simply selects the default behavior that the column
might be null. The NULL constraint is not present in the SQL standard and should not be used in portable
applications. (It was only added to PostgreSQL to be compatible with some other database systems.) Some
users, however, like it because it makes it easy to toggle the constraint in a script file. For example, you
could start with:
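CREATE TABLE products (
    product_no integer NULL,
    name text NULL,
    price numeric NULL
);

and then insert the NOT NULL key word where desired.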
Tip
In most database designs the majority of columns should be marked not null.
Unique constraints ensure that the data contained in a column, or a group of columns, is unique among
all the rows in the table. Written as a column constraint, the syntax is:

CREATE TABLE products (
    product_no integer UNIQUE,
    name text,
    price numeric
);

Written as a table constraint, the syntax is:

CREATE TABLE products (
    product_no integer,
    name text,
    price numeric,
    UNIQUE (product_no)
);
To define a unique constraint for a group of columns, write it as a table constraint with the column names
separated by commas:
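CREATE TABLE example (
    a integer,
    b integer,
    c integer,
    UNIQUE (a, c)
);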
This specifies that the combination of values in the indicated columns is unique across the whole table,
though any one of the columns need not be (and ordinarily isn't) unique.
You can assign your own name for a unique constraint, in the usual way:
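CREATE TABLE products (
    product_no integer CONSTRAINT must_be_different UNIQUE,
    name text,
    price numeric
);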
Adding a unique constraint will automatically create a unique B-tree index on the column or group of
columns listed in the constraint. A uniqueness restriction covering only some rows cannot be written as a
unique constraint, but it is possible to enforce such a restriction by creating a unique partial index.
In general, a unique constraint is violated if there is more than one row in the table where the values of all
of the columns included in the constraint are equal. By default, two null values are not considered equal
in this comparison. That means even in the presence of a unique constraint it is possible to store duplicate
rows that contain a null value in at least one of the constrained columns. This behavior can be changed by
adding the clause NULLS NOT DISTINCT, like
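CREATE TABLE products (
    product_no integer UNIQUE NULLS NOT DISTINCT,
    name text,
    price numeric
);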
or
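CREATE TABLE products (
    product_no integer,
    name text,
    price numeric,
    UNIQUE NULLS NOT DISTINCT (product_no)
);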
The default behavior can be specified explicitly using NULLS DISTINCT. The default null treatment in
unique constraints is implementation-defined according to the SQL standard, and other implementations
have a different behavior. So be careful when developing applications that are intended to be portable.
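A primary key constraint indicates that a column, or group of columns, can be used as a unique
identifier for rows in the table. This requires that the values be both unique and not null. For example:

CREATE TABLE products (
    product_no integer PRIMARY KEY,
    name text,
    price numeric
);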
Primary keys can span more than one column; the syntax is similar to unique constraints:
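CREATE TABLE example (
    a integer,
    b integer,
    c integer,
    PRIMARY KEY (a, c)
);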
Adding a primary key will automatically create a unique B-tree index on the column or group of columns
listed in the primary key, and will force the column(s) to be marked NOT NULL.
A table can have at most one primary key. (There can be any number of unique and not-null constraints,
which are functionally almost the same thing, but only one can be identified as the primary key.) Relational
database theory dictates that every table must have a primary key. This rule is not enforced by PostgreSQL,
but it is usually best to follow it.
Primary keys are useful both for documentation purposes and for client applications. For example, a GUI
application that allows modifying row values probably needs to know the primary key of a table to be
able to identify rows uniquely. There are also various ways in which the database system makes use of a
primary key if one has been declared; for example, the primary key defines the default target column(s)
for foreign keys referencing its table.
Say you have the product table that we have used several times already:
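CREATE TABLE products (
    product_no integer PRIMARY KEY,
    name text,
    price numeric
);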
Let's also assume you have a table storing orders of those products. We want to ensure that the orders table
only contains orders of products that actually exist. So we define a foreign key constraint in the orders
table that references the products table:
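CREATE TABLE orders (
    order_id integer PRIMARY KEY,
    product_no integer REFERENCES products (product_no),
    quantity integer
);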
Now it is impossible to create orders with non-NULL product_no entries that do not appear in the
products table.
We say that in this situation the orders table is the referencing table and the products table is the referenced
table. Similarly, there are referencing and referenced columns.
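You can also shorten the above command to

CREATE TABLE orders (
    order_id integer PRIMARY KEY,
    product_no integer REFERENCES products,
    quantity integer
);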
because in the absence of a column list the primary key of the referenced table is used as the referenced
column(s).
You can assign your own name for a foreign key constraint, in the usual way.
A foreign key can also constrain and reference a group of columns. As usual, it then needs to be written
in table constraint form. Here is a contrived syntax example:
CREATE TABLE t1 (
    a integer PRIMARY KEY,
    b integer,
    c integer,
    FOREIGN KEY (b, c) REFERENCES other_table (c1, c2)
);
Of course, the number and type of the constrained columns need to match the number and type of the
referenced columns.
Sometimes it is useful for the “other table” of a foreign key constraint to be the same table; this is called a
self-referential foreign key. For example, if you want rows of a table to represent nodes of a tree structure,
you could write
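CREATE TABLE tree (
    node_id integer PRIMARY KEY,
    parent_id integer REFERENCES tree,
    name text,
    ...
);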
A top-level node would have NULL parent_id, while non-NULL parent_id entries would be con-
strained to reference valid rows of the table.
A table can have more than one foreign key constraint. This is used to implement many-to-many relation-
ships between tables. Say you have tables about products and orders, but now you want to allow one order
to contain possibly many products (which the structure above did not allow). You could use this table
structure:
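CREATE TABLE products (
    product_no integer PRIMARY KEY,
    name text,
    price numeric
);

CREATE TABLE orders (
    order_id integer PRIMARY KEY,
    shipping_address text,
    ...
);

CREATE TABLE order_items (
    product_no integer REFERENCES products,
    order_id integer REFERENCES orders,
    quantity integer,
    PRIMARY KEY (product_no, order_id)
);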
Notice that the primary key overlaps with the foreign keys in the last table.
We know that the foreign keys disallow creation of orders that do not relate to any products. But what if
a product is removed after an order is created that references it? SQL allows you to handle that as well.
Intuitively, we have a few options:
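• Disallow deleting a referenced product
• Delete the orders as well
• Something else?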
To illustrate this, let's implement the following policy on the many-to-many relationship example above:
when someone wants to remove a product that is still referenced by an order (via order_items), we
disallow it. If someone removes an order, the order items are removed as well:
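CREATE TABLE order_items (
    product_no integer REFERENCES products ON DELETE RESTRICT,
    order_id integer REFERENCES orders ON DELETE CASCADE,
    quantity integer,
    PRIMARY KEY (product_no, order_id)
);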
Restricting and cascading deletes are the two most common options. RESTRICT prevents deletion of a
referenced row. NO ACTION means that if any referencing rows still exist when the constraint is checked,
an error is raised; this is the default behavior if you do not specify anything. (The essential difference
between these two choices is that NO ACTION allows the check to be deferred until later in the transac-
tion, whereas RESTRICT does not.) CASCADE specifies that when a referenced row is deleted, row(s)
referencing it should be automatically deleted as well. There are two other options: SET NULL and SET
DEFAULT. These cause the referencing column(s) in the referencing row(s) to be set to nulls or their de-
fault values, respectively, when the referenced row is deleted. Note that these do not excuse you from
observing any constraints. For example, if an action specifies SET DEFAULT but the default value would
not satisfy the foreign key constraint, the operation will fail.
The appropriate choice of ON DELETE action depends on what kinds of objects the related tables repre-
sent. When the referencing table represents something that is a component of what is represented by the
referenced table and cannot exist independently, then CASCADE could be appropriate. If the two tables
represent independent objects, then RESTRICT or NO ACTION is more appropriate; an application that
actually wants to delete both objects would then have to be explicit about this and run two delete com-
mands. In the above example, order items are part of an order, and it is convenient if they are deleted
automatically if an order is deleted. But products and orders are different things, and so making a deletion
of a product automatically cause the deletion of some order items could be considered problematic. The
actions SET NULL or SET DEFAULT can be appropriate if a foreign-key relationship represents optional
information. For example, if the products table contained a reference to a product manager, and the product
manager entry gets deleted, then setting the product's product manager to null or a default might be useful.
The actions SET NULL and SET DEFAULT can take a column list to specify which columns to set.
Normally, all columns of the foreign-key constraint are set; setting only a subset is useful in some special
cases. Consider the following example:
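CREATE TABLE tenants (
    tenant_id integer PRIMARY KEY
);

CREATE TABLE users (
    tenant_id integer REFERENCES tenants ON DELETE CASCADE,
    user_id integer NOT NULL,
    PRIMARY KEY (tenant_id, user_id)
);

CREATE TABLE posts (
    tenant_id integer REFERENCES tenants ON DELETE CASCADE,
    post_id integer NOT NULL,
    author_id integer,
    PRIMARY KEY (tenant_id, post_id),
    FOREIGN KEY (tenant_id, author_id) REFERENCES users ON DELETE SET NULL (author_id)
);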
Without the specification of the column, the foreign key would also set the column tenant_id to null,
but that column is still required as part of the primary key.
Analogous to ON DELETE there is also ON UPDATE which is invoked when a referenced column is
changed (updated). The possible actions are the same, except that column lists cannot be specified for
SET NULL and SET DEFAULT. In this case, CASCADE means that the updated values of the referenced
column(s) should be copied into the referencing row(s).
Normally, a referencing row need not satisfy the foreign key constraint if any of its referencing columns
are null. If MATCH FULL is added to the foreign key declaration, a referencing row escapes satisfying the
constraint only if all its referencing columns are null (so a mix of null and non-null values is guaranteed
to fail a MATCH FULL constraint). If you don't want referencing rows to be able to avoid satisfying the
foreign key constraint, declare the referencing column(s) as NOT NULL.
A foreign key must reference columns that either are a primary key or form a unique constraint, or are
columns from a non-partial unique index. This means that the referenced columns always have an index
to allow efficient lookups on whether a referencing row has a match. Since a DELETE of a row from the
referenced table or an UPDATE of a referenced column will require a scan of the referencing table for rows
matching the old value, it is often a good idea to index the referencing columns too. Because this is not
always needed, and there are many choices available on how to index, the declaration of a foreign key
constraint does not automatically create an index on the referencing columns.
More information about updating and deleting data is in Chapter 6. Also see the description of foreign key
constraint syntax in the reference documentation for CREATE TABLE.
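Exclusion constraints ensure that if any two rows are compared on the specified columns or expressions
using the specified operators, at least one of these operator comparisons will return false or null. The
syntax is:

CREATE TABLE circles (
    c circle,
    EXCLUDE USING gist (c WITH &&)
);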
See also CREATE TABLE ... CONSTRAINT ... EXCLUDE for details.
Adding an exclusion constraint will automatically create an index of the type specified in the constraint
declaration.
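Every table has several system columns that are implicitly defined by the system. Therefore, these
names cannot be used as names of user-defined columns. These columns are: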
tableoid
The OID of the table containing this row. This column is particularly handy for queries that select
from partitioned tables (see Section 5.12) or inheritance hierarchies (see Section 5.11), since without
it, it's difficult to tell which individual table a row came from. The tableoid can be joined against
the oid column of pg_class to obtain the table name.
xmin
The identity (transaction ID) of the inserting transaction for this row version. (A row version is an
individual state of a row; each update of a row creates a new row version for the same logical row.)
cmin
The command identifier (starting at zero) within the inserting transaction.
xmax
The identity (transaction ID) of the deleting transaction, or zero for an undeleted row version. It is
possible for this column to be nonzero in a visible row version. That usually indicates that the deleting
transaction hasn't committed yet, or that an attempted deletion was rolled back.
cmax
The command identifier within the deleting transaction, or zero.
ctid
The physical location of the row version within its table. Note that although the ctid can be used to
locate the row version very quickly, a row's ctid will change if it is updated or moved by VACUUM
FULL. Therefore ctid is useless as a long-term row identifier. A primary key should be used to
identify logical rows.
Transaction identifiers are also 32-bit quantities. In a long-lived database it is possible for transaction IDs
to wrap around. This is not a fatal problem given appropriate maintenance procedures; see Chapter 24 for
details. It is unwise, however, to depend on the uniqueness of transaction IDs over the long term (more
than one billion transactions).
Command identifiers are also 32-bit quantities. This creates a hard limit of 2^32 (4 billion) SQL commands
within a single transaction. In practice this limit is not a problem — note that the limit is on the number
of SQL commands, not the number of rows processed. Also, only commands that actually modify the
database contents will consume a command identifier.
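When you create a table and you realize that you made a mistake, or the requirements of the application
change, you can drop the table and create it again. But this is not a convenient option if the table is
already filled with data, or if the table is referenced by other database objects (for instance a foreign
key constraint). Therefore PostgreSQL provides a family of commands to make modifications to existing
tables.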
You can:
• Add columns
• Remove columns
• Add constraints
• Remove constraints
• Change default values
• Change column data types
• Rename columns
• Rename tables
All these actions are performed using the ALTER TABLE command, whose reference page contains de-
tails beyond those given here.
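To add a column, use a command like:

ALTER TABLE products ADD COLUMN description text;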
The new column is initially filled with whatever default value is given (null if you don't specify a DEFAULT
clause).
Tip
From PostgreSQL 11, adding a column with a constant default value no longer means that each
row of the table needs to be updated when the ALTER TABLE statement is executed. Instead,
the default value will be returned the next time the row is accessed, and applied when the table is
rewritten, making the ALTER TABLE very fast even on large tables.
However, if the default value is volatile (e.g., clock_timestamp()) each row will need to be
updated with the value calculated at the time ALTER TABLE is executed. To avoid a potentially
lengthy update operation, particularly if you intend to fill the column with mostly nondefault values
anyway, it may be preferable to add the column with no default, insert the correct values using
UPDATE, and then add any desired default as described below.
You can also define constraints on the column at the same time, using the usual syntax:
ALTER TABLE products ADD COLUMN description text CHECK (description <> '');
In fact all the options that can be applied to a column description in CREATE TABLE can be used here.
Keep in mind however that the default value must satisfy the given constraints, or the ADD will fail.
Alternatively, you can add constraints later (see below) after you've filled in the new column correctly.
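To remove a column, use a command like:

ALTER TABLE products DROP COLUMN description;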
Whatever data was in the column disappears. Table constraints involving the column are dropped, too.
However, if the column is referenced by a foreign key constraint of another table, PostgreSQL will not
silently drop that constraint. You can authorize dropping everything that depends on the column by adding
CASCADE:
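ALTER TABLE products DROP COLUMN description CASCADE;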
See Section 5.15 for a description of the general mechanism behind this.
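To add a constraint, the table constraint syntax is used. For example:

ALTER TABLE products ADD CHECK (name <> '');
ALTER TABLE products ADD CONSTRAINT some_name UNIQUE (product_no);
ALTER TABLE products ADD FOREIGN KEY (product_group_id) REFERENCES product_groups;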
To add a not-null constraint, which cannot be written as a table constraint, use this syntax:
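ALTER TABLE products ALTER COLUMN product_no SET NOT NULL;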
The constraint will be checked immediately, so the table data must satisfy the constraint before it can be
added.
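To remove a constraint you first need to know its name. If you gave it a name then that's easy.
Otherwise the system assigned a generated name, which you need to find out. The psql command
\d tablename can be helpful here. Then the command is:

ALTER TABLE products DROP CONSTRAINT some_name;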
(If you are dealing with a generated constraint name like $2, don't forget that you'll need to double-quote
it to make it a valid identifier.)
As with dropping a column, you need to add CASCADE if you want to drop a constraint that something
else depends on. An example is that a foreign key constraint depends on a unique or primary key constraint
on the referenced column(s).
This works the same for all constraint types except not-null constraints. To drop a not-null constraint use:
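ALTER TABLE products ALTER COLUMN product_no DROP NOT NULL;

To set a new default for a column, use a command like:

ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77;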
Note that this doesn't affect any existing rows in the table, it just changes the default for future INSERT
commands.
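To remove any default value, use:

ALTER TABLE products ALTER COLUMN price DROP DEFAULT;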
This is effectively the same as setting the default to null. As a consequence, it is not an error to drop a
default where one hadn't been defined, because the default is implicitly the null value.
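To convert a column to a different data type, use a command like:

ALTER TABLE products ALTER COLUMN price TYPE numeric(10,2);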
This will succeed only if each existing entry in the column can be converted to the new type by an implicit
cast. If a more complex conversion is needed, you can add a USING clause that specifies how to compute
the new values from the old.
PostgreSQL will attempt to convert the column's default value (if any) to the new type, as well as any
constraints that involve the column. But these conversions might fail, or might produce surprising results.
It's often best to drop any constraints on the column before altering its type, and then add back suitably
modified constraints afterwards.
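To rename a column:

ALTER TABLE products RENAME COLUMN product_no TO product_number;

To rename a table:

ALTER TABLE products RENAME TO items;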
5.8. Privileges
When an object is created, it is assigned an owner. The owner is normally the role that executed the creation
statement. For most kinds of objects, the initial state is that only the owner (or a superuser) can do anything
with the object. To allow other roles to use it, privileges must be granted.
There are different kinds of privileges: SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES,
TRIGGER, CREATE, CONNECT, TEMPORARY, EXECUTE, USAGE, SET, ALTER SYSTEM, and
MAINTAIN. The privileges applicable to a particular object vary depending on the object's type (table,
function, etc.). More detail about the meanings of these privileges appears below. The following sections
and chapters will also show you how these privileges are used.
The right to modify or destroy an object is inherent in being the object's owner, and cannot be granted
or revoked in itself. (However, like all privileges, that right can be inherited by members of the owning
role; see Section 21.3.)
An object can be assigned to a new owner with an ALTER command of the appropriate kind for the object,
for example
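ALTER TABLE table_name OWNER TO new_owner;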
Superusers can always do this; ordinary roles can only do it if they are both the current owner of the object
(or inherit the privileges of the owning role) and able to SET ROLE to the new owning role.
To assign privileges, the GRANT command is used. For example, if joe is an existing role, and ac-
counts is an existing table, the privilege to update the table can be granted with:
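GRANT UPDATE ON accounts TO joe;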
Writing ALL in place of a specific privilege grants all privileges that are relevant for the object type.
The special “role” name PUBLIC can be used to grant a privilege to every role on the system. Also,
“group” roles can be set up to help manage privileges when there are many users of a database — for
details see Chapter 21.
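To revoke a previously-granted privilege, use the fittingly named REVOKE command:

REVOKE ALL ON accounts FROM PUBLIC;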
Ordinarily, only the object's owner (or a superuser) can grant or revoke privileges on an object. However,
it is possible to grant a privilege “with grant option”, which gives the recipient the right to grant it in turn to
others. If the grant option is subsequently revoked then all who received the privilege from that recipient
(directly or through a chain of grants) will lose the privilege. For details see the GRANT and REVOKE
reference pages.
An object's owner can choose to revoke their own ordinary privileges, for example to make a table read-
only for themselves as well as others. But owners are always treated as holding all grant options, so they
can always re-grant their own privileges.
SELECT
Allows SELECT from any column, or specific column(s), of a table, view, materialized view, or other
table-like object. Also allows use of COPY TO. This privilege is also needed to reference existing
column values in UPDATE, DELETE, or MERGE. For sequences, this privilege also allows use of the
currval function. For large objects, this privilege allows the object to be read.
INSERT
Allows INSERT of a new row into a table, view, etc. Can be granted on specific column(s), in which
case only those columns may be assigned to in the INSERT command (other columns will therefore
receive default values). Also allows use of COPY FROM.
UPDATE
Allows UPDATE of any column, or specific column(s), of a table, view, etc. (In practice, any nontrivial
UPDATE command will require SELECT privilege as well, since it must reference table columns to
determine which rows to update, and/or to compute new values for columns.) SELECT ... FOR
UPDATE and SELECT ... FOR SHARE also require this privilege on at least one column, in
addition to the SELECT privilege. For sequences, this privilege allows use of the nextval and
setval functions. For large objects, this privilege allows writing or truncating the object.
DELETE
Allows DELETE of a row from a table, view, etc. (In practice, any nontrivial DELETE command will
require SELECT privilege as well, since it must reference table columns to determine which rows
to delete.)
TRUNCATE

Allows TRUNCATE on a table.
REFERENCES
Allows creation of a foreign key constraint referencing a table, or specific column(s) of a table.
TRIGGER

Allows creation of a trigger on a table, view, etc.
CREATE
For databases, allows new schemas and publications to be created within the database, and allows
trusted extensions to be installed within the database.
For schemas, allows new objects to be created within the schema. To rename an existing object, you
must own the object and have this privilege for the containing schema.
For tablespaces, allows tables, indexes, and temporary files to be created within the tablespace, and
allows databases to be created that have the tablespace as their default tablespace.
Note that revoking this privilege will not alter the existence or location of existing objects.
CONNECT
Allows the grantee to connect to the database. This privilege is checked at connection startup (in
addition to checking any restrictions imposed by pg_hba.conf).
TEMPORARY

Allows temporary tables to be created while using the database.
EXECUTE
Allows calling a function or procedure, including use of any operators that are implemented on top of
the function. This is the only type of privilege that is applicable to functions and procedures.
USAGE
For procedural languages, allows use of the language for the creation of functions in that language.
This is the only type of privilege that is applicable to procedural languages.
For schemas, allows access to objects contained in the schema (assuming that the objects' own priv-
ilege requirements are also met). Essentially this allows the grantee to “look up” objects within the
schema. Without this permission, it is still possible to see the object names, e.g., by querying system
catalogs. Also, after revoking this permission, existing sessions might have statements that have pre-
viously performed this lookup, so this is not a completely secure way to prevent object access.
For types and domains, allows use of the type or domain in the creation of tables, functions, and other
schema objects. (Note that this privilege does not control all “usage” of the type, such as values of the
type appearing in queries. It only prevents objects from being created that depend on the type. The
main purpose of this privilege is controlling which users can create dependencies on a type, which
could prevent the owner from changing the type later.)
For foreign-data wrappers, allows creation of new servers using the foreign-data wrapper.
For foreign servers, allows creation of foreign tables using the server. Grantees may also create, alter,
or drop their own user mappings associated with that server.
SET
Allows a server configuration parameter to be set to a new value within the current session. (While
this privilege can be granted on any parameter, it is meaningless except for parameters that would
normally require superuser privilege to set.)
ALTER SYSTEM
Allows a server configuration parameter to be configured to a new value using the ALTER SYSTEM
command.
MAINTAIN
Allows VACUUM, ANALYZE, CLUSTER, REFRESH MATERIALIZED VIEW, REINDEX, and LOCK
TABLE on a relation.
The privileges required by other commands are listed on the reference page of the respective command.
PostgreSQL grants privileges on some types of objects to PUBLIC by default when the objects are created.
No privileges are granted to PUBLIC by default on tables, table columns, sequences, foreign data wrappers,
foreign servers, large objects, schemas, tablespaces, or configuration parameters. For other types of objects,
the default privileges granted to PUBLIC are as follows: CONNECT and TEMPORARY (create temporary
tables) privileges for databases; EXECUTE privilege for functions and procedures; and USAGE privilege
for languages and data types (including domains). The object owner can, of course, REVOKE both default
and expressly granted privileges. (For maximum security, issue the REVOKE in the same transaction that
creates the object; then there is no window in which another user can use the object.) Also, these default
privilege settings can be overridden using the ALTER DEFAULT PRIVILEGES command.
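For example, a command along these lines (the schema name is illustrative) removes the default EXECUTE privilege on newly created functions from PUBLIC:

ALTER DEFAULT PRIVILEGES IN SCHEMA myschema
    REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;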
Table 5.1 shows the one-letter abbreviations that are used for these privilege types in ACL (Access Control List) values: r = SELECT (“read”), a = INSERT (“append”), w = UPDATE (“write”), d = DELETE, D = TRUNCATE, x = REFERENCES, t = TRIGGER, C = CREATE, c = CONNECT, T = TEMPORARY, X = EXECUTE, U = USAGE, s = SET, A = ALTER SYSTEM, and m = MAINTAIN. You will see these letters in the output of the psql commands listed below, or when looking at ACL columns of system catalogs.
Table 5.2 summarizes the privileges available for each type of SQL object, using the abbreviations shown
above. It also shows the psql command that can be used to examine privilege settings for each object type.
The privileges that have been granted for a particular object are displayed as a list of aclitem entries,
each having the format:
grantee=privilege-abbreviation[*].../grantor
Each aclitem lists all the permissions of one grantee that have been granted by a particular grantor. Spe-
cific privileges are represented by one-letter abbreviations from Table 5.1, with * appended if the privilege
was granted with grant option. For example, calvin=r*w/hobbes specifies that the role calvin has
the privilege SELECT (r) with grant option (*) as well as the non-grantable privilege UPDATE (w), both
granted by the role hobbes. If calvin also has some privileges on the same object granted by a different
grantor, those would appear as a separate aclitem entry. An empty grantee field in an aclitem stands
for PUBLIC.
As an example, suppose that user miriam creates table mytable and does:
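A plausible set of grants (the roles admin and miriam_rw, and the column col1, are illustrative):

GRANT SELECT ON mytable TO PUBLIC;
GRANT SELECT, UPDATE, INSERT ON mytable TO admin;
GRANT SELECT (col1), UPDATE (col1) ON mytable TO miriam_rw;

Afterwards, psql's \dp mytable command would show these grants in its “Access privileges” and “Column privileges” columns.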
If the “Access privileges” column is empty for a given object, it means the object has default privileges
(that is, its privileges entry in the relevant system catalog is null). Default privileges always include all
privileges for the owner, and can include some privileges for PUBLIC depending on the object type, as
explained above. The first GRANT or REVOKE on an object will instantiate the default privileges (produc-
ing, for example, miriam=arwdDxt/miriam) and then modify them per the specified request. Simi-
larly, entries are shown in “Column privileges” only for columns with nondefault privileges. (Note: for this
purpose, “default privileges” always means the built-in default privileges for the object's type. An object
whose privileges have been affected by an ALTER DEFAULT PRIVILEGES command will always be
shown with an explicit privilege entry that includes the effects of the ALTER.)
Notice that the owner's implicit grant options are not marked in the access privileges display. A * will
appear only when grant options have been explicitly granted to someone.
The “Access privileges” column shows (none) when the object's privileges entry is non-null but empty.
This means that no privileges are granted at all, even to the object's owner — a rare situation. (The owner
still has implicit grant options in this case, and so could re-grant her own privileges; but she has none at
the moment.)
5.9. Row Security Policies

When row security is enabled on a table (with ALTER TABLE ... ENABLE ROW LEVEL SECURITY),
all normal access to the table for selecting rows or modifying rows must be allowed by a row security
policy. (However, the table's owner is typically not subject to row security policies.) If no policy exists for
the table, a default-deny policy is used, meaning that no rows are visible or can be modified. Operations
that apply to the whole table, such as TRUNCATE and REFERENCES, are not subject to row security.
Row security policies can be specific to commands, or to roles, or to both. A policy can be specified to
apply to ALL commands, or to SELECT, INSERT, UPDATE, or DELETE. Multiple roles can be assigned
to a given policy, and normal role membership and inheritance rules apply.
To specify which rows are visible or modifiable according to a policy, an expression is required that returns
a Boolean result. This expression will be evaluated for each row prior to any conditions or functions coming
from the user's query. (The only exceptions to this rule are leakproof functions, which are guaranteed to
not leak information; the optimizer may choose to apply such functions ahead of the row-security check.)
Rows for which the expression does not return true will not be processed. Separate expressions may be
specified to provide independent control over the rows which are visible and the rows which are allowed
to be modified. Policy expressions are run as part of the query and with the privileges of the user running
the query, although security-definer functions can be used to access data not available to the calling user.
Superusers and roles with the BYPASSRLS attribute always bypass the row security system when access-
ing a table. Table owners normally bypass row security as well, though a table owner can choose to be
subject to row security with ALTER TABLE ... FORCE ROW LEVEL SECURITY.
Enabling and disabling row security, as well as adding policies to a table, is always the privilege of the
table owner only.
Policies are created using the CREATE POLICY command, altered using the ALTER POLICY command,
and dropped using the DROP POLICY command. To enable and disable row security for a given table,
use the ALTER TABLE command.
Each policy has a name and multiple policies can be defined for a table. As policies are table-specific,
each policy for a table must have a unique name. Different tables may have policies with the same name.
When multiple policies apply to a given query, they are combined using either OR (for permissive policies,
which are the default) or using AND (for restrictive policies). This is similar to the rule that a given role
has the privileges of all roles that they are a member of. Permissive vs. restrictive policies are discussed
further below.
As a simple example, here is how to create a policy on the account relation to allow only members of
the managers role to access rows, and only rows of their accounts:
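A minimal sketch, assuming an accounts table whose manager column names the managing role:

CREATE POLICY account_managers ON accounts TO managers
    USING (manager = current_user);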
The policy above implicitly provides a WITH CHECK clause identical to its USING clause, so that the con-
straint applies both to rows selected by a command (so a manager cannot SELECT, UPDATE, or DELETE
existing rows belonging to a different manager) and to rows modified by a command (so rows belonging
to a different manager cannot be created via INSERT or UPDATE).
If no role is specified, or the special user name PUBLIC is used, then the policy applies to all users on the
system. To allow all users to access only their own row in a users table, a simple policy can be used:
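For example, assuming the table has a user_name column identifying each row's owner:

CREATE POLICY user_policy ON users
    USING (user_name = current_user);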
To use a different policy for rows that are being added to the table compared to those rows that are visible,
multiple policies can be combined. This pair of policies would allow all users to view all rows in the
users table, but only modify their own:
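For example (the policy names are arbitrary):

CREATE POLICY user_sel_policy ON users
    FOR SELECT
    USING (true);
CREATE POLICY user_mod_policy ON users
    USING (user_name = current_user);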
In a SELECT command, these two policies are combined using OR, with the net effect being that all rows
can be selected. In other command types, only the second policy applies, so that the effects are the same
as before.
Row security can also be disabled with the ALTER TABLE command. Disabling row security does not
remove any policies that are defined on the table; they are simply ignored. Then all rows in the table are
visible and modifiable, subject to the standard SQL privileges system.
Below is a larger example of how this feature can be used in production environments. The table passwd
emulates a Unix password file:
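A minimal sketch of such a table (the column list is abridged; the policies below rely on user_name and shell), along with illustrative roles and the command that enables row security:

CREATE TABLE passwd (
    user_name text UNIQUE NOT NULL,
    pwhash    text,
    uid       int PRIMARY KEY,
    gid       int NOT NULL,
    real_name text NOT NULL,
    home_dir  text NOT NULL,
    shell     text NOT NULL
);

CREATE ROLE admin;   -- administrator (illustrative)
CREATE ROLE alice;   -- an ordinary user (illustrative)

ALTER TABLE passwd ENABLE ROW LEVEL SECURITY;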
-- Create policies
-- Administrator can see all rows and add any rows
CREATE POLICY admin_all ON passwd TO admin USING (true) WITH CHECK (true);
-- Normal users can view all rows
CREATE POLICY all_view ON passwd FOR SELECT USING (true);
-- Normal users can update their own records, but
-- limit which shells a normal user is allowed to set
CREATE POLICY user_mod ON passwd FOR UPDATE
    USING (current_user = user_name)
    WITH CHECK (
        current_user = user_name AND
        shell IN ('/bin/bash', '/bin/sh', '/bin/dash', '/bin/zsh', '/bin/tcsh')
    );
As with any security settings, it's important to test and ensure that the system behaves as expected. Using the example above, a brief session can demonstrate that the permission system is working properly.
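For instance, a hypothetical session, assuming the roles above exist and hold the necessary SELECT and UPDATE privileges on passwd:

SET ROLE admin;
SELECT count(*) FROM passwd;          -- admin_all: every row is visible

SET ROLE alice;
UPDATE passwd SET shell = '/bin/zsh'
    WHERE user_name = current_user;   -- allowed: row and new shell both pass
UPDATE passwd SET shell = '/usr/bin/evil'
    WHERE user_name = current_user;   -- rejected by the WITH CHECK clause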
All of the policies constructed thus far have been permissive policies, meaning that when multiple poli-
cies are applied they are combined using the “OR” Boolean operator. While permissive policies can be
constructed to only allow access to rows in the intended cases, it can be simpler to combine permissive
policies with restrictive policies (which the records must pass and which are combined using the “AND”
Boolean operator). Building on the example above, we add a restrictive policy to require the administrator
to be connected over a local Unix socket to access the records of the passwd table:
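For example (inet_client_addr() is null for connections made over a Unix-domain socket):

CREATE POLICY admin_local_only ON passwd AS RESTRICTIVE TO admin
    USING (pg_catalog.inet_client_addr() IS NULL);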
We can then see that an administrator connecting over a network will not see any records, due to the
restrictive policy:
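-- a hypothetical check, connected over TCP/IP as admin:
SELECT current_user;    -- admin
TABLE passwd;           -- returns no rows: the restrictive policy is not satisfied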
Referential integrity checks, such as unique or primary key constraints and foreign key references, al-
ways bypass row security to ensure that data integrity is maintained. Care must be taken when developing
schemas and row level policies to avoid “covert channel” leaks of information through such referential
integrity checks.
In some contexts it is important to be sure that row security is not being applied. For example, when taking
a backup, it could be disastrous if row security silently caused some rows to be omitted from the backup.
In such a situation, you can set the row_security configuration parameter to off. This does not in itself
bypass row security; what it does is throw an error if any query's results would get filtered by a policy.
The reason for the error can then be investigated and fixed.
In the examples above, the policy expressions consider only the current values in the row to be accessed
or updated. This is the simplest and best-performing case; when possible, it's best to design row security
applications to work this way. If it is necessary to consult other rows or other tables to make a policy
decision, that can be accomplished using sub-SELECTs, or functions that contain SELECTs, in the policy
expressions. Be aware however that such accesses can create race conditions that could allow information
leakage if care is not taken. As an example, consider the following table design:
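The following is a condensed sketch of such a design (table, column, and policy names are illustrative; a complete version would also grant the appropriate privileges):

CREATE TABLE users (
    user_name text PRIMARY KEY,
    group_id  int  NOT NULL      -- the user's privilege level
);

CREATE TABLE information (
    info      text,
    group_id  int NOT NULL       -- minimum level needed to see the row
);

ALTER TABLE information ENABLE ROW LEVEL SECURITY;

-- A row is visible to, and updatable by, users whose level is at least
-- the row's group_id.
CREATE POLICY fp_s ON information FOR SELECT
    USING (group_id <= (SELECT group_id FROM users
                        WHERE user_name = current_user));
CREATE POLICY fp_u ON information FOR UPDATE
    USING (group_id <= (SELECT group_id FROM users
                        WHERE user_name = current_user));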
Now suppose that alice wishes to change the “slightly secret” information, but decides that mallory
should not be trusted with the new content of that row, so she does:
BEGIN;
UPDATE users SET group_id = 1 WHERE user_name = 'mallory';
-- assuming the 'slightly secret' row carries group_id = 2:
UPDATE information SET info = 'secret from mallory' WHERE group_id = 2;
COMMIT;
That looks safe; there is no window wherein mallory should be able to see the “secret from mallory”
string. However, there is a race condition here. If mallory is concurrently doing, say,
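SELECT * FROM information WHERE group_id = 2 FOR UPDATE;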
and her transaction is in READ COMMITTED mode, it is possible for her to see “secret from mallory”.
That happens if her transaction reaches the information row just after alice's does. It blocks waiting
for alice's transaction to commit, then fetches the updated row contents thanks to the FOR UPDATE
clause. However, it does not fetch an updated row for the implicit SELECT from users, because that
sub-SELECT did not have FOR UPDATE; instead the users row is read with the snapshot taken at the
start of the query. Therefore, the policy expression tests the old value of mallory's privilege level and
allows her to see the updated row.
There are several ways around this problem. One simple answer is to use SELECT ... FOR SHARE
in sub-SELECTs in row security policies. However, that requires granting UPDATE privilege on the ref-
erenced table (here users) to the affected users, which might be undesirable. (But another row security
policy could be applied to prevent them from actually exercising that privilege; or the sub-SELECT could
be embedded into a security definer function.) Also, heavy concurrent use of row share locks on the ref-
erenced table could pose a performance problem, especially if updates of it are frequent. Another solution,
practical if updates of the referenced table are infrequent, is to take an ACCESS EXCLUSIVE lock on the
referenced table when updating it, so that no concurrent transactions could be examining old row values.
Or one could just wait for all concurrent transactions to end after committing an update of the referenced
table and before making changes that rely on the new security situation.
5.10. Schemas
A PostgreSQL database cluster contains one or more named databases. Roles and a few other object types
are shared across the entire cluster. A client connection to the server can only access data in a single
database, the one specified in the connection request.
Note
Users of a cluster do not necessarily have the privilege to access every database in the cluster.
Sharing of role names means that there cannot be different roles named, say, joe in two databases
in the same cluster; but the system can be configured to allow joe access to only some of the
databases.
A database contains one or more named schemas, which in turn contain tables. Schemas also contain
other kinds of named objects, including data types, functions, and operators. Within one schema, two
objects of the same type cannot have the same name. Furthermore, tables, sequences, indexes, views,
materialized views, and foreign tables share the same namespace, so that, for example, an index and a
table must have different names if they are in the same schema. The same object name can be used in
different schemas without conflict; for example, both schema1 and myschema can contain tables named
mytable. Unlike databases, schemas are not rigidly separated: a user can access objects in any of the
schemas in the database they are connected to, if they have privileges to do so.
There are several reasons why one might want to use schemas:
• To allow many users to use one database without interfering with each other.
• To organize database objects into logical groups to make them more manageable.
• Third-party applications can be put into separate schemas so they do not collide with the names of other
objects.
Schemas are analogous to directories at the operating system level, except that schemas cannot be nested.
To create or access objects in a schema, write a qualified name consisting of the schema name and table
name separated by a dot:
schema.table
This works anywhere a table name is expected, including the table modification commands and the data
access commands discussed in the following chapters. (For brevity we will speak of tables only, but the
same ideas apply to other kinds of named objects, such as types and functions.)
The even more general syntax

database.schema.table

can be used too, but at present this is just for pro forma compliance with the SQL standard. If you write a database name, it must be the same as the database you are connected to.
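5.10.1. Creating a Schema

To create a schema, use the CREATE SCHEMA command. Give the schema a name of your choice. For example:

CREATE SCHEMA myschema;

To create a table in the new schema, use:

CREATE TABLE myschema.mytable (
    ...
);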
To drop a schema if it's empty (all objects in it have been dropped), use:
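DROP SCHEMA myschema;

To drop a schema including all contained objects, use:

DROP SCHEMA myschema CASCADE;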
See Section 5.15 for a description of the general mechanism behind this.
Often you will want to create a schema owned by someone else (since this is one of the ways to restrict
the activities of your users to well-defined namespaces). The syntax for that is:
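CREATE SCHEMA schema_name AUTHORIZATION user_name;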
You can even omit the schema name, in which case the schema name will be the same as the user name.
See Section 5.10.6 for how this can be useful.
Schema names beginning with pg_ are reserved for system purposes and cannot be created by users.
5.10.2. The Public Schema

In the previous sections we created tables without specifying any schema names. By default such tables (and other objects) are automatically put into a schema named “public”. Every new database contains such a schema. Thus, the following are equivalent:

CREATE TABLE products ( ... );

and:

CREATE TABLE public.products ( ... );

5.10.3. The Schema Search Path

Qualified names are tedious to write, and it's often best not to wire a particular schema name into applications anyway. Therefore tables are often referred to by unqualified names, which consist of just the table name. The system determines which table is meant by following a search path, which is a list of schemas to look in. The first matching table in the search path is taken to be the one wanted. If there is no match in the search path, an error is reported, even if matching table names exist in other schemas in the database.
The ability to create like-named objects in different schemas complicates writing a query that references
precisely the same objects every time. It also opens up the potential for users to change the behavior of
other users' queries, maliciously or accidentally. Due to the prevalence of unqualified names in queries
and their use in PostgreSQL internals, adding a schema to search_path effectively trusts all users
having CREATE privilege on that schema. When you run an ordinary query, a malicious user able to create
objects in a schema of your search path can take control and execute arbitrary SQL functions as though
you executed them.
The first schema named in the search path is called the current schema. Aside from being the first schema
searched, it is also the schema in which new tables will be created if the CREATE TABLE command does
not specify a schema name.
To show the current search path, use the following command:

SHOW search_path;

In the default setup this returns:
search_path
--------------
"$user", public
The first element specifies that a schema with the same name as the current user is to be searched. If no
such schema exists, the entry is ignored. The second element refers to the public schema that we have
seen already.
The first schema in the search path that exists is the default location for creating new objects. That is
the reason that by default objects are created in the public schema. When objects are referenced in any
other context without schema qualification (table modification, data modification, or query commands)
the search path is traversed until a matching object is found. Therefore, in the default configuration, any
unqualified access again can only refer to the public schema.
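To put our new schema in the path, we use:

SET search_path TO myschema, public;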
(We omit the $user here because we have no immediate need for it.) And then we can access the table
without schema qualification:
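DROP TABLE mytable;    -- this now refers to myschema.mytable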
Also, since myschema is the first element in the path, new objects would by default be created in it.
We could also have written SET search_path TO myschema; and then we would no longer have access to the public schema without explicit qualification. There is nothing special about the public schema except that it exists by default. It can be dropped, too.
See also Section 9.27 for other ways to manipulate the schema search path.
The search path works in the same way for data type names, function names, and operator names as it does
for table names. Data type and function names can be qualified in exactly the same way as table names. If
you need to write a qualified operator name in an expression, there is a special provision: you must write
OPERATOR(schema.operator)
This is needed to avoid syntactic ambiguity. An example is:

SELECT 3 OPERATOR(pg_catalog.+) 4;
In practice one usually relies on the search path for operators, so as not to have to write anything so ugly
as that.
5.10.4. Schemas and Privileges

By default, users cannot access any objects in schemas they do not own. To allow that, the owner of the schema must grant the USAGE privilege on the schema; by default, everyone has that privilege on the schema public. To allow users to make use of the objects in a schema, additional privileges might need to be granted, as appropriate for the object.
A user can also be allowed to create objects in someone else's schema. To allow that, the CREATE privilege
on the schema needs to be granted. In databases upgraded from PostgreSQL 14 or earlier, everyone has
that privilege on the schema public. Some usage patterns call for revoking that privilege:
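REVOKE CREATE ON SCHEMA public FROM PUBLIC;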
(The first “public” is the schema, the second “public” means “every user”. In the first sense it is an iden-
tifier, in the second sense it is a key word, hence the different capitalization; recall the guidelines from
Section 4.1.1.)
5.10.5. The System Catalog Schema

In addition to public and user-created schemas, each database contains a pg_catalog schema, which contains the system tables and all the built-in data types, functions, and operators. pg_catalog is always effectively part of the search path, so built-in names can always be referenced without schema qualification.

Since system table names begin with pg_, it is best to avoid such names to ensure that you won't suffer a
conflict if some future version defines a system table named the same as your table. (With the default search
path, an unqualified reference to your table name would then be resolved as the system table instead.)
System tables will continue to follow the convention of having names beginning with pg_, so that they
will not conflict with unqualified user-table names so long as users avoid the pg_ prefix.
5.10.6. Usage Patterns

Schemas can be used to organize your data in many ways. A secure schema usage pattern prevents untrusted users from changing the behavior of other users' queries. The options include:

• Constrain ordinary users to user-private schemas. To implement this pattern, first ensure that no schemas
have public CREATE privileges. Then, for every user needing to create non-temporary objects, create a
schema with the same name as that user, for example CREATE SCHEMA alice AUTHORIZATION
alice. (Recall that the default search path starts with $user, which resolves to the user name. There-
fore, if each user has a separate schema, they access their own schemas by default.) This pattern is a
secure schema usage pattern unless an untrusted user is the database owner or has been granted ADMIN
OPTION on a relevant role, in which case no secure schema usage pattern exists.
In PostgreSQL 15 and later, the default configuration supports this usage pattern. In prior versions,
or when using a database that has been upgraded from a prior version, you will need to remove the
public CREATE privilege from the public schema (issue REVOKE CREATE ON SCHEMA public
FROM PUBLIC). Then consider auditing the public schema for objects named like objects in schema
pg_catalog.
• Remove the public schema from the default search path, by modifying postgresql.conf or by
issuing ALTER ROLE ALL SET search_path = "$user". Then, grant privileges to create
in the public schema. Only qualified names will choose public schema objects. While qualified table
references are fine, calls to functions in the public schema will be unsafe or unreliable. If you create
functions or extensions in the public schema, use the first pattern instead. Otherwise, like the first pattern,
this is secure unless an untrusted user is the database owner or has been granted ADMIN OPTION on
a relevant role.
• Keep the default search path, and grant privileges to create in the public schema. All users access the
public schema implicitly. This simulates the situation where schemas are not available at all, giving
a smooth transition from the non-schema-aware world. However, this is never a secure pattern. It is
acceptable only when the database has a single user or a few mutually-trusting users. In databases up-
graded from PostgreSQL 14 or earlier, this is the default.
For any pattern, to install shared applications (tables to be used by everyone, additional functions provided
by third parties, etc.), put them into separate schemas. Remember to grant appropriate privileges to allow
the other users to access them. Users can then refer to these additional objects by qualifying the names
with a schema name, or they can put the additional schemas into their search path, as they choose.
5.10.7. Portability
In the SQL standard, the notion of objects in the same schema being owned by different users does not exist.
Moreover, some implementations do not allow you to create schemas that have a different name than their
owner. In fact, the concepts of schema and user are nearly equivalent in a database system that implements
only the basic schema support specified in the standard. Therefore, many users consider qualified names
to really consist of user_name.table_name. This is how PostgreSQL will effectively behave if you
create a per-user schema for every user.
Also, there is no concept of a public schema in the SQL standard. For maximum conformance to the
standard, you should not use the public schema.
Of course, some SQL database systems might not implement schemas at all, or provide namespace sup-
port by allowing (possibly limited) cross-database access. If you need to work with those systems, then
maximum portability would be achieved by not using schemas at all.
5.11. Inheritance
PostgreSQL implements table inheritance, which can be a useful tool for database designers. (SQL:1999
and later define a type inheritance feature, which differs in many respects from the features described here.)
Let's start with an example: suppose we are trying to build a data model for cities. Each state has many
cities, but only one capital. We want to be able to quickly retrieve the capital city for any particular state.
This can be done by creating two tables, one for state capitals and one for cities that are not capitals.
However, what happens when we want to ask for data about a city, regardless of whether it is a capital
or not? The inheritance feature can help to resolve this problem. We define the capitals table so that
it inherits from cities:
CREATE TABLE cities (
    name       text,
    population real,     -- assumed; only name and elevation appear below
    elevation  int       -- (in ft)
);

CREATE TABLE capitals (
    state      char(2)
) INHERITS (cities);
In this case, the capitals table inherits all the columns of its parent table, cities. State capitals also
have an extra column, state, that shows their state.
In PostgreSQL, a table can inherit from zero or more other tables, and a query can reference either all
rows of a table or all rows of a table plus all of its descendant tables. The latter behavior is the default.
For example, the following query finds the names of all cities, including state capitals, that are located
at an elevation over 500 feet:
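SELECT name, elevation
    FROM cities
    WHERE elevation > 500;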
Given the sample data from the PostgreSQL tutorial (see Section 2.1), this returns:
name | elevation
-----------+-----------
Las Vegas | 2174
Mariposa | 1953
Madison | 845
On the other hand, the following query finds all the cities that are not state capitals and are situated at an
elevation over 500 feet:
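SELECT name, elevation
    FROM ONLY cities
    WHERE elevation > 500;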
name | elevation
-----------+-----------
Las Vegas | 2174
Mariposa | 1953
Here the ONLY keyword indicates that the query should apply only to cities, and not any tables below
cities in the inheritance hierarchy. Many of the commands that we have already discussed — SELECT,
UPDATE and DELETE — support the ONLY keyword.
You can also write the table name with a trailing * to explicitly specify that descendant tables are included:
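SELECT name, elevation
    FROM cities*
    WHERE elevation > 500;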
Writing * is not necessary, since this behavior is always the default. However, this syntax is still supported
for compatibility with older releases where the default could be changed.
In some cases you might wish to know which table a particular row originated from. There is a system
column called tableoid in each table which can tell you the originating table:
SELECT c.tableoid, c.name, c.elevation
FROM cities c
WHERE c.elevation > 500;

which returns the tableoid of each row (the OID of either cities or capitals, as appropriate) along with the name and elevation. (If you try to reproduce this example, you will probably get different numeric OIDs.) By doing a join with pg_class you can see the actual table names:
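SELECT p.relname, c.name, c.elevation
FROM cities c, pg_class p
WHERE c.elevation > 500 AND c.tableoid = p.oid;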
which returns the same rows, but with each table's name shown in place of its numeric OID.
Another way to get the same effect is to use the regclass alias type, which will print the table OID
symbolically:
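SELECT c.tableoid::regclass, c.name, c.elevation
FROM cities c
WHERE c.elevation > 500;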
Inheritance does not automatically propagate data from INSERT or COPY commands to other tables in
the inheritance hierarchy. In our example, the following INSERT statement will fail:
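INSERT INTO cities (name, population, elevation, state)
VALUES ('Albany', NULL, NULL, 'NY');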
We might hope that the data would somehow be routed to the capitals table, but this does not happen:
INSERT always inserts into exactly the table specified. In some cases it is possible to redirect the insertion
using a rule (see Chapter 39). However that does not help for the above case because the cities table
does not contain the column state, and so the command will be rejected before the rule can be applied.
All check constraints and not-null constraints on a parent table are automatically inherited by its children,
unless explicitly specified otherwise with NO INHERIT clauses. Other types of constraints (unique, pri-
mary key, and foreign key constraints) are not inherited.
A table can inherit from more than one parent table, in which case it has the union of the columns defined
by the parent tables. Any columns declared in the child table's definition are added to these. If the same
column name appears in multiple parent tables, or in both a parent table and the child's definition, then
these columns are “merged” so that there is only one such column in the child table. To be merged, columns
must have the same data types, else an error is raised. Inheritable check constraints and not-null constraints
are merged in a similar fashion. Thus, for example, a merged column will be marked not-null if any one
of the column definitions it came from is marked not-null. Check constraints are merged if they have the
same name, and the merge will fail if their conditions are different.
Table inheritance is typically established when the child table is created, using the INHERITS clause of
the CREATE TABLE statement. Alternatively, a table which is already defined in a compatible way can
have a new parent relationship added, using the INHERIT variant of ALTER TABLE. To do this the new
child table must already include columns with the same names and types as the columns of the parent. It
must also include check constraints with the same names and check expressions as those of the parent.
Similarly an inheritance link can be removed from a child using the NO INHERIT variant of ALTER
TABLE. Dynamically adding and removing inheritance links like this can be useful when the inheritance
relationship is being used for table partitioning (see Section 5.12).
One convenient way to create a compatible table that will later be made a new child is to use the LIKE
clause in CREATE TABLE. This creates a new table with the same columns as the source table. If there are
any CHECK constraints defined on the source table, the INCLUDING CONSTRAINTS option to LIKE
should be specified, as the new child must have constraints matching the parent to be considered compat-
ible.
A parent table cannot be dropped while any of its children remain. Neither can columns or check constraints
of child tables be dropped or altered if they are inherited from any parent tables. If you wish to remove a
table and all of its descendants, one easy way is to drop the parent table with the CASCADE option (see
Section 5.15).
ALTER TABLE will propagate any changes in column data definitions and check constraints down the
inheritance hierarchy. Again, dropping columns that are depended on by other tables is only possible when
using the CASCADE option. ALTER TABLE follows the same rules for duplicate column merging and
rejection that apply during CREATE TABLE.
Inherited queries perform access permission checks on the parent table only. Thus, for example, granting
UPDATE permission on the cities table implies permission to update rows in the capitals table as
well, when they are accessed through cities. This preserves the appearance that the data is (also) in
the parent table. But the capitals table could not be updated directly without an additional grant. In
a similar way, the parent table's row security policies (see Section 5.9) are applied to rows coming from
child tables during an inherited query. A child table's policies, if any, are applied only when it is the table
explicitly named in the query; and in that case, any policies attached to its parent(s) are ignored.
Foreign tables (see Section 5.13) can also be part of inheritance hierarchies, either as parent or child tables,
just as regular tables can be. If a foreign table is part of an inheritance hierarchy then any operations not
supported by the foreign table are not supported on the whole hierarchy either.
5.11.1. Caveats
Note that not all SQL commands are able to work on inheritance hierarchies. Commands that are used
for data querying, data modification, or schema modification (e.g., SELECT, UPDATE, DELETE, most
variants of ALTER TABLE, but not INSERT or ALTER TABLE ... RENAME) typically default
to including child tables and support the ONLY notation to exclude them. Commands that do database
maintenance and tuning (e.g., REINDEX, VACUUM) typically only work on individual, physical tables and
do not support recursing over inheritance hierarchies. The respective behavior of each individual command
is documented in its reference page (SQL Commands).
A serious limitation of the inheritance feature is that indexes (including unique constraints) and foreign key
constraints only apply to single tables, not to their inheritance children. This is true on both the referencing
and referenced sides of a foreign key constraint. Thus, in the terms of the above example:
• If we declared cities.name to be UNIQUE or a PRIMARY KEY, this would not stop the capitals
table from having rows with names duplicating rows in cities. And those duplicate rows would
by default show up in queries from cities. In fact, by default capitals would have no unique
constraint at all, and so could contain multiple rows with the same name. You could add a unique
constraint to capitals, but this would not prevent duplication compared to cities.
• Similarly, if we were to specify that cities.name REFERENCES some other table, this constraint
would not automatically propagate to capitals. In this case you could work around it by manually
adding the same REFERENCES constraint to capitals.
• Specifying that another table's column REFERENCES cities(name) would allow the other table
to contain city names, but not capital names. There is no good workaround for this case.
Some functionality not implemented for inheritance hierarchies is implemented for declarative partition-
ing. Considerable care is needed in deciding whether partitioning with legacy inheritance is useful for
your application.
5.12. Table Partitioning

PostgreSQL supports basic table partitioning. This section describes why and how to implement partitioning as part of your database design.

5.12.1. Overview
Partitioning refers to splitting what is logically one large table into smaller physical pieces. Partitioning
can provide several benefits:
• Query performance can be improved dramatically in certain situations, particularly when most of the
heavily accessed rows of the table are in a single partition or a small number of partitions. Partitioning
effectively substitutes for the upper tree levels of indexes, making it more likely that the heavily-used
parts of the indexes fit in memory.
• When queries or updates access a large percentage of a single partition, performance can be improved by
using a sequential scan of that partition instead of using an index, which would require random-access
reads scattered across the whole table.
• Bulk loads and deletes can be accomplished by adding or removing partitions, if the usage pattern is
accounted for in the partitioning design. Dropping an individual partition using DROP TABLE, or doing
ALTER TABLE DETACH PARTITION, is far faster than a bulk operation. These commands also
entirely avoid the VACUUM overhead caused by a bulk DELETE.
These benefits will normally be worthwhile only when a table would otherwise be very large. The exact
point at which a table will benefit from partitioning depends on the application, although a rule of thumb
is that the size of the table should exceed the physical memory of the database server.
PostgreSQL offers built-in support for the following forms of partitioning:

Range Partitioning
The table is partitioned into “ranges” defined by a key column or set of columns, with no overlap
between the ranges of values assigned to different partitions. For example, one might partition by date
ranges, or by ranges of identifiers for particular business objects. Each range's bounds are understood
as being inclusive at the lower end and exclusive at the upper end. For example, if one partition's
range is from 1 to 10, and the next one's range is from 10 to 20, then value 10 belongs to the second
partition not the first.
List Partitioning
The table is partitioned by explicitly listing which key value(s) appear in each partition.
Hash Partitioning
The table is partitioned by specifying a modulus and a remainder for each partition. Each partition
will hold the rows for which the hash value of the partition key divided by the specified modulus will
produce the specified remainder.
If your application needs to use other forms of partitioning not listed above, alternative methods such as
inheritance and UNION ALL views can be used instead. Such methods offer flexibility but do not have
some of the performance benefits of built-in declarative partitioning.
5.12.2. Declarative Partitioning

PostgreSQL allows you to declare that a table is divided into partitions; the table that is divided is referred to as a partitioned table. The declaration includes the partitioning method, as described above, plus a list of columns or expressions to be used as the partition key.

The partitioned table itself is a “virtual” table having no storage of its own. Instead, the storage belongs to
partitions, which are otherwise-ordinary tables associated with the partitioned table. Each partition stores
a subset of the data as defined by its partition bounds. All rows inserted into a partitioned table will be
routed to the appropriate one of the partitions based on the values of the partition key column(s). Updating
the partition key of a row will cause it to be moved into a different partition if it no longer satisfies the
partition bounds of its original partition.
Partitions may themselves be defined as partitioned tables, resulting in sub-partitioning. Although all par-
titions must have the same columns as their partitioned parent, partitions may have their own indexes,
constraints and default values, distinct from those of other partitions. See CREATE TABLE for more de-
tails on creating partitioned tables and partitions.
It is not possible to turn a regular table into a partitioned table or vice versa. However, it is possible to add
an existing regular or partitioned table as a partition of a partitioned table, or remove a partition from a par-
titioned table turning it into a standalone table; this can simplify and speed up many maintenance process-
es. See ALTER TABLE to learn more about the ATTACH PARTITION and DETACH PARTITION
sub-commands.
Partitions can also be foreign tables, although considerable care is needed because it is then the user's
responsibility that the contents of the foreign table satisfy the partitioning rule. There are some other
restrictions as well. See CREATE FOREIGN TABLE for more information.
5.12.2.1. Example
Suppose we are constructing a database for a large ice cream company. The company measures peak
temperatures every day as well as ice cream sales in each region. Conceptually, we want a table like:
CREATE TABLE measurement (
    city_id      int not null,
    logdate      date not null,
    peaktemp     int,
    unitsales    int
);
We know that most queries will access just the last week's, month's or quarter's data, since the main use of
this table will be to prepare online reports for management. To reduce the amount of old data that needs to
be stored, we decide to keep only the most recent 3 years' worth of data. At the beginning of each month
we will remove the oldest month's data. In this situation we can use partitioning to help us meet all of our
different requirements for the measurements table.
1. Create the measurement table as a partitioned table by specifying the PARTITION BY clause,
which includes the partitioning method (RANGE in this case) and the list of column(s) to use as the
partition key.
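CREATE TABLE measurement (
    city_id      int not null,
    logdate      date not null,
    peaktemp     int,
    unitsales    int
) PARTITION BY RANGE (logdate);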
2. Create partitions. Each partition's definition must specify bounds that correspond to the partitioning method and partition key of the parent.

Partitions thus created are in every way normal PostgreSQL tables (or, possibly, foreign tables). It is possible to specify a tablespace and storage parameters for each partition separately.
For our example, each partition should hold one month's worth of data, to match the requirement of
deleting one month's data at a time. So the commands might look like:
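CREATE TABLE measurement_y2006m02 PARTITION OF measurement
    FOR VALUES FROM ('2006-02-01') TO ('2006-03-01');

CREATE TABLE measurement_y2006m03 PARTITION OF measurement
    FOR VALUES FROM ('2006-03-01') TO ('2006-04-01');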
...
CREATE TABLE measurement_y2007m11 PARTITION OF measurement
FOR VALUES FROM ('2007-11-01') TO ('2007-12-01');
(Recall that adjacent partitions can share a bound value, since range upper bounds are treated as ex-
clusive bounds.)
If you wish to implement sub-partitioning, again specify the PARTITION BY clause in the commands
used to create individual partitions, for example:
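CREATE TABLE measurement_y2006m02 PARTITION OF measurement
    FOR VALUES FROM ('2006-02-01') TO ('2006-03-01')
    PARTITION BY RANGE (peaktemp);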
After creating partitions of measurement_y2006m02, any data inserted into measurement that is mapped to measurement_y2006m02 (or data that is directly inserted into measurement_y2006m02, which is allowed provided its partition constraint is satisfied) will be further redirected
to one of its partitions based on the peaktemp column. The partition key specified may overlap with
the parent's partition key, although care should be taken when specifying the bounds of a sub-partition
such that the set of data it accepts constitutes a subset of what the partition's own bounds allow; the
system does not try to check whether that's really the case.
Inserting data into the parent table that does not map to one of the existing partitions will cause an
error; an appropriate partition must be added manually.
It is not necessary to manually create table constraints describing the partition boundary conditions for
partitions. Such constraints will be created automatically.
3. Create an index on the key column(s), as well as any other indexes you might want, on the partitioned
table. (The key index is not strictly necessary, but in most scenarios it is helpful.) This automatically
creates a matching index on each partition, and any partitions you create or attach later will also have
such an index. An index or unique constraint declared on a partitioned table is “virtual” in the same
way that the partitioned table is: the actual data is in child indexes on the individual partition tables.
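CREATE INDEX ON measurement (logdate);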
4. Ensure that the enable_partition_pruning configuration parameter is not disabled in postgresql.conf; if it is, queries will not be optimized as desired.

In the above example we would be creating a new partition each month, so it might be wise to write a script that generates the required DDL automatically.

5.12.2.2. Partition Maintenance

Normally the set of partitions established when initially defining the table is not intended to remain static. It is common to want to remove partitions holding old data and to periodically add new partitions for new data. One of the most important advantages of partitioning is precisely that it allows this otherwise painful task to be executed nearly instantaneously by manipulating the partition structure, rather than physically moving large amounts of data around.
The simplest option for removing old data is to drop the partition that is no longer necessary:
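DROP TABLE measurement_y2006m02;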
This can very quickly delete millions of records because it doesn't have to individually delete every record.
Note however that the above command requires taking an ACCESS EXCLUSIVE lock on the parent table.
Another option that is often preferable is to remove the partition from the partitioned table but retain access
to it as a table in its own right. This has two forms:
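ALTER TABLE measurement DETACH PARTITION measurement_y2006m02;

ALTER TABLE measurement DETACH PARTITION measurement_y2006m02 CONCURRENTLY;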
These allow further operations to be performed on the data before it is dropped. For example, this is often
a useful time to back up the data using COPY, pg_dump, or similar tools. It might also be a useful time to
aggregate data into smaller formats, perform other data manipulations, or run reports. The first form of the
command requires an ACCESS EXCLUSIVE lock on the parent table. Adding the CONCURRENTLY qual-
ifier as in the second form allows the detach operation to require only SHARE UPDATE EXCLUSIVE lock
on the parent table, but see ALTER TABLE ... DETACH PARTITION for details on the restrictions.
Similarly we can add a new partition to handle new data. We can create an empty partition in the partitioned
table just as the original partitions were created above:
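CREATE TABLE measurement_y2008m02 PARTITION OF measurement
    FOR VALUES FROM ('2008-02-01') TO ('2008-03-01');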
As an alternative to creating a new partition, it is sometimes more convenient to create a new table separate
from the partition structure and attach it as a partition later. This allows new data to be loaded, checked, and
transformed prior to it appearing in the partitioned table. Moreover, the ATTACH PARTITION operation
requires only a SHARE UPDATE EXCLUSIVE lock on the partitioned table rather than the ACCESS
EXCLUSIVE lock required by CREATE TABLE ... PARTITION OF, so it is more friendly to
concurrent operations on the partitioned table; see ALTER TABLE ... ATTACH PARTITION for
additional details. The CREATE TABLE ... LIKE option can be helpful to avoid tediously repeating
the parent table's definition; for example:
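CREATE TABLE measurement_y2008m02
    (LIKE measurement INCLUDING DEFAULTS INCLUDING CONSTRAINTS);

ALTER TABLE measurement_y2008m02 ADD CONSTRAINT y2008m02
    CHECK ( logdate >= DATE '2008-02-01' AND logdate < DATE '2008-03-01' );

-- load, check, and transform the new data here, then:

ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02
    FOR VALUES FROM ('2008-02-01') TO ('2008-03-01');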
Note that when running the ATTACH PARTITION command, the table will be scanned to validate the
partition constraint while holding an ACCESS EXCLUSIVE lock on that partition. As shown above, it is
recommended to avoid this scan by creating a CHECK constraint matching the expected partition constraint
on the table prior to attaching it. Once the ATTACH PARTITION is complete, it is recommended to drop
the now-redundant CHECK constraint. If the table being attached is itself a partitioned table, then each
of its sub-partitions will be recursively locked and scanned until either a suitable CHECK constraint is
encountered or the leaf partitions are reached.
Similarly, if the partitioned table has a DEFAULT partition, it is recommended to create a CHECK constraint
which excludes the to-be-attached partition's constraint. If this is not done, the DEFAULT partition will be
scanned to verify that it contains no records which should be located in the partition being attached. This
operation will be performed whilst holding an ACCESS EXCLUSIVE lock on the DEFAULT partition.
If the DEFAULT partition is itself a partitioned table, then each of its partitions will be recursively checked
in the same way as the table being attached, as mentioned above.
As mentioned earlier, it is possible to create indexes on partitioned tables so that they are applied auto-
matically to the entire hierarchy. This can be very convenient as not only will all existing partitions be
indexed, but any future partitions will be as well. However, one limitation when creating new indexes on
partitioned tables is that it is not possible to use the CONCURRENTLY qualifier, which could lead to long
lock times. To avoid this, you can use CREATE INDEX ON ONLY the partitioned table, which creates
the new index marked as invalid, preventing automatic application to existing partitions. Instead, indexes
can then be created individually on each partition using CONCURRENTLY and attached to the partitioned
index on the parent using ALTER INDEX ... ATTACH PARTITION. Once indexes for all the parti-
tions are attached to the parent index, the parent index will be marked valid automatically. Example:
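CREATE INDEX measurement_usls_idx ON ONLY measurement (unitsales);

CREATE INDEX CONCURRENTLY measurement_usls_200602_idx
    ON measurement_y2006m02 (unitsales);
ALTER INDEX measurement_usls_idx
    ATTACH PARTITION measurement_usls_200602_idx;

-- repeat the last two steps for each remaining partition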
This technique can be used with UNIQUE and PRIMARY KEY constraints too; the indexes are created
implicitly when the constraint is created. Example:
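ALTER TABLE ONLY measurement ADD UNIQUE (city_id, logdate);

ALTER TABLE measurement_y2006m02 ADD UNIQUE (city_id, logdate);
-- the index names below assume the default generated names
ALTER INDEX measurement_city_id_logdate_key
    ATTACH PARTITION measurement_y2006m02_city_id_logdate_key;
...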
5.12.2.3. Limitations
The following limitations apply to partitioned tables:
• To create a unique or primary key constraint on a partitioned table, the partition keys must not include
any expressions or function calls and the constraint's columns must include all of the partition key
columns. This limitation exists because the individual indexes making up the constraint can only directly
enforce uniqueness within their own partitions; therefore, the partition structure itself must guarantee
that there are not duplicates in different partitions.
• Similarly an exclusion constraint must include all the partition key columns. Furthermore the constraint
must compare those columns for equality (not e.g. &&). Again, this limitation stems from not being able
to enforce cross-partition restrictions. The constraint may include additional columns that aren't part of
the partition key, and it may compare those with any operators you like.
• BEFORE ROW triggers on INSERT cannot change which partition is the final destination for a new row.
• Mixing temporary and permanent relations in the same partition tree is not allowed. Hence, if the par-
titioned table is permanent, so must be its partitions and likewise if the partitioned table is temporary.
When using temporary relations, all members of the partition tree have to be from the same session.
Individual partitions are linked to their partitioned table using inheritance behind-the-scenes. However, it
is not possible to use all of the generic features of inheritance with declaratively partitioned tables or their
partitions, as discussed below. Notably, a partition cannot have any parents other than the partitioned table
it is a partition of, nor can a table inherit from both a partitioned table and a regular table. That means
partitioned tables and their partitions never share an inheritance hierarchy with regular tables.
Since a partition hierarchy consisting of the partitioned table and its partitions is still an inheritance hier-
archy, tableoid and all the normal rules of inheritance apply as described in Section 5.11, with a few
exceptions:
• Partitions cannot have columns that are not present in the parent. It is not possible to specify columns
when creating partitions with CREATE TABLE, nor is it possible to add columns to partitions after-the-
fact using ALTER TABLE. Tables may be added as a partition with ALTER TABLE ... ATTACH
PARTITION only if their columns exactly match the parent.
• Both CHECK and NOT NULL constraints of a partitioned table are always inherited by all its partitions.
CHECK constraints that are marked NO INHERIT are not allowed to be created on partitioned tables.
You cannot drop a NOT NULL constraint on a partition's column if the same constraint is present in
the parent table.
• Using ONLY to add or drop a constraint on only the partitioned table is supported as long as there are
no partitions. Once partitions exist, using ONLY will result in an error for any constraints other than
UNIQUE and PRIMARY KEY. Instead, constraints on the partitions themselves can be added and (if
they are not present in the parent table) dropped.
• As a partitioned table does not have any data itself, attempts to use TRUNCATE ONLY on a partitioned
table will always return an error.
5.12.3. Partitioning Using Inheritance

While the built-in declarative partitioning is suitable for most common use cases, there are some circumstances where a more flexible approach may be useful. Partitioning can be implemented using table inheritance, which allows for several features not supported by declarative partitioning, such as:

• For declarative partitioning, partitions must have exactly the same set of columns as the partitioned
table, whereas with table inheritance, child tables may have extra columns not present in the parent.
• Table inheritance allows for multiple inheritance.
• Declarative partitioning only supports range, list and hash partitioning, whereas table inheritance allows
data to be divided in a manner of the user's choosing. (Note, however, that if constraint exclusion is
unable to prune child tables effectively, query performance might be poor.)
5.12.3.1. Example
This example builds a partitioning structure equivalent to the declarative partitioning example above. Use
the following steps:
1. Create the “root” table, from which all of the “child” tables will inherit. This table will contain no
data. Do not define any check constraints on this table, unless you intend them to be applied equally
to all child tables. There is no point in defining any indexes or unique constraints on it, either. For our
example, the root table is the measurement table as originally defined:
CREATE TABLE measurement (
    city_id      int not null,
    logdate      date not null,
    peaktemp     int,
    unitsales    int
);
2. Create several “child” tables that each inherit from the root table. Normally, these tables will not add
any columns to the set inherited from the root. Just as with declarative partitioning, these tables are in
every way normal PostgreSQL tables (or foreign tables).
CREATE TABLE measurement_y2006m02 () INHERITS (measurement);
CREATE TABLE measurement_y2006m03 () INHERITS (measurement);
...

3. Add non-overlapping table constraints to the child tables to define the allowed key values in each. Typical examples would be:

CHECK ( x = 1 )
CHECK ( county IN ( 'Oxfordshire', 'Buckinghamshire', 'Warwickshire' ))
CHECK ( outletID >= 100 AND outletID < 200 )
Ensure that the constraints guarantee that there is no overlap between the key values permitted in
different child tables. A common mistake is to set up range constraints like:
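CHECK ( outletID BETWEEN 100 AND 200 )
CHECK ( outletID BETWEEN 200 AND 300 )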
This is wrong since it is not clear which child table the key value 200 belongs in. Instead, ranges should
be defined in this style:
CREATE TABLE measurement_y2006m02 (
    CHECK ( logdate >= DATE '2006-02-01' AND logdate < DATE '2006-03-01' )
) INHERITS (measurement);

...

CREATE TABLE measurement_y2007m11 (
    CHECK ( logdate >= DATE '2007-11-01' AND logdate < DATE '2007-12-01' )
) INHERITS (measurement);
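4. For each child table, create an index on the key column(s), as well as any other indexes you might want.

5. We want our application to be able to say INSERT INTO measurement ... and have the data be redirected into the appropriate child table. We can arrange that by attaching a suitable trigger function to the root table. If data will be added only to the latest child, we can use a very simple trigger function:

CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO measurement_y2008m01 VALUES (NEW.*);
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;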
After creating the function, we create a trigger which calls the trigger function:
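CREATE TRIGGER insert_measurement_trigger
    BEFORE INSERT ON measurement
    FOR EACH ROW EXECUTE FUNCTION measurement_insert_trigger();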
We must redefine the trigger function each month so that it always inserts into the current child table.
The trigger definition does not need to be updated, however.
We might want to insert data and have the server automatically locate the child table into which the
row should be added. We could do this with a more complex trigger function, for example:
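CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    IF ( NEW.logdate >= DATE '2006-02-01' AND
         NEW.logdate < DATE '2006-03-01' ) THEN
        INSERT INTO measurement_y2006m02 VALUES (NEW.*);
    ELSIF ( NEW.logdate >= DATE '2006-03-01' AND
            NEW.logdate < DATE '2006-04-01' ) THEN
        INSERT INTO measurement_y2006m03 VALUES (NEW.*);
    ...
    ELSIF ( NEW.logdate >= DATE '2008-01-01' AND
            NEW.logdate < DATE '2008-02-01' ) THEN
        INSERT INTO measurement_y2008m01 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'Date out of range.  Fix the measurement_insert_trigger() function!';
    END IF;
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;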
The trigger definition is the same as before. Note that each IF test must exactly match the CHECK
constraint for its child table.
While this function is more complex than the single-month case, it doesn't need to be updated as often,
since branches can be added in advance of being needed.
Note
In practice, it might be best to check the newest child first, if most inserts go into that child. For
simplicity, we have shown the trigger's tests in the same order as in other parts of this example.
A different approach to redirecting inserts into the appropriate child table is to set up rules, instead of
a trigger, on the root table. For example:
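CREATE RULE measurement_insert_y2006m02 AS
ON INSERT TO measurement WHERE
    ( logdate >= DATE '2006-02-01' AND logdate < DATE '2006-03-01' )
DO INSTEAD
    INSERT INTO measurement_y2006m02 VALUES (NEW.*);
...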
A rule has significantly more overhead than a trigger, but the overhead is paid once per query rather
than once per row, so this method might be advantageous for bulk-insert situations. In most cases,
however, the trigger method will offer better performance.
Be aware that COPY ignores rules. If you want to use COPY to insert data, you'll need to copy into the
correct child table rather than directly into the root. COPY does fire triggers, so you can use it normally
if you use the trigger approach.
Another disadvantage of the rule approach is that there is no simple way to force an error if the set of
rules doesn't cover the insertion date; the data will silently go into the root table instead.
6. Ensure that the constraint_exclusion configuration parameter is not disabled in postgresql.conf;
otherwise child tables may be accessed unnecessarily.
As we can see, a complex table hierarchy could require a substantial amount of DDL. In the above example
we would be creating a new child table each month, so it might be wise to write a script that generates
the required DDL automatically.
5.12.3.2. Maintenance

To remove a child table from the inheritance hierarchy but retain access to it as a table in its own right:
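ALTER TABLE measurement_y2007m11 NO INHERIT measurement;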
To add a new child table to handle new data, create an empty child table just as the original children were
created above:
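CREATE TABLE measurement_y2008m02 (
    CHECK ( logdate >= DATE '2008-02-01' AND logdate < DATE '2008-03-01' )
) INHERITS (measurement);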
Alternatively, one may want to create and populate the new child table before adding it to the table hier-
archy. This could allow data to be loaded, checked, and transformed before being made visible to queries
on the parent table.
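A sketch of that approach:

CREATE TABLE measurement_y2008m02
    (LIKE measurement INCLUDING DEFAULTS INCLUDING CONSTRAINTS);
ALTER TABLE measurement_y2008m02 ADD CONSTRAINT y2008m02
    CHECK ( logdate >= DATE '2008-02-01' AND logdate < DATE '2008-03-01' );
\copy measurement_y2008m02 from 'measurement_y2008m02'
-- possibly some other data preparation work
ALTER TABLE measurement_y2008m02 INHERIT measurement;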
5.12.3.3. Caveats
The following caveats apply to partitioning implemented using inheritance:
• There is no automatic way to verify that all of the CHECK constraints are mutually exclusive. It is safer
to create code that generates child tables and creates and/or modifies associated objects than to write
each by hand.
• Indexes and foreign key constraints apply to single tables and not to their inheritance children, hence
they have some caveats to be aware of.
• The schemes shown here assume that the values of a row's key column(s) never change, or at least do
not change enough to require it to move to another partition. An UPDATE that attempts to do that will
fail because of the CHECK constraints. If you need to handle such cases, you can put suitable update
triggers on the child tables, but it makes management of the structure much more complicated.
• If you are using manual VACUUM or ANALYZE commands, don't forget that you need to run them on
each child table individually. A command like:
ANALYZE measurement;

will only process the root table.
• INSERT statements with ON CONFLICT clauses are unlikely to work as expected, as the ON CON-
FLICT action is only taken in case of unique violations on the specified target relation, not its child
relations.
• Triggers or rules will be needed to route rows to the desired child table, unless the application is explicitly
aware of the partitioning scheme. Triggers may be complicated to write, and will be much slower than
the tuple routing performed internally by declarative partitioning.
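5.12.4. Partition Pruning

Partition pruning is a query optimization technique that improves performance for declaratively partitioned tables. As an example:

SET enable_partition_pruning = on;                 -- the default
SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';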
Without partition pruning, the above query would scan each of the partitions of the measurement table.
With partition pruning enabled, the planner will examine the definition of each partition and prove that the
partition need not be scanned because it could not contain any rows meeting the query's WHERE clause.
When the planner can prove this, it excludes (prunes) the partition from the query plan.
By using the EXPLAIN command and the enable_partition_pruning configuration parameter, it's possible to show the difference between a plan for which partitions have been pruned and one for which they have not. A typical unoptimized plan for this type of table setup can be displayed with:
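SET enable_partition_pruning = off;
EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';

The resulting plan contains an Append node with a scan of every child partition.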
Some or all of the partitions might use index scans instead of full-table sequential scans, but the point here
is that there is no need to scan the older partitions at all to answer this query. When we enable partition
pruning, we get a significantly cheaper plan that will deliver the same answer:
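SET enable_partition_pruning = on;
EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';

Now the Append node contains only the partitions whose bounds can satisfy the WHERE condition.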
Note that partition pruning is driven only by the constraints defined implicitly by the partition keys, not
by the presence of indexes. Therefore it isn't necessary to define indexes on the key columns. Whether an
index needs to be created for a given partition depends on whether you expect that queries that scan the
partition will generally scan a large part of the partition or just a small part. An index will be helpful in
the latter case but not the former.
Partition pruning can be performed not only during the planning of a given query, but also during its
execution. This is useful as it can allow more partitions to be pruned when clauses contain expressions
whose values are not known at query planning time, for example, parameters defined in a PREPARE
statement, using a value obtained from a subquery, or using a parameterized value on the inner side of a
nested loop join. Partition pruning during execution can be performed at any of the following times:
• During initialization of the query plan. Partition pruning can be performed here for parameter values
which are known during the initialization phase of execution. Partitions which are pruned during this
stage will not show up in the query's EXPLAIN or EXPLAIN ANALYZE. It is possible to determine
the number of partitions which were removed during this phase by observing the “Subplans Removed”
property in the EXPLAIN output.
• During actual execution of the query plan. Partition pruning may also be performed here to remove
partitions using values which are only known during actual query execution. This includes values from
subqueries and values from execution-time parameters such as those from parameterized nested loop
joins. Since the value of these parameters may change many times during the execution of the query,
partition pruning is performed whenever one of the execution parameters being used by partition pruning
changes. Determining if partitions were pruned during this phase requires careful inspection of the
loops property in the EXPLAIN ANALYZE output. Subplans corresponding to different partitions may
have different values for it depending on how many times each of them was pruned during execution.
Some may be shown as (never executed) if they were pruned every time.
5.12.5. Partitioning and Constraint Exclusion

Constraint exclusion is a query optimization technique similar to partition pruning. It is primarily used for partitioning implemented using the legacy inheritance method, although it can be used for other purposes, including with declarative partitioning. Constraint exclusion works in a very similar way to partition pruning, except that it uses each table's CHECK constraints — which gives it its name — whereas partition pruning uses the table's partition bounds, which
exist only in the case of declarative partitioning. Another difference is that constraint exclusion is only
applied at plan time; there is no attempt to remove partitions at execution time.
The fact that constraint exclusion uses CHECK constraints, which makes it slow compared to partition
pruning, can sometimes be used as an advantage: because constraints can be defined even on declarative-
ly-partitioned tables, in addition to their internal partition bounds, constraint exclusion may be able to elide
additional partitions from the query plan.
The default (and recommended) setting of constraint_exclusion is neither on nor off, but an intermediate
setting called partition, which causes the technique to be applied only to queries that are likely to be
working on inheritance partitioned tables. The on setting causes the planner to examine CHECK constraints
in all queries, even simple ones that are unlikely to benefit. The following caveats apply to constraint exclusion:
• Constraint exclusion is only applied during query planning, unlike partition pruning, which can also be
applied during query execution.
• Constraint exclusion only works when the query's WHERE clause contains constants (or externally sup-
plied parameters). For example, a comparison against a non-immutable function such as CURREN-
T_TIMESTAMP cannot be optimized, since the planner cannot know which child table the function's
value might fall into at run time.
• Keep the partitioning constraints simple, else the planner may not be able to prove that child tables
might not need to be visited. Use simple equality conditions for list partitioning, or simple range tests
for range partitioning, as illustrated in the preceding examples. A good rule of thumb is that partitioning
constraints should contain only comparisons of the partitioning column(s) to constants using B-tree-
indexable operators, because only B-tree-indexable column(s) are allowed in the partition key.
• All constraints on all children of the parent table are examined during constraint exclusion, so large
numbers of children are likely to increase query planning time considerably. So the legacy inheritance
based partitioning will work well with up to perhaps a hundred child tables; don't try to use many
thousands of children.
5.12.6. Best Practices for Declarative Partitioning

The choice of how to partition a table should be made carefully, as the performance of query planning and execution can be negatively affected by poor design.

One of the most critical design decisions will be the column or columns by which you partition your data.
Often the best choice will be to partition by the column or set of columns which most commonly appear in
WHERE clauses of queries being executed on the partitioned table. WHERE clauses that are compatible with
the partition bound constraints can be used to prune unneeded partitions. However, you may be forced
into making other decisions by requirements for the PRIMARY KEY or a UNIQUE constraint. Removal
of unwanted data is also a factor to consider when planning your partitioning strategy. An entire partition
can be detached fairly quickly, so it may be beneficial to design the partition strategy in such a way that
all data to be removed at once is located in a single partition.
Choosing the target number of partitions that the table should be divided into is also a critical decision to
make. Not having enough partitions may mean that indexes remain too large and that data locality remains
poor which could result in low cache hit ratios. However, dividing the table into too many partitions can
also cause issues. Too many partitions can mean longer query planning times and higher memory con-
sumption during both query planning and execution, as further described below. When choosing how to
partition your table, it's also important to consider what changes may occur in the future. For example, if
you choose to have one partition per customer and you currently have a small number of large customers,
consider the implications if in several years you instead find yourself with a large number of small customers. In this case, it may be better to choose to partition by HASH and choose a reasonable number
of partitions rather than trying to partition by LIST and hoping that the number of customers does not
increase beyond what it is practical to partition the data by.
Sub-partitioning can be useful to further divide partitions that are expected to become larger than other
partitions. Another option is to use range partitioning with multiple columns in the partition key. Either of
these can easily lead to excessive numbers of partitions, so restraint is advisable.
It is important to consider the overhead of partitioning during query planning and execution. The query
planner is generally able to handle partition hierarchies with up to a few thousand partitions fairly well,
provided that typical queries allow the query planner to prune all but a small number of partitions. Plan-
ning times become longer and memory consumption becomes higher when more partitions remain after
the planner performs partition pruning. Another reason to be concerned about having a large number of
partitions is that the server's memory consumption may grow significantly over time, especially if many
sessions touch large numbers of partitions. That's because each partition requires its metadata to be loaded
into the local memory of each session that touches it.
With data warehouse type workloads, it can make sense to use a larger number of partitions than with
an OLTP type workload. Generally, in data warehouses, query planning time is less of a concern as the
majority of processing time is spent during query execution. With either of these two types of workload,
it is important to make the right decisions early, as re-partitioning large quantities of data can be painfully
slow. Simulations of the intended workload are often beneficial for optimizing the partitioning strategy.
Never just assume that more partitions are better than fewer partitions, nor vice-versa.
5.13. Foreign Data

PostgreSQL implements portions of the SQL/MED specification, allowing you to access data that resides outside PostgreSQL using regular SQL queries. Such data is referred to as foreign data.

Foreign data is accessed with help from a foreign data wrapper. A foreign data wrapper is a library that can
communicate with an external data source, hiding the details of connecting to the data source and obtaining
data from it. There are some foreign data wrappers available as contrib modules; see Appendix F. Other
kinds of foreign data wrappers might be found as third party products. If none of the existing foreign data
wrappers suit your needs, you can write your own; see Chapter 57.
To access foreign data, you need to create a foreign server object, which defines how to connect to a
particular external data source according to the set of options used by its supporting foreign data wrapper.
Then you need to create one or more foreign tables, which define the structure of the remote data. A foreign
table can be used in queries just like a normal table, but a foreign table has no storage in the PostgreSQL
server. Whenever it is used, PostgreSQL asks the foreign data wrapper to fetch data from the external
source, or transmit data to the external source in the case of update commands.
Accessing remote data may require authenticating to the external data source. This information can be
provided by a user mapping, which can provide additional data such as user names and passwords based
on the current PostgreSQL role.
For additional information, see CREATE FOREIGN DATA WRAPPER, CREATE SERVER, CREATE
USER MAPPING, CREATE FOREIGN TABLE, and IMPORT FOREIGN SCHEMA.
5.14. Other Database Objects

Tables are the central objects in a relational database structure, because they hold your data. But they are not the only objects that exist in a database. Many other kinds of objects can be created to make the use and management of the data more efficient or convenient. They are not discussed in this chapter, but we
give you a list here so that you are aware of what is possible:
• Views
• Functions, procedures, and operators
• Data types and domains
• Triggers and rewrite rules

Detailed information on these topics appears in Part V.

5.15. Dependency Tracking
To ensure the integrity of the entire database structure, PostgreSQL makes sure that you cannot drop
objects that other objects still depend on. For example, attempting to drop the products table we considered
in Section 5.5.5, with the orders table depending on it, would result in an error message like this:
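DROP TABLE products;

ERROR:  cannot drop table products because other objects depend on it
DETAIL:  constraint orders_product_no_fkey on table orders depends on table products
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

(The exact constraint name depends on how the foreign key was declared.)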
The error message contains a useful hint: if you do not want to bother deleting all the dependent objects
individually, you can run:
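DROP TABLE products CASCADE;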
and all the dependent objects will be removed, as will any objects that depend on them, recursively. In this
case, it doesn't remove the orders table, it only removes the foreign key constraint. It stops there because
nothing depends on the foreign key constraint. (If you want to check what DROP ... CASCADE will
do, run DROP without CASCADE and read the DETAIL output.)
Almost all DROP commands in PostgreSQL support specifying CASCADE. Of course, the nature of the
possible dependencies varies with the type of the object. You can also write RESTRICT instead of CAS-
CADE to get the default behavior, which is to prevent dropping objects that any other objects depend on.
Note
According to the SQL standard, specifying either RESTRICT or CASCADE is required in a DROP
command. No database system actually enforces that rule, but whether the default behavior is
RESTRICT or CASCADE varies across systems.
If a DROP command lists multiple objects, CASCADE is only required when there are dependencies outside
the specified group. For example, when saying DROP TABLE tab1, tab2 the existence of a foreign
key referencing tab1 from tab2 would not mean that CASCADE is needed to succeed.
For a user-defined function or procedure whose body is defined as a string literal, PostgreSQL tracks
dependencies associated with the function's externally-visible properties, such as its argument and result
types, but not dependencies that could only be known by examining the function body. As an example,
consider this situation:
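A minimal sketch, using the names discussed below (the enum values and column layout are illustrative):

CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow',
                             'green', 'blue', 'purple');

CREATE TABLE my_colors (color rainbow, note text);

CREATE FUNCTION get_color_note (rainbow) RETURNS text AS
  'SELECT note FROM my_colors WHERE color = $1'
  LANGUAGE SQL;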
(See Section 36.5 for an explanation of SQL-language functions.) PostgreSQL will be aware that the
get_color_note function depends on the rainbow type: dropping the type would force dropping
the function, because its argument type would no longer be defined. But PostgreSQL will not consider
get_color_note to depend on the my_colors table, and so will not drop the function if the table is
dropped. While there are disadvantages to this approach, there are also benefits. The function is still valid
in some sense if the table is missing, though executing it would cause an error; creating a new table of the
same name would allow the function to work again.
On the other hand, for an SQL-language function or procedure whose body is written in SQL-standard
style, the body is parsed at function definition time and all dependencies recognized by the parser are
stored. Thus, if we write the function above as
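CREATE FUNCTION get_color_note (rainbow) RETURNS text
BEGIN ATOMIC
    SELECT note FROM my_colors WHERE color = $1;
END;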
then the function's dependency on the my_colors table will be known and enforced by DROP.
Chapter 6. Data Manipulation
The previous chapter discussed how to create tables and other structures to hold your data. Now it is time
to fill the tables with data. This chapter covers how to insert, update, and delete table data. The chapter
after this will finally explain how to extract your long-lost data from the database.
6.1. Inserting Data

When a table is created, it contains no data. The first thing to do before a database can be of much use is to insert data. To create a new row, use the INSERT command. The command requires the table name and column values. For example, consider the products table from Chapter 5:
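CREATE TABLE products (
    product_no integer,
    name text,
    price numeric
);

An example command to insert a row would be:

INSERT INTO products VALUES (1, 'Cheese', 9.99);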
The data values are listed in the order in which the columns appear in the table, separated by commas.
Usually, the data values will be literals (constants), but scalar expressions are also allowed.
The above syntax has the drawback that you need to know the order of the columns in the table. To avoid
this you can also list the columns explicitly. For example, both of the following commands have the same
effect as the one above:
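INSERT INTO products (product_no, name, price) VALUES (1, 'Cheese', 9.99);
INSERT INTO products (name, price, product_no) VALUES ('Cheese', 9.99, 1);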
Many users consider it good practice to always list the column names.
If you don't have values for all the columns, you can omit some of them. In that case, the columns will
be filled with their default values. For example:
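INSERT INTO products (product_no, name) VALUES (1, 'Cheese');
INSERT INTO products VALUES (1, 'Cheese');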
The second form is a PostgreSQL extension. It fills the columns from the left with as many values as are
given, and the rest will be defaulted.
For clarity, you can also request default values explicitly, for individual columns or for the entire row:
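INSERT INTO products (product_no, name, price) VALUES (1, 'Cheese', DEFAULT);
INSERT INTO products DEFAULT VALUES;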
It is also possible to insert the result of a query (which might be no rows, one row, or many rows):
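A sketch, assuming a hypothetical new_products staging table:

INSERT INTO products (product_no, name, price)
    SELECT product_no, name, price FROM new_products
        WHERE release_date = 'today';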
This provides the full power of the SQL query mechanism (Chapter 7) for computing the rows to be
inserted.
Tip
When inserting a lot of data at the same time, consider using the COPY command. It is not as
flexible as the INSERT command, but is more efficient. Refer to Section 14.4 for more information
on improving bulk loading performance.
6.2. Updating Data

To update existing rows, use the UPDATE command. This requires three pieces of information:

1. The name of the table and column to update
2. The new value of the column
3. Which row(s) to update
Recall from Chapter 5 that SQL does not, in general, provide a unique identifier for rows. Therefore it is
not always possible to directly specify which row to update. Instead, you specify which conditions a row
must meet in order to be updated. Only if you have a primary key in the table (independent of whether
you declared it or not) can you reliably address individual rows by choosing a condition that matches the
primary key. Graphical database access tools rely on this fact to allow you to update rows individually.
For example, this command updates all products that have a price of 5 to have a price of 10:
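UPDATE products SET price = 10 WHERE price = 5;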
This might cause zero, one, or many rows to be updated. It is not an error to attempt an update that does
not match any rows.
Let's look at that command in detail. First is the key word UPDATE followed by the table name. As usual,
the table name can be schema-qualified, otherwise it is looked up in the path. Next is the key word SET
followed by the column name, an equal sign, and the new column value. The new column value can be
any scalar expression, not just a constant. For example, if you want to raise the price of all products by
10% you could use:
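UPDATE products SET price = price * 1.10;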
As you see, the expression for the new value can refer to the existing value(s) in the row. We also left
out the WHERE clause. If it is omitted, it means that all rows in the table are updated. If it is present, only
those rows that match the WHERE condition are updated. Note that the equals sign in the SET clause is an
assignment while the one in the WHERE clause is a comparison, but this does not create any ambiguity. Of
course, the WHERE condition does not have to be an equality test. Many other operators are available (see
Chapter 9). But the expression needs to evaluate to a Boolean result.
You can update more than one column in an UPDATE command by listing more than one assignment in
the SET clause. For example:
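UPDATE mytable SET a = 5, b = 3, c = 1 WHERE a > 0;

(Here mytable and its columns are hypothetical.)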
6.3. Deleting Data

You use the DELETE command to remove rows; the syntax is very similar to the UPDATE command.
For instance, to remove all rows from the products table that have a price of 10, use:
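DELETE FROM products WHERE price = 10;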
6.4. Returning Data from Modified Rows

Sometimes it is useful to obtain data from modified rows while they are being manipulated. The INSERT, UPDATE, DELETE, and MERGE commands all have an optional RETURNING clause that supports this. Use of RETURNING avoids performing an extra database query to collect the data, and is especially valuable when it would otherwise be difficult to identify the modified rows reliably.

The allowed contents of a RETURNING clause are the same as a SELECT command's output list (see Section 7.3). It can contain column names of the command's target table, or value expressions using those columns. A common shorthand is RETURNING *, which selects all columns of the target table in order.
In an INSERT, the data available to RETURNING is the row as it was inserted. This is not so useful in trivial
inserts, since it would just repeat the data provided by the client. But it can be very handy when relying
on computed default values. For example, when using a serial column to provide unique identifiers,
RETURNING can return the ID assigned to a new row:
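CREATE TABLE users (firstname text, lastname text, id serial primary key);

INSERT INTO users (firstname, lastname) VALUES ('Joe', 'Cool') RETURNING id;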
The RETURNING clause is also very useful with INSERT ... SELECT.
In an UPDATE, the data available to RETURNING is the new content of the modified row. For example:
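UPDATE products SET price = price * 1.10
    WHERE price <= 99.99
    RETURNING name, price AS new_price;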
In a DELETE, the data available to RETURNING is the content of the deleted row. For example:
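DELETE FROM products
    WHERE obsoletion_date = 'today'
    RETURNING *;

(Here obsoletion_date is a hypothetical column.)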
In a MERGE, the data available to RETURNING is the content of the source row plus the content of the
inserted, updated, or deleted target row. Since it is quite common for the source and target to have many
of the same columns, specifying RETURNING * can lead to a lot of duplicated columns, so it is often
more useful to qualify it so as to return just the source or target row. For example:
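A sketch, assuming a hypothetical new_products staging table with the same columns:

MERGE INTO products p USING new_products n ON p.product_no = n.product_no
    WHEN MATCHED THEN UPDATE SET name = n.name, price = n.price
    WHEN NOT MATCHED THEN INSERT VALUES (n.product_no, n.name, n.price)
    RETURNING p.*;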
If there are triggers (Chapter 37) on the target table, the data available to RETURNING is the row as
modified by the triggers. Thus, inspecting columns computed by triggers is another common use-case for
RETURNING.
Chapter 7. Queries
The previous chapters explained how to create tables, how to fill them with data, and how to manipulate
that data. Now we finally discuss how to retrieve the data from the database.
7.1. Overview
The process of retrieving or the command to retrieve data from a database is called a query. In SQL the
SELECT command is used to specify queries. The general syntax of the SELECT command is
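[WITH with_queries] SELECT select_list FROM table_expression [sort_specification]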
The following sections describe the details of the select list, the table expression, and the sort specification.
WITH queries are treated last since they are an advanced feature.
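A simple kind of query has the form:

SELECT * FROM table1;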
Assuming that there is a table called table1, this command would retrieve all rows and all user-defined
columns from table1. (The method of retrieval depends on the client application. For example, the psql
program will display an ASCII-art table on the screen, while client libraries will offer functions to extract
individual values from the query result.) The select list specification * means all columns that the table
expression happens to provide. A select list can also select a subset of the available columns or make
calculations using the columns. For example, if table1 has columns named a, b, and c (and perhaps
others) you can make the following query:
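SELECT a, b + c FROM table1;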
(assuming that b and c are of a numerical data type). See Section 7.3 for more details.
FROM table1 is a simple kind of table expression: it reads just one table. In general, table expressions
can be complex constructs of base tables, joins, and subqueries. But you can also omit the table expression
entirely and use the SELECT command as a calculator:
SELECT 3 * 4;
This is more useful if the expressions in the select list return varying results. For example, you could call
a function this way:
SELECT random();
7.2. Table Expressions

A table expression computes a table. The table expression contains a FROM clause that is optionally followed by WHERE, GROUP BY, and HAVING clauses. Trivial table expressions simply refer to a table on disk, a so-called base table, but more complex expressions can be used to modify or combine base tables in various ways.

The optional WHERE, GROUP BY, and HAVING clauses in the table expression specify a pipeline of successive transformations performed on the table derived in the FROM clause. All these transformations produce a virtual table that provides the rows that are passed to the select list to compute the output rows of the query.
7.2.1. The FROM Clause

The FROM clause derives a table from one or more other tables given in a comma-separated table reference list:

FROM table_reference [, table_reference [, ...]]

A table reference can be a table name (possibly schema-qualified), or a derived table such as a subquery, a
JOIN construct, or complex combinations of these. If more than one table reference is listed in the FROM
clause, the tables are cross-joined (that is, the Cartesian product of their rows is formed; see below). The
result of the FROM list is an intermediate virtual table that can then be subject to transformations by the
WHERE, GROUP BY, and HAVING clauses and is finally the result of the overall table expression.
When a table reference names a table that is the parent of a table inheritance hierarchy, the table reference
produces rows of not only that table but all of its descendant tables, unless the key word ONLY precedes
the table name. However, the reference produces only the columns that appear in the named table — any
columns added in subtables are ignored.
Instead of writing ONLY before the table name, you can write * after the table name to explicitly specify
that descendant tables are included. There is no real reason to use this syntax any more, because searching
descendant tables is now always the default behavior. However, it is supported for compatibility with older
releases.
7.2.1.1. Joined Tables

A joined table is a table derived from two other (real or derived) tables according to the rules of the particular join type. Inner, outer, and cross-joins are available. The general syntax of a joined table is:

T1 join_type T2 [ join_condition ]
Joins of all types can be chained together, or nested: either or both T1 and T2 can be joined tables. Paren-
theses can be used around JOIN clauses to control the join order. In the absence of parentheses, JOIN
clauses nest left-to-right.
Join Types
Cross join
T1 CROSS JOIN T2
For every possible combination of rows from T1 and T2 (i.e., a Cartesian product), the joined table
will contain a row consisting of all columns in T1 followed by all columns in T2. If the tables have
N and M rows respectively, the joined table will have N * M rows.

FROM T1 CROSS JOIN T2 is equivalent to FROM T1 INNER JOIN T2 ON TRUE (see below). It is also equivalent to FROM T1, T2.
Note
This latter equivalence does not hold exactly when more than two tables appear, because
JOIN binds more tightly than comma. For example FROM T1 CROSS JOIN T2 INNER
JOIN T3 ON condition is not the same as FROM T1, T2 INNER JOIN T3 ON
condition because the condition can reference T1 in the first case but not the second.
Qualified joins
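T1 { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2 ON boolean_expression
T1 { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2 USING ( join column list )
T1 NATURAL { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2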
The words INNER and OUTER are optional in all forms. INNER is the default; LEFT, RIGHT, and
FULL imply an outer join.
The join condition is specified in the ON or USING clause, or implicitly by the word NATURAL.
The join condition determines which rows from the two source tables are considered to “match”, as
explained in detail below.
INNER JOIN
For each row R1 of T1, the joined table has a row for each row in T2 that satisfies the join
condition with R1.
LEFT OUTER JOIN

First, an inner join is performed. Then, for each row in T1 that does not satisfy the join condition
with any row in T2, a joined row is added with null values in columns of T2. Thus, the joined
table always has at least one row for each row in T1.
RIGHT OUTER JOIN

First, an inner join is performed. Then, for each row in T2 that does not satisfy the join condition
with any row in T1, a joined row is added with null values in columns of T1. This is the converse
of a left join: the result table will always have a row for each row in T2.
FULL OUTER JOIN

First, an inner join is performed. Then, for each row in T1 that does not satisfy the join condition
with any row in T2, a joined row is added with null values in columns of T2. Also, for each row
of T2 that does not satisfy the join condition with any row in T1, a joined row with null values
in the columns of T1 is added.
The ON clause is the most general kind of join condition: it takes a Boolean value expression of the
same kind as is used in a WHERE clause. A pair of rows from T1 and T2 match if the ON expression
evaluates to true.
The USING clause is a shorthand that allows you to take advantage of the specific situation where
both sides of the join use the same name for the joining column(s). It takes a comma-separated list
of the shared column names and forms a join condition that includes an equality comparison for each
one. For example, joining T1 and T2 with USING (a, b) produces the join condition ON T1.a
= T2.a AND T1.b = T2.b.
Furthermore, the output of JOIN USING suppresses redundant columns: there is no need to print both
of the matched columns, since they must have equal values. While JOIN ON produces all columns
from T1 followed by all columns from T2, JOIN USING produces one output column for each of
the listed column pairs (in the listed order), followed by any remaining columns from T1, followed
by any remaining columns from T2.
Finally, NATURAL is a shorthand form of USING: it forms a USING list consisting of all column
names that appear in both input tables. As with USING, these columns appear only once in the output
table. If there are no common column names, NATURAL JOIN behaves like CROSS JOIN.
Note
USING is reasonably safe from column changes in the joined relations since only the listed
columns are combined. NATURAL is considerably more risky since any schema changes to
either relation that cause a new matching column name to be present will cause the join to
combine that new column as well.
To put this together, assume we have tables t1:

 num | name
-----+------
   1 | a
   2 | b
   3 | c

and t2:

 num | value
-----+-------
   1 | xxx
   3 | yyy
   5 | zzz
then we get the following results for the various joins. For example, a cross join:

SELECT * FROM t1 CROSS JOIN t2;

 num | name | num | value
-----+------+-----+-------
   1 | a    |   1 | xxx
   1 | a    |   3 | yyy
   1 | a    |   5 | zzz
   2 | b    |   1 | xxx
   2 | b    |   3 | yyy
   2 | b    |   5 | zzz
   3 | c    |   1 | xxx
   3 | c    |   3 | yyy
   3 | c    |   5 | zzz
(9 rows)
Similarly, a full outer join:

SELECT * FROM t1 FULL JOIN t2 ON t1.num = t2.num;

 num | name | num | value
-----+------+-----+-------
   1 | a    |   1 | xxx
   2 | b    |     |
   3 | c    |   3 | yyy
     |      |   5 | zzz
(4 rows)
The join condition specified with ON can also contain conditions that do not relate directly to the join. This
can prove useful for some queries but needs to be thought out carefully. For example:
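SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num AND t2.value = 'xxx';

 num | name | num | value
-----+------+-----+-------
   1 | a    |   1 | xxx
   2 | b    |     |
   3 | c    |     |
(3 rows)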
Notice that placing the restriction in the WHERE clause produces a different result:
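SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num WHERE t2.value = 'xxx';

 num | name | num | value
-----+------+-----+-------
   1 | a    |   1 | xxx
(1 row)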
This is because a restriction placed in the ON clause is processed before the join, while a restriction placed
in the WHERE clause is processed after the join. That does not matter with inner joins, but it matters a
lot with outer joins.
7.2.1.2. Table and Column Aliases

A temporary name can be given to tables and complex table references, to be used for references to the derived table in the rest of the query. This is called a table alias. To create a table alias, write

FROM table_reference AS alias

or

FROM table_reference alias

The AS key word is optional noise. alias can be any identifier.
A typical application of table aliases is to assign short identifiers to long table names to keep the join
clauses readable. For example:
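SELECT * FROM some_very_long_table_name s
    JOIN another_fairly_long_name a ON s.id = a.num;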
The alias becomes the new name of the table reference so far as the current query is concerned — it is not
allowed to refer to the table by the original name elsewhere in the query. Thus, this is not valid:
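SELECT * FROM my_table AS m WHERE my_table.a > 5;    -- wrong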
Table aliases are mainly for notational convenience, but it is necessary to use them when joining a table
to itself, e.g.:
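SELECT * FROM people AS mother JOIN people AS child
    ON mother.id = child.mother_id;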
Parentheses are used to resolve ambiguities. In the following example, the first statement assigns the alias
b to the second instance of my_table, but the second statement assigns the alias to the result of the join:
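SELECT * FROM my_table AS a CROSS JOIN my_table AS b ...

SELECT * FROM (my_table AS a CROSS JOIN my_table) AS b ...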
Another form of table aliasing gives temporary names to the columns of the table, as well as the table itself:
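FROM table_reference [AS] alias ( column1 [, column2 [, ...]] )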
If fewer column aliases are specified than the actual table has columns, the remaining columns are not
renamed. This syntax is especially useful for self-joins or subqueries.
When an alias is applied to the output of a JOIN clause, the alias hides the original name(s) within the
JOIN. For example:
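SELECT a.* FROM my_table AS a JOIN your_table AS b ON ...

is valid SQL, but:

SELECT a.* FROM (my_table AS a JOIN your_table AS b ON ...) AS c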
is not valid; the table alias a is not visible outside the alias c.
7.2.1.3. Subqueries
Subqueries specifying a derived table must be enclosed in parentheses. They may be assigned a table alias
name, and optionally column alias names (as in Section 7.2.1.2). For example:
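FROM (SELECT * FROM table1) AS alias_name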
This example is equivalent to FROM table1 AS alias_name. More interesting cases, which cannot
be reduced to a plain join, arise when the subquery involves grouping or aggregation.
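A subquery can also be a VALUES list:

FROM (VALUES ('anne', 'smith'), ('bob', 'jones'), ('joe', 'blow'))
     AS names(first, last)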
Again, a table alias is optional. Assigning alias names to the columns of the VALUES list is optional, but
is good practice. For more information see Section 7.7.
According to the SQL standard, a table alias name must be supplied for a subquery. PostgreSQL allows AS
and the alias to be omitted, but writing one is good practice in SQL code that might be ported to another
system.
7.2.1.4. Table Functions

Table functions are functions that produce a set of rows, made up of either base data types (scalar types) or composite data types (table rows). They are used like a table, view, or subquery in the FROM clause of a query.

Table functions may also be combined using the ROWS FROM syntax, with the results returned in parallel columns; the number of result rows in this case is that of the largest function result, with smaller results padded with null values to match.
If the WITH ORDINALITY clause is specified, an additional column of type bigint will be added to
the function result columns. This column numbers the rows of the function result set, starting from 1. (This
is a generalization of the SQL-standard syntax for UNNEST ... WITH ORDINALITY.) By default,
the ordinal column is called ordinality, but a different column name can be assigned to it using an
AS clause.
The special table function UNNEST may be called with any number of array parameters, and it returns
a corresponding number of columns, as if UNNEST (Section 9.19) had been called on each parameter
separately and combined using the ROWS FROM construct.
If no table_alias is specified, the function name is used as the table name; in the case of a ROWS
FROM() construct, the first function's name is used.
If column aliases are not supplied, then for a function returning a base data type, the column name is also
the same as the function name. For a function returning a composite type, the result columns get the names
of the individual attributes of the type.
Some examples:
SELECT * FROM foo
    WHERE foosubid IN (SELECT foosubid
                       FROM getfoo(foo.fooid) z
                       WHERE z.fooid = foo.fooid);
In some cases it is useful to define table functions that can return different column sets depending on how
they are invoked. To support this, the table function can be declared as returning the pseudo-type record
with no OUT parameters. When such a function is used in a query, the expected row structure must be
specified in the query itself, so that the system can know how to parse and plan the query. This syntax
looks like:
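function_call [AS] alias (column_definition [, ...])
function_call AS [alias] (column_definition [, ...])
ROWS FROM( ... function_call AS (column_definition [, ...]) [, ...] )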
When not using the ROWS FROM() syntax, the column_definition list replaces the column alias
list that could otherwise be attached to the FROM item; the names in the column definitions serve as column
aliases. When using the ROWS FROM() syntax, a column_definition list can be attached to each
member function separately; or if there is only one member function and no WITH ORDINALITY clause,
a column_definition list can be written in place of a column alias list following ROWS FROM().
SELECT *
FROM dblink('dbname=mydb', 'SELECT proname, prosrc FROM pg_proc')
AS t1(proname name, prosrc text)
WHERE proname LIKE 'bytea%';
The dblink function (part of the dblink module) executes a remote query. It is declared to return record
since it might be used for any kind of query. The actual column set must be specified in the calling query
so that the parser knows, for example, what * should expand to.
SELECT *
FROM ROWS FROM
(
json_to_recordset('[{"a":40,"b":"foo"},
{"a":"100","b":"bar"}]')
AS (a INTEGER, b TEXT),
generate_series(1, 3)
) AS x (p, q, s)
ORDER BY p;
  p  |  q  | s
-----+-----+---
  40 | foo | 1
 100 | bar | 2
     |     | 3
(3 rows)
It joins two functions into a single FROM target. json_to_recordset() is instructed to return two
columns, the first integer and the second text. The result of generate_series() is used directly.
The ORDER BY clause sorts the column values as integers.
7.2.1.5. LATERAL Subqueries

Subqueries appearing in FROM can be preceded by the key word LATERAL. This allows them to reference columns provided by preceding FROM items. (Without LATERAL, each subquery is evaluated independently and so cannot cross-reference any other FROM item.)

Table functions appearing in FROM can also be preceded by the key word LATERAL, but for functions the key word is optional; the function's arguments can contain references to columns provided by preceding FROM items in any case.
A LATERAL item can appear at the top level in the FROM list, or within a JOIN tree. In the latter case it
can also refer to any items that are on the left-hand side of a JOIN that it is on the right-hand side of.
When a FROM item contains LATERAL cross-references, evaluation proceeds as follows: for each row of
the FROM item providing the cross-referenced column(s), or set of rows of multiple FROM items providing
the columns, the LATERAL item is evaluated using that row or row set's values of the columns. The
resulting row(s) are joined as usual with the rows they were computed from. This is repeated for each row
or set of rows from the column source table(s).
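A trivial example of LATERAL, using hypothetical tables foo and bar:

SELECT * FROM foo, LATERAL (SELECT * FROM bar WHERE bar.id = foo.bar_id) ss;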
This is not especially useful since it has exactly the same result as the more conventional
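SELECT * FROM foo, bar WHERE bar.id = foo.bar_id;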
LATERAL is primarily useful when the cross-referenced column is necessary for computing the row(s) to
be joined. A common application is providing an argument value for a set-returning function. For example,
supposing that vertices(polygon) returns the set of vertices of a polygon, we could identify close-
together vertices of polygons stored in a table with:
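SELECT p1.id, p2.id, v1, v2
FROM polygons p1, polygons p2,
     LATERAL vertices(p1.poly) v1,
     LATERAL vertices(p2.poly) v2
WHERE (v1 <-> v2) < 10 AND p1.id != p2.id;

(The polygons table, with columns id and poly, is hypothetical.)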
or in several other equivalent formulations. (As already mentioned, the LATERAL key word is unnecessary
in this example, but we use it for clarity.)
It is often particularly handy to LEFT JOIN to a LATERAL subquery, so that source rows will appear
in the result even if the LATERAL subquery produces no rows for them. For example, if get_produc-
t_names() returns the names of products made by a manufacturer, but some manufacturers in our table
currently produce no products, we could find out which ones those are like this:
SELECT m.name
FROM manufacturers m LEFT JOIN LATERAL get_product_names(m.id) pname
ON true
WHERE pname IS NULL;
7.2.2. The WHERE Clause

The syntax of the WHERE clause is

WHERE search_condition
where search_condition is any value expression (see Section 4.2) that returns a value of type
boolean.
After the processing of the FROM clause is done, each row of the derived virtual table is checked against
the search condition. If the result of the condition is true, the row is kept in the output table, otherwise
(i.e., if the result is false or null) it is discarded. The search condition typically references at least one
column of the table generated in the FROM clause; this is not required, but otherwise the WHERE clause
will be fairly useless.
Note
The join condition of an inner join can be written either in the WHERE clause or in the JOIN clause.
For example, these table expressions are equivalent:
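FROM a, b WHERE a.id = b.id AND b.val > 5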
and:
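FROM a INNER JOIN b ON (a.id = b.id) WHERE b.val > 5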
or perhaps even:
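FROM a NATURAL JOIN b WHERE b.val > 5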
Which one of these you use is mainly a matter of style. The JOIN syntax in the FROM clause is
probably not as portable to other SQL database management systems, even though it is in the SQL
standard. For outer joins there is no choice: they must be done in the FROM clause. The ON or
USING clause of an outer join is not equivalent to a WHERE condition, because it results in the
addition of rows (for unmatched input rows) as well as the removal of rows in the final result.
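Here are some examples of WHERE clauses:

SELECT ... FROM fdt WHERE c1 > 5

SELECT ... FROM fdt WHERE c1 IN (1, 2, 3)

SELECT ... FROM fdt WHERE c1 IN (SELECT c1 FROM t2)

SELECT ... FROM fdt WHERE c1 IN (SELECT c3 FROM t2 WHERE c2 = fdt.c1 + 10)

SELECT ... FROM fdt WHERE c1 BETWEEN (SELECT c3 FROM t2 WHERE c2 = fdt.c1 + 10) AND 100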
SELECT ... FROM fdt WHERE EXISTS (SELECT c1 FROM t2 WHERE c2 > fdt.c1)
fdt is the table derived in the FROM clause. Rows that do not meet the search condition of the WHERE
clause are eliminated from fdt. Notice the use of scalar subqueries as value expressions. Just like any
other query, the subqueries can employ complex table expressions. Notice also how fdt is referenced
in the subqueries. Qualifying c1 as fdt.c1 is only necessary if c1 is also the name of a column in
the derived input table of the subquery. But qualifying the column name adds clarity even when it is not
needed. This example shows how the column naming scope of an outer query extends into its inner queries.
7.2.3. The GROUP BY and HAVING Clauses

After passing the WHERE filter, the derived input table might be subject to grouping, using the GROUP BY clause, and elimination of group rows using the HAVING clause.

SELECT select_list
    FROM ...
    [WHERE ...]
    GROUP BY grouping_column_reference [, grouping_column_reference]...
The GROUP BY clause is used to group together those rows in a table that have the same values in all the
columns listed. The order in which the columns are listed does not matter. The effect is to combine each
set of rows having common values into one group row that represents all rows in the group. This is done
to eliminate redundancy in the output and/or compute aggregates that apply to these groups. For instance:
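Suppose an illustrative table test1, with columns x and y, contains:

=> SELECT * FROM test1;
 x | y
---+---
 a | 3
 c | 2
 b | 5
 a | 1
(4 rows)

=> SELECT x FROM test1 GROUP BY x;
 x
---
 a
 b
 c
(3 rows)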
In the second query, we could not have written SELECT * FROM test1 GROUP BY x, because there
is no single value for the column y that could be associated with each group. The grouped-by columns can
be referenced in the select list since they have a single value in each group.
In general, if a table is grouped, columns that are not listed in GROUP BY cannot be referenced except in
aggregate expressions. An example with aggregate expressions is:
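=> SELECT x, sum(y) FROM test1 GROUP BY x;
 x | sum
---+-----
 a |   4
 b |   5
 c |   2
(3 rows)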
Here sum is an aggregate function that computes a single value over the entire group. More information
about the available aggregate functions can be found in Section 9.21.
Tip
Grouping without aggregate expressions effectively calculates the set of distinct values in a col-
umn. This can also be achieved using the DISTINCT clause (see Section 7.3.3).
Here is another example: it calculates the total sales for each product (rather than the total sales of all
products):
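A sketch, assuming a sales table with columns product_id and units:

SELECT product_id, p.name, (sum(s.units) * p.price) AS sales
    FROM products p LEFT JOIN sales s USING (product_id)
    GROUP BY product_id, p.name, p.price;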
In this example, the columns product_id, p.name, and p.price must be in the GROUP BY clause
since they are referenced in the query select list (but see below). The column s.units does not have to
be in the GROUP BY list since it is only used in an aggregate expression (sum(...)), which represents
the sales of a product. For each product, the query returns a summary row about all sales of the product.
If the products table is set up so that, say, product_id is the primary key, then it would be enough to
group by product_id in the above example, since name and price would be functionally dependent
on the product ID, and so there would be no ambiguity about which name and price value to return for
each product ID group.
In strict SQL, GROUP BY can only group by columns of the source table but PostgreSQL extends this
to also allow GROUP BY to group by columns in the select list. Grouping by value expressions instead
of simple column names is also allowed.
If a table has been grouped using GROUP BY, but only certain groups are of interest, the HAVING clause
can be used, much like a WHERE clause, to eliminate groups from the result. The syntax is:
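SELECT select_list FROM ... [WHERE ...] GROUP BY ... HAVING boolean_expression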
Expressions in the HAVING clause can refer both to grouped expressions and to ungrouped expressions
(which necessarily involve an aggregate function).
Example:
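A sketch, assuming products has a cost column and sales has date and units columns:

SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit
    FROM products p LEFT JOIN sales s USING (product_id)
    WHERE s.date > CURRENT_DATE - INTERVAL '4 weeks'
    GROUP BY product_id, p.name, p.price, p.cost
    HAVING sum(p.price * s.units) > 5000;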
In the example above, the WHERE clause is selecting rows by a column that is not grouped (the expression
is only true for sales during the last four weeks), while the HAVING clause restricts the output to groups
with total gross sales over 5000. Note that the aggregate expressions do not necessarily need to be the
same in all parts of the query.
If a query contains aggregate function calls, but no GROUP BY clause, grouping still occurs: the result is
a single group row (or perhaps no rows at all, if the single row is then eliminated by HAVING). The same
is true if it contains a HAVING clause, even without any aggregate function calls or GROUP BY clause.
7.2.4. GROUPING SETS, CUBE, and ROLLUP

More complex grouping operations than those described above are possible using the concept of grouping sets. The data selected by the FROM and WHERE clauses is grouped separately by each specified grouping set, aggregates are computed for each group just as for simple GROUP BY clauses, and then the results are returned.

Each sublist of GROUPING SETS may specify zero or more columns or expressions and is interpreted the same way as though it were directly in the GROUP BY clause. An empty grouping set means that all rows are aggregated down to a single group (which is output even if no input rows were present), as described above for the case of aggregate functions with no GROUP BY clause.
References to the grouping columns or expressions are replaced by null values in result rows for grouping
sets in which those columns do not appear. To distinguish which grouping a particular output row resulted
from, see Table 9.64.
A shorthand notation is provided for specifying two common types of grouping set. A clause of the form

ROLLUP ( e1, e2, e3, ... )

represents the given list of expressions and all prefixes of the list including the empty list; thus it is equivalent to
GROUPING SETS (
( e1, e2, e3, ... ),
...
( e1, e2 ),
( e1 ),
( )
)
This is commonly used for analysis over hierarchical data; e.g., total salary by department, division, and
company-wide total.
A clause of the form

CUBE ( e1, e2, ... )

represents the given list and all of its possible subsets (i.e., the power set). Thus
CUBE ( a, b, c )
is equivalent to
GROUPING SETS (
( a, b, c ),
( a, b ),
( a, c ),
( a ),
( b, c ),
( b ),
( c ),
( )
)
The individual elements of a CUBE or ROLLUP clause may be either individual expressions, or sublists
of elements in parentheses. In the latter case, the sublists are treated as single units for the purposes of
generating the individual grouping sets. For example:
CUBE ( (a, b), (c, d) )

is equivalent to
GROUPING SETS (
( a, b, c, d ),
( a, b ),
( c, d ),
( )
)
and

ROLLUP ( a, (b, c), d )

is equivalent to
GROUPING SETS (
( a, b, c, d ),
( a, b, c ),
( a ),
( )
)
The CUBE and ROLLUP constructs can be used either directly in the GROUP BY clause, or nested inside
a GROUPING SETS clause. If one GROUPING SETS clause is nested inside another, the effect is the
same as if all the elements of the inner clause had been written directly in the outer clause.
If multiple grouping items are specified in a single GROUP BY clause, then the final list of grouping sets is the cross product of the individual items.