01 SQL - V6
• To maintain databases.
• To carry out ETL operations through SQL Server Integration Services (SSIS).
Version       Year
6.0           1995
6.5           1996
7.0           1998
12 (2014)     2014
• Enterprise
• Standard
• Developer
• Web
• Express
https://www.tutorialspoint.com/ms_sql_server/ms_sql_server_installation.htm
Connecting to SQL Server using SSMS
SQL Server Management Studio (SSMS) is the client tool that can be used to write and
execute SQL queries. To connect to SQL Server using SQL Server Management Studio:
1. Click Start
2. Select All Programs
3. Select Microsoft SQL Server 2005, 2008, or 2008 R2 or 2012 (Depending on the version
installed)
4. Select SQL Server Management Studio
1. Select Database Engine as the Server Type. The other options that you will see here are
Analysis Services (SSAS), Reporting Services (SSRS) and Integration Services (SSIS).
Server type = Database Engine
2. Next you need to specify the Server Name. Here we can specify the name of the server or its
IP address. If you have SQL Server installed on your local machine, you can specify (local), just
. (period), or 127.0.0.1
Server name = (local)
3. Now select Authentication. The options available here depend on how you have installed
SQL Server. During installation, if you have chosen mixed mode authentication, you will have
both Windows Authentication and SQL Server Authentication. Otherwise, you will only be able to
connect using Windows Authentication.
4. If you have chosen Windows Authentication, you don't have to enter a user name and
password; otherwise enter the user name and password and click Connect.
You should now be connected to SQL Server. Now, click on New Query, in the top left hand
corner of SSMS. This should open a new query editor window, where we can type SQL queries
and execute them.
SSMS is a client tool and not the server by itself. Usually the database server (SQL Server) will be
on a dedicated machine, and developers connect to the server using SSMS from their
respective local (development) computers.
Developer machines 1, 2, 3 and 4 connect to the database server using SSMS.
SQL Commands
SQL commands are instructions, coded into SQL statements, which are used to
communicate with the database to perform specific tasks, work, functions and
queries with data.
SQL commands can be used not only for searching the database but also to
perform various other functions like, for example, you can create tables, add data to
tables, or modify data, drop the table, set permissions for users. SQL commands
are grouped into four major categories depending on their functionality:
• Data Definition Language (DDL) - These SQL commands are used for
creating, modifying, and dropping the structure of database objects. The
commands are CREATE, ALTER, DROP, RENAME, and TRUNCATE.
• Data Manipulation Language (DML) - These SQL commands are used for
storing, retrieving, modifying, and deleting data. These commands are SELECT, INSERT, UPDATE,
and DELETE.
• Transaction Control Language (TCL) - These SQL commands are used for
managing changes affecting the data. These commands are COMMIT,
ROLLBACK, and SAVEPOINT.
• Data Control Language (DCL) - These SQL commands are used for providing
security to database objects. These commands are GRANT and REVOKE.
Database
A database is a collection of objects such as tables, views, stored procedures, functions, triggers,
etc. In MS SQL Server, two types of databases are available.
• System databases
• User Databases
System Databases
System databases are created automatically when we install MS SQL Server. Following is
a list of system databases −
• Master
• Model
• MSDB
• Tempdb
User Databases
User databases are created by users (Administrators, developers, and testers who have
access to create databases).
Creating Backups
Backing up a database can be done using the following methods.
Full Type
Backup database <Your database name> to
disk = '<Backup file location + file name>'
Differential Type
Backup database <Your database name> to
disk = '<Backup file location + file name>' with differential
Log Type
Backup log <Your database name> to disk = '<Backup file location + file name>'
Example :
The following command is used for full backup database called 'TestDB' to
the location 'D:\' with backup file name 'TestDB_Full.bak'
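Based on the syntax shown above, the command would be along these lines:
Backup database TestDB to disk = 'D:\TestDB_Full.bak'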
The following command is used for Log backup database called 'TestDB' to
the location 'D:\' with backup file name 'TestDB_log.trn'
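Again following the syntax shown above:
Backup log TestDB to disk = 'D:\TestDB_log.trn'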
Restore Database <Your database name> from disk = '<Backup file location + file name>'
Example :
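A restore of the full backup taken above might look like this (the file name is taken from the earlier example):
Restore Database TestDB from disk = 'D:\TestDB_Full.bak'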
You cannot drop a database, if it is currently in use. You get an error stating - Cannot drop
database "NewDatabaseName" because it is currently in use. So, if other users are
connected, you need to put the database in single user mode and then drop the database.
Alter Database DatabaseName Set SINGLE_USER With Rollback Immediate
The With Rollback Immediate option rolls back all incomplete transactions and closes the
connections to the database.
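Putting it together, a sketch of dropping a database that is currently in use (the database name is taken from the error message above):
Alter Database NewDatabaseName Set SINGLE_USER With Rollback Immediate
Drop Database NewDatabaseName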
SQL constraints are used to specify rules for the data in a table.
If there is any violation between the constraint and the data action, the action is aborted by
the constraint.
Constraints can be specified when the table is created (inside the CREATE TABLE
statement) or after the table is created (inside the ALTER TABLE statement).
The aim of this article is to create tblPerson and tblGender tables and establish primary key
and foreign key constraints. In SQL Server, tables can be created graphically using SQL
Server Management Studio (SSMS) or using a query.
To create tblPerson table, graphically, using SQL Server Management Studio
1. Right click on Tables folder in Object explorer window
2. Select New Table
3. Fill Column Name, Data Type and Allow Nulls, as shown below and save the table as
tblPerson.
The following statement creates the tblGender table, with ID and Gender columns. The ID column is
the primary key column. The primary key is used to uniquely identify each row in a table.
A primary key does not allow nulls.
Create Table tblGender
(ID int Not Null Primary Key,
Gender nvarchar(50))
Foreign keys are used to enforce database integrity. A foreign key in one table points to
a primary key in another table. The foreign key constraint prevents invalid data from being
inserted into the foreign key column. The values that you enter into the foreign key column
have to be one of the values contained in the table it points to.
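A sketch of the tblPerson table with such a foreign key constraint (the Email and GenderId columns are assumed from the insert statements used later in this section):
Create Table tblPerson
(ID int Not Null Primary Key,
Name nvarchar(50),
Email nvarchar(50),
GenderId int Constraint FK_tblPerson_GenderId Foreign Key References tblGender(ID))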
A column default can be specified using the Default constraint. The default constraint is used to
insert a default value into a column. The default value is used for all new records if no
other value is specified; if NULL is explicitly supplied, NULL is stored instead of the default.
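A sketch of adding such a default to the GenderId column (the constraint name is assumed; the default of 1 matches the example below):
ALTER TABLE tblPerson
ADD CONSTRAINT DF_tblPerson_GenderId
DEFAULT 1 FOR GenderId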
The insert statement below does not provide a value for GenderId column, so the default of 1
will be inserted for this record.
Insert into tblPerson(ID,Name,Email) values(5,'Sam','[email protected]')
On the other hand, the following insert statement will insert NULL, instead of using the default.
Insert into tblPerson(ID,Name,Email,GenderId) values (6,'Dan','[email protected]',NULL)
To drop a constraint
ALTER TABLE { TABLE_NAME }
DROP CONSTRAINT { CONSTRAINT_NAME }
A cascading referential integrity constraint allows you to define the actions Microsoft SQL Server
should take when a user attempts to delete or update a key to which an existing foreign key
points.
For example, consider the 2 tables shown below. If you delete the row with ID = 1 from the
tblGender table, then the row with ID = 3 in the tblPerson table becomes an orphan record.
You will not be able to tell the Gender for this row. So, a cascading referential integrity constraint
can be used to define the actions Microsoft SQL Server should take when this happens. By
default, we get an error and the DELETE or UPDATE statement is rolled back.
However, you have the following options when setting up Cascading referential integrity
constraint
1. No Action: This is the default behavior. No Action specifies that if an attempt is made to
delete or update a row with a key referenced by foreign keys in existing rows in other tables, an
error is raised and the DELETE or UPDATE is rolled back.
create table tblGender
(
Id int primary key,
Name varchar(50)
)

create table tblPerson
(
Id int primary key,
Name varchar(50),
GenderId int constraint FK_tblPerson_GenderId foreign key references tblGender(Id) on delete no action
)
2. Cascade: Specifies that if an attempt is made to delete or update a row with a key referenced
by foreign keys in existing rows in other tables, all rows containing those foreign keys are also
deleted or updated.
create table tblGender
(
Id int primary key,
Name varchar(50)
)

create table tblPerson
(
Id int primary key,
Name varchar(50),
GenderId int constraint FK_tblPerson_GenderId foreign key references tblGender(Id) on delete cascade
)
3. Set NULL: Specifies that if an attempt is made to delete or update a row with a key
referenced by foreign keys in existing rows in other tables, all rows containing those foreign
keys are set to NULL.
create table tblGender
(
Id int primary key,
Name varchar(50)
)

create table tblPerson
(
Id int primary key,
Name varchar(50),
GenderId int constraint FK_tblPerson_GenderId foreign key references tblGender(Id) on delete set NULL
)
4. Set Default: Specifies that if an attempt is made to delete or update a row with a key
referenced by foreign keys in existing rows in other tables, all rows containing those foreign
keys are set to default values.
create table tblGender
(
Id int primary key,
Name varchar(50)
)

create table tblPerson
(
Id int primary key,
Name varchar(50),
GenderId int constraint FK_tblPerson_GenderId foreign key references tblGender(Id) on delete set default
)
Note : If you have not already set a default value for the GenderId column, then on delete of the
referenced row in the primary key table, NULL will be set as the value in the foreign key table.
CHECK constraint is used to limit the range of the values, that can be entered for a
column.
Let's say, we have an integer AGE column, in a table. The AGE in general cannot be less
than ZERO and at the same time cannot be greater than 150. But, since AGE is an integer
column it can accept negative values and values much greater than 150.
So, to limit the values, that can be added, we can use CHECK constraint. In SQL Server,
CHECK constraint can be created graphically, or using a query.
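The general form follows the ALTER TABLE pattern used earlier in this section:
ALTER TABLE { TABLE_NAME }
ADD CONSTRAINT { CONSTRAINT_NAME } CHECK ( BOOLEAN_EXPRESSION )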
The following check constraint, limits the age between ZERO and 150.
ALTER TABLE tblPerson
ADD CONSTRAINT CK_tblPerson_Age CHECK (Age > 0 AND Age < 150)
If the BOOLEAN_EXPRESSION returns true, then the CHECK constraint allows the value,
otherwise it doesn't. Since, AGE is a nullable column, it's possible to pass null for this
column, when inserting a row. When you pass NULL for the AGE column, the boolean
expression evaluates to UNKNOWN, and allows the value.
In the following 2 insert statements, we only supply values for Name column and not for
PersonId column.
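The identity examples below assume a tblPerson table shaped along these lines (a sketch; only the PersonId identity column and the Name column are used here):
Create Table tblPerson
(PersonId int identity(1,1) primary key,
Name nvarchar(50))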
Insert into tblPerson values ('Sam')
Insert into tblPerson values ('Sara')
If you select all the rows from tblPerson table, you will see that, 'Sam' and 'Sara' rows have got
1 and 2 as PersonId.
Now, if I try to execute the following query, I get an error stating - An explicit value for the
identity column in table 'tblPerson' can only be specified when a column list is used and
IDENTITY_INSERT is ON.
Insert into tblPerson values (1,'Todd')
So if you mark a column as an Identity column, you don't have to explicitly supply a value for that
column when you insert a new row. The value is automatically calculated and provided by SQL
Server. So, to insert a row into the tblPerson table, just provide a value for the Name column.
Insert into tblPerson values ('Todd')
Delete the row, that you have just inserted and insert another row. You see that the value for
PersonId is 2. Now if you insert another row, PersonId is 3. A record with PersonId = 1, does not
exist, and I want to fill this gap. To do this, we should be able to explicitly supply the value for
identity column. To explicitly supply a value for identity column
1. First turn on identity insert - SET Identity_Insert tblPerson ON
2. In the insert query specify the column list
Insert into tblPerson(PersonId, Name) values(2, 'John')
As long as the Identity_Insert is turned on for a table, you need to explicitly provide the value for
that column. If you don't provide the value, you get an error - Explicit value must be specified for
identity column in table 'tblPerson1' either when IDENTITY_INSERT is set to ON or when a
replication user is inserting into a NOT FOR REPLICATION identity column.
After you have the gaps in the identity column filled, if you wish SQL Server to calculate the
value again, turn off Identity_Insert.
SET Identity_Insert tblPerson OFF
If you have deleted all the rows in a table, and you want to reset the identity column value, use
DBCC CHECKIDENT command. This command will reset PersonId identity column.
DBCC CHECKIDENT(tblPerson, RESEED, 0)
How to get the last generated identity column value in SQL Server
There are several ways in sql server, to retrieve the last identity value that is generated. The
most common way is to use SCOPE_IDENTITY() built in function.
create table table1
(
Id int identity(1,1) primary key,
Name varchar(50)
)

create table table2
(
Id int identity(1,1) primary key,
Name varchar(50)
)

insert into table1 values ('a')

create procedure USP_Test
AS
Begin
insert into table2 values ('a')
END

select SCOPE_IDENTITY()
select @@IDENTITY
select IDENT_CURRENT('table1')
select IDENT_CURRENT('table2')
SCOPE_IDENTITY() returns the last identity value that is created in the same session
(connection) and in the same scope (the same stored procedure, function, or trigger). Let's say
I have 2 tables, table1 and table2, and I have a stored procedure for the table2 table, which will
insert a record into table2. Now, when you insert a record into the table1
table, SCOPE_IDENTITY() returns the identity value that is generated in the table1 table, whereas
@@IDENTITY returns the value that is generated in the table2 table. So, @@IDENTITY returns
the last identity value that is created in the same session without any consideration to the
scope. IDENT_CURRENT('table1') returns the last identity value created for a specific table
across any session and any scope.
In brief:
SCOPE_IDENTITY() - returns the last identity value that is created in the same session and in
the same scope.
@@IDENTITY - returns the last identity value that is created in the same session and across
any scope.
IDENT_CURRENT('TableName') - returns the last identity value that is created for a specific
table across any session and any scope.
We use the UNIQUE constraint to enforce uniqueness of a column, i.e. the column shouldn't allow
any duplicate values. We can add a unique constraint thru the designer or using a query.
To add a unique constraint using SQL server management studio designer:
1. Right-click on the table and select Design
2. Right-click on the column, and select Indexes/Keys...
3. Click Add
4. For Columns, select the column name you want to be unique.
5. For Type, choose Unique Key.
6. Click Close, Save the table.
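To add the same constraint using a query (the constraint and column names here are assumed for illustration):
ALTER TABLE tblPerson
ADD CONSTRAINT UQ_tblPerson_Email UNIQUE (Email)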
Both primary key and unique key are used to enforce the uniqueness of a column. So,
when do you choose one over the other?
A table can have only one primary key. If you want to enforce uniqueness on 2 or more
columns, then we use unique key constraints.
What is the difference between Primary key constraint and Unique key constraint? This
question is asked very frequently in interviews.
1. A table can have only one primary key, but more than one unique key
2. Primary key does not allow nulls, where as unique key allows one null
Candidate Key
Candidate keys are those keys which are candidates for the primary key of a table. In simple words,
any key that fulfils all the requirements of a primary key, that is, it is not null and its values are
unique, is a candidate for the primary key; such a key is known as a candidate key. Every table
must have at least one candidate key, but at the same time can have several.
Alternate Key
If a table has more than one candidate key, then after choosing the primary key from those
candidate keys, the rest of the candidate keys are known as alternate keys of that table. Here is a
very simple example to understand the concept of an alternate key. Suppose we have a table
named Employee which has two columns, EmpID and EmpMail; both are not null and hold
unique values, so both columns are treated as candidate keys. Now, if we make EmpID the
primary key of that table, then EmpMail is known as an alternate key.
Database Normalization
Database normalization is the process of organizing data to minimize data redundancy (data
duplication), which in turn ensures data consistency.
Let's understand with an example, how redundant data can cause data inconsistency.
Consider the Employees table below. For every employee within the same department, we are
repeating all 3 department columns (DeptName, DeptHead and DeptLocation). For example, if
there are 50 thousand employees in the IT department, we would have unnecessarily repeated
the data of all 3 department columns 50 thousand times.
The obvious problem with redundant data is the disk space wastage.
Another common problem, is that data can become inconsistent. For example, let's say,
JOHN has resigned, and we have a new department head (STEVE) for IT department. At
present, there are 3 IT department rows in the table, and we need to update all of them. Let's
assume I updated only one row and forgot to update the other 2 rows, then obviously, the data
becomes inconsistent.
Another problem is that DML queries (Insert, Update and Delete) could become slow, as there
could be many records and columns to process.
So, to reduce the data redundancy, we can divide this large badly organised table into two
(Employees and Departments), as shown below. Now, we have reduced redundant department
data. So, if we have to update department head name, we only have one row to update, even if
there are 10 million employees in that department.
Database normalization is a step by step process. There are 6 normal forms, First Normal
form (1NF) thru Sixth Normal Form (6NF). Most databases are in third normal form (3NF). There
are certain rules, that each normal form should follow.
Now, let's explore the first normal form (1NF). A table is said to be in 1NF, if
1. The data in each column should be atomic. No multiple values, separated by comma.
2. The table does not contain any repeating column groups
3. Identify each record uniquely using primary key.
In the table below, the data in the Employee column is not atomic. It contains multiple employees
separated by commas. From the data you can see that in the IT department we have 3
employees - Sam, Mike and Shan. Now, let's say I want to change just SHAN's name. It is not
possible; we have to update the entire cell. Similarly, it is not possible to select or delete just
one employee, as the data in the cell is not atomic.
The 2nd rule of the first normal form is that the table should not contain any repeating
column groups. Consider the Employee table below. We have repeated the Employee column,
from Employee1 to Employee3. The problem with this design is that, if a department is going to
have more than 3 employees, then we have to change the table structure to add an Employee4
column. The Employee2 and Employee3 columns in the HR department are NULL, as there is only
one employee in this department. The disk space is simply wasted.
To eliminate the repeating column groups, we are dividing the table into 2. The repeating
Employee columns are moved into a separate table, with a foreign key pointing to the primary
key of the other table. We also, introduced primary key to uniquely identify each record.
Second Normal Form (2NF): A table is said to be in 2NF, if it meets all the conditions of 1NF and
all non-key columns are fully dependent on the primary key, i.e. redundant data is moved into
separate tables. The table below violates second normal form, as there is a lot of redundant data
in it. Let's say, in my organization there are 100,000 employees and only 2 departments (IT & HR).
Since we are storing the DeptName, DeptHead and DeptLocation columns in the same table,
all these columns are repeated 100,000 times, which results in unnecessary duplication of data.
So this table is clearly violating the rules of the second normal form, and the redundant
data can cause the following issues.
1. Disk space wastage
2. Data inconsistency
3. DML queries (Insert, Update, Delete) can become slow
Now, to put this table in the second normal form, we need to break the table into 2, and
move the redundant department data (DeptName, DeptHead and DeptLocation) into its own
table. To link the tables with each other, we use the DeptId foreign key. The tables below are in
2NF.
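A sketch of the 2NF design being described (the table and column names are assumed from the surrounding discussion):
Create Table Departments
(DeptId int primary key,
DeptName nvarchar(50),
DeptHead nvarchar(50),
DeptLocation nvarchar(50))

Create Table Employees
(EmpId int primary key,
EmpName nvarchar(50),
DeptId int foreign key references Departments(DeptId))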
Third Normal Form (3NF):
A table is said to be in 3NF, if the table
1. Meets all the conditions of 1NF and 2NF
2. Does not contain columns (attributes) that are not fully dependent upon the primary key
The table below violates third normal form, because the AnnualSalary column is not fully
dependent on the primary key EmpId. The AnnualSalary is also dependent on
the Salary column. In fact, to compute the AnnualSalary, we multiply the Salary by 12.
Since AnnualSalary is not fully dependent on the primary key, and it can be computed, we can
remove this column from the table, which then will adhere to 3NF.
Let's look at another example of a Third Normal Form violation. In the table
below, the DeptHead column is not fully dependent on the EmpId column. DeptHead is also
dependent on DeptName. So, this table is not in 3NF.
To put this table in 3NF, we break this down into 2, and then move all the columns that are
not fully dependent on the primary key to a separate table as shown below. This design is now
in 3NF.
Select statement
If you want to select all the columns, you can also use *. For better performance, use the
column list instead of *.
SELECT * FROM Table_Name
1. * retrieves all columns, even though some of the columns may not be required, which causes
performance degradation and is time consuming.
2. In future, if we want to give alias names to any columns while retrieving, it is difficult; in that
case we must specify the individual columns.
Note: Text values should be enclosed in single quotes; quotes are not required for numeric values.
Operator Description
= Equal
<> Not equal. Note: In some versions of SQL this operator may be written as !=
The OR operator displays a record if either the first condition OR the second
condition is true.
Syntax :
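Following the pattern of the other syntax examples in this section:
SELECT column_name(s)
FROM table_name
WHERE condition1 OR condition2;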
The LIKE operator is used in a WHERE clause to search for a specified pattern in a
column.
Syntax :
SELECT column_name(s)
FROM table_name
WHERE column_name LIKE pattern;
IN Operator :
The IN operator allows you to specify multiple values in a WHERE clause.
Syntax :
SELECT column_name(s)
FROM table_name
WHERE column_name IN (value1,value2,...);
SQL Aliases :
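SQL aliases give a table, or a column in a table, a temporary name for the duration of a query. A minimal sketch (the column and alias names are just illustrative):
SELECT Name AS EmployeeName
FROM tblEmployee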
Group by clause is used to group a selected set of rows into a set of summary rows by the
values of one or more columns or expressions. It is always used in conjunction with one or more
aggregate functions.
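The grouping examples below assume a tblEmployee table along the following lines (a sketch inferred from the insert statements and queries that follow):
Create Table tblEmployee
(ID int identity primary key,
Name nvarchar(50),
Gender nvarchar(50),
Salary int,
City nvarchar(50))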
INSERT INTO tblEmployee VALUES ('a', 'Male', 1000, 'Pune')
INSERT INTO tblEmployee VALUES ('b', 'FeMale', 2000, 'Mumbai')
INSERT INTO tblEmployee VALUES ('c', 'Male', 3000, 'Pune')
INSERT INTO tblEmployee VALUES ('d', 'Male', 4000, 'Mumbai')
INSERT INTO tblEmployee VALUES ('e', 'FeMale', 5000, 'Delhi')
INSERT INTO tblEmployee VALUES ('f', 'Male', 6000, 'Pune')
INSERT INTO tblEmployee VALUES ('g', 'FeMale', 7000, 'Pune')
Minimum_Salary
1000

SELECT MIN(salary) AS Minimum_Salary
FROM tblEmployee

SELECT City, MIN(salary) FROM tblEmployee
GROUP BY City
City     Maximum_Salary
Delhi    5000
Mumbai   4000
Pune     7000

SELECT City, MAX(salary) AS Maximum_Salary
FROM tblEmployee
GROUP BY City
I want an sql query, which gives total salaries paid by City. The output should be as shown
below.
City TotalSalary
Delhi 5000
Mumbai 6000
Pune 17000
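A query along the following lines produces that output:
Select City, SUM(Salary) as TotalSalary
from tblEmployee
group by City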
Note: If you omit, the group by clause and try to execute the query, you get an error -Column
'tblEmployee.City' is invalid in the select list because it is not contained in either an aggregate
function or the GROUP BY clause.
Now, I want an sql query, which gives total salaries by City, by gender. The output should be as
shown below.
Query for retrieving total salaries by city and by gender: It's possible to group by multiple
columns. In this query, we are grouping first by city and then by gender.
Select City, Gender, SUM(Salary) as TotalSalary
from tblEmployee
group by City, Gender
Now, I want an sql query, which gives total salaries and total number of employees by City, and
by gender. The output should be as shown below.
Query for retrieving total salaries and total number of employees by City, and by gender:
The only difference here is that, we are using Count() aggregate function.
Select City, Gender, SUM(Salary) as TotalSalary,
COUNT(ID) as TotalEmployees
from tblEmployee
group by City, Gender
Filtering Groups:
WHERE clause is used to filter rows before aggregation, where as HAVING clause is used to
filter groups after aggregations. The following 2 queries produce the same result.
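Filtering rows using the WHERE clause, before aggregation (mirroring the HAVING query below):
Select City, SUM(Salary) as TotalSalary
from tblEmployee
Where City = 'Pune'
group by City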
Filtering groups using the HAVING clause, after all aggregations take place:
Select City, SUM(Salary) as TotalSalary
from tblEmployee
group by City
Having City = 'Pune'
From a performance standpoint, you cannot say that one method is less efficient than the other.
Sql server optimizer analyzes each statement and selects an efficient way of executing it. As a
best practice, use the syntax that clearly describes the desired result. Try to eliminate rows that
you wouldn't need, as early as possible.
It is also possible to combine WHERE and HAVING
Select City, SUM(Salary) as TotalSalary
from tblEmployee
Where Gender = 'Male'
group by City
Having City = 'Pune'
Joins in SQL server are used to query (retrieve) data from 2 or more related tables. In general
tables are related to each other using foreign key constraints.
Now let's understand all the JOIN types, with examples and the differences between
them.
Tables used during session :
CREATE TABLE tblDepartmentHead
(
Id INT PRIMARY KEY IDENTITY,
Name VARCHAR(50),
Experience INT
)

INSERT INTO tblDepartmentHead VALUES ('Rick', 10)
INSERT INTO tblDepartmentHead VALUES ('Ron', 11)
INSERT INTO tblDepartmentHead VALUES ('Christie', 15)
INSERT INTO tblDepartmentHead VALUES ('Cindrella', 12)
INSERT INTO tblDepartmentHead VALUES ('Shhon', 9)

CREATE TABLE tblDepartment
(
Id INT PRIMARY KEY IDENTITY,
DepartmentName VARCHAR(50),
DepartmentLocation VARCHAR(100),
DepartmentHeadId INT CONSTRAINT FK_tblDepartment_DepartmentHeadId FOREIGN KEY REFERENCES tblDepartmentHead(Id)
)

INSERT INTO tblDepartment VALUES ('IT', 'London', 1)
INSERT INTO tblDepartment VALUES ('PAYROLL', 'Delhi', 2)
INSERT INTO tblDepartment VALUES ('HR', 'New York', 3)
INSERT INTO tblDepartment VALUES ('Other Department', 'Sydney', 4)

Create Table tblGender
(
ID int Primary Key IDENTITY,
Gender nvarchar(50)
)

INSERT INTO tblGender VALUES ('Male')
INSERT INTO tblGender VALUES ('Female')
INSERT INTO tblGender VALUES ('Unknown')

CREATE TABLE tblEmployee
(
ID INT PRIMARY KEY IDENTITY,
Name VARCHAR(50),
GenderId INT CONSTRAINT FK_tblEmployee_GenderId FOREIGN KEY REFERENCES tblGender(ID),
Salary INT,
DepartmentId INT CONSTRAINT FK_tblEmployee_DepartmentId FOREIGN KEY REFERENCES tblDepartment(Id)
)

INSERT INTO tblEmployee VALUES ('Tom', 1, 4000, 1)
INSERT INTO tblEmployee VALUES ('Pam', 2, 2000, 3)
INSERT INTO tblEmployee VALUES ('John', 1, 3500, 1)
INSERT INTO tblEmployee VALUES ('Sam', 1, 2000, 2)
INSERT INTO tblEmployee VALUES ('Todd', 1, 5000, 2)
INSERT INTO tblEmployee VALUES ('Ben', 1, 5200, 1)
INSERT INTO tblEmployee VALUES ('Sara', 2, 2000, 3)
INSERT INTO tblEmployee VALUES ('Valaria', 2, 3000, 1)
INSERT INTO tblEmployee VALUES ('James', 1, 3200, NULL)
INSERT INTO tblEmployee VALUES ('Russell', NULL, 4000, NULL)
CROSS JOIN
CROSS JOIN produces the cartesian product of the 2 tables involved in the join. For example,
in the Employees table we have 10 rows and in the Departments table we have 4 rows. So, a
cross join between these 2 tables produces 40 rows. A Cross Join shouldn't have an ON clause.
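A sketch of such a cross join against the tables created above (the column list is trimmed to columns that exist in those tables):
SELECT Name, Salary, DepartmentName
FROM tblEmployee
CROSS JOIN tblDepartment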
Note: JOIN or INNER JOIN means the same. It's always better to use INNER JOIN, as this
explicitly specifies your intention.
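The inner join being discussed would look along these lines (again, the column list is trimmed to the columns defined in the tables above):
SELECT tblEmployee.Name, Salary, DepartmentName
FROM tblEmployee
INNER JOIN tblDepartment
ON tblEmployee.DepartmentId = tblDepartment.Id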
If you look at the output, we got only 8 rows, but in the Employees table, we have 10 rows. We
didn't get JAMES and RUSSELL records. This is because the DEPARTMENTID, in Employees
table is NULL for these two employees and doesn't match with ID column in Departments table.
So, in summary, INNER JOIN, returns only the matching rows between both the tables. Non
matching rows are eliminated.
Note: You can use LEFT JOIN or LEFT OUTER JOIN. The OUTER keyword is optional.
LEFT JOIN, returns all the matching rows + non matching rows from the left table. In reality,
INNER JOIN and LEFT JOIN are extensively used.
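A sketch of the left join:
SELECT tblEmployee.Name, Salary, DepartmentName
FROM tblEmployee
LEFT JOIN tblDepartment
ON tblEmployee.DepartmentId = tblDepartment.Id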
Note: You can use RIGHT JOIN or RIGHT OUTER JOIN. The OUTER keyword is optional.
RIGHT JOIN, returns all the matching rows + non matching rows from the right table.
Note: You can use FULL JOIN or FULL OUTER JOIN. The OUTER keyword is optional.
FULL JOIN, returns all rows from both the left and right tables, including the non matching rows.
Joins Summary
Advanced Joins
How to retrieve only the non matching rows from the left table. The output should be as
shown below:
Query:
SELECT Name, Gender, Salary, DepartmentName
FROM tblEmployee E
LEFT JOIN tblDepartment D
ON E.DepartmentId = D.Id
WHERE D.Id IS NULL
How to retrieve only the non matching rows from the right table
Query:
SELECT Name, Gender, Salary, DepartmentName
FROM tblEmployee E
RIGHT JOIN tblDepartment D
ON E.DepartmentId = D.Id
WHERE E.DepartmentId IS NULL
How to retrieve only the non matching rows from both the left and right table. Matching
rows should be eliminated.
Query:
SELECT Name, Gender, Salary, DepartmentName
FROM tblEmployee E
FULL JOIN tblDepartment D
ON E.DepartmentId = D.Id
WHERE E.DepartmentId IS NULL
OR D.Id IS NULL
CREATE TABLE tblEmployees
(
EmployeeId INT PRIMARY KEY IDENTITY,
Name VARCHAR(50),
ManagerId INT
)

INSERT INTO tblEmployees VALUES ('Mike', 3)
INSERT INTO tblEmployees VALUES ('Rob', 1)
INSERT INTO tblEmployees VALUES ('Todd', NULL)
INSERT INTO tblEmployees VALUES ('Ben', 1)
INSERT INTO tblEmployees VALUES ('Sam', 1)
We have learnt writing a LEFT OUTER SELF JOIN query, which produced the following output.
In the output, the MANAGER column for Todd's row is NULL. I want to replace the NULL value
with 'No Manager'.
Replacing NULL value using ISNULL() function: We are passing 2 parameters to IsNULL()
function. If M.Name returns NULL, then 'No Manager' string is used as the replacement value.
SELECT E.Name as Employee, ISNULL(M.Name,'No Manager') as Manager
FROM tblEmployees E
LEFT JOIN tblEmployees M
ON E.ManagerID = M.EmployeeID
Replacing NULL value using COALESCE() function: COALESCE() function, returns the first
NON NULL value.
SELECT E.Name as Employee, COALESCE(M.Name, 'No Manager') as Manager
FROM tblEmployees E
LEFT JOIN tblEmployees M
ON E.ManagerID = M.EmployeeID
According to MSDN Books Online, COALESCE() returns the first non-NULL value. Let's
understand this with an example.
Consider the tblCustomer table below. Not all customers have their First, Middle and Last
Names filled. Some of them have the first name missing, some of them have the middle name
missing, and some of them the last name.
CREATE TABLE tblCustomer
(
Id INT PRIMARY KEY IDENTITY,
FirstName VARCHAR(50),
MiddleName VARCHAR(50),
LastName VARCHAR(50)
)

INSERT INTO tblCustomer VALUES ('Sam', NULL, NULL)
INSERT INTO tblCustomer VALUES (NULL, 'Todd', 'Tanzan')
INSERT INTO tblCustomer VALUES (NULL, NULL, 'Sara')
INSERT INTO tblCustomer VALUES ('Ben', 'Parker', NULL)
INSERT INTO tblCustomer VALUES ('James', 'Nick', 'Nancy')
Now, let's write a query that returns the name of each customer. If a row has all the
columns filled - First, Middle and Last Names - then we only want the first name.
If the FirstName is NULL, and the Middle and Last Names are filled, then we want
the middle name. For example, the row with Id = 1 has the FirstName filled, so we want
to retrieve "Sam". The row with Id = 2 has the Middle and Last names
filled, but the First name is missing; here we want to retrieve the middle name "Todd". In short,
we want the first non-NULL of the FirstName, MiddleName and LastName columns, which is
exactly what COALESCE() gives us.
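A sketch of such a query:
Select Id, COALESCE(FirstName, MiddleName, LastName) as Name
from tblCustomer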
UNION and UNION ALL operators in SQL Server are used to combine the result-sets of two or
more SELECT queries. Please consider the India and UK customer tables below.
CREATE TABLE tblIndiaCustomers
(
Id INT PRIMARY KEY IDENTITY,
Name VARCHAR(50),
Email VARCHAR(100)
)

INSERT INTO tblIndiaCustomers VALUES ('Raj', '[email protected]')
INSERT INTO tblIndiaCustomers VALUES ('Sam', '[email protected]')

CREATE TABLE tblUKCustomers
(
Id INT PRIMARY KEY IDENTITY,
Name VARCHAR(50),
Email VARCHAR(100)
)

INSERT INTO tblUKCustomers VALUES ('Ben', '[email protected]')
INSERT INTO tblUKCustomers VALUES ('Sam', '[email protected]')
Combining the rows of tblIndiaCustomers and tblUKCustomers using UNION ALL
Select Id, Name, Email from tblIndiaCustomers
UNION ALL
Select Id, Name, Email from tblUKCustomers
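To remove duplicate rows, use UNION instead of UNION ALL. UNION has to perform a distinct sort, so it is slightly more expensive than UNION ALL:
Select Id, Name, Email from tblIndiaCustomers
UNION
Select Id, Name, Email from tblUKCustomers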
Query results of UNION
Note: If you want to see the cost of DISTINCT SORT, you can turn on the estimated query
execution plan using CTRL + L.
Note: For UNION and UNION ALL to work, the Number, Data types, and the order of the
columns in the select statements should be same.
If you want to sort, the results of UNION or UNION ALL, the ORDER BY clause should be
used on the last SELECT statement as shown below.
Select Id, Name, Email from tblIndiaCustomers
UNION ALL
Select Id, Name, Email from tblUKCustomers
UNION ALL
Select Id, Name, Email from tblUSCustomers
Order by Name
JOINS and UNIONS are different things. UNION combines the result-set of two or more
select queries into a single result-set which includes all the rows from all the queries in the
union, where as JOINS, retrieve data from two or more tables based on logical relationships
between the tables. In short, UNION combines rows from 2 or more tables, whereas JOINS
combine columns from 2 or more tables.
Subqueries in SQL
A subquery is simply a select statement that returns a single value or a set of values and can be
nested inside a SELECT, UPDATE, INSERT, or DELETE statement.
Subqueries are always enclosed in parentheses and are also called inner queries, and the
query containing the subquery is called the outer query.
The columns from a table that is present only inside a subquery cannot be used in the SELECT
list of the outer query.
Let us understand subqueries with an example. Please create the required tables and insert
sample data using the script below.
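The create scripts for the two tables are not included here; a minimal sketch consistent with the inserts and queries in this section would be:
Create Table tblProducts
(Id int identity primary key,
[Name] nvarchar(50),
[Description] nvarchar(250))

Create Table tblProductSales
(Id int identity primary key,
ProductId int foreign key references tblProducts(Id),
QuantitySold int)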
Insert into tblProducts values ('TV', '52 inch black color LCD TV')
Insert into tblProducts values ('Laptop', 'Very thin black color acer laptop')
Insert into tblProducts values ('Desktop', 'HP high performance desktop')
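The where-clause subquery being discussed (it is shown again later in this section) retrieves the products that have not been sold:
Select [Id], [Name], [Description]
from tblProducts
where Id not in (Select Distinct ProductId from tblProductSales)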
Most of the time, subqueries can be very easily replaced with joins. The above query is
rewritten below using joins and produces the same results.
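A sketch of the join version (a LEFT JOIN with an IS NULL check returns the same products that have no sales):
Select tblProducts.[Id], [Name], [Description]
from tblProducts
left join tblProductSales
on tblProducts.Id = tblProductSales.ProductId
where tblProductSales.ProductId IS NULL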
In this example, we have seen how to use a subquery in the where clause.
Let us now discuss about using a sub query in the SELECT clause.
Write a query to retrieve the NAME and TOTALQUANTITY sold, using a subquery.
Select [Name],
(Select SUM(QuantitySold) from tblProductSales where ProductId =
tblProducts.Id) as TotalQuantity
from tblProducts
order by Name
In the example below, the subquery is executed first and only once. The subquery results are
then used by the outer query. A non-correlated subquery can be executed independently of the
outer query.
Select [Id], [Name], [Description]
from tblProducts
where Id not in (Select Distinct ProductId from tblProductSales)
If the subquery depends on the outer query for its values, then that subquery is called a
correlated subquery. In the where clause of the subquery below, the "ProductId" column gets its
value from the tblProducts table that is present in the outer query. So, here the subquery is
dependent on the outer query for its value, hence this subquery is a correlated subquery.
Correlated subqueries get executed once for every row that is selected by the outer query.
A correlated subquery cannot be executed independently of the outer query.
Select [Name],
(Select SUM(QuantitySold) from tblProductSales where ProductId =
tblProducts.Id) as TotalQuantity
from tblProducts
order by Name
Stored procedures
A stored procedure is a group of T-SQL (Transact SQL) statements. If you have a situation where
you write the same query over and over again, you can save that specific query as a stored
procedure and call it just by its name.
There are several advantages of using stored procedures. We will learn how to create, execute,
change and delete stored procedures.
Creating a simple stored procedure without any parameters: This stored procedure
retrieves the Name and Gender of all the employees. To create a stored procedure we
use the CREATE PROCEDURE or CREATE PROC statement.
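A sketch of such a procedure (the procedure name matches the execution examples below; the column list follows the ALTER PROCEDURE shown later in this section):
Create Procedure spGetEmployees
as
Begin
Select Name, Gender from tblEmployee
End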
Note: When naming user defined stored procedures, Microsoft recommends not to use "sp_" as
a prefix. All system stored procedures are prefixed with "sp_". This avoids any ambiguity
between user defined and system stored procedures and any conflicts with some future system
procedure.
To execute the stored procedure, you can just type the procedure name and press F5, or use
EXEC or EXECUTE keywords followed by the procedure name as shown below.
1. spGetEmployees
2. EXEC spGetEmployees
3. Execute spGetEmployees
Note: You can also right click on the procedure name, in object explorer in SQL Server
Management Studio and select EXECUTE STORED PROCEDURE.
Creating a stored procedure with input parameters: This SP, accepts GENDER and
DEPARTMENTID parameters. Parameters and variables have an @ prefix in their name.
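A sketch of the procedure being described (the body matches the ALTER PROCEDURE shown later in this section):
Create Procedure spGetEmployeesByGenderAndDepartment
@Gender nvarchar(50),
@DepartmentId int
as
Begin
Select Name, Gender from tblEmployee Where Gender = @Gender and DepartmentId = @DepartmentId
End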
To invoke this procedure, we need to pass the value for @Gender and @DepartmentId
parameters. If you don't specify the name of the parameters, you have to first pass value for
@Gender parameter and then for @DepartmentId.
EXECUTE spGetEmployeesByGenderAndDepartment 'Male', 1
On the other hand, if you change the order, you will get an error stating "Error converting data
type varchar to int." This is because, the value of "Male" is passed into @DepartmentId
parameter. Since @DepartmentId is an integer, we get the type conversion error.
spGetEmployeesByGenderAndDepartment 1, 'Male'
When you specify the names of the parameters when executing the stored procedure the order
doesn't matter.
EXECUTE spGetEmployeesByGenderAndDepartment @DepartmentId=1, @Gender ='Male'
To encrypt the text of the SP, use WITH ENCRYPTION option. Once, encrypted, you cannot
view the text of the procedure, using sp_helptext system stored procedure.
Alter Procedure spGetEmployeesByGenderAndDepartment
@Gender nvarchar(50),
@DepartmentId int
WITH ENCRYPTION
as
Begin
Select Name, Gender from tblEmployee Where Gender = @Gender and DepartmentId =
@DepartmentId
End
To delete the SP, use DROP PROC 'SPName' or DROP PROCEDURE 'SPName'
Stored procedures with output parameters
1. First initialize a variable of the same datatype as that of the output parameter. We have
declared @EmployeeTotal integer variable.
2. Then pass the @EmployeeTotal variable to the SP. You have to specify
the OUTPUTkeyword. If you don't specify the OUTPUT keyword, the variable will be NULL.
3. Execute
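Putting the 3 steps together (a sketch, using the spGetTotalCountOfEmployees1 procedure defined later in this section):
Declare @EmployeeTotal int
Execute spGetTotalCountOfEmployees1 @EmployeeTotal OUTPUT
Print 'Total Employees = ' + CAST(@EmployeeTotal AS nvarchar(10))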
If you don't specify the OUTPUT keyword, when executing the stored procedure, the
@EmployeeTotal variable will be NULL. Here, we have not specified OUTPUT keyword. When
you execute, you will see '@EmployeeTotal is null' printed.
You can pass parameters in any order, when you use the parameter names. Here, we are
first passing the OUTPUT parameter and then the input @Gender parameter.
The following system stored procedures are extremely useful when working with procedures.
sp_help SP_Name : View the information about the stored procedure, like parameter names,
their datatypes etc. sp_help can be used with any database object, like tables, views, SP's,
triggers etc. Alternatively, you can also press ALT+F1, when the name of the object is
highlighted.
sp_depends SP_Name : View the dependencies of the stored procedure. This system SP is
very useful, especially if you want to check, if there are any stored procedures that are
referencing a table that you are about to drop. sp_depends can also be used with other
database objects like table etc.
Note: All parameter and variable names in SQL Server need to have the @ symbol.
We will:
1. Understand what stored procedure return values are
2. The difference between stored procedure return values and output parameters
3. When to use output parameters over return values
So, from this we understood that, when a stored procedure is executed, it returns an integer
status variable. With this in mind, let's understand the difference between output parameters
and RETURN values. We will use the Employees table below for this purpose.
The following procedure returns total number of employees in the Employees table,
using output parameter - @TotalCount.
Create Procedure spGetTotalCountOfEmployees1
@TotalCount int output
as
Begin
Select @TotalCount = COUNT(ID) from tblEmployee
End
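The return-value version of the same procedure could look like this (the procedure name is assumed, following the numbering of the output-parameter version above):
Create Procedure spGetTotalCountOfEmployees2
as
Begin
Return (Select COUNT(ID) from tblEmployee)
End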
So, we are able to achieve what we want, using output parameters as well as return values.
Now, let's look at example, where return status variables cannot be used, but Output
parameters can be used.
In this SP, we are retrieving the Name of the employee, based on their Id, using the
output parameter @Name.
Create Procedure spGetNameById1
@Id int,
@Name nvarchar(20) Output
as
Begin
Select @Name = Name from tblEmployee Where Id = @Id
End
Now let's try to achieve the same thing, using return status variables.
Create Procedure spGetNameById2
@Id int
as
Begin
Return (Select Name from tblEmployee Where Id = @Id)
End
Executing spGetNameById2 returns an error stating 'Conversion failed when converting the
nvarchar value 'Sam' to data type int.'. The return status variable is an integer, and hence, when
we select Name of an employee and try to return that we get a conversion error.
So, using return values, we can only return integers, and that too, only one integer. It is not
possible to return more than one value using return values, whereas output parameters can
return any datatype and an SP can have more than one output parameter. Always prefer
using output parameters over RETURN values.
In general, RETURN values are used to indicate the success or failure of a stored procedure,
especially when we are dealing with nested stored procedures. A return value of 0 indicates
success, and any nonzero value indicates failure.
Parameters of a sql server stored procedure can be made optional by specifying default
values.
Name, Email, Age and Gender parameters of spSearchEmployees stored procedure are
optional. Notice that, we have set defaults for all the parameters, and in the "WHERE" clause
we are checking if the respective parameter IS NULL.
Create Proc spSearchEmployees
@Name nvarchar(50) = NULL,
@Email nvarchar(50) = NULL,
@Age int = NULL,
@Gender nvarchar(50) = NULL
as
Begin
Select * from tblEmployee where
(Name = @Name OR @Name IS NULL) AND
(Email = @Email OR @Email IS NULL) AND
(Age = @Age OR @Age IS NULL) AND
(Gender = @Gender OR @Gender IS NULL)
End
The following are the advantages of using stored procedures over adhoc queries (inline SQL):
1. Execution plan retention and reusability - Stored procedures are compiled and their
execution plan is cached and reused when the same SP is executed again. Although
adhoc queries also create and reuse plans, the plan is reused only when the query is a textual
match and the datatypes match the previous call. Any change in a datatype, or even an extra
space in the query, causes a new plan to be created.
2. Reduces network traffic - You only need to send, EXECUTE SP_Name statement, over the
network, instead of the entire batch of adhoc SQL code.
3. Code reusability and better maintainability - A stored procedure can be reused with
multiple applications. If the logic has to change, we only have one place to change, where as if it
is inline sql, and if you have to use it in multiple applications, we end up with multiple copies of
this inline sql. If the logic has to change, we have to change at all the places, which makes it
harder maintaining inline sql.
4. Better Security - A database user can be granted access to an SP and prevent them from
executing direct "select" statements against a table. This is fine grain access control which will
help control what data a user has access to.
There are several built-in functions. We will look at the most common string functions available.
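For example, the following loop prints the upper case alphabets A to Z using the CHAR() function (65 to 90 are the ASCII codes for A to Z); the lower case version is shown a little further below:
Declare @Number int
Set @Number = 65
While(@Number <= 90)
Begin
Print CHAR(@Number)
Set @Number = @Number + 1
End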
Note: The while loop will become an infinite loop, if you forget to include the following line.
Set @Number = @Number + 1
Another way of printing lower case alphabets using CHAR() and LOWER() functions.
Declare @Number int
Set @Number = 65
While(@Number <= 90)
Begin
Print LOWER(CHAR(@Number))
Set @Number = @Number + 1
End
Example: Removing the 3 white spaces on the left hand side of the ' Hello' string using
LTRIM() function.
Select LTRIM(' Hello')
Output: Hello
RTRIM(Character_Expression) - Removes blanks on the right hand side of the given character
expression.
Example: Removing the 3 white spaces on the right hand side of the 'Hello ' string using the
RTRIM() function.
Select RTRIM('Hello ')
Output: Hello
Example: To remove white spaces on either sides of the given character expression, use
LTRIM() and RTRIM() as shown below.
Select LTRIM(RTRIM(' Hello '))
Output: Hello
LEN(String_Expression) - Returns the count of total characters, in the given string expression,
excluding the blanks at the end of the expression.
Example: In this example, we get the starting position of '@' character in the email string
'[email protected]'.
Select CHARINDEX('@','[email protected]',1)
Output: 5
Example: Display just the domain part of the given email '[email protected]'.
Select SUBSTRING('[email protected]',6, 7)
Output: bbb.com
In the above example, we have hardcoded the starting position and the length parameters.
Instead of hardcoding we can dynamically retrieve them using CHARINDEX() and LEN()
string functions as shown below.
Example:
Select SUBSTRING('[email protected]',(CHARINDEX('@', '[email protected]') + 1),
(LEN('[email protected]') - CHARINDEX('@','[email protected]')))
Output: bbb.com
Real time example, where we can use LEN(), CHARINDEX() and SUBSTRING() functions.
Let us assume we have table as shown below.
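The create script for tblEmployee1 is not shown; a minimal sketch consistent with the inserts and the query below would be:
Create Table tblEmployee1
(Id int identity primary key,
Name nvarchar(50),
Email nvarchar(50))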
INSERT INTO tblEmployee1 VALUES ('Tom', '[email protected]')
INSERT INTO tblEmployee1 VALUES ('Pam', '[email protected]')
INSERT INTO tblEmployee1 VALUES ('John', '[email protected]')
INSERT INTO tblEmployee1 VALUES ('Sam', '[email protected]')
INSERT INTO tblEmployee1 VALUES ('Todd', '[email protected]')
INSERT INTO tblEmployee1 VALUES ('Ben', '[email protected]')
INSERT INTO tblEmployee1 VALUES ('Sara', '[email protected]')
INSERT INTO tblEmployee1 VALUES ('Valaria', '[email protected]')
INSERT INTO tblEmployee1 VALUES ('James', '[email protected]')
INSERT INTO tblEmployee1 VALUES ('Russell', '[email protected]')
Write a query to find out total number of emails, by domain. The result of the query should
be as shown below.
Query
Select SUBSTRING(Email, CHARINDEX('@', Email) + 1,
LEN(Email) - CHARINDEX('@', Email)) as EmailDomain,
COUNT(Email) as Total
from tblEmployee1
Group By SUBSTRING(Email, CHARINDEX('@', Email) + 1,
LEN(Email) - CHARINDEX('@', Email))
Let's mask the email with 5 * (star) symbols. The output should be as shown below.
Query:
Select FirstName, LastName, SUBSTRING(Email, 1, 2) + REPLICATE('*',5) +
SUBSTRING(Email, CHARINDEX('@',Email), LEN(Email) -
CHARINDEX('@',Email)+1) as Email
from tblEmployee
Example: The SPACE(5) function, inserts 5 spaces between FirstName and LastName
Select FirstName + SPACE(5) + LastName as FullName
From tblEmployee
Output:
PATINDEX('%Pattern%', Expression)
Returns the starting position of the first occurrence of a pattern in a specified expression. It
takes two arguments, the pattern to be searched and the expression. PATINDEX() is similar to
CHARINDEX(). With CHARINDEX() we cannot use wildcards, where as PATINDEX() provides
this capability. If the specified pattern is not found, PATINDEX() returns ZERO.
Example:
Select Email, PATINDEX('%@aaa.com', Email) as FirstOccurence
from tblEmployee
Where PATINDEX('%@aaa.com', Email) > 0
Output:
STUFF(Original_Expression, Start, Length, Replacement_Expression) - Inserts the replacement
expression at the start position specified, and removes the number of characters specified by the
Length parameter.
Example:
Select FirstName, LastName,Email, STUFF(Email, 2, 3, '*****') as StuffedEmail
From tblEmployee
Output:
DateTime functions in SQL Server
There are several built-in DateTime functions available in SQL Server. All the following
functions can be used to get the current system date and time, where you have sql server
installed.
Function               Date Time Format                      Description
GETDATE()              2012-08-31 20:15:04.543               Commonly used function
CURRENT_TIMESTAMP      2012-08-31 20:15:04.543               ANSI SQL equivalent to GETDATE
SYSDATETIME()          2012-08-31 20:15:04.5380028           More fractional seconds precision
SYSDATETIMEOFFSET()    2012-08-31 20:15:04.5380028 +01:00    More fractional seconds precision + Time zone offset
GETUTCDATE()           2012-08-31 19:15:04.543               UTC Date and Time
SYSUTCDATETIME()       2012-08-31 19:15:04.5380028           UTC Date and Time, with more fractional seconds precision
Note: UTC stands for Coordinated Universal Time, based on which, the world regulates
clocks and time. There are slight differences between GMT and UTC, but for most common
purposes, UTC is synonymous with GMT.
To practically understand how the different date time datatypes available in SQL Server,
store data, create the sample table tblDateTime.
CREATE TABLE [tblDateTime]
(
[c_time] [time](7) NULL,
[c_date] [date] NULL,
[c_smalldatetime] [smalldatetime] NULL,
[c_datetime] [datetime] NULL,
[c_datetime2] [datetime2](7) NULL,
[c_datetimeoffset] [datetimeoffset](7) NULL
)
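For example, you could first insert the current date and time into every column (a sketch):
Insert into tblDateTime values (GETDATE(), GETDATE(), GETDATE(), GETDATE(), GETDATE(), GETDATE())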
Now, issue a select statement, and you should see, the different types of datetime
datatypes, storing the current datetime, in different formats.
IsDate, Day, Month, Year and DateName DateTime functions in SQL
Server
ISDATE() - Checks if the given value, is a valid date, time, or datetime. Returns 1 for success, 0
for failure.
Examples:
Select ISDATE('sql') -- returns 0
Select ISDATE(Getdate()) -- returns 1
Select ISDATE('2012-08-31 21:02:04.167') -- returns 1
Example:
Select ISDATE('2012-09-01 11:34:21.1918447') -- returns 0
Day() - Returns the 'Day number of the Month' of the given date
Examples:
Select DAY(GETDATE()) -- Returns the day number of the month, based on current system
datetime.
Select DAY('01/31/2012') -- Returns 31
Month() - Returns the 'Month number of the year' of the given date
Examples:
Select Month(GETDATE()) -- Returns the Month number of the year, based on the current
system date and time
Select Month('01/31/2012') -- Returns 1
Year() - Returns the 'Year number' of the given date
Examples:
Select Year(GETDATE()) -- Returns the year number, based on the current system date
Select Year('01/31/2012') -- Returns 2012
DateName(DatePart, Date) - Returns a string that represents a part of the given date. This
function takes 2 parameters. The first parameter 'DatePart' specifies the part of the date we
want. The second parameter is the actual date, from which we want the part of the date.
A simple practical example using some of these DateTime functions. Consider the table
tblEmployees.
Query:
Select Name, DateOfBirth, DateName(WEEKDAY,DateOfBirth) as [Day],
Month(DateOfBirth) as MonthNumber,
DateName(MONTH, DateOfBirth) as [MonthName],
Year(DateOfBirth) as [Year]
From tblEmployees
DATEADD (datepart, NumberToAdd, date) - Returns the DateTime, after adding specified
NumberToAdd, to the datepart specified of the given date.
Examples:
Select DateAdd(DAY, 20, '2012-08-30 19:45:31.793')
-- Returns 2012-09-19 19:45:31.793
Select DateAdd(DAY, -20, '2012-08-30 19:45:31.793')
-- Returns 2012-08-10 19:45:31.793
DATEDIFF(datepart, startdate, enddate) - Returns the count of the specified datepart boundaries
crossed between the startdate and the enddate.
Examples:
Select DATEDIFF(MONTH, '11/30/2005','01/31/2006') -- returns 2
Select DATEDIFF(DAY, '11/30/2005','01/31/2006') -- returns 62
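The definition of the fnComputeAge function used below is not included in this document; a simplified sketch of such a scalar function, returning the age in whole years, could look like this:
Create function fnComputeAge(@DateOfBirth date)
returns int
as
Begin
Return DATEDIFF(YEAR, @DateOfBirth, GETDATE()) -
       CASE WHEN MONTH(@DateOfBirth) > MONTH(GETDATE())
              OR (MONTH(@DateOfBirth) = MONTH(GETDATE()) AND DAY(@DateOfBirth) > DAY(GETDATE()))
            THEN 1 ELSE 0 END
End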
Using the function in a query to get the expected output along with the age of the
person.
Select Id, Name, DateOfBirth, dbo.fnComputeAge(DateOfBirth) as Age from tblEmployees
Cast and Convert functions in SQL Server
To convert one data type to another, CAST and CONVERT functions can be used.
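Their general forms are:
CAST ( expression AS data_type [ ( length ) ] )
CONVERT ( data_type [ ( length ) ], expression [ , style ] )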
From the syntax, it is clear that CONVERT() function has an optional style parameter, where
as CAST() function lacks this capability.
The following 2 queries convert, DateOfBirth's DateTime datatype to NVARCHAR. The first
query uses the CAST() function, and the second one uses CONVERT() function. The output is
exactly the same for both the queries as shown below.
Select Id, Name, DateOfBirth, CAST(DateofBirth as nvarchar) as ConvertedDOB
from tblEmployees
Select Id, Name, DateOfBirth, Convert(nvarchar, DateOfBirth) as ConvertedDOB
from tblEmployees
Output:
Now, let's use the style parameter of the CONVERT() function, to format the Date as we would
like it. In the query below, we are using 103 as the argument for style parameter, which formats
the date as dd/mm/yyyy.
Select Id, Name, DateOfBirth, Convert(nvarchar, DateOfBirth, 103) as ConvertedDOB
from tblEmployees
Output:
For complete list of all the Date and Time Styles, please check MSDN.
In SQL Server 2008, Date datatype is introduced, so you can also use
SELECT CAST(GETDATE() as DATE)
SELECT CONVERT(DATE, GETDATE())
Note: To control the formatting of the Date part, DateTime has to be converted to NVARCHAR
using the styles provided. When converting to DATE data type, the CONVERT() function will
ignore the style parameter.
In this query, we are using CAST() function, to convert Id (int) to nvarchar, so it can be
appended with the NAME column. If you remove the CAST() function, you will get an error
stating - 'Conversion failed when converting the nvarchar value 'Sam - ' to data type int.'
Select Id, Name, Name + ' - ' + CAST(Id AS NVARCHAR) AS [Name-Id]
FROM tblEmployees
Now let's look at a practical example of using CAST function. Consider the registrations table
below.
Query:
Select CAST(RegisteredDate as DATE) as RegistrationDate,
COUNT(Id) as TotalRegistrations
From tblRegistrations
Group By CAST(RegisteredDate as DATE)
The general guideline is to use CAST(), unless you want to take advantage of the style
functionality in CONVERT().
We will now look at the commonly used mathematical functions in SQL Server: ABS, CEILING,
FLOOR, POWER, RAND, SQUARE, SQRT, and ROUND.
ABS ( numeric_expression ) - ABS stands for absolute and returns, the absolute (positive)
number.
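For example:
Select ABS(-101.5) -- Returns 101.5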
CEILING(numeric_expression) and FLOOR(numeric_expression) - CEILING returns the smallest
integer value greater than or equal to the parameter, whereas FLOOR returns the largest integer
less than or equal to the parameter.
Examples:
Select CEILING(15.2) -- Returns 16
Select CEILING(-15.2) -- Returns -15
Power(expression, power) - Returns the power value of the specified expression to the
specified power.
Example: The following example calculates '2 TO THE POWER OF 3' = 2*2*2 = 8
Select POWER(2,3) -- Returns 8
RAND([Seed_Value]) - Returns a random float number between 0 and 1. Rand() function takes
an optional seed parameter. When seed value is supplied the RAND() function always returns
the same value for the same seed.
Example:
Select RAND(1) -- Always returns the same value
If you want to generate a random number between 1 and 100, RAND() and FLOOR()
functions can be used as shown below. Every time, you execute this query, you get a random
number between 1 and 100.
Select FLOOR(RAND() * 100)
SQUARE ( Number ) - Returns the square of the given number.
Example:
Select SQUARE(9) -- Returns 81
SQRT ( Number ) - SQRT stands for Square Root. This function returns the square root of the
given value.
Example:
Select SQRT(81) -- Returns 9
ROUND ( numeric_expression , length [ ,function ] ) - Rounds the given numeric expression
based on the given length. This function takes 3 parameters.
1. Numeric_Expression is the number that we want to round.
2. Length parameter, specifies the number of the digits that we want to round to. If the length is
a positive number, then the rounding is applied for the decimal part, where as if the length is
negative, then the rounding is applied to the number before the decimal.
3. The optional function parameter, is used to indicate rounding or truncation operations. A
value of 0, indicates rounding, where as a value of non zero indicates truncation. Default, if not
specified is 0.
Examples:
-- Round to 2 places after (to the right) the decimal point
Select ROUND(850.556, 2) -- Returns 850.560
-- Truncate anything after 2 places, after (to the right) the decimal point
Select ROUND(850.556, 2, 1) -- Returns 850.550
-- Truncate anything after 1 place, after (to the right) the decimal point
Select ROUND(850.556, 1, 1) -- Returns 850.500
-- Round the last 2 places before (to the left) the decimal point
Select ROUND(850.556, -2) -- 900.000
-- Round the last 1 place before (to the left) the decimal point
Select ROUND(850.556, -1) -- 850.000
EOMONTH ( start_date [, month_to_add ] ) - Returns the last day of the month of the given start_date.
start_date : The date for which to return the last day of the month.
month_to_add : Optional. Number of months to add to the start_date. EOMONTH adds the
specified number of months to start_date, and then returns the last day of the month for the
resulting date.
Example : Returns last day of the month of February from a NON-LEAP year
SELECT EOMONTH('2/20/2015') AS LastDay
Output : 2015-02-28
Example : Returns last day of the month of February from a LEAP year
SELECT EOMONTH('2/20/2016') AS LastDay
Output : 2016-02-29
The following example adds 2 months to the start_date and returns the last day of the month
from the resulting date
SELECT EOMONTH('3/20/2016', 2) AS LastDay
Output : 2016-05-31
The following example subtracts 1 month from the start_date and returns the last day of the
month from the resulting date
SELECT EOMONTH('3/20/2016', -1) AS LastDay
Output : 2016-02-29
Using EOMONTH function with table data. We will use the following Employees table for this
example.
SQL Script to create Employees table
Create table Employees
(
Id int primary key identity,
Name nvarchar(10),
DateOfBirth date
)
Go
The following example returns the last day of the month from the DateOfBirth of every
employee.
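A query along the following lines produces that result (a sketch; the column alias is illustrative):
SELECT Name, DateOfBirth, EOMONTH(DateOfBirth) AS LastDayOfBirthMonth
FROM Employees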
DATEFROMPARTS ( year, month, day ) - Introduced in SQL Server 2012, this function returns a DATE value for the specified year, month and day.
Example : Invalid value specified for month parameter, so the function returns an error
Output : Cannot construct data type date, some of the arguments have values which are not
valid.
Example : NULL specified for month parameter, so the function returns NULL.
Output : NULL
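For illustration, queries along these lines produce the results described above (the literal values are assumptions), with the expected results as comments:
SELECT DATEFROMPARTS(2015, 10, 25) AS [Date]   -- Returns 2015-10-25
SELECT DATEFROMPARTS(2015, 15, 25) AS [Date]   -- Error: Cannot construct data type date, some of the arguments have values which are not valid
SELECT DATEFROMPARTS(2015, NULL, 25) AS [Date] -- Returns NULL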
Other new date and time functions introduced in SQL Server 2012
The range for SmallDateTime is January 1, 1900, through June 6, 2079. A value outside of this
range, is not allowed.
The following 2 queries have values outside of the range of SmallDateTime data type.
Insert into Employees ([SmallDateTime]) values ('01/01/1899')
Insert into Employees ([SmallDateTime]) values ('07/06/2079')
When executed, the above queries fail with the following error
The conversion of a varchar data type to a smalldatetime data type resulted in an out-of-range
value
The range for DateTime is January 1, 1753, through December 31, 9999. A value outside of
this range, is not allowed.
The following query has a value outside of the range of DateTime data type.
Insert into Employees ([DateTime]) values ('01/01/1752')
When executed, the above query fails with the following error
The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
DATETIME2 has a bigger date range than DATETIME. Also, DATETIME2 is more accurate
than DATETIME. So I would recommend using DATETIME2 over DATETIME when possible. I
think the only reason for using DATETIME over DATETIME2 is for backward compatibility.
With DateTime2
The following query retrieves the precision, the datetime2 value, and the storage size.
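The @TempTable variable used by this query is assumed to be declared along these lines (a minimal sketch; only the precision 1 and precision 7 columns are included here):
DECLARE @TempTable TABLE
(
    DateTime2Precision1 datetime2(1),
    DateTime2Precision7 datetime2(7)
)
INSERT INTO @TempTable VALUES (SYSDATETIME(), SYSDATETIME())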
SELECT 'Precision - 1' AS [Precision],
       DateTime2Precision1 AS DateValue,
       DATALENGTH(DateTime2Precision1) AS StorageSize
FROM @TempTable
UNION ALL
-- (similar SELECTs for Precision 2 through 6 would go here)
SELECT 'Precision - 7',
       DateTime2Precision7,
       DATALENGTH(DateTime2Precision7) AS StorageSize
FROM @TempTable
DateTime2FromParts function
Example : All the function arguments have valid values, so DATETIME2FROMPARTS returns
DATETIME2 value as expected.
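A sketch of such a query, with illustrative values and the expected result as a comment:
SELECT DATETIME2FROMPARTS ( 2015, 11, 15, 20, 55, 55, 0, 0 ) AS [DateTime2] -- Returns 2015-11-15 20:55:55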
Example : Invalid value specified for month parameter, so the function returns an error
Output : Cannot construct data type datetime2, some of the arguments have values which are
not valid.
Example : If any of the required arguments are NULL, the function returns null. NULL specified
for month parameter, so the function returns NULL.
Output : NULL
Example : NULL specified for the precision (scale) argument, so the function returns an error
SELECT DATETIME2FROMPARTS ( 2015, 15, 15, 20, 55, 55, 0, NULL ) AS [DateTime2]
Output : Scale argument is not valid. Valid expressions for data type datetime2 scale argument
are integer constants and integer constant expressions.
Syntax : DATETIME2FROMPARTS ( year, month, day, hour, minute, seconds, fractions, precision )
So far, we have learnt how to use many of the built-in system functions available in SQL Server. Now we will look at creating user defined functions (UDFs). In SQL Server there are 3 types of user defined functions: Scalar functions, Inline Table Valued functions, and Multi-statement Table Valued functions.
Scalar functions may or may not have parameters, but always return a single (scalar) value.
The returned value can be of any data type, except text, ntext, image, cursor, and
timestamp.
Let us now create a function which calculates and returns the age of a person. To compute the age we require the date of birth, so let's pass date of birth as a parameter. The AGE() function accepts a date parameter and returns an integer.
CREATE FUNCTION Age(@DOB Date)
RETURNS INT
AS
BEGIN
    DECLARE @Age INT
    SET @Age = DATEDIFF(YEAR, @DOB, GETDATE()) -
               CASE
                   WHEN (MONTH(@DOB) > MONTH(GETDATE()))
                        OR (MONTH(@DOB) = MONTH(GETDATE()) AND DAY(@DOB) > DAY(GETDATE()))
                   THEN 1
                   ELSE 0
               END
    RETURN @Age
END
Scalar user defined functions can be used in the Select clause as shown below.
Select Name, DateOfBirth, dbo.Age(DateOfBirth) as Age from tblEmployees
Scalar user defined functions can be used in the Where clause, as shown below.
Select Name, DateOfBirth, dbo.Age(DateOfBirth) as Age
from tblEmployees
Where dbo.Age(DateOfBirth) > 30
A stored procedure can also accept DateOfBirth and return Age, but you cannot use a stored procedure in a SELECT or WHERE clause. This is one of the differences between a function and a stored procedure.
To alter a function we use the ALTER FUNCTION FunctionName statement, and to delete it, we use DROP FUNCTION FunctionName.
We have seen how to create and call 'scalar user defined functions'. We will learn about
'Inline Table Valued Functions'.
A scalar function returns a single value. An Inline Table Valued function, on the other hand, returns a table.
Consider this Employees table shown below, which we will be using for our example.
Create a function that returns EMPLOYEES by GENDER.
CREATE FUNCTION fn_EmployeesByGender(@Gender nvarchar(10))
RETURNS TABLE
AS
RETURN (Select Id, Name, DateOfBirth, Gender, DepartmentId
from tblEmployees
where Gender = @Gender)
If you look at the way we implemented this function, it is very similar to a scalar function, with the following differences
1. We specify TABLE as the return type, instead of any scalar data type
2. The function body is not enclosed between BEGIN and END block. Inline table valued
function body, cannot have BEGIN and END block.
3. The structure of the table that gets returned, is determined by the SELECT statement with
in the function.
Output:
As the inline user defined function returns a table, issue the SELECT statement against the function as if you were selecting data from a table.
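For example, calling the function created above:
Select * from dbo.fn_EmployeesByGender('Male')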
Multi statement table valued functions are very similar to Inline Table valued functions, with a
few differences. Let's look at an example, and then note the differences.
Employees Table:
Let's write an Inline and a Multi-statement Table Valued function that can return the output shown below.
Inline Table Valued function(ILTVF):
Create Function fn_ILTVF_GetEmployees()
Returns Table
as
Return (Select Id, Name, Cast(DateOfBirth as Date) as DOB
From tblEmployees)
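A minimal sketch of the corresponding multi-statement version (the function name and column sizes are assumptions):
Multi-statement Table Valued function(MSTVF):
Create Function fn_MSTVF_GetEmployees()
Returns @Table Table (Id int, Name nvarchar(20), DOB Date)
as
Begin
    Insert into @Table
    Select Id, Name, Cast(DateOfBirth as Date)
    From tblEmployees
    Return
End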
Now let's understand the differences between Inline Table Valued functions and Multi-
statement Table Valued functions
1. In an Inline Table Valued function, the RETURNS clause cannot contain the structure of the table the function returns, whereas with a Multi-statement Table Valued function we explicitly specify the structure of the table that gets returned.
2. An Inline Table Valued function cannot have a BEGIN and END block, whereas the multi-statement function can.
3. Inline Table valued functions are better for performance, than multi-statement table valued
functions. If the given task, can be achieved using an inline table valued function, always prefer
to use them, over multi-statement table valued functions.
4. It's possible to update the underlying table, using an inline table valued function, but not
possible using multi-statement table valued function.
Nondeterministic functions may return different results each time they are called with a
specific set of input values even if the database state that they access remains the same.
Examples: GetDate() and CURRENT_TIMESTAMP
Rand() function is a Non-deterministic function, but if you provide the seed value, the
function becomes deterministic, as the same value gets returned for the same seed value.
We will be using tblEmployees table, for the rest of our examples. Please, create the table
using this script.
CREATE TABLE [dbo].[tblEmployees]
(
[Id] [int] Primary Key,
[Name] [nvarchar](50) NULL,
[DateOfBirth] [datetime] NULL,
[Gender] [nvarchar](10) NULL,
[DepartmentId] [int] NULL
)
Insert rows into the table using the insert script below.
Insert into tblEmployees values(1,'Sam','1980-12-30 00:00:00.000','Male',1)
Insert into tblEmployees values(2,'Pam','1982-09-01 12:02:36.260','Female',2)
Insert into tblEmployees values(3,'John','1985-08-22 12:03:30.370','Male',1)
Insert into tblEmployees values(4,'Sara','1979-11-29 12:59:30.670','Female',3)
Insert into tblEmployees values(5,'Todd','1978-11-29 12:59:30.670','Male',1)
Now try to retrieve the text of the function using sp_helptext fn_GetEmployeeNameById.
You will get a message stating 'The text for object 'fn_GetEmployeeNameById' is encrypted.'
Note: You have to use the 2 part object name i.e, dbo.tblEmployees, to use WITH
SCHEMABINDING option. dbo is the schema name or owner name, tblEmployees is the table
name.
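A minimal sketch of such a function, assuming it is created WITH SCHEMABINDING and simply returns the employee name by Id (the return length is an assumption):
CREATE FUNCTION fn_GetEmployeeNameById(@Id int)
RETURNS nvarchar(50)
WITH SCHEMABINDING
AS
BEGIN
    -- Schema binding requires the 2 part name dbo.tblEmployees
    RETURN (SELECT Name FROM dbo.tblEmployees WHERE Id = @Id)
END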
6. Now, try to drop the table using - Drop Table tblEmployees. You will get a message
stating, 'Cannot DROP TABLE tblEmployees because it is being referenced by object
fn_GetEmployeeNameById.'
So, Schema binding, specifies that the function is bound to the database objects that it
references. When SCHEMABINDING is specified, the base objects cannot be modified in any
way that would affect the function definition. The function definition itself must first be modified
or dropped to remove dependencies on the object that is to be modified.
1. Stored procedures support deferred name resolution, whereas functions do not.
2. User Defined Function can be used in a select statement where as you cannot use a stored
procedure in a select statement.
3. UDF's cannot return Image, Text where as a StoredProcedure can return any datatype.
4. In general, User Defined Functions are used for computations whereas Stored Procedures are used
for performing business logic.
5. UDF should return a value whereas Stored Procedure need not.
6. User Defined Functions accept fewer input parameters than Stored Procedures. A UDF can have up to 1,023 input parameters, whereas a Stored Procedure can have up to 2,100 parameters.
7. Temporary Tables cannot be used in a UDF where as a StoredProcedure can use Temporary
Tables.
8. UDF cannot Execute Dynamic SQL where as a Stored Procedure can execute Dynamic SQL. (Please
refer this link for DynamicSQL Queries)
9. User Defined Functions do not support error handling, whereas Stored Procedures support error handling. RAISERROR and @@ERROR are not allowed in UDFs.
Different Types of Temporary tables
In SQL Server, there are 2 types of Temporary tables - Local Temporary tables and Global
Temporary tables.
How to Create a Local Temporary Table:
Creating a local temporary table is very similar to creating a permanent table, except that you prefix the table name with 1 pound (#) symbol. In the example below, #PersonDetails is a local temporary table, with Id and Name columns.
Create Table #PersonDetails(Id int, Name nvarchar(20))
You can also check the existence of temporary tables using the object explorer. In the object explorer, expand the TEMPDB database folder, then expand the TEMPORARY TABLES folder, and you should see the temporary table that we have created.
A local temporary table is available, only for the connection that has created the table. If
you open another query window, and execute the following query you get an error
stating 'Invalid object name #PersonDetails'. This proves that local temporary tables are
available, only for the connection that has created them.
A local temporary table is automatically dropped when the connection that created it is closed. If the user wants to explicitly drop the temporary table, he can do so using
DROP TABLE #PersonDetails
If the temporary table is created inside a stored procedure, it gets dropped automatically upon the completion of stored procedure execution. The stored procedure below creates the #PersonDetails temporary table, populates it, returns the data, and the temporary table is destroyed immediately after the stored procedure completes execution.
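A sketch of such a stored procedure (the procedure name and the sample rows are assumptions):
Create Procedure spCreateLocalTempTable
as
Begin
    Create Table #PersonDetails(Id int, Name nvarchar(20))
    Insert into #PersonDetails values(1, 'Mike')
    Insert into #PersonDetails values(2, 'John')
    Select * from #PersonDetails
End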
It is also possible for different connections, to create a local temporary table with the same
name. For example User1 and User2, both can create a local temporary table with the same
name #PersonDetails. Now, if you expand the Temporary Tables folder in the TEMPDB
database, you should see 2 tables with name #PersonDetails and some random number at the
end of the name. To differentiate between, the User1 and User2 local temp tables, sql server
appends the random number at the end of the temp table name.
Global temporary tables are visible to all the connections of the sql server, and are only
destroyed when the last connection referencing the table is closed.
Multiple users, across multiple connections can have local temporary tables with the same
name, but, a global temporary table name has to be unique, and if you inspect the name of the
global temp table, in the object explorer, there will be no random numbers suffixed at the end of
the table name.
1. Local temporary table names are prefixed with a single pound (#) symbol, whereas global temporary table names are prefixed with 2 pound (##) symbols.
2. SQL Server appends some random numbers at the end of the local temp table name, whereas this is not done for global temp table names.
3. Local temporary tables are only visible to that session of the SQL Server which has created it,
whereas Global temporary tables are visible to all the SQL server sessions
4. Local temporary tables are automatically dropped, when the session that created the
temporary tables is closed, whereas Global temporary tables are destroyed when the last
connection that is referencing the global temp table is closed.
Table Variable
A table variable is a special data type that can be used to store a result set for processing at a later time. It is primarily used for temporary storage of a set of rows, such as the result set of a table valued function. Functions and variables can be declared to be of type table, and table variables can be used in functions, stored procedures, and batches.
Changes made to a table variable are not affected by a transaction rollback, as the following example demonstrates.
CREATE TABLE #T (s varchar(128))
DECLARE @T TABLE (s varchar(128))
INSERT INTO #T SELECT 'old value #'
INSERT INTO @T SELECT 'old value @'
BEGIN TRANSACTION
UPDATE #T SET s = 'new value #'
UPDATE @T SET s = 'new value @'
ROLLBACK TRANSACTION
SELECT * FROM #T
SELECT * FROM @T
s
---------------
old value #
s
---------------
new value @
When do you use Table Variable over a Temporary Table and vice versa?
1. Table variables, like temporary tables, are ultimately backed by TempDB. They are not purely memory resident; under memory pressure, the pages belonging to a table variable can be written out to tempdb.
2. Changes to a table variable are not rolled back by a transaction, and table variables involve less locking and logging overhead. This can make a table variable faster than a temporary table.
3. You can pass a table variable as a parameter to functions and stored procedures, whereas you cannot do the same with a temporary table.
4. A temporary table can have any number of indexes, whereas a table variable can only have the indexes created through PRIMARY KEY or UNIQUE constraints declared with it. If there are only a few rows (fewer than about 100), a table variable is generally fine; if there are a lot of records, or the temporary data needs to be searched using an index, a temporary table is the better choice, because SQL Server does not create statistics on table variables.
Select into in sql server
The SELECT INTO statement in SQL Server, selects data from one table and inserts it into a
new table.
2. Copy all rows and columns from an existing table into a new table in an external database.
SELECT * INTO HRDB.dbo.EmployeesBackup FROM Employees
6. Create a new table whose columns and datatypes match with an existing table.
SELECT * INTO EmployeesBackup FROM Employees WHERE 1 <> 1
7. Copy all rows and columns from an existing table into a new table on a different SQL Server
instance. For this, create a linked server and use the 4 part naming convention
SELECT * INTO TargetTable
FROM [SourceServer].[SourceDB].[dbo].[SourceTable]
Please note : You cannot use SELECT INTO statement to select data into an existing table.
For this you will have to use INSERT INTO statement.
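For example (the target table and column names here are only illustrative):
INSERT INTO EmployeesBackup (Id, Name)
SELECT Id, Name FROM Employees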
Why indexes?
Indexes are used by queries to find data from tables quickly. Indexes are created on tables and
views. Index on a table or a view, is very similar to an index that we find in a book.
If a book doesn't have an index, and I ask you to locate a specific chapter in that book, you will have to look at every page starting from the first page of the book.
On, the other hand, if you have the index, you look up the page number of the chapter in the
index, and then directly go to that page number to locate the chapter.
Obviously, the book index is helping to drastically reduce the time it takes to find the chapter.
In a similar way, Table and View indexes, can help the query to find data quickly.
In fact, the existence of the right indexes, can drastically improve the performance of the query.
If there is no index to help the query, the query engine checks every row in the table from the beginning to the end. This is called a Table Scan, and a table scan is bad for performance.
Index Example: At the moment, the Employees table does not have an index on SALARY
column.
To find all the employees who have a salary greater than 5000 and less than 7000, the query engine has to check each and every row in the table, resulting in a table scan, which can adversely affect performance, especially if the table is large. Since there is no index to help the query, the query engine performs an entire table scan.
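For example, a query like the following has to scan the entire table when there is no index on the Salary column:
Select * from tblEmployee where Salary > 5000 and Salary < 7000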
Now Let's Create the Index to help the query:Here, we are creating an index on Salary
column in the employee table
CREATE Index IX_tblEmployee_Salary
ON tblEmployee (SALARY ASC)
The index stores salary of each employee, in the ascending order as shown below. The
actual index may look slightly different.
Now, when SQL Server has to execute the same query, it has an index on the salary column to help it. Salaries between 5000 and 7000 sit together in the index, since the salaries are arranged in ascending order. SQL Server picks up the row addresses from the index and directly fetches the records from the table, rather than scanning each row in the table. This is called an Index Seek.
An Index can also be created graphically using SQL Server Management Studio
1. In the Object Explorer, expand the Databases folder and then specific database you are
working with.
2. Expand the Tables folder
3. Expand the Table on which you want to create the index
4. Right click on the Indexes folder and select New Index
5. In the New Index dialog box, type in a meaningful name
6. Select the Index Type and specify Unique or Non Unique Index
7. Click the Add button
8. Select the columns that you want to add as index key columns
9. Click OK
10. Save the table
To view the indexes: In the object explorer, expand the Indexes folder. Alternatively, use the sp_helpindex system stored procedure. The following query returns all the indexes on the tblEmployee table.
Execute sp_helpindex tblEmployee
To delete or drop the index: When dropping an index, specify the table name as well
Drop Index tblEmployee.IX_tblEmployee_Salary
Clustered Index:
A clustered index determines the physical order of data in a table. For this reason, a table can
have only one clustered index.
Note that the Id column is marked as the primary key. A PRIMARY KEY constraint automatically creates a unique clustered index if no clustered index already exists on the table and a nonclustered index is not specified when you create the constraint.
To confirm this, execute sp_helpindex tblEmployee, which will show a unique clustered index
created on the Id column.
Now execute the following insert queries. Note that, the values for Id column are not in a
sequential order.
Insert into tblEmployee Values(3,'John',4500,'Male','New York')
Insert into tblEmployee Values(1,'Sam',2500,'Male','London')
Insert into tblEmployee Values(4,'Sara',5500,'Female','Tokyo')
Insert into tblEmployee Values(5,'Todd',3100,'Male','Toronto')
Insert into tblEmployee Values(2,'Pam',6500,'Female','Sydney')
In spite of inserting the rows in a random order, when we execute the select query we can see that all the rows in the table are arranged in ascending order based on the Id column.
This is because a clustered index determines the physical order of data in a table, and we have
got a clustered index on the Id column.
Because a clustered index dictates the physical storage order of the data in a table, a table can contain only one clustered index. If you take the example of the tblEmployee table, the data is already arranged by the Id column, and if we try to create
another clustered index on the Name column, the data needs to be rearranged based on
the NAME column, which will affect the ordering of rows that's already done based on the ID
column.
For this reason, SQL server doesn't allow us to create more than one clustered index per table.
The following SQL script, raises an error stating 'Cannot create more than one clustered index
on table 'tblEmployee'. Drop the existing clustered index PK__tblEmplo__3214EC0706CD04F7
before creating another.'
Create Clustered Index IX_tblEmployee_Name
ON tblEmployee(Name)
A clustered index is analogous to a telephone directory, where the data is arranged by the
last name. We just learnt that, a table can have only one clustered index. However, the index
can contain multiple columns (a composite index), like the way a telephone directory is
organized by last name and first name.
Let's now create a clustered index on 2 columns. To do this we first have to drop the existing
clustered index on the Id column.
Drop index tblEmployee.PK__tblEmplo__3214EC070A9D95DB
When you execute this query, you get an error message stating 'An explicit DROP INDEX is
not allowed on index 'tblEmployee.PK__tblEmplo__3214EC070A9D95DB'. It is being used for
PRIMARY KEY constraint enforcement.' To successfully delete the clustered index, right click
on the index in the Object explorer window and select DELETE.
Now, execute the following CREATE INDEX query, to create a composite clustered Index on
the Gender and Salary columns.
Create Clustered Index IX_tblEmployee_Gender_Salary
ON tblEmployee(Gender DESC, Salary ASC)
Now, if you issue a select query against this table you should see the data physically
arranged, FIRST by Gender in descending order and then by Salary in ascending order. The
result is shown below.
A nonclustered index, on the other hand, is stored separately from the table data. In the index itself, the data is stored in ascending or descending order of the index key, which doesn't in any way influence the storage order of the data in the table.
The following SQL creates a Nonclustered index on the NAME column on tblEmployee table:
Create NonClustered Index IX_tblEmployee_Name
ON tblEmployee(Name)
Difference between Clustered and NonClustered Index:
1. Only one clustered index per table, where as you can have more than one non clustered
index
2. A clustered index is usually faster than a nonclustered index, because the nonclustered index has to refer back to the table if the selected columns are not present in the index.
3. A clustered index determines the storage order of rows in the table and hence doesn't require additional disk space, whereas a nonclustered index is stored separately from the table and does require additional storage space.
Unique index is used to enforce uniqueness of key values in the index. Let's understand this
with an example.
Since, we have marked Id column, as the Primary key for this table, a UNIQUE CLUSTERED
INDEX gets created on the Id column, with Id as the index key.
We can verify this by executing the sp_helpindex system stored procedure as shown below.
Execute sp_helpindex tblEmployee
Output:
Since, we now have a UNIQUE CLUSTERED INDEX on the Id column, any attempt to
duplicate the key values, will throw an error stating 'Violation of PRIMARY KEY constraint
'PK__tblEmplo__3214EC07236943A5'. Cannot insert duplicate key in object dbo.tblEmployee'
Now let's try to drop the Unique Clustered index on the Id column. This will raise an error
stating - 'An explicit DROP INDEX is not allowed on
index tblEmployee.PK__tblEmplo__3214EC07236943A5. It is being used for PRIMARY KEY
constraint enforcement.'
Drop index tblEmployee.PK__tblEmplo__3214EC07236943A5
So this error message proves that, SQL server internally, uses the UNIQUE index to enforce
the uniqueness of values and primary key.
Expand keys folder in the object explorer window, and you can see a primary key
constraint. Now, expand the indexes folder and you should see a unique clustered index. In the
object explorer it just shows the word 'CLUSTERED'. To confirm this is in fact a UNIQUE index, right click on it and select Properties. The Properties window shows the Unique checkbox selected.
SQL Server allows us to delete this UNIQUE CLUSTERED INDEX from the object explorer.
so, Right click on the index, and select DELETE and finally, click OK. Along with the UNIQUE
index, the primary key constraint is also deleted.
Now, let's try to insert duplicate values for the ID column. The rows should be accepted,
without any primary key violation error.
Insert into tblEmployee Values(1,'Mike', 'Sandoz',4500,'Male','New York')
Insert into tblEmployee Values(1,'John', 'Menco',2500,'Male','London')
So, the UNIQUE index is used to enforce the uniqueness of values and primary key constraint.
Creating a UNIQUE NON CLUSTERED index on the FirstName and LastName columns.
Create Unique NonClustered Index UIX_tblEmployee_FirstName_LastName
On tblEmployee(FirstName, LastName)
This unique non clustered index, ensures that no 2 entries in the index has the same first and
last names. A Unique Constraint, can be used to enforce the uniqueness of values, across one
or more columns. There are no major differences between a unique constraint and a unique
index.
In fact, when you add a unique constraint, a unique index gets created behind the scenes.
To prove this, let's add a unique constraint on the city column of the tblEmployee table.
ALTER TABLE tblEmployee
ADD CONSTRAINT UQ_tblEmployee_City
UNIQUE NONCLUSTERED (City)
At this point, we expect a unique constraint to be created. Refresh and Expand the
constraints folder in the object explorer window. The constraint is not present in this folder. Now,
refresh and expand the 'indexes' folder. In the indexes folder, you will see a UNIQUE
NONCLUSTERED index with name UQ_tblEmployee_City.
So creating a UNIQUE constraint actually creates a UNIQUE index. So a UNIQUE index can
be created explicitly, using CREATE INDEX statement or indirectly using a UNIQUE constraint.
So, when should you create a unique constraint rather than a unique index? To make our intentions clear, create a unique constraint when data integrity is the objective; this makes the purpose of the index very clear. In either case, data is validated in the same manner, and the query optimizer does not differentiate between a unique index created by a unique constraint and one created manually.
Note:
1. By default, a PRIMARY KEY constraint, creates a unique clustered index, where as a
UNIQUE constraint creates a unique nonclustered index. These defaults can be changed if you
wish to.
Indexes are used by queries to find data quickly. We will learn about the different queries that
can benefit from indexes.
The following select query benefits from the index on the Salary column, because the
salaries are sorted in ascending order in the index. From the index, it's easy to identify the
records where salary is between 4000 and 8000, and using the row address the corresponding
records from the table can be fetched quickly.
Select * from tblEmployee where Salary > 4000 and Salary < 8000
Not only the SELECT statement, but also the following DELETE and UPDATE statements can benefit from the index. To update or delete a row, SQL Server needs to first find that row, and the index can help in searching and locating that specific row quickly.
Delete from tblEmployee where Salary = 2500
Update tblEmployee Set Salary = 9000 where Salary = 7500
Indexes can also help queries, that ask for sorted results. Since the Salaries are already
sorted, the database engine, simply scans the index from the first entry to the last entry and
retrieve the rows in sorted order. This avoids, sorting of rows during query execution, which can
significantly improve the processing time.
Select * from tblEmployee order by Salary
The index on the Salary column, can also help the query below, by scanning the index in
reverse order.
Select * from tblEmployee order by Salary Desc
GROUP BY queries can also benefit from indexes. To group the Employees with the same salary, the query engine can use the index on the Salary column to retrieve the already sorted salaries. Since matching salaries are present in consecutive index entries, it is easy to count the total number of Employees at each Salary quickly.
Select Salary, COUNT(Salary) as Total
from tblEmployee
Group By Salary
Disadvantages of Indexes:
Additional Disk Space: A clustered index does not require any additional storage. Every nonclustered index requires additional space, as it is stored separately from the table. The amount of space required depends on the size of the table and the number and types of columns used in the index.
Insert Update and Delete statements can become slow: When DML (Data Manipulation
Language) statements (INSERT, UPDATE, DELETE) modifies data in a table, the data in all the
indexes also needs to be updated. Indexes can help, to search and locate the rows, that we
want to delete, but too many indexes to update can actually hurt the performance of data
modifications.
A clustered index, always covers a query, since it contains all of the data in a table. A
composite index is an index on two or more columns. Both clustered and nonclustered indexes
can be composite indexes. To a certain extent, a composite index, can cover a query.
What is a View?
A view is nothing more than a saved SQL query. A view can also be considered a virtual table.
At this point Employees and Departments table should look like this.
Employees Table:
Departments Table:
Now, let's write a Query which returns the output as shown below:
To get the expected output, we need to join the tblEmployees table with the tblDepartments table.
Now let's create a view, using the JOINS query, we have just written.
Create View vWEmployeesByDepartment
as
Select Id, Name, Salary, Gender, DeptName
from tblEmployee
join tblDepartment
on tblEmployee.DepartmentId = tblDepartment.DeptId
To select data from the view, SELECT statement can be used the way, we use it with a table.
SELECT * from vWEmployeesByDepartment
When this query is executed, the database engine actually retrieves the data from the
underlying base tables, tblEmployees and tblDepartments. The View itself, does not store
any data by default. However, we can change this default behavior. So, this is the reason, a
view is considered, as just, a stored query or a virtual table.
1. Views can be used to reduce the complexity of the database schema for non-expert users, by hiding joins behind a simpler virtual table.
2. Views can be used as a mechanism to implement row and column level security.
Row Level Security:
For example, I want an end user, to have access only to IT Department employees. If I grant
him access to the underlying tblEmployees and tblDepartments tables, he will be able to see,
every department employees. To achieve this, I can create a view, which returns only IT
Department employees, and grant the user access to the view and not to the underlying table.
3. Views can be used to present only aggregated data and hide detailed data.
Updateable Views
Let's create tblEmployees table and populate it with some sample data.
Let's create a view, which returns all the columns from the tblEmployees table, except Salary
column.
Create view vWEmployeesDataExceptSalary
as
Select Id, Name, Gender, DepartmentId
from tblEmployee
Select data from the view: A view does not store any data. So, when this query is executed,
the database engine actually retrieves data, from the underlying tblEmployee base table.
Select * from vWEmployeesDataExceptSalary
Is it possible to Insert, Update and delete rows, from the underlying tblEmployees table,
using view vWEmployeesDataExceptSalary?
Yes, SQL server views are updateable.
The following query updates, Name column from Mike to Mikey. Though, we are updating
the view, SQL server, correctly updates the base table tblEmployee. To verify, execute,
SELECT statement, on tblEmployee table.
Update vWEmployeesDataExceptSalary
Set Name = 'Mikey' Where Id = 2
Along the same lines, it is also possible to insert and delete rows from the base table using
views.
Delete from vWEmployeesDataExceptSalary where Id = 2
Insert into vWEmployeesDataExceptSalary values (2, 'Mikey', 'Male', 2)
Now, let us see, what happens if our view is based on multiple base tables. For this
purpose, let's create tblDepartment table and populate with some sample data.
SQL Script to create tblDepartment table
CREATE TABLE tblDepartment
(
DeptId int Primary Key,
DeptName nvarchar(20)
)
Create a view which joins tblEmployee and tblDepartment tables, and return the result as
shown below.
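A minimal sketch of this view, assuming the column list used by the queries below:
Create view vwEmployeeDetailsByDepartment
as
Select Id, Name, Gender, DeptName
from tblEmployee
join tblDepartment
on tblEmployee.DepartmentId = tblDepartment.DeptId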
vwEmployeeDetailsByDepartment Data:
Now, let's update, John's department, from HR to IT. At the moment, there are 2 employees
(Ben, and John) in the HR department.
Update vwEmployeeDetailsByDepartment
set DeptName='IT' where Name = 'John'
Notice, that Ben's department is also changed to IT. To understand the reasons for incorrect
UPDATE, select Data from tblDepartment and tblEmployee base tables.
tblEmployee Table
tblDepartment
What is an Indexed View or What happens when you create an Index on a view?
A standard or Non-indexed view, is just a stored SQL query. When, we try to retrieve data
from the view, the data is actually retrieved from the underlying base tables. So, a view is just a
virtual table it does not store any data, by default.
However, when you create an index, on a view, the view gets materialized. This means, the
view is now, capable of storing data. In SQL server, we call them Indexed views and in Oracle,
Materialized views.
Let's now, look at an example of creating an Indexed view. For the purpose of this session,
we will be using tblProduct and tblProductSales tables.
tblProduct Table
tblProductSales Table
Create a view which returns Total Sales and Total Transactions by Product. The output
should be, as shown below.
Script to create view vWTotalSalesByProduct
Create view vWTotalSalesByProduct
with SchemaBinding
as
Select Name,
SUM(ISNULL((QuantitySold * UnitPrice), 0)) as TotalSales,
COUNT_BIG(*) as TotalTransactions
from dbo.tblProductSales
join dbo.tblProduct
on dbo.tblProduct.ProductId = dbo.tblProductSales.ProductId
group by Name
If you want to create an index on a view, the following rules should be followed by the view. For the complete list of all rules, please check MSDN.
1. The view should be created with the SchemaBinding option
2. If an aggregate function in the select list references an expression that could evaluate to NULL, a replacement value should be specified (this is why ISNULL is used around QuantitySold * UnitPrice above)
3. If GROUP BY is specified, the view select list must contain a COUNT_BIG(*) expression
4. The base tables in the view should be referenced with a 2-part name. In this example, tblProduct and tblProductSales are referenced using dbo.tblProduct and dbo.tblProductSales respectively.
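The first index created on the view must be a unique clustered index, which is what materializes it; a sketch (the index name is an assumption):
Create Unique Clustered Index UIX_vWTotalSalesByProduct_Name
on vWTotalSalesByProduct(Name)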
Since we now have an index on the view, the view gets materialized and the data is stored in the view. So when we execute Select * from vWTotalSalesByProduct, the data is returned from the view itself, rather than being retrieved from the underlying base tables.
Indexed views, can significantly improve the performance of queries that involves JOINS and
Aggregations. The cost of maintaining an indexed view is much higher than the cost of
maintaining a table index.
Indexed views are ideal for scenarios, where the underlying data is not frequently changed.
Indexed views are more often used in OLAP systems, because the data is mainly used for
reporting and analysis purposes. Indexed views, may not be suitable for OLTP systems, as the
data is frequently added and changed.
Limitations of views
1. You cannot pass parameters to a view. Table Valued functions are an excellent
replacement for parameterized views.
We will use tblEmployee table for our examples. SQL Script to create tblEmployee table:
CREATE TABLE tblEmployee
(
Id int Primary Key,
Name nvarchar(30),
Salary int,
Gender nvarchar(10),
DepartmentId int
)
Employee Table
3. The ORDER BY clause is invalid in views unless TOP or FOR XML is also specified.
Create View vWEmployeeDetailsSorted
as
Select Id, Name, Gender, DepartmentId
from tblEmployee
order by Id
If you use ORDER BY, you will get an error stating - 'The ORDER BY clause is invalid in views,
inline functions, derived tables, subqueries, and common table expressions, unless TOP or
FOR XML is also specified.'
DML Triggers
In SQL server there are 3 types of triggers
1. DML triggers
2. DDL triggers
3. Logon trigger
In general, a trigger is a special kind of stored procedure that automatically executes when
an event occurs in the database server.
DML stands for Data Manipulation Language. INSERT, UPDATE, and DELETE statements
are DML statements. DML triggers are fired, whenever data is modified using INSERT,
UPDATE, and DELETE events.
DML triggers are further classified into AFTER triggers and INSTEAD OF triggers. AFTER triggers fire after the triggering action, that is, after an INSERT, UPDATE or DELETE statement has completed. INSTEAD OF triggers, on the other hand, fire instead of the triggering action: the INSERT, UPDATE, and DELETE statements cause an INSTEAD OF trigger to fire instead of the respective statement execution.
tblEmployee
SQL Script to create tblEmployeeAudit table:
CREATE TABLE tblEmployeeAudit
(
Id int identity(1,1) primary key,
AuditData nvarchar(1000)
)
Whenever, a new Employee is added, we want to capture the ID and the date and time, the
new employee is added in tblEmployeeAudit table. The easiest way to achieve this is by having
an AFTER TRIGGER for INSERT event.
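A sketch of such a trigger (the trigger name and the audit message format are assumptions):
CREATE TRIGGER tr_tblEmployee_ForInsert
ON tblEmployee
FOR INSERT
AS
BEGIN
    DECLARE @Id int
    SELECT @Id = Id FROM inserted
    -- Record the new employee's Id and the date and time in the audit table
    INSERT INTO tblEmployeeAudit(AuditData)
    VALUES('New employee with Id = ' + CAST(@Id AS nvarchar(10))
           + ' is added at ' + CAST(GETDATE() AS nvarchar(30)))
END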
In the trigger, we are getting the id from inserted table. So, what is this inserted table?
INSERTED table, is a special table used by DML triggers. When you add a new row into
tblEmployee table, a copy of the row will also be made into inserted table, which only a trigger
can access. You cannot access this table outside the context of the trigger. The structure of the
inserted table will be identical to the structure of tblEmployee table.
Along, the same lines, let us now capture audit information, when a row is deleted from
the table, tblEmployee.
Example for AFTER TRIGGER for DELETE event on tblEmployee table:
CREATE TRIGGER tr_tblEmployee_ForDelete
ON tblEmployee
FOR DELETE
AS
BEGIN
    DECLARE @Id int
    SELECT @Id = Id FROM deleted
    -- Record the deletion in the audit table
    INSERT INTO tblEmployeeAudit(AuditData)
    VALUES('An existing employee with Id = ' + CAST(@Id AS nvarchar(10))
           + ' is deleted at ' + CAST(GETDATE() AS nvarchar(30)))
END
The only difference here is that, we are specifying, the triggering event as DELETE and
retrieving the deleted row ID from DELETED table. DELETED table, is a special table used by
DML triggers. When you delete a row from tblEmployee table, a copy of the deleted row will be
made available in DELETED table, which only a trigger can access. Just like INSERTED table,
DELETED table cannot be accessed, outside the context of the trigger and, the structure of the
DELETED table will be identical to the structure of tblEmployee table.
Triggers make use of 2 special tables, INSERTED and DELETED. The inserted table
contains the updated data and the deleted table contains the old data. The After trigger for
UPDATE event, makes use of both inserted and deleted tables.
Immediately after the UPDATE statement execution, the AFTER UPDATE trigger gets fired,
and you should see the contents of INSERTED and DELETED tables.
The following AFTER UPDATE trigger, audits employee information upon UPDATE, and
stores the audit data in tblEmployeeAudit table.
Alter trigger tr_tblEmployee_ForUpdate
on tblEmployee
for Update
as
Begin
-- Declare variables to hold old and updated data
Declare @Id int
Declare @OldName nvarchar(20), @NewName nvarchar(20)
Declare @OldSalary int, @NewSalary int
Declare @OldGender nvarchar(20), @NewGender nvarchar(20)
Declare @OldDeptId int, @NewDeptId int
-- Delete the row from temp table, so we can move to the next row
Delete from #TempTable where Id = @Id
End
End
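As a simpler, set based alternative to the temp table approach above (a sketch only, with a hypothetical trigger name, auditing only Name changes), the inserted and deleted tables can be joined directly:
Create Trigger tr_tblEmployee_AuditNameUpdate
on tblEmployee
for Update
as
Begin
    Insert into tblEmployeeAudit(AuditData)
    Select 'Employee with Id = ' + Cast(i.Id as nvarchar(10))
           + ' changed Name from ' + d.Name + ' to ' + i.Name
           + ' at ' + Cast(Getdate() as nvarchar(30))
    from inserted i
    join deleted d
    on i.Id = d.Id
    where i.Name <> d.Name
End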
We know that, AFTER triggers are fired after the triggering event(INSERT, UPDATE or DELETE
events), whereas, INSTEAD OF triggers are fired instead of the triggering event(INSERT,
UPDATE or DELETE events). In general, INSTEAD OF triggers are usually used to correctly
update views that are based on multiple tables.
We will use Employee and Department tables. So, first, let's create these 2 tables.
When you execute, Select * from vWEmployeeDetails, the data from the view, should be as
shown below
Now, let's try to insert a row into the view, vWEmployeeDetails, by executing the following
query. At this point, an error will be raised stating 'View or function vWEmployeeDetails is not
updatable because the modification affects multiple base tables.'
Insert into vWEmployeeDetails values(7, 'Valarie', 'Female', 'IT')
So, inserting a row into a view that is based on multiple tables raises an error by default.
Now, let's understand, how INSTEAD OF TRIGGERS can help us in this situation. Since, we
are getting an error, when we are trying to insert a row into the view, let's create an INSTEAD
OF INSERT trigger on the view vWEmployeeDetails.
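A minimal sketch of such an INSTEAD OF INSERT trigger, consistent with the description that follows (the trigger name is an assumption):
Create Trigger tr_vWEmployeeDetails_InsteadOfInsert
on vWEmployeeDetails
Instead Of Insert
as
Begin
    Declare @DeptId int

    -- Look up the DepartmentId for the DeptName supplied in the INSERT
    Select @DeptId = DeptId
    from tblDepartment
    join inserted
    on inserted.DeptName = tblDepartment.DeptName

    if(@DeptId is null)
    Begin
        Raiserror('Invalid Department Name', 16, 1)
        Return
    End

    Insert into tblEmployee(Id, Name, Gender, DepartmentId)
    Select Id, Name, Gender, @DeptId
    from inserted
End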
The INSTEAD OF trigger correctly inserts the record into the tblEmployee table. Since we are inserting a row, the inserted table contains the newly added row, whereas the deleted table will be empty.
In the trigger, we used Raiserror() function, to raise a custom error, when the
DepartmentName provided in the insert query, doesn’t exist. We are passing 3 parameters to
the Raiserror() method. The first parameter is the error message; the second parameter is the
severity level. Severity level 16, indicates general errors that can be corrected by the user. The
final parameter is the state.
An INSTEAD OF UPDATE trigger gets fired instead of an update event, on a table or a view.
For example, let's say we have, an INSTEAD OF UPDATE trigger on a view or a table, and then
when you try to update a row with in that view or table, instead of the UPDATE, the trigger gets
fired automatically. INSTEAD OF UPDATE TRIGGERS, are of immense help, to correctly
update a view that is based on multiple tables.
Let's create the required Employee and Department tables that we will be using for this
session.
Since, we now have the required tables, let's create a view based on these tables. The view
should return Employee Id, Name, Gender and DepartmentName columns. So, the view is
obviously based on multiple tables.
Script to create the view:
Create view vWEmployeeDetails
as
Select Id, Name, Gender, DeptName
from tblEmployee
join tblDepartment
on tblEmployee.DepartmentId = tblDepartment.DeptId
When you execute, Select * from vWEmployeeDetails, the data from the view, should be as
shown below
We tried to insert a row into the view, and we got an error stating - 'View or function
vWEmployeeDetails is not updatable because the modification affects multiple base tables.'
Now, let's try to update the view, in such a way that, it affects, both the underlying tables, and
see if we get the same error. The following UPDATE statement changes the Name column from tblEmployee and the DeptName column from tblDepartment. So, when we execute this query, we get the same error.
Update vWEmployeeDetails
set Name = 'Johny', DeptName = 'IT'
where Id = 1
Now, let's try to change just the department of John from HR to IT. The following UPDATE query affects only one table, tblDepartment, so the query should succeed. But, before executing the query, please note that employees JOHN and BEN are both in the HR department.
Update vWEmployeeDetails
set DeptName = 'IT'
where Id = 1
After executing the query, select the data from the view, and notice that BEN's DeptName is also changed to IT. We intended to change only JOHN's DeptName, so the UPDATE didn't work as expected. This is because the UPDATE query changed the DeptName from HR to IT in the tblDepartment table. For the UPDATE to work correctly, we should instead change the DeptId of JOHN from 3 to 1.
Before, we create the trigger, let's update the DeptName to HR for record with Id = 3.
Update tblDepartment set DeptName = 'HR' where DeptId = 3
Script to create the INSTEAD OF UPDATE trigger on the view:
Create Trigger tr_vWEmployeeDetails_InsteadOfUpdate
on vWEmployeeDetails
instead of update
as
Begin
 -- If DeptName is updated
 if(Update(DeptName))
 Begin
  Declare @DeptId int

  -- Look up the DeptId that corresponds to the new DeptName
  Select @DeptId = DeptId
  from tblDepartment
  join inserted
  on inserted.DeptName = tblDepartment.DeptName

  if(@DeptId is NULL)
  Begin
   Raiserror('Invalid Department Name', 16, 1)
   Return
  End

  -- Update the employee's DepartmentId instead of changing tblDepartment
  Update tblEmployee set DepartmentId = @DeptId
  from inserted
  join tblEmployee
  on tblEmployee.Id = inserted.id
 End

 -- If gender is updated
 if(Update(Gender))
 Begin
  Update tblEmployee set Gender = inserted.Gender
  from inserted
  join tblEmployee
  on tblEmployee.Id = inserted.id
 End

 -- If Name is updated
 if(Update(Name))
 Begin
  Update tblEmployee set Name = inserted.Name
  from inserted
  join tblEmployee
  on tblEmployee.Id = inserted.id
 End
End
The UPDATE query works as expected. The INSTEAD OF UPDATE trigger, correctly
updates, JOHN's DepartmentId to 1, in tblEmployee table.
Now, let's try to update Name, Gender and DeptName. The UPDATE query, works as
expected, without raising the error - 'View or function vWEmployeeDetails is not updatable
because the modification affects multiple base tables.'
Update vWEmployeeDetails
set Name = 'Johny', Gender = 'Female', DeptName = 'IT'
where Id = 1
The Update() function used in the trigger returns true even if you update a column with the same value. For this reason, I recommend comparing values between the inserted and deleted tables rather than relying on the Update() function. Also note that the Update() function does not operate on a per-row basis, but across all rows.
Instead of delete trigger
An INSTEAD OF DELETE trigger gets fired instead of the DELETE event, on a table or a view.
For example, let's say we have an INSTEAD OF DELETE trigger on a view or a table; when you try to delete a row from that view or table, instead of the actual DELETE, the trigger gets fired automatically. INSTEAD OF DELETE triggers are used to delete records from a view that is based on multiple tables.
Let's create the required Employee and Department tables that we will be using for this topic.
Since, we now have the required tables, let's create a view based on these tables. The view
should return Employee Id, Name, Gender and DepartmentName columns. So, the view is
obviously based on multiple tables.
Script to create the view:
Create view vWEmployeeDetails
as
Select Id, Name, Gender, DeptName
from tblEmployee
join tblDepartment
on tblEmployee.DepartmentId = tblDepartment.DeptId
When you execute, Select * from vWEmployeeDetails, the data from the view, should be as
shown below
We tried to insert a row into the view, and we got an error stating - 'View or function
vWEmployeeDetails is not updatable because the modification affects multiple base
tables'. Along, the same lines, when we tried to update a view that is based on multiple tables,
we got the same error. To get the error, the UPDATE should affect both the base tables. If the
update affects only one base table, we don't get the error, but the UPDATE does not work
correctly, if the DeptName column is updated.
Now, let's try to delete a row from the view, and we get the same error.
Delete from vWEmployeeDetails where Id = 1
Create Trigger tr_vWEmployeeDetails_InsteadOfDelete
on vWEmployeeDetails
instead of delete
as
Begin
 Delete tblEmployee from tblEmployee join deleted on tblEmployee.Id = deleted.Id
 --Subquery alternative: Delete from tblEmployee where Id in (Select Id from deleted)
End
Upon executing the following DELETE statement, the row gets DELETED as expected from
tblEmployee table
Delete from vWEmployeeDetails where Id = 1
Certain system stored procedures that perform DDL-like operations can also fire DDL
triggers. Example - sp_rename system stored procedure
DDL triggers scope : DDL triggers can be created in a specific database or at the server
level.
Please note : If you can't find the trigger that you just created, make sure to refresh the
Database Triggers folder.
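A sketch of this first database scoped DDL trigger (the trigger name is taken from the commands further below, and the message from the description that follows):
Create Trigger trMyFirstTrigger
on Database
for CREATE_TABLE
as
Begin
    Print 'New table created'
End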
When you execute the following code to create the table, the trigger will automatically fire and
will print the message - New table created
Create Table Test (Id int)
The above trigger will be fired only for one DDL event CREATE_TABLE. If you want this trigger
to be fired for multiple events, for example when you alter or drop a table, then separate the
events using a comma as shown below.
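For example (a sketch; the message text is illustrative):
Alter Trigger trMyFirstTrigger
on Database
for CREATE_TABLE, ALTER_TABLE, DROP_TABLE
as
Begin
    Print 'A table has just been created, modified or dropped'
End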
The 2 DDL triggers above execute some code in response to DDL events
Now let us look at an example of how to prevent users from creating, altering or dropping tables.
To do this modify the trigger as shown below.
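A sketch of the modified trigger, using ROLLBACK to undo the DDL statement (the message text is illustrative):
Alter Trigger trMyFirstTrigger
on Database
for CREATE_TABLE, ALTER_TABLE, DROP_TABLE
as
Begin
    Rollback
    Print 'You cannot create, alter or drop a table'
End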
To be able to create, alter or drop a table, you either have to disable or delete the trigger.
To disable trigger
1. Right click on the trigger in object explorer and select "Disable" from the context menu
2. You can also disable the trigger using the following T-SQL command
DISABLE TRIGGER trMyFirstTrigger ON DATABASE
To enable trigger
1. Right click on the trigger in object explorer and select "Enable" from the context menu
2. You can also enable the trigger using the following T-SQL command
ENABLE TRIGGER trMyFirstTrigger ON DATABASE
To drop trigger
1. Right click on the trigger in object explorer and select "Delete" from the context menu
2. You can also drop the trigger using the following T-SQL command
DROP TRIGGER trMyFirstTrigger ON DATABASE
Certain system stored procedures that perform DDL-like operations can also fire DDL triggers.
The following trigger will be fired whenever you rename a database object using the sp_rename system stored procedure.
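A sketch of such a trigger, handling the RENAME DDL event (the message text is illustrative):
Create Trigger trRenameTable
on Database
for RENAME
as
Begin
    Print 'You just renamed a database object'
End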
The following code changes the name of the TestTable to NewTestTable. When this code is
executed, it will fire the trigger trRenameTable
sp_rename 'TestTable', 'NewTestTable'
The following code changes the name of the Id column in NewTestTable to NewId. When this
code is executed, it will fire the trigger trRenameTable
sp_rename 'NewTestTable.Id' , 'NewId', 'column'
The following trigger is a database scoped trigger. This will prevent users from creating, altering
or dropping tables only from the database in which it is created.
If you have another database on the server, they will be able to create, alter or drop tables in
that database. If you want to prevent users from doing this you may create the trigger again in
this database.
But, what if you have 100 different databases on your SQL Server, and you want to prevent
users from creating, altering or dropping tables from all these 100 databases. Creating the same
trigger for all the 100 different databases is not a good approach for 2 reasons.
1. It is tedious and error prone
2. Maintainability is a nightmare. If for some reason you have to change the trigger, you will have to do it in 100 different databases, which again is tedious and error prone.
This is where server-scoped DDL triggers come in handy. When you create a server scoped
DDL trigger, it will fire in response to the DDL events happening in all of the databases on that
server.
Creating a server-scoped DDL trigger: This is similar to creating a database-scoped trigger, except that you change the scope to ALL SERVER as shown below.
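A sketch of a server scoped version (the trigger name and message are illustrative):
Create Trigger trServerScopedDDLTrigger
on All Server
for CREATE_TABLE, ALTER_TABLE, DROP_TABLE
as
Begin
    Rollback
    Print 'You cannot create, alter or drop a table in any database on this server'
End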
Now if you try to create, alter or drop a table in any of the databases on the server, the trigger
will be fired.
Server scoped triggers will always fire before any of the database scoped triggers. This
execution order cannot be changed.
In the example below, we have a database-scoped and a server-scoped trigger handling the
same event (CREATE_TABLE). When you create a table, notice that server-scoped trigger is
always fired before the database-scoped trigger.
CREATE TRIGGER tr_DatabaseScopeTrigger
ON DATABASE
FOR CREATE_TABLE
AS
BEGIN
Print 'Database Scope Trigger'
END
GO
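A sketch of the matching server scoped trigger for the same event:
CREATE TRIGGER tr_ServerScopeTrigger
ON ALL SERVER
FOR CREATE_TABLE
AS
BEGIN
Print 'Server Scope Trigger'
END
GO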
Using the sp_settriggerorder stored procedure, you can set the execution order of server-
scoped or database-scoped triggers.
EXEC sp_settriggerorder
@triggername = 'tr_DatabaseScopeTrigger1',
@order = 'none',
@stmttype = 'CREATE_TABLE',
@namespace = 'DATABASE'
GO
If you have a database-scoped and a server-scoped trigger handling the same event, and you have set the execution order at both levels, the triggers execute in the following order.
1. The server-scope trigger marked First
2. Other server-scope triggers
3. The server-scope trigger marked Last
4. The database-scope trigger marked First
5. Other database-scope triggers
6. The database-scope trigger marked Last
Audit table changes in sql server
The following trigger audits all table changes in all databases on a SQL Server
CREATE TRIGGER tr_AuditTableChanges
ON ALL SERVER
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
BEGIN
    DECLARE @EventData XML
    SELECT @EventData = EVENTDATA()
    -- Assumes an audit table SampleDB.dbo.TableChangesAudit(DatabaseName, ObjectName, EventType, AuditDateTime) exists
    INSERT INTO SampleDB.dbo.TableChangesAudit(DatabaseName, ObjectName, EventType, AuditDateTime)
    SELECT @EventData.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(250)'),
           @EventData.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(250)'),
           @EventData.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(250)'),
           GETDATE()
END
In the above example we are using the EventData() function, which returns the event data in XML format. The following XML was returned by the EventData() function when a table named MyTable1 was created.
<EVENT_INSTANCE>
<EventType>CREATE_TABLE</EventType>
<PostTime>2016-10-22T22:12:02.880</PostTime>
<SPID>55</SPID>
<ServerName>ABC-PC</ServerName>
<LoginName>abc-PC\abc</LoginName>
<UserName>dbo</UserName>
<DatabaseName>vikul</DatabaseName>
<SchemaName>dbo</SchemaName>
<ObjectName>MyTable1</ObjectName>
<ObjectType>TABLE</ObjectType>
<TSQLCommand>
<SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON"
QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE" />
<CommandText>
Create Table MyTable1
(
Id int,
Name nvarchar(50),
Gender nvarchar(50)
)
</CommandText>
</TSQLCommand>
</EVENT_INSTANCE>
As the name implies Logon triggers fire in response to a LOGON event. Logon triggers fire
after the authentication phase of logging in finishes, but before the user session is actually
established.
Logon trigger example : The following trigger limits the maximum number of open connections
for a user to 3.
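A sketch of such a logon trigger (the trigger name and the message text are assumptions):
CREATE TRIGGER tr_LimitUserConnections
ON ALL SERVER
FOR LOGON
AS
BEGIN
    DECLARE @LoginName nvarchar(100)
    SET @LoginName = ORIGINAL_LOGIN()
    -- Block the new login if the user already has 3 or more open sessions
    IF (SELECT COUNT(*) FROM sys.dm_exec_sessions
        WHERE is_user_process = 1 AND original_login_name = @LoginName) >= 3
    BEGIN
        PRINT 'Fourth connection of ' + @LoginName + ' is blocked'
        ROLLBACK
    END
END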
The trigger error message will be written to the error log. Execute the following command to
read the error log.
Execute sp_readerrorlog
The object explorer within SQL Server Management Studio can be used to get the list of tables in a specific database. However, if we have to write a query to achieve the same, there are 3 system views that we can use.
1. SYSOBJECTS - Supported in SQL Server version 2000, 2005 & 2008
2. SYS.TABLES - Supported in SQL Server version 2005 & 2008
3. INFORMATION_SCHEMA.TABLES - Supported in SQL Server version 2005 & 2008
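For example, each of the following can be used to list objects or tables in the current database:
SELECT name, XTYPE FROM SYSOBJECTS
SELECT name FROM SYS.TABLES
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'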
Executing the SYSOBJECTS query above on my SAMPLE database returned the following values for the XTYPE column.
IT - Internal table
P - Stored procedure
PK - PRIMARY KEY constraint
S - System table
SQ - Service queue
U - User table
V - View
Please check the following MSDN link for all possible XTYPE column values and what they
represent.
http://msdn.microsoft.com/en-us/library/ms177596.aspx
When you run this script once, the table tblEmployee gets created without any errors. If you run
the script again, you will get an error - There is already an object named 'tblEmployee' in the
database.
The above script is re-runnable, and can be run any number of times. If the table is not
already created, the script will create the table, else you will get a message stating - The table
already exists. You will never get a sql script error.
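A minimal sketch of the re-runnable version, assuming a check against INFORMATION_SCHEMA.TABLES (the table structure matches the script further below):
USE [Sample]
GO
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'tblEmployee')
BEGIN
    CREATE TABLE tblEmployee
    (
        ID int identity primary key,
        Name nvarchar(100),
        Gender nvarchar(10),
        DateOfBirth DateTime
    )
    PRINT 'Table tblEmployee created'
END
ELSE
BEGIN
    PRINT 'The table already exists'
END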
Sql server built-in function OBJECT_ID(), can also be used to check for the existence of the
table
IF OBJECT_ID('tblEmployee') IS NULL
Begin
-- Create Table Script
Print 'Table tblEmployee created'
End
Else
Begin
Print 'Table tblEmployee already exists'
End
Depending on what we are trying to achieve, sometime we may need to drop (if the table
already exists) and re-create it. The sql script below, does exactly the same thing.
Use [Sample]
IF OBJECT_ID('tblEmployee') IS NOT NULL
Begin
Drop Table tblEmployee
End
Create table tblEmployee
(
ID int identity primary key,
Name nvarchar(100),
Gender nvarchar(10),
DateOfBirth DateTime
)
Let's look at another example. The following sql script adds column "EmailAddress" to
table tblEmployee. This script is not re-runnable because, if the column exists we get a script
error.
Use [Sample]
ALTER TABLE tblEmployee
ADD EmailAddress nvarchar(50)
Col_length() function can also be used to check for the existence of a column
If col_length('tblEmployee','EmailAddress') is not null
Begin
Print 'Column already exists'
End
Else
Begin
Print 'Column does not exist'
End
Let's create the required Employee and Department tables that we will be using for this topic.
Now, we want to write a query which would return the following output. The query should
return the Department Name and the total number of employees within the department. Only departments with 2 or more employees should be returned.
Obviously, there are several ways to do this. Let's first see how to achieve this with the help of a view.
Script to create the View
Create view vWEmployeeCount
as
Select DeptName, DepartmentId, COUNT(*) as TotalEmployees
from tblEmployee
join tblDepartment
on tblEmployee.DepartmentId = tblDepartment.DeptId
group by DeptName, DepartmentId
Go

Select DeptName, TotalEmployees
from vWEmployeeCount
where TotalEmployees >= 2
Note: Views get saved in the database, and can be available to other queries and stored
procedures. However, if this view is only used at this one place, it can be easily eliminated using
other options, like CTE, Derived Tables, Temp Tables, Table Variable etc.
Now, let's see, how to achieve the same using, temporary tables. We are using local
temporary tables here.
Select DeptName, DepartmentId, COUNT(*) as TotalEmployees
into #TempEmployeeCount
from tblEmployee
join tblDepartment
on tblEmployee.DepartmentId = tblDepartment.DeptId
group by DeptName, DepartmentId
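The temporary table can then be queried with the same filter (a sketch):
Select DeptName, TotalEmployees
from #TempEmployeeCount
where TotalEmployees >= 2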
Note: Temporary tables are stored in TempDB. Local temporary tables are visible only in the
current session, and can be shared between nested stored procedure calls. Global temporary
tables are visible to other sessions and are destroyed, when the last connection referencing the
table is closed.
-- The declaration of the table variable is not shown above; the column types below are assumed from the SELECT list
Declare @tblEmployeeCount table (DeptName nvarchar(50), DepartmentId int, TotalEmployees int)
Insert @tblEmployeeCount
Select DeptName, DepartmentId, COUNT(*) as TotalEmployees
from tblEmployee
join tblDepartment
on tblEmployee.DepartmentId = tblDepartment.DeptId
group by DeptName, DepartmentId
Select DeptName, TotalEmployees
From @tblEmployeeCount
where TotalEmployees >= 2
Note: Just like TempTables, a table variable is also created in TempDB. The scope of a table
variable is the batch, stored procedure, or statement block in which it is declared. They can be
passed as parameters between procedures.
Note: Derived tables are available only in the context of the current query.
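A sketch of the same query written with a derived table:
Select DeptName, TotalEmployees
from
(
Select DeptName, DepartmentId, COUNT(*) as TotalEmployees
from tblEmployee
join tblDepartment
on tblEmployee.DepartmentId = tblDepartment.DeptId
group by DeptName, DepartmentId
) as EmployeeCount
where TotalEmployees >= 2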
Using CTE
With EmployeeCount(DeptName, DepartmentId, TotalEmployees)
as
(
Select DeptName, DepartmentId, COUNT(*) as TotalEmployees
from tblEmployee
join tblDepartment
on tblEmployee.DepartmentId = tblDepartment.DeptId
group by DeptName, DepartmentId
)
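The SELECT that immediately follows the CTE applies the filter (a sketch):
Select DeptName, TotalEmployees
from EmployeeCount
where TotalEmployees >= 2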
Note: A CTE can be thought of as a temporary result set that is defined within the execution
scope of a single SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement. A CTE
is similar to a derived table in that it is not stored as an object and lasts only for the duration of
the query.
Common Table Expressions
The common table expression (CTE) was introduced in SQL Server 2005. A CTE is a temporary
result set that can be referenced within a SELECT, INSERT, UPDATE, or DELETE statement
that immediately follows the CTE.
Write a query using CTE, to display the total number of Employees by Department Name. The
output should be as shown below.
Before we write the query, let's look at the syntax for creating a CTE.
WITH cte_name (Column1, Column2, ..)
AS
( CTE_query )
We define a CTE using the WITH keyword, followed by the name of the CTE. In our
example, EmployeeCount is the name of the CTE. Within parentheses, we specify the columns
that make up the CTE. DepartmentId and TotalEmployees are the columns
of the EmployeeCount CTE. These 2 columns map to the columns returned by the CTE SELECT
query. The CTE column names and the CTE query column names can be different. In fact, CTE
column names are optional. However, if you do specify them, the number of CTE columns and
the number of CTE SELECT query columns should be the same. Otherwise you will get an error stating -
'EmployeeCount has fewer columns than were specified in the column list'. The column list is
followed by the AS keyword, after which we have the CTE query within a pair of
parentheses.
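A minimal sketch of the query, assuming the tblEmployee and tblDepartment columns used earlier in this section:
With EmployeeCount(DepartmentId, TotalEmployees)
as
(
Select DepartmentId, COUNT(*)
from tblEmployee
group by DepartmentId
)
Select DeptName, TotalEmployees
from tblDepartment
join EmployeeCount
on tblDepartment.DeptId = EmployeeCount.DepartmentId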
The EmployeeCount CTE is joined with the tblDepartment table in the SELECT query that
immediately follows the CTE. Remember, a CTE can only be referenced by a SELECT,
INSERT, UPDATE, or DELETE statement that immediately follows the CTE. If you try to do
something else in between, you get an error stating - 'Common table expression defined but not
used'. The following SQL raises an error.
Select 'Hello'
Updatable CTE
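The script that creates and updates the CTE is sketched below; the CTE definition and the value being set are assumptions based on the description that follows:
With Employees_Name_Gender
as
(
Select Id, Name, Gender from tblEmployee
)
Update Employees_Name_Gender set Gender = 'Female' where Id = 1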
Now, query the tblEmployee table. JOHN's gender is actually UPDATED. So, if a CTE is
created on one base table, then it is possible to UPDATE the CTE, which in turn will update the
underlying base table. In this case, UPDATING the Employees_Name_Gender CTE
updates the tblEmployee table.
Now, let's create a CTE, on both the tables - tblEmployee and tblDepartment. The CTE
should return, Employee Id, Name, Gender and Department. In short the output should be as
shown below.
CTE, that returns Employees by Department
With EmployeesByDepartment
as
(
Select Id, Name, Gender, DeptName
from tblEmployee
join tblDepartment
on tblDepartment.DeptId = tblEmployee.DepartmentId
)
Select * from EmployeesByDepartment
Let's update this CTE. Let's change JOHN's Gender from Female to Male. Here, the CTE is
based on 2 tables, but the UPDATE statement affects only one base table, tblEmployee. So the
UPDATE succeeds. So, if a CTE is based on more than one table, and if the UPDATE affects
only one base table, then the UPDATE is allowed.
With EmployeesByDepartment
as
(
Select Id, Name, Gender, DeptName
from tblEmployee
join tblDepartment
on tblDepartment.DeptId = tblEmployee.DepartmentId
)
Update EmployeesByDepartment set Gender = 'Male' where Id = 1
Now, let's try to UPDATE the CTE in such a way that the update affects both the tables -
tblEmployee and tblDepartment. This UPDATE
statement changes Gender from the tblEmployee table and DeptName from the tblDepartment table.
When you execute this UPDATE, you get an error stating - 'View or function
EmployeesByDepartment is not updatable because the modification affects multiple base
tables'. So, if a CTE is based on multiple tables, and if the UPDATE statement affects more than
1 base table, then the UPDATE is not allowed.
With EmployeesByDepartment
as
(
Select Id, Name, Gender, DeptName
from tblEmployee
join tblDepartment
on tblDepartment.DeptId = tblEmployee.DepartmentId
)
Update EmployeesByDepartment set
Gender = 'Female', DeptName = 'IT'
where Id = 1
Finally, let's try to UPDATE just the DeptName. Let's change JOHN's DeptName from HR to
IT. Before you execute the UPDATE statement, notice that BEN is also currently in the HR
department.
With EmployeesByDepartment
as
(
Select Id, Name, Gender, DeptName
from tblEmployee
join tblDepartment
on tblDepartment.DeptId = tblEmployee.DepartmentId
)
Update EmployeesByDepartment set
DeptName = 'IT' where Id = 1
After you execute the UPDATE, select data from the CTE, and you will see that BEN's
DeptName is also changed to IT.
This is because, when we updated the CTE, the UPDATE actually changed
the DeptName from HR to IT in the tblDepartment table, instead of changing
the DepartmentId column (from 3 to 1) in the tblEmployee table. So, if a CTE is based on multiple
tables, and if the UPDATE statement affects only one base table, the update succeeds. But the
update may not work as you expect.
So in short, if
1. A CTE is based on a single base table, then the UPDATE succeeds and works as expected.
2. A CTE is based on more than one base table, and the UPDATE affects multiple base
tables, the update is not allowed and the statement terminates with an error.
3. A CTE is based on more than one base table, and the UPDATE affects only one base table,
the UPDATE succeeds (but may not always work as expected).
Recursive CTE
A CTE that references itself is called a recursive CTE. Recursive CTEs can be of great
help when displaying hierarchical data, for example, displaying employees in an organization
hierarchy. A simple organization chart is shown below.
Let's create tblEmployee table, which holds the data, that's in the organization chart.
Create Table tblEmployee
(
EmployeeId int Primary key,
Name nvarchar(20),
ManagerId int
)
Since a MANAGER is also an EMPLOYEE, both manager and employee details are stored in
the tblEmployee table. Data from tblEmployee is shown below.
Let's say we want to display EmployeeName along with their ManagerName. The output
should be as shown below.
To achieve this, we can simply join tblEmployee with itself. Joining a table with itself is
called a self join. In the output, notice that since JOSH does not have a Manager, we are
displaying 'Super Boss' instead of NULL. We used the IsNull() function to replace NULL with
'Super Boss'.
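A minimal sketch of the self join described above:
Select E.Name as Employee, IsNull(M.Name, 'Super Boss') as Manager
from tblEmployee E
left join tblEmployee M
on E.ManagerId = M.EmployeeId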
Along with the Employee and their Manager name, we also want to display their level in the
organization. The output should be as shown below.
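A minimal sketch of the recursive CTE (the CTE and alias names are assumptions):
With EmployeesCTE (EmployeeId, Name, ManagerId, [Level])
as
(
-- Anchor member: employees with no manager are at level 1
Select EmployeeId, Name, ManagerId, 1
from tblEmployee
where ManagerId is null
union all
-- Recursive member: employees reporting to someone already in the CTE
Select tblEmployee.EmployeeId, tblEmployee.Name, tblEmployee.ManagerId, EmployeesCTE.[Level] + 1
from tblEmployee
join EmployeesCTE
on tblEmployee.ManagerId = EmployeesCTE.EmployeeId
)
Select EmpCTE.Name as Employee, IsNull(MgrCTE.Name, 'Super Boss') as Manager, EmpCTE.[Level]
from EmployeesCTE EmpCTE
left join EmployeesCTE MgrCTE
on EmpCTE.ManagerId = MgrCTE.EmployeeId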
If (Exists (select *
from information_schema.tables
where table_name = 'tblProducts'))
Begin
Drop Table tblProducts
End
-- Recreate tables
Create Table tblProducts
(
[Id] int identity primary key,
[Name] nvarchar(50),
[Description] nvarchar(250)
)
Print @Id
Set @Id = @Id + 1
End
set @LowerLimitForProductId = 1
set @UpperLimitForProductId = 100000
set @LowerLimitForUnitPrice = 1
set @UpperLimitForUnitPrice = 100
set @LowerLimitForQuantitySold = 1
set @UpperLimitForQuantitySold = 10
Print @Counter
Set @Counter = @Counter + 1
End
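The two WHILE loops that populate the tables are only partially shown above. A minimal sketch of the population script, assuming tblProductSales has ProductId, UnitPrice and QuantitySold columns:
-- Populate tblProducts (400,000 rows, matching the counts mentioned later in this section)
Declare @Id int
Set @Id = 1
While (@Id <= 400000)
Begin
Insert into tblProducts ([Name], [Description])
values ('Product - ' + Cast(@Id as nvarchar(20)), 'Product - ' + Cast(@Id as nvarchar(20)) + ' Description')
Print @Id
Set @Id = @Id + 1
End
-- Populate tblProductSales (600,000 rows) with random values between the limits.
-- Assumes the @LowerLimitFor... and @UpperLimitFor... variables set above were declared as int.
Declare @Counter int
Set @Counter = 1
Declare @RandomProductId int
Declare @RandomUnitPrice int
Declare @RandomQuantitySold int
While (@Counter <= 600000)
Begin
Set @RandomProductId = Cast(((@UpperLimitForProductId - @LowerLimitForProductId) * Rand() + @LowerLimitForProductId) as int)
Set @RandomUnitPrice = Cast(((@UpperLimitForUnitPrice - @LowerLimitForUnitPrice) * Rand() + @LowerLimitForUnitPrice) as int)
Set @RandomQuantitySold = Cast(((@UpperLimitForQuantitySold - @LowerLimitForQuantitySold) * Rand() + @LowerLimitForQuantitySold) as int)
Insert into tblProductSales (ProductId, UnitPrice, QuantitySold)
values (@RandomProductId, @RandomUnitPrice, @RandomQuantitySold)
Print @Counter
Set @Counter = @Counter + 1
End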
Finally, check the data in the tables using a simple SELECT query to make sure the data
has been inserted as expected.
Select * from tblProducts
Select * from tblProductSales
Relational database management systems, including SQL Server, are very good at
handling data in SETS. For example, the following "UPDATE" query updates a set of rows
that matches the condition in the "WHERE" clause at the same time.
Update tblProductSales Set UnitPrice = 50 where ProductId = 101
However, if there is ever a need to process rows on a row-by-row basis, then cursors
are an option. Cursors are very bad for performance and should be avoided whenever possible. Most of
the time, cursors can be very easily replaced using joins.
Let us now look at a simple example of using a SQL Server cursor to process one row at a
time. We will be using the tblProducts and tblProductSales tables for this example. The tables here
show only 5 rows from each table. However, there are 400,000 records in tblProducts and
600,000 records in tblProductSales.
Cursor Example: Let us say, I want to update the UNITPRICE column in tblProductSales table,
based on the following criteria
1. If the ProductName = 'Product - 55', Set Unit Price to 55
2. If the ProductName = 'Product - 65', Set Unit Price to 65
3. If the ProductName is like 'Product - 100%', Set Unit Price to 1000
-- Fetch the row from the result set into the variable
Fetch Next from ProductIdCursor into @ProductId
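A minimal sketch of the complete cursor script (the exact original is not reproduced here), applying the criteria above on a row-by-row basis:
Declare @ProductId int
Declare @ProductName nvarchar(50)
-- Declare a cursor over every row in tblProductSales
Declare ProductIdCursor Cursor For
Select ProductId from tblProductSales
Open ProductIdCursor
Fetch Next from ProductIdCursor into @ProductId
While (@@FETCH_STATUS = 0)
Begin
Select @ProductName = Name from tblProducts where Id = @ProductId
If (@ProductName = 'Product - 55')
Update tblProductSales set UnitPrice = 55 where ProductId = @ProductId
Else If (@ProductName = 'Product - 65')
Update tblProductSales set UnitPrice = 65 where ProductId = @ProductId
Else If (@ProductName like 'Product - 100%')
Update tblProductSales set UnitPrice = 1000 where ProductId = @ProductId
Fetch Next from ProductIdCursor into @ProductId
End
Close ProductIdCursor
Deallocate ProductIdCursor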
The cursor will loop through each row in the tblProductSales table. As there are 600,000 rows to be
processed on a row-by-row basis, it takes around 40 to 45 seconds. We can achieve the same thing very
easily using a join, and this will significantly improve the performance.
To check if the rows have been correctly updated, please use the following query.
Select Name, UnitPrice
from tblProducts join
tblProductSales on tblProducts.Id = tblProductSales.ProductId
where (Name='Product - 55' or Name='Product - 65' or Name like 'Product - 100%')
Update tblProductSales
set UnitPrice =
Case
When Name = 'Product - 55' Then 155
When Name = 'Product - 65' Then 165
When Name like 'Product - 100%' Then 1000
End
from tblProductSales
join tblProducts
on tblProducts.Id = tblProductSales.ProductId
Where Name = 'Product - 55' or Name = 'Product - 65' or Name like 'Product - 100%'
When I executed this query, it took less than a second, whereas the same thing using a
cursor took 45 seconds. Just imagine the amount of impact cursors have on performance.
Cursors should be used as your last option. Most of the time cursors can be very easily
replaced using joins.
To check the result of the UPDATE statement, use the following query.
Select Name, UnitPrice from
tblProducts join
tblProductSales on tblProducts.Id = tblProductSales.ProductId
where (Name='Product - 55' or Name='Product - 65' or
Name like 'Product - 100%')
What is a Transaction?
A transaction is a group of commands that change the data stored in a database. A
transaction is treated as a single unit. A transaction ensures that, either all of the commands
succeed, or none of them. If one of the commands in the transaction fails, all of the commands
fail, and any data that was modified in the database is rolled back. In this way, transactions
maintain the integrity of data in a database.
Let's understand transaction processing with an example. For this purpose, let's create and
populate the tblMailingAddress and tblPhysicalAddress tables.
Create Table tblMailingAddress
(
AddressId int NOT NULL primary key,
EmployeeNumber int,
HouseNumber nvarchar(50),
StreetAddress nvarchar(50),
City nvarchar(10),
PostalCode nvarchar(50)
)
Insert into tblMailingAddress values (1, 101, '#10', 'King Street', 'Londoon', 'CR27DW')
Insert into tblPhysicalAddress values (1, 101, '#10', 'King Street', 'Londoon', 'CR27DW')
An employee with EmployeeNumber 101 has the same address as his physical and mailing
address. His city name is mis-spelled as Londoon instead of London. The following stored
procedure 'spUpdateAddress' updates the physical and mailing addresses. Both the UPDATE
statements are wrapped between a BEGIN TRANSACTION and COMMIT TRANSACTION block,
which in turn is wrapped between a BEGIN TRY and END TRY block.
So, if both the UPDATE statements succeed, without any errors, then the transaction is
committed. If there are errors, then the control is immediately transferred to the catch block.
The ROLLBACK TRANSACTION statement, in the CATCH block, rolls back the transaction,
and any data that was written to the database by the commands is backed out.
Let's now make the second UPDATE statement, fail. CITY column length in
tblPhysicalAddress table is 10. The second UPDATE statement fails, because the value for
CITY column is more than 10 characters.
Alter Procedure spUpdateAddress
as
Begin
Begin Try
Begin Transaction
Update tblMailingAddress set City = 'LONDON12'
where AddressId = 1 and EmployeeNumber = 101
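The procedure body is truncated above. A minimal sketch of the remainder, assuming a CITY value longer than 10 characters for the second UPDATE:
Update tblPhysicalAddress set City = 'LONDON LONDON LONDON'
where AddressId = 1 and EmployeeNumber = 101
Commit Transaction
Print 'Transaction Committed'
End Try
Begin Catch
Rollback Transaction
Print 'Transaction Rolled Back'
End Catch
End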
Now, if we execute spUpdateAddress, the first UPDATE statement succeeds, but the
second UPDATE statement fails. As soon as the second UPDATE statement fails, the control
is immediately transferred to the CATCH block. The CATCH block rolls the transaction back.
So, the change made by the first UPDATE statement is undone.
A transaction is a group of database commands that are treated as a single unit. A successful
transaction must pass the "ACID" test, that is, it must be
A - Atomic
C - Consistent
I - Isolated
D - Durable
Atomic - All statements in the transaction either complete successfully or they are all rolled
back. The task that the set of operations represents is either accomplished or not, but in any
case not left half-done. For example, in the spUpdateInventory_and_Sell stored procedure,
both the UPDATE statements should succeed. If one UPDATE statement succeeds and the
other UPDATE statement fails, the database should undo the change made by the first
UPDATE statement by rolling it back. In short, the transaction should be ATOMIC.
Consistent - All data touched by the transaction is left in a logically consistent state. For
example, if stock available numbers are decremented from tblProductTable, then, there has to
be a related entry in tblProductSales table. The inventory can't just disappear.
Isolated - The transaction must affect data without interfering with other concurrent
transactions, or being interfered with by them. This prevents transactions from making changes
to data based on uncommitted information, for example changes to a record that are
subsequently rolled back. Most databases use locking to maintain transaction isolation.
Durable - Once a change is made, it is permanent. If a system error or power failure occurs
before a set of commands is complete, those commands are undone and the data is restored to
its original state once the system begins running again.
With the introduction of Try/Catch blocks in SQL Server 2005, error handling in SQL Server
is now similar to programming languages like C# and Java. Before understanding error handling
using try/catch, let's step back and understand how error handling was done in SQL Server
2000, using the system function @@Error. Sometimes, system functions that begin with two at
signs (@@) are called global variables. They are not variables and do not have the
same behaviours as variables; instead they are very similar to functions.
Now let's create tblProduct and tblProductSales, that we will be using for the rest of this
demo.
3. In the procedure, we are using the Raiserror() function to return an error message back to the
calling application, if the stock available is less than the quantity we are trying to sell. We have
to pass at least 3 parameters to the Raiserror() function.
RAISERROR('Error Message', ErrorSeverity, ErrorState)
Severity and State are integers. In most cases, when you are returning custom errors, the
severity level is 16, which indicates general errors that can be corrected by the user. In this
case, the error can be corrected, by adjusting the @QuantityToSell, to be less than or equal to
the stock available. ErrorState is also an integer between 1 and 255. RAISERROR only
generates errors with state from 1 through 127.
4. The problem with this procedure is that the transaction is always committed, even if there
is an error somewhere between updating the tblProduct and tblProductSales tables. In fact, the
main purpose of wrapping these 2 statements (the Update tblProduct statement and the Insert into
tblProductSales statement) in a transaction is to ensure that both of the statements are treated
as a single unit. For example, if we have an error when executing the second statement, then
the first statement should also be rolled back.
In SQL Server 2000, to detect errors, we can use the @@Error system function. @@Error returns a
NON-ZERO value if there is an error, otherwise ZERO, indicating that the previous SQL
statement encountered no errors. The stored procedure spSellProductCorrected makes use
of the @@ERROR system function to detect any errors that may have occurred. If there are errors,
the transaction is rolled back, else it is committed. If you comment the line
(Set @MaxProductSalesId = @MaxProductSalesId + 1), and then execute the stored procedure,
there will be a primary key violation error when trying to insert into tblProductSales. As a result,
the entire transaction will be rolled back.
Alter Procedure spSellProductCorrected
@ProductId int,
@QuantityToSell int
as
Begin
-- Check the stock available, for the product we want to sell
Declare @StockAvailable int
Select @StockAvailable = QtyAvailable
from tblProduct where ProductId = @ProductId
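The rest of the procedure is sketched below, assuming tblProductSales has a ProductSalesId primary key column:
-- Raise an error if there is not enough stock
if (@StockAvailable < @QuantityToSell)
Begin
Raiserror('Not enough stock available', 16, 1)
End
Else
Begin
Begin Transaction
-- Reduce the stock of the product being sold
Update tblProduct set QtyAvailable = (QtyAvailable - @QuantityToSell)
where ProductId = @ProductId
-- Compute the next id for tblProductSales (column name assumed)
Declare @MaxProductSalesId int
Select @MaxProductSalesId = IsNull(MAX(ProductSalesId), 0) from tblProductSales
Set @MaxProductSalesId = @MaxProductSalesId + 1
Insert into tblProductSales values (@MaxProductSalesId, @ProductId, @QuantityToSell)
-- @@ERROR is checked immediately after the INSERT, before it is reset
if (@@ERROR <> 0)
Begin
Rollback Transaction
Print 'Rolled Back Transaction'
End
Else
Begin
Commit Transaction
Print 'Transaction Committed'
End
End
End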
Note: @@ERROR is cleared and reset on each statement execution. Check it immediately
following the statement being verified, or save it to a local variable that can be checked later.
On the other hand, when you execute the code below, you get the message 'No Errors' printed.
This is because @@ERROR is cleared and reset on each statement execution.
Insert into tblProduct values(2, 'Mobile Phone', 1500, 100)
--At this point @@ERROR will have a NON ZERO value
Select * from tblProduct
--At this point @@ERROR gets reset to ZERO, because the
--select statement successfully executed
if(@@ERROR <> 0)
Print 'Error Occurred'
Else
Print 'No Errors'
In this example, we are storing the value of @@Error function to a local variable, which is then
used later.
Declare @Error int
Insert into tblProduct values(2, 'Mobile Phone', 1500, 100)
Set @Error = @@ERROR
Select * from tblProduct
if(@Error <> 0)
Print 'Error Occurred'
Else
Print 'No Errors'
We have seen handling errors in SQL Server using the @@Error system function. We will now see
how to achieve the same using Try/Catch blocks.
Syntax:
BEGIN TRY
{ Any set of SQL statements }
END TRY
BEGIN CATCH
[ Optional: Any set of SQL statements ]
END CATCH
[Optional: Any other SQL Statements]
Any set of SQL statements that can possibly throw an exception is wrapped between the
BEGIN TRY and END TRY blocks. If there is an exception in the TRY block, the control
immediately jumps to the CATCH block. If there is no exception, the CATCH block will be skipped,
and the statements after the CATCH block are executed.
Errors trapped by a CATCH block are not returned to the calling application. If any part of
the error information must be returned to the application, the code in the CATCH block must do
so by using the RAISERROR() function.
2. Also notice that, in the scope of the CATCH block, there are several system functions that
can be used to retrieve more information about the error that occurred. These functions return
NULL if they are executed outside the scope of the CATCH block.
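For example, a sketch that raises a divide-by-zero error and inspects it in the CATCH block:
Begin Try
Select 10/0
End Try
Begin Catch
Select ERROR_NUMBER() as ErrorNumber,
ERROR_MESSAGE() as ErrorMessage,
ERROR_SEVERITY() as ErrorSeverity,
ERROR_STATE() as ErrorState,
ERROR_PROCEDURE() as ErrorProcedure,
ERROR_LINE() as ErrorLine
End Catch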
The OVER clause combined with PARTITION BY is used to break up data into partitions.
Syntax : function (...) OVER (PARTITION BY col1, Col2, ...)
For example :
COUNT(Gender) OVER (PARTITION BY Gender) will partition the data by GENDER, i.e. there
will be 2 partitions (Male and Female), and then the COUNT() function is applied over each
partition.
Any of the following functions can be used. Please note this is not the complete list.
COUNT(), AVG(), SUM(), MIN(), MAX(), ROW_NUMBER(), RANK(), DENSE_RANK() etc.
Example : We will use the following Employees table for the examples.
SQL Script to create the Employees table
Create Table Employees
(
Id int primary key,
Name nvarchar(50),
Gender nvarchar(10),
Salary int
)
Go
Write a query to retrieve the total count of employees by Gender. Also, in the result we want the
Average, Minimum and Maximum salary by Gender. The result of the query should be as shown
below.
This can be very easily achieved using a simple GROUP BY query as shown below.
SELECT Gender, COUNT(*) AS GenderTotal, AVG(Salary) AS AvgSal,
MIN(Salary) AS MinSal, MAX(Salary) AS MaxSal
FROM Employees
GROUP BY Gender
What if we want non-aggregated values (like employee Name and Salary) in the result set along
with the aggregated values?
One way to achieve this is by including the aggregations in a subquery and then JOINING it with
the main query as shown in the example below. Look at the amount of T-SQL code we have to
write.
SELECT Name, Salary, Employees.Gender, Genders.GenderTotals,
Genders.AvgSal, Genders.MinSal, Genders.MaxSal
FROM Employees
INNER JOIN
(SELECT Gender, COUNT(*) AS GenderTotals,
AVG(Salary) AS AvgSal,
MIN(Salary) AS MinSal, MAX(Salary) AS MaxSal
FROM Employees
GROUP BY Gender) AS Genders
ON Genders.Gender = Employees.Gender
A better way of doing this is to use the OVER clause combined with PARTITION BY.
SELECT Name, Salary, Gender,
COUNT(Gender) OVER(PARTITION BY Gender) AS GenderTotals,
AVG(Salary) OVER(PARTITION BY Gender) AS AvgSal,
MIN(Salary) OVER(PARTITION BY Gender) AS MinSal,
MAX(Salary) OVER(PARTITION BY Gender) AS MaxSal
FROM Employees
Rank and Dense_Rank functions
For example : If you have 2 rows at rank 1 and you have 5 rows in total.
RANK() returns - 1, 1, 3, 4, 5
DENSE_RANK returns - 1, 1, 2, 3, 4
Syntax :
RANK() OVER (ORDER BY Col1, Col2, ...)
DENSE_RANK() OVER (ORDER BY Col1, Col2, ...)
Example : We will use the following Employees table for the examples.
SQL Script to create the Employees table
Create Table Employees
(
Id int primary key,
Name nvarchar(50),
Gender nvarchar(10),
Salary int
)
Go
DENSE_RANK() on the other hand will not skip ranks if there is a tie. The first 2 rows get rank
1. Third row gets rank 2.
If there are 2 employees with the FIRST highest salary, there are 2 different business
cases:
• If your business case is not to produce any result for the SECOND highest salary, then
use the RANK function.
• If your business case is to return the next Salary after the tied rows as the SECOND
highest Salary, then use the DENSE_RANK function.
Since we have 2 Employees with the FIRST highest salary, the Rank() function will not return any
rows for the SECOND highest Salary.
WITH Result AS
(
SELECT Salary, RANK() OVER (ORDER BY Salary DESC) AS Salary_Rank
FROM Employees
)
SELECT TOP 1 Salary FROM Result WHERE Salary_Rank = 2
Though we have 2 Employees with the FIRST highest salary, the Dense_Rank() function returns
the next Salary after the tied rows as the SECOND highest Salary.
WITH Result AS
(
SELECT Salary, DENSE_RANK() OVER (ORDER BY Salary DESC) AS Salary_Rank
FROM Employees
)
SELECT TOP 1 Salary FROM Result WHERE Salary_Rank = 2
You can also use RANK and DENSE_RANK functions to find the Nth highest Salary among
Male or Female employee groups. The following query finds the 3rd highest salary amount paid
among the Female employees group
WITH Result AS
(
SELECT Salary, Gender,
DENSE_RANK() OVER (PARTITION BY Gender ORDER BY Salary DESC)
AS Salary_Rank
FROM Employees
)
SELECT TOP 1 Salary FROM Result WHERE Salary_Rank = 3
AND Gender = 'Female'
• Returns an increasing integer value starting at 1 based on the ordering of rows imposed
by the ORDER BY clause (if there are no ties)
• ORDER BY clause is required
• PARTITION BY clause is optional
• When the data is partitioned, the integer value is reset to 1 when the partition changes
We will use the following Employees table for the examples.
You will only see the difference when there are ties (duplicate values in the column used in the
ORDER BY clause).
To do this
First delete existing data from the Employees table
DELETE FROM Employees
• ROW_NUMBER : Returns an increasing unique number for each row starting at 1, even
if there are duplicates.
• RANK : Returns an increasing unique number for each row starting at 1. When there are
duplicates, same rank is assigned to all the duplicate rows, but the next row after the duplicate
rows will have the rank it would have been assigned if there had been no duplicates. So RANK
function skips rankings if there are duplicates.
• DENSE_RANK : Returns an increasing unique number for each row starting at 1. When
there are duplicates, same rank is assigned to all the duplicate rows but the DENSE_RANK
function will not skip any ranks. This means the next row after the duplicate rows will have the
next rank in the sequence.
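A single query (a sketch using the Employees table above) shows the three functions side by side:
SELECT Name, Salary,
ROW_NUMBER() OVER (ORDER BY Salary DESC) AS RowNumber,
RANK() OVER (ORDER BY Salary DESC) AS [Rank],
DENSE_RANK() OVER (ORDER BY Salary DESC) AS DenseRank
FROM Employees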
Calculate running total in SQL Server 2012
So when computing a running total, it is better to use a column that has unique data in the
ORDER BY clause.
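A sketch of a running total query that orders by the unique Id column:
SELECT Name, Gender, Salary,
SUM(Salary) OVER (ORDER BY Id) AS RunningTotal
FROM Employees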
NTILE function
NTILE function without PARTITION BY clause : Divides the 10 rows into 3 groups. 4 rows in
the first group, 3 rows each in the 2nd and 3rd groups.
What if the specified number of groups is GREATER THAN the number of rows?
The NTILE function will try to create as many groups as possible with one row in each group.
With 10 rows in the table, NTILE(11) will create 10 groups with 1 row in each group.
The following query partitions the data into 2 partitions (Male & Female). NTILE(3) creates 3
groups in each of the partitions.
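A sketch of the query described above:
SELECT Name, Gender, Salary,
NTILE(3) OVER (PARTITION BY Gender ORDER BY Salary) AS NtileGroup
FROM Employees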
Non-clustered indexes are beneficial for queries with ORDER BY clauses as they can avoid sorting during query execution. When columns are already sorted in a non-clustered index, the database engine can simply scan the index from the first entry to the last to retrieve rows in the correct order. This improves the processing time significantly.
ROW_NUMBER assigns a unique incrementing number to each row. With duplicates, it continues incrementing without skipping. RANK assigns the same rank to duplicate values but skips subsequent rankings equal to the number of duplicates. DENSE_RANK also assigns the same rank to duplicates but does not skip any ranks, providing a more compact ranking without gaps.
Instead of triggers are useful for handling data modifications in views based on multiple tables, which are typically not directly updatable. They allow custom logic to determine how insertions, updates, or deletions should be applied to the base tables, effectively bypassing the default SQL Server restrictions and handling errors gracefully.
In high-transaction environments, using temporary tables and table variables in SQL Server has distinct performance implications. Temporary tables are created in TempDB and support indexing, logging, and transactions, which makes them suitable for operations requiring complex query patterns, indexing, or handling large datasets. However, this also means they may incur additional overhead due to logging and locking resources, leading to slower performance in some scenarios. Table variables, on the other hand, are created in memory but may spill over to TempDB under memory pressure. They do not participate in transactions or generate transaction logs, which can lead to faster performance for smaller datasets with fewer than 100 rows. The lack of statistics on table variables means they may suffer from inefficient query plans in complex queries or when dealing with large datasets, making them less suitable for such scenarios. Overall, while table variables may offer better performance for smaller datasets due to reduced overhead, temporary tables are more appropriate for scenarios demanding transaction support, complex operations, or larger datasets.
Indexes, while improving query performance, have disadvantages such as requiring additional disk space for non-clustered indexes and potentially slowing down DML (INSERT, UPDATE, DELETE) operations. Each change in data must also be reflected in the index, adding overhead to data modification processes.
Stored procedures with output parameters allow multiple values of various data types to be returned, while return values in stored procedures are limited to a single integer value used to indicate the success or failure of the procedure. Output parameters are more versatile as they can return any data type and multiple values if needed, unlike return values that can only return one integer. This flexibility makes output parameters preferable for tasks requiring more complex data handling beyond a simple execution status. Additionally, output parameters can be used to pass data back to the calling environment in any order, as long as the parameter names are used, whereas return values do not offer this flexibility.
The advantage of using INSTEAD OF DELETE triggers in SQL Server when deleting rows from views built on multiple tables is that they allow the deletion of records that cannot be directly deleted because the view references multiple base tables. Views based on multiple tables are generally not updatable using direct DML operations; INSTEAD OF triggers facilitate these operations by executing custom logic to handle the changes on the underlying tables, thus overcoming the limitation of direct updates or deletes in multi-table views.
Schema binding in SQL Server affects the modification of base objects by preventing any alterations that would invalidate the schema-bound objects. When a function or view is schema-bound, it creates a dependency on the underlying tables, ensuring that these tables cannot be changed in ways that would affect the integrity of the function or view. As a result, you cannot drop or alter columns or tables referenced by a schema-bound object without first removing the schema binding. This is useful for maintaining the stability of the database, as it ensures the structures required by the function or view remain consistent.
To compute a running total in SQL Server without duplicating entries in the ordered column, you can use the OVER clause with the SUM function and order the dataset appropriately. The syntax generally involves using the PARTITION BY clause for the grouping column (if any) and ORDER BY for the sequence column, ensuring you avoid redundancy in result entries. For example:
SELECT column1, column2,
SUM(column3) OVER (ORDER BY column1 ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_total
FROM table_name
This syntax maintains unique rows in the dataset while computing the cumulative sum.
When the NTILE function in SQL Server distributes rows and the number of rows isn't divisible by N (the number of groups), some groups will have one more row than others, with the larger groups coming before the smaller groups. For example, if NTILE(3) is applied to 10 rows, the function will create three groups where the first group contains 4 rows, and the second and third groups contain 3 rows each.