Snowflake PDF
--------------------
Demo Class
Data:
-----
Data - Information
Database:
---------
A database is an organized collection of information.
ETL:
-----
Extract Transform Load
Reporting:
-----------
Architecture:
-------------
--------------------------------------------
Day 3: 12-07-2023:
------------------
How to install Notepad++.
Regions:
-------
Table:
------
A table contains columns and rows.
It is used to store data, divided into separate partitions (fields).
Clauses:
--------
1. SELECT
2. FROM
3. ORDER BY
Syntax:
-------
SELECT column_name / 'String'
FROM table_name
ORDER BY column_name;
1. SELECT:
----------
It is used to select columns from a table.
2. FROM:
---------
It is used to specify the table from the database.
Single Column:
--------------
SELECT EMPLOYEE_ID FROM EMPLOYEES;
All Columns:
-------------
SELECT * FROM EMPLOYEES;
3. ORDER BY:
------------
It is used to sort the records.
ASCENDING => A - Z
DESCENDING => Z - A
Characters:
-----------
SELECT FIRST_NAME FROM EMPLOYEES ORDER BY FIRST_NAME ASC;
Number:
-------
SELECT SALARY FROM EMPLOYEES ORDER BY SALARY ASC;
Date:
------
SELECT HIRE_DATE FROM EMPLOYEES ORDER BY HIRE_DATE ASC;
--------------------------------------------
Day 4: 13-07-2023:
------------------
Clauses:
--------
1. SELECT
2. FROM
3. ORDER BY
4. DISTINCT
5. WHERE
4. DISTINCT:
------------
It is used to remove the duplicates from the column.
DUAL:
-----
It is a dummy table.
It is used to hold our own text without a real table.
Text must be enclosed in single quotes.
SELECT 12345;
5. WHERE:
----------
1. Validate the condition
2. Select specific row
SELECT 'HELLO'
FROM DUAL
WHERE 1 = 1;
SELECT 'HELLO'
FROM DUAL
WHERE 'HELLO' = 'HELLO';
SELECT FIRST_NAME
FROM EMPLOYEES;
SELECT FIRST_NAME
FROM EMPLOYEES
WHERE FIRST_NAME = 'Neena';
SELECT *
FROM EMPLOYEES
WHERE FIRST_NAME = 'Bruce';
--------------------------------------------
Day 5: 14-07-2023:
------------------
Is SQL case sensitive or not?
SQL is not case sensitive, but table data is case sensitive.
Operators:
-----------
Operators work with table data.
1. Arithmetic Operator
2. Character Operator
3. Comparison Operator
4. Special Operator
5. Logical Operator
6. Set Operator
1. Arithmetic Operator:
-----------------------
It is used for calculations, on numbers only.
+ Addition
- Subtraction
* Multiplication
/ Division
% Modulo
SELECT 2 * 100;
SELECT 100 / 2;
SELECT 10 % 3;
SELECT
100 + 100 ,
150 - 100 ,
2 * 100 ,
100 / 2 ,
10 % 3;
ALIAS:
------
It is an alternate name for an existing table or column.
1. Column Alias
2. Table Alias
1. Column Alias:
----------------
The AS keyword is used to set the column alias name.
The AS keyword is optional, but using it is good practice.
Column names are displayed in uppercase by default.
An alias cannot contain spaces unless it is enclosed in double quotes.
SELECT
100 + 100 AS ADDITION,
150 - 100 Subtraction,
2 * 100 AS "IT IS Multiplication",
100 / 2 AS It_is_division,
10 % 3
FROM
DUAL;
select 98767676+null;
Output: NULL (any arithmetic with NULL returns NULL)
2. Character Operator:
----------------------
It is used to merge multiple strings or columns.
|| - Concatenation
Hi Steven, your designation is AD_PRES and your salary is 24,000 and your department is 90.
SELECT 'Hi ' || FIRST_NAME || ', your designation is ' || JOB_ID || ' and your salary is ' || salary || ' and your department is ' || department_id
FROM EMPLOYEES
WHERE employee_id = 100;
3. Comparison Operator:
-----------------------
1. Equal to (=)
2. Not equal to (!=), (<>)
3. Less than (<)
4. Greater than (>)
5. Less than or equal to (<=)
6. Greater than or equal to (>=)
1. Equal to (=):
---------------
SELECT 'ABC'
FROM DUAL
WHERE 1 = 1;
SELECT FIRST_NAME
FROM EMPLOYEES
WHERE FIRST_NAME = 'Steven';
SELECT 'HELLO'
FROM DUAL
WHERE 1 <> 1;
SELECT FIRST_NAME
FROM EMPLOYEES
WHERE FIRST_NAME != 'Steven';
SELECT SALARY
FROM Employees
WHERE SALARY < 10000;
SELECT SALARY
FROM Employees
WHERE SALARY > 10000;
--------------------------------------------
Day 6: 17-07-2023:
------------------
4. Special Operator:
---------------------
1. IN
2. BETWEEN_AND
3. LIKE
4. ILIKE
5. ILIKE ANY
6. IS NULL
1. IN:
-------
It is used to pick more than one record from the table, as shown below.
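A minimal example using the EMPLOYEES table from these notes:
SELECT FIRST_NAME
FROM EMPLOYEES
WHERE FIRST_NAME IN ('Steven', 'Neena', 'Lex');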
2. BETWEEN_AND:
---------------
It is used to pick records within a range (both bounds inclusive).
SELECT SALARY
FROM Employees
WHERE SALARY BETWEEN 10000 AND 12000
ORDER BY SALARY ASC;
SELECT first_name
FROM Employees
where first_name between 'Alana' and 'Allan'
ORDER BY first_name ASC;
3. LIKE:
---------
Random search or Wildcard search
select first_name
from employees
where first_name LIKE 'S%';
SELECT FIRST_NAME
FROM EMPLOYEES
WHERE FIRST_NAME LIKE '%n';
4. ILIKE:
----------
It disables the case sensitivity of the pattern match on table data.
ILIKE = LIKE + case-insensitivity
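For example, a case-insensitive version of the earlier LIKE query:
SELECT FIRST_NAME
FROM EMPLOYEES
WHERE FIRST_NAME ILIKE 's%';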
5. ILIKE ANY:
-------------
ILIKE ANY = ILIKE + IN (case-insensitive match against any of several patterns)
SELECT FIRST_NAME
FROM EMPLOYEES
WHERE FIRST_NAME ILIKE ANY ('S%', 'B%')
ORDER BY FIRST_NAME;
6. IS NULL:
-----------
It is used to filter only the NULL records from the table.
What counts as NULL:
0 - NOT NULL
'-' - NOT NULL
' ' (space) - NOT NULL
(no value) - NULL
SELECT COMMISSION_PCT
FROM EMPLOYEES
WHERE COMMISSION_PCT IS NULL;
5. Logical Operator:
--------------------
1. AND
2. OR
3. NOT
1. AND:
-------
If our query has more than one condition, we can use the AND operator.
The AND operator returns output only when all conditions are satisfied.
2. OR:
-------
The OR operator returns output when at least one condition is satisfied.
SELECT 'WATER'
FROM DUAL
WHERE 1 = 2
OR 1 = 1
OR 1 = 3;
3. NOT:
--------
NOT negates a condition; it combines with IN, LIKE, BETWEEN, and IS NULL.
SELECT FIRST_NAME
FROM Employees
WHERE FIRST_NAME NOT IN ('Steven', 'Neena');
SELECT FIRST_NAME
FROM Employees
WHERE FIRST_NAME NOT LIKE 'S%';
SELECT SALARY
FROM Employees
WHERE SALARY NOT BETWEEN 5000 AND 10000
ORDER BY SALARY;
SELECT COMMISSION_PCT
FROM EMPLOYEES
WHERE COMMISSION_PCT IS NOT NULL;
6. Set Operator:
----------------
'A' || 'A' -> strings merge with ||
1 + 1 -> numbers merge with +
SQL query + SQL query -> query results merge with SET operators
TYPES:
-------
1. UNION ALL
2. UNION
3. INTERSECT
4. MINUS
Table creation:
---------------
CREATE OR REPLACE TABLE T1(T1 NUMBER);
CREATE OR REPLACE TABLE T2(T2 NUMBER);
Load Data:
-----------
INSERT ALL
INTO T1 VALUES(1)
INTO T1 VALUES(2)
INTO T1 VALUES(3)
INTO T1 VALUES(4)
INTO T2 VALUES(1)
INTO T2 VALUES(2)
INTO T2 VALUES(3)
INTO T2 VALUES(5)
SELECT * FROM DUAL;
T1
1
2
3
4
T2
1
2
3
5
1. UNION ALL:
--------------
It merges the queries and returns the output of statement 1 and statement 2, keeping duplicates.
SELECT * FROM T1
UNION ALL
SELECT * FROM T2;
1
2
3
4
1
2
3
5
2. UNION:
-----------
It merges the queries and returns the output of statement 1 and statement 2, removing the duplicates.
SELECT * FROM T1
UNION
SELECT * FROM T2;
1
2
3
4
5
3. INTERSECT:
-------------
It merges the queries and displays only the common records of statement 1 and statement 2.
SELECT * FROM T1
INTERSECT
SELECT * FROM T2;
1
2
3
4. MINUS:
---------
It merges the queries and displays only the records of statement 1 that are not in statement 2.
SELECT * FROM T1
MINUS
SELECT * FROM T2;
Rules:
------
1. The column count and data types must match across the statements.
Functions:
----------
Functions perform user-required operations.
Types:
------
1. Pre-Defined Functions (SQL)
2. User-Defined Functions (Snowflake)
1. Pre-Defined Functions:
-------------------------
1. Single Row Function | Scalar Function
2. Multiple Row Function | Group Function | Aggregate Function
3. Window Function | Analytical Function
4. Table Function
5. System Function
1. String Function:
--------------------
Arguments
1. UPPER -> 1
2. LOWER -> 1
3. INITCAP -> 1
4. LENGTH -> 1
5. LTRIM | RTRIM | TRIM -> 1
6. TRANSLATE -> 3
7. REPLACE -> Max 3 | MIN 2
8. REVERSE -> 1
9. SUBSTR -> MAX 3 | MIN 2
10. REGEXP_INSTR -> MAX 4 | MIN 2
1. UPPER:
----------
It converts text of any case to uppercase.
UPPER(ARG)
SELECT
FIRST_NAME,
UPPER(FIRST_NAME)
FROM
EMPLOYEES;
--------------------------------------------
Day 8: 19-07-2023:
------------------
2. LOWER:
---------
It converts text of any case to lowercase.
LOWER(ARG)
Case Sensitive:
---------------
1. ILIKE / ILIKE ANY
2. UPPER / LOWER
3. COLLATE 'en-ci-trim'
SELECT FIRST_NAME
FROM EMPLOYEES
WHERE UPPER(FIRST_NAME) = 'BRUCE';
3. INITCAP:
------------
It converts text of any case to initial-capital (title) case.
INITCAP(ARG)
SELECT
'welcome',
INITCAP('welCome wew wewew eww');
SELECT
FIRST_NAME, INITCAP(FIRST_NAME)
FROM EMPLOYEES;
4. LENGTH:
----------
It counts the characters.
LENGTH(ARG)
SELECT
'WELCOME',
LENGTH('WELCOME');
SELECT
FIRST_NAME AS BEFORE,
LENGTH(FIRST_NAME) AS AFTER
FROM EMPLOYEES;
LTRIM:
------
It is used to remove the empty spaces on the left side.
SELECT
' WELCOME',
LENGTH(' WELCOME'),
LTRIM(' WELCOME'),
LENGTH(LTRIM(' WELCOME'));
RTRIM:
------
It is used to remove the empty spaces on the right side.
select
'welcome ',
length('welcome '),
rtrim('welcome '),
length(rtrim('welcome '));
TRIM:
-----
It is used to remove the empty spaces on both sides.
It never removes spaces in the middle of the text.
select
' welcome ',
length(' welcome '),
trim(' welcome '),
length(trim(' welcome '));
6. TRANSLATE:
--------------
It translates each individual character.
If a character has no equivalent in the replacement set, it is removed.
SELECT
'RGB',
TRANSLATE('RGB','RG','YX');
7. REPLACE:
-----------
It is used to replace an entire substring.
If the 3rd argument is not provided, the matched characters are removed.
SELECT
'RGB',
REPLACE('RGB','R','Y');
SELECT
'RGB',
REPLACE('RGB','RG','Y');
SELECT
'RGB',
REPLACE('RGB','RG','YX');
SELECT
'RGBRRR',
REPLACE('RGBRRR','RB','YX');
Interview Question:
--------------------
How many 'A' characters are in the given string?
'ABACDA' (see the sketch after the LENGTH query below)
--------------------------------------------
Day 9: 20-07-2023:
------------------
SELECT LENGTH('ABACDA');
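One common solution builds on LENGTH and REPLACE (a sketch; the 2-argument REPLACE removes the matched characters, as noted earlier):
SELECT
LENGTH('ABACDA') - LENGTH(REPLACE('ABACDA', 'A')) AS A_COUNT; --> 3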
8. REVERSE:
-----------
It reverses the given string.
REVERSE(ARG1)
SELECT REVERSE('Welcome')
FROM DUAL;
Palindrome:
-----------
LEVEL
MALAYALAM
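A word is a palindrome when it equals its own reverse, so a quick check can combine the comparison operator with REVERSE (a sketch):
SELECT 'MALAYALAM' = REVERSE('MALAYALAM') AS IS_PALINDROME; --> TRUE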
9. SUBSTR:
----------
It is used to break the text and display the required substring.
Max - 3 Args | Min - 2 Args
SUBSTR(ARG1, ARG2, ARG3)
ARG1 - source string ('WELCOME')
ARG2 - start position (1-7)
ARG3 - length to extract (optional)
SELECT
'WELCOME',
SUBSTR('WELCOME',4,3);
SELECT
'WELCOME',
SUBSTR('WELCOME',4);
10. REGEXP_INSTR:
-----------------
It displays the position of the required character or pattern.
Min 2 Args | Max 4 Args
REGEXP_INSTR(ARG1, ARG2, ARG3, ARG4)
ARG1 - source string ('WELCOME')
ARG2 - pattern to search ('E')
ARG3 - position to start from (optional)
ARG4 - occurrence number (optional)
SELECT
'WELCOME',
REGEXP_INSTR('WELCOME', 'C'); --> 4
SELECT
'WELCOME',
REGEXP_INSTR('WELCOME', 'E', 1); --> 2
SELECT
'WELCOME',
REGEXP_INSTR('WELCOME', 'E', 3); --> 7
SELECT
'WELCOME',
REGEXP_INSTR('WELCOME', 'E', 1, 2); --> 7
2. Numeric Function:
--------------------
1. ABS
2. MOD
3. ROUND | TO_NUMBER
4. TRUNC
1. ABS:
-------
It converts a negative value to a positive value (the absolute value).
-1 => 1
-100 => 100
2. MOD:
-------
It displays the remainder of a division.
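For example:
SELECT MOD(10, 3); --> 1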
3. ROUND:
---------
It rounds a number to the nearest whole value (an optional 2nd argument gives the number of decimal places).
If the decimal part is 5 or above, it rounds up to the next value.
If it is below 5, it keeps the current value.
MIN 1 ARG | MAX 2 ARG
1.45 -> 1
1.87 -> 2
1.87 -> 2
SELECT
1.45, ROUND(1.45),
1.87, ROUND(1.87);
Table Creation:
----------------
CREATE OR REPLACE TABLE ODS.HR.EMAIL
AS
SELECT EMAIL || '@EMAIL.COM' AS MAIL
FROM EMPLOYEES;
--------------------------------------------
Day 10: 21-07-2023:
------------------
Interview Question:
--------------------
Display only the user name from the EMAIL table. (A sketch follows.)
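One possible answer using the SUBSTR and REGEXP_INSTR functions covered above (a sketch; SPLIT_PART(MAIL, '@', 1) would also work):
SELECT
MAIL,
SUBSTR(MAIL, 1, REGEXP_INSTR(MAIL, '@') - 1) AS USER_NAME
FROM ODS.HR.EMAIL;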
4. TRUNC:
---------
It removes the decimal values without rounding, even if they are 5 or above.
SELECT
1.45, TRUNC(1.45),
1.87, TRUNC(1.87);
3. Conversion Function:
-----------------------
It is used to convert date formats (SQL) and data types (Snowflake).
US - MM/DD/YYYY
IN - DD/MM/YYYY
ORACLE DD-MON-YY
SNOWFLAKE YYYY-MM-DD
TYPES:
------
1. TO_CHAR
2. TO_DATE
3. CAST | ::
1. TO_CHAR:
-----------
It converts the Snowflake date format to the user-required date format.
TO_CHAR(ARG1,ARG2)
SELECT
HIRE_DATE,
TO_CHAR(HIRE_DATE, 'DD/MM/YYYY') AS IND,
TO_CHAR(HIRE_DATE, 'MM/DD/YYYY') AS US
FROM EMPLOYEES;
Display only month and year from the hire date column.
SELECT
HIRE_DATE,
TO_CHAR(HIRE_DATE, 'MM/YYYY')
FROM
EMPLOYEES;
DD - 12
MM - 07
YY - 23
YYYY - 2023
DY - FRI
MON - JUL
HH - HOURS
MI - MINUTES
SS - SECONDS
AM / PM - ANTE MERIDIEM OR POST MERIDIEM
SELECT
HIRE_DATE,
TO_CHAR(HIRE_DATE, 'DD$MM*YYYY HH+MI-SS AM')
FROM
EMPLOYEES;
2. TO_DATE:
-----------
It converts a user date format into the Snowflake date format and displays it.
TO_DATE(ARG1, ARG2)
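For example, converting an Indian-format string into a Snowflake date:
SELECT TO_DATE('12/07/2023', 'DD/MM/YYYY'); --> 2023-07-12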
4. Date Function:
-----------------
1. ADD_MONTHS
2. MONTHS_BETWEEN
3. LAST_DAY
4. DATE_TRUNC
SELECT CURRENT_DATE();
SHOW PARAMETERS;
ALTER SESSION
SET TIMEZONE = 'Asia/Kolkata';
1. ADD_MONTHS:
---------------
It is used to add months to a date.
ADD_MONTHS(ARG1, ARG2)
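For example (a negative 2nd argument subtracts months):
SELECT
CURRENT_DATE,
ADD_MONTHS(CURRENT_DATE, 2),
ADD_MONTHS(CURRENT_DATE, -2);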
Interview Question:
-------------------
How many years of experience does each employee have since the hire_date?
SELECT
FIRST_NAME,
HIRE_DATE,
ABS(ROUND((HIRE_DATE - CURRENT_DATE)/365)) || ' years' AS YOE
FROM EMPLOYEES;
2. MONTHS_BETWEEN:
------------------
It displays the number of months between 2 dates.
MONTHS_BETWEEN(ARG1, ARG2)
SELECT
ROUND(
MONTHS_BETWEEN(
TO_DATE('2023-01-01','YYYY-MM-DD'),
TO_DATE('2023-12-31','YYYY-MM-DD')
)
);
3. LAST_DAY:
------------
It displays the last day of the month.
SELECT LAST_DAY(CURRENT_DATE);
4. DATE_TRUNC:
---------------
It truncates a date or timestamp to the given part (year, month, day, ...).
DATE_TRUNC(ARG1, ARG2)
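For example:
SELECT
CURRENT_DATE,
DATE_TRUNC('MONTH', CURRENT_DATE), --> first day of the current month
DATE_TRUNC('YEAR', CURRENT_DATE); --> first day of the current year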
1. DECODE:
----------
If the 1st and 2nd arguments match, it displays the 3rd argument.
If they do not match, it displays the 4th (default) argument.
SELECT DECODE(1,2,3,4);
SELECT
FIRST_NAME,
DECODE(FIRST_NAME,'Steven','ST','Neena', 'NN','Lex','LX','Error')
FROM EMPLOYEES;
--------------------------------------------
Day 12: 25-07-2023:
------------------
Interview Question:
-------------------
How many years of experience has each employees since the hire_date use ADD_MONTHS.
SELECT
HIRE_DATE,
CURRENT_DATE,
ROUND(MONTHS_BETWEEN(CURRENT_DATE, HIRE_DATE)/12) AS YOE
FROM EMPLOYEES;
2. CASE:
--------
FIRST_NAME = 'Steven' -> St
FIRST_NAME = 'Neena' -> Nn
FIRST_NAME = 'Bruce' -> Br
SELECT
FIRST_NAME,
CASE
WHEN FIRST_NAME = 'Steven' THEN 'St'
WHEN FIRST_NAME = 'Neena' THEN 'Nn'
WHEN FIRST_NAME = 'Bruce' THEN 'Br'
ELSE
'Not Matched'
END
FROM
EMPLOYEES;
SELECT
SALARY,
CASE
WHEN SALARY > 10000 THEN 'Salary is above 10k'
ELSE
'Salary is below 10k'
END
FROM
EMPLOYEES;
SELECT
COMMISSION_PCT,
CASE
WHEN COMMISSION_PCT IS NULL THEN 0
WHEN COMMISSION_PCT IS NOT NULL THEN COMMISSION_PCT
END AS COMM
FROM
EMPLOYEES;
3. NVL:
--------
It displays the first not-null value.
NVL(ARG1, ARG2)
NULL, 2 -> 2
1, NULL -> 1
1, 2 -> 1
NULL, NULL -> NULL
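For example:
SELECT NVL(NULL, 2), NVL(1, NULL), NVL(1, 2); --> 2, 1, 1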
COMMENTS:
-----------
Comments hide code from execution in the query editor.
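Both single-line (--) and multi-line (/* */) comments are supported:
SELECT 1; -- this part is ignored
/* this whole
block is ignored */
SELECT 2;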
4. COALESCE:
------------
It displays the first not-null value, but it accepts unlimited arguments.
SELECT COALESCE(1,2,3,4,5);--1
SELECT COALESCE(NULL,NULL,3,NULL,NULL);--3
SELECT COALESCE(NULL,NULL,'a',NULL,NULL);--a
SELECT COALESCE(NULL,NULL,NULL,NULL,NULL);--NULL
5. NULLIF:
----------
If arg1 and arg2 match, it displays NULL; otherwise it displays the first argument's value.
NULLIF(ARG1, ARG2)
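For example:
SELECT NULLIF(1, 1), NULLIF(1, 2); --> NULL, 1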
6. Context Function:
---------------------
It is used to get information about the Snowflake account and session.
It does not take any arguments.
1. CURRENT_DATE
2. CURRENT_TIMESTAMP
3. CURRENT_REGION
4. CURRENT_ACCOUNT
5. CURRENT_ROLE
6. CURRENT_SCHEMA
7. CURRENT_DATABASE
SELECT CURRENT_DATE();
SELECT CURRENT_TIMESTAMP();
SELECT CURRENT_REGION();
SELECT CURRENT_ACCOUNT();
SELECT CURRENT_ROLE();
SELECT CURRENT_SCHEMA();
SELECT CURRENT_DATABASE();
2. Multiple Row Function | Aggregate Function:
----------------------------------------------
1. SUM
2. AVG
3. MIN
4. MAX
5. COUNT
Clauses:
--------
1. Group by
2. Having
SELECT SUM(SALARY)
FROM EMPLOYEES;--691,416
SELECT AVG(SALARY)
FROM EMPLOYEES; --6,461
SELECT MIN(SALARY)
FROM EMPLOYEES; --2100
SELECT MAX(SALARY)
FROM EMPLOYEES; --24000
SELECT COUNT(SALARY)
FROM EMPLOYEES; --107
Rules:
------
1. Group functions work on a single argument only.
2. Group functions ignore NULL values.
SELECT COUNT(DEPARTMENT_ID)
FROM EMPLOYEES;-->106 (the NULL department is ignored)
3. (*) works only with the COUNT function, to count all the rows of the table.
SELECT COUNT(*)
FROM EMPLOYEES;--107
SELECT COUNT(1)
FROM EMPLOYEES;
-------------------------------------------
Day 13: 26-07-2023:
--------------------
4. SUM & AVG functions work only on numbers.
SELECT AVG(FIRST_NAME)
FROM EMPLOYEES; --> error
6. If an actual column appears alongside a group function, we must mention that actual column in the GROUP BY clause.
Display, department wise, how many employees are working, the lowest and highest salary, and the average salary of the departments. (A sketch follows.)
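A sketch of the answer (the same query appears later wrapped in a WITH clause):
SELECT
DEPARTMENT_ID,
COUNT(*) AS EMP_COUNT,
MIN(SALARY),
MAX(SALARY),
AVG(SALARY)
FROM EMPLOYEES
GROUP BY DEPARTMENT_ID
ORDER BY DEPARTMENT_ID;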
7. If we pass a condition on a group function, we should use the HAVING clause instead of the WHERE clause.
HAVING:
-------
It is used to validate a condition, but it is created specifically for aggregate functions.
SELECT 1
FROM DUAL
HAVING 1 = 2;
-------------------------------------------
Day 14: 27-07-2023:
--------------------
Over:
--------
Window functions use the OVER() clause, which accepts:
Order by
Partition by
1. RANK:
---------
It returns the rank of each value.
If two values are the same, they display the same rank, and the next rank is skipped.
SELECT
FIRST_NAME AS STUDENT_NAME,
SALARY AS MARK,
RANK() OVER(ORDER BY SALARY DESC)
FROM
EMPLOYEES;
2. DENSE_RANK:
--------------
It returns the rank of each value.
If two values are the same, they display the same rank, and the next rank is never skipped.
SELECT
FIRST_NAME,
SALARY,
DENSE_RANK() OVER(ORDER BY SALARY DESC)
FROM
EMPLOYEES;
3. ROW_NUMBER:
--------------
It generates a unique sequential number for every row.
SELECT
FIRST_NAME,
LAST_NAME,
ROW_NUMBER() OVER(ORDER BY FIRST_NAME)
FROM
EMPLOYEES;
4. LEAD:
---------
It accesses a following row's value within the same result row.
LEAD(ARG1, ARG2, ARG3) - column, offset, default value
SELECT
FIRST_NAME,
LEAD(FIRST_NAME, 2, 'No name') OVER(ORDER BY FIRST_NAME)
FROM
EMPLOYEES;
5. LAG:
-------
It accesses a prior row's value within the same result row.
SELECT
FIRST_NAME,
LAG(FIRST_NAME, 2, 'No name') OVER(ORDER BY FIRST_NAME)
FROM
EMPLOYEES;
6. LISTAGG:
-----------
It converts multiple rows into a single row.
LISTAGG(ARG1, ARG2) - column, delimiter
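For example, collapsing the employee names of each department into one row (a sketch):
SELECT
DEPARTMENT_ID,
LISTAGG(FIRST_NAME, ', ') AS NAMES
FROM EMPLOYEES
GROUP BY DEPARTMENT_ID;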
PARTITION BY:
--------------
It is used to break the data into small partitions and display a separate rank for each partition.
SELECT
DEPARTMENT_ID,
FIRST_NAME,
SALARY,
DENSE_RANK() OVER(PARTITION BY DEPARTMENT_ID ORDER BY SALARY DESC) AS RNK
FROM
EMPLOYEES;
The WHERE clause does not support window functions & aggregate functions.
SELECT
DEPARTMENT_ID,
FIRST_NAME,
SALARY,
DENSE_RANK() OVER(PARTITION BY DEPARTMENT_ID ORDER BY SALARY DESC) AS RNK
FROM
EMPLOYEES
WHERE
RNK = 1; --> ERROR
SELECT
DEPARTMENT_ID,
FIRST_NAME,
SALARY,
DENSE_RANK() OVER(PARTITION BY DEPARTMENT_ID ORDER BY SALARY DESC) AS RNK
FROM
EMPLOYEES
HAVING
RNK = 1;--> ERROR
Workarounds:
1. WITH clause
2. QUALIFY clause
3. Inline View
In Oracle, the WHERE clause does not support alias names; Snowflake accepts them.
WITH A
AS
(
SELECT
DEPARTMENT_ID,
FIRST_NAME,
SALARY,
DENSE_RANK() OVER(PARTITION BY DEPARTMENT_ID ORDER BY SALARY DESC) AS RNK
FROM
EMPLOYEES)
SELECT * FROM A
WHERE
RNK = 1;
WITH X
AS
(
SELECT DEPARTMENT_ID, COUNT(*) AS CNT, MIN(SALARY), MAX(SALARY), AVG(SALARY), SUM(SALARY)
FROM EMPLOYEES
GROUP BY DEPARTMENT_ID
ORDER BY DEPARTMENT_ID
)
SELECT * FROM X
WHERE CNT > 5;
QUALIFY:
----------
It is used to validate conditions on analytical (window) functions only.
SELECT
DEPARTMENT_ID,
FIRST_NAME,
SALARY,
DENSE_RANK() OVER(PARTITION BY DEPARTMENT_ID ORDER BY SALARY DESC) AS RNK
FROM
EMPLOYEES
QUALIFY
RNK = 1;
-------------------------------------------
Day 15: 28-07-2023:
--------------------
4. Table Function:
------------------
1. VALIDATE
2. VALIDATE_PIPE_LOAD
3. SPLIT_TO_TABLE
3. SPLIT_TO_TABLE:
------------------
SPLIT_TO_TABLE(ARG1, ARG2)
SELECT *
FROM TABLE(SPLIT_TO_TABLE('A,B,C',','));
SELECT VALUE
FROM TABLE(SPLIT_TO_TABLE('1,2,3,4', ','))
UNION
SELECT VALUE
FROM TABLE(SPLIT_TO_TABLE('1,2,3,5', ','));
Input 'A,B,C' -> output rows:
A
B
C
5. SYSTEM FUNCTION:
-------------------
1. SYSTEM$TYPEOF
2. SYSTEM$CANCEL_QUERY
1. SYSTEM$TYPEOF:
-----------------
It displays what data type is used for a given value or column.
SELECT SYSTEM$TYPEOF(123);
SELECT SYSTEM$TYPEOF('ABC');
SELECT
SALARY, SYSTEM$TYPEOF(SALARY),
HIRE_DATE, SYSTEM$TYPEOF(HIRE_DATE),
FIRST_NAME,SYSTEM$TYPEOF(FIRST_NAME)
FROM
EMPLOYEES;
2. SYSTEM$CANCEL_QUERY:
------------------------
It is used to stop a query while it is running.
SELECT SYSTEM$CANCEL_QUERY('01adec4b-3200-d22f-0000-00056ce6b091');
Joins:
-------
A join combines multiple tables and displays the result as a single data set.
Approaches:
------------
ANSI Method (American National Standards Institute)
Oracle Method
1. CROSS JOIN:
--------------
CREATE OR REPLACE DATABASE ODS;
Oracle Method:
--------------
SELECT
A,B
FROM
T1 , T2;
ANSI Method:
-------------
SELECT
A,B
FROM
T1 CROSS JOIN T2;
SELECT
FIRST_NAME,
EMPLOYEES.DEPARTMENT_ID,
DEPARTMENTS.DEPARTMENT_ID,
DEPARTMENT_NAME
FROM
ODS_SH.HR.EMPLOYEES,
ODS.PUBLIC.DEPARTMENTS
WHERE
EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID;
-------------------------------------------
Day 16: 31-07-2023:
--------------------
Oracle Method:
------------------
SELECT
SID,
SNAME,
STUDENT.CID,
COURSE.CID,
CNAME
FROM
STUDENT,
COURSE
WHERE
STUDENT.CID = COURSE.CID;
ANSI Method:
------------
SELECT
SID,
SNAME,
STUDENT.CID,
COURSE.CID,
CNAME
FROM
STUDENT INNER JOIN
COURSE
ON
STUDENT.CID = COURSE.CID;
3. OUTER JOIN:
----------------
3.1 LEFT OUTER JOIN / LEFT JOIN:
--------------------------------
It displays the matched records plus the unmatched records from the left-side table.
ORACLE:
-------
SELECT
SID,
SNAME,
STUDENT.CID,
COURSE.CID,
CNAME
FROM
STUDENT,
COURSE
WHERE
STUDENT.CID = COURSE.CID (+);
SELECT
COURSE.CID,
CNAME,
SID,
SNAME,
STUDENT.CID
FROM
COURSE, STUDENT
WHERE
COURSE.CID = STUDENT.CID (+);
ANSI Method:
---------------
SELECT
COURSE.CID,
CNAME,
SID,
SNAME,
STUDENT.CID
FROM
COURSE LEFT OUTER JOIN STUDENT
ON
COURSE.CID = STUDENT.CID;
3.2 RIGHT OUTER JOIN:
--------------------------
It displays the matched records plus the unmatched records from the right-side table.
ORACLE:
---------
SELECT
SID,
SNAME,
STUDENT.CID,
COURSE.CID,
CNAME
FROM
STUDENT, COURSE
WHERE
STUDENT.CID (+) = COURSE.CID;
ANSI:
-----
SELECT
SID,
SNAME,
STUDENT.CID,
COURSE.CID,
CNAME
FROM
STUDENT RIGHT OUTER JOIN COURSE
ON
STUDENT.CID = COURSE.CID;
3.3 FULL OUTER JOIN:
---------------------
It displays the matched records plus the unmatched records from both sides.
ANSI:
-----
SELECT
SID,
SNAME,
STUDENT.CID,
COURSE.CID,
CNAME
FROM
STUDENT FULL JOIN COURSE
ON
STUDENT.CID = COURSE.CID;
INTERVIEW QUESTION:
------------------
T1: 1, 1, 1, 2, 2, 3, NULL
T2: 1, 2, NULL
Inner Join: 5
Right Join: 6
Left Join: 7
Full Join: 8
Cross Join: 21
-------------------------------------------
Day 17: 01-08-2023:
--------------------
4. SELF JOIN:
-------------
It is used to join a table with itself.
SELF JOIN - INNER JOIN
Table Alias:
-------------
It provides an alternate name for an existing table.
Oracle:
-------
SELECT
EMP.EMPLOYEE_ID,
EMP.FIRST_NAME,
EMP.MANAGER_ID,
MAN.EMPLOYEE_ID AS EMP_ID,
MAN.FIRST_NAME AS MANAGER_NAME
FROM
EMPLOYEES AS EMP,
EMPLOYEES AS MAN
WHERE
EMP.MANAGER_ID = MAN.EMPLOYEE_ID;
ANSI:
------
SELECT
EMP.EMPLOYEE_ID,
EMP.FIRST_NAME,
EMP.MANAGER_ID,
MAN.EMPLOYEE_ID AS EMP_ID,
MAN.FIRST_NAME AS MANAGER_NAME
FROM
EMPLOYEES AS EMP INNER JOIN
EMPLOYEES AS MAN
ON
EMP.MANAGER_ID = MAN.EMPLOYEE_ID;
3 Tables Joining:
-----------------
HR.EMPLOYEES
DEPARTMENT_ID
HR.DEPARTMENTS
LOCATION_ID
HR.LOCATIONS
ORACLE:
-------
SELECT
FIRST_NAME,
EMPLOYEES.DEPARTMENT_ID,
DEPARTMENTS.DEPARTMENT_ID,
DEPARTMENT_NAME,
DEPARTMENTS.LOCATION_ID,
LOCATIONS.LOCATION_ID,
CITY
FROM
HR.EMPLOYEES,
HR.DEPARTMENTS,
HR.LOCATIONS
WHERE
EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID
AND
DEPARTMENTS.LOCATION_ID = LOCATIONS.LOCATION_ID;
ANSI:
------
SELECT
FIRST_NAME,
EMPLOYEES.DEPARTMENT_ID,
DEPARTMENTS.DEPARTMENT_ID,
DEPARTMENT_NAME,
DEPARTMENTS.LOCATION_ID,
LOCATIONS.LOCATION_ID,
CITY
FROM HR.EMPLOYEES INNER JOIN HR.DEPARTMENTS
ON EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID
INNER JOIN HR.LOCATIONS
ON DEPARTMENTS.LOCATION_ID = LOCATIONS.LOCATION_ID;
4 TABLES:
---------
HR.EMPLOYEES
DEPARTMENT_ID
HR.DEPARTMENTS
LOCATION_ID
HR.LOCATIONS
COUNTRY_ID
HR.COUNTRIES
ORACLE:
---------
SELECT
FIRST_NAME,
EMPLOYEES.DEPARTMENT_ID,
DEPARTMENTS.DEPARTMENT_ID,
DEPARTMENT_NAME,
DEPARTMENTS.LOCATION_ID,
LOCATIONS.LOCATION_ID,
CITY,
LOCATIONS.COUNTRY_ID,
COUNTRIES.COUNTRY_ID,
COUNTRY_NAME
FROM
HR.EMPLOYEES,
HR.DEPARTMENTS,
HR.LOCATIONS,
HR.COUNTRIES
WHERE
EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID
AND
DEPARTMENTS.LOCATION_ID = LOCATIONS.LOCATION_ID
AND
LOCATIONS.COUNTRY_ID = COUNTRIES.COUNTRY_ID;
ANSI:
-----
SELECT
FIRST_NAME,
EMPLOYEES.DEPARTMENT_ID,
DEPARTMENTS.DEPARTMENT_ID,
DEPARTMENT_NAME,
DEPARTMENTS.LOCATION_ID,
LOCATIONS.LOCATION_ID,
CITY,
LOCATIONS.COUNTRY_ID,
COUNTRIES.COUNTRY_ID,
COUNTRY_NAME
FROM HR.EMPLOYEES INNER JOIN HR.DEPARTMENTS
ON EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID
INNER JOIN HR.LOCATIONS
ON DEPARTMENTS.LOCATION_ID = LOCATIONS.LOCATION_ID
INNER JOIN HR.COUNTRIES
ON LOCATIONS.COUNTRY_ID = COUNTRIES.COUNTRY_ID;
5 TABLES:
---------
HR.EMPLOYEES
DEPARTMENT_ID
HR.DEPARTMENTS
LOCATION_ID
HR.LOCATIONS
COUNTRY_ID
HR.COUNTRIES
REGION_ID
HR.REGIONS
------------------------------------------
Day 18: 02-08-2023:
-------------------
Subquery:
---------
A query within another query is known as a subquery.
The subquery executes first; after that, the main query executes.
1.
SELECT * FROM EMPLOYEES
WHERE FIRST_NAME = 'Adam';
2.
SELECT * FROM DEPARTMENTS
WHERE DEPARTMENT_ID = 50; --> Hard Code / Static
Types:
------
1. Scalar Subquery
2. Inline View
3. Nested Subquery
4. Single Row Subquery
5. Multiple Row Subquery
6. Correlated Subquery
1. Scalar Subquery:
-------------------
SELECT (SELECT 1 FROM DUAL) + (SELECT 1 FROM DUAL)
FROM DUAL;
2. Inline View:
---------------
SELECT *
FROM (SELECT * FROM EMPLOYEES WHERE DEPARTMENT_ID = 90);
1. Inline View
2. With Clause
3. Qualify
WITH TMP
AS
(
SELECT FIRST_NAME, SALARY, DENSE_RANK() OVER(ORDER BY SALARY DESC) AS RNK
FROM EMPLOYEES
)
SELECT * FROM TMP
WHERE RNK = 3;
SELECT * FROM
(SELECT FIRST_NAME, SALARY, DENSE_RANK() OVER(ORDER BY SALARY DESC) AS RNK
FROM EMPLOYEES)
WHERE RNK = 3;
3. Nested Subquery:
-------------------
SELECT 'WELCOME'
FROM DUAL
WHERE 1 = (SELECT 1 FROM DUAL
WHERE 2 = (SELECT 2 FROM DUAL));
4. Single Row Subquery:
-----------------------
A subquery that generates only a single row is called a single row subquery.
SELECT *
FROM EMPLOYEES
WHERE FIRST_NAME = 'Adam';
SELECT *
FROM DEPARTMENTS
WHERE DEPARTMENT_ID = 50; --> HARD CODE
SELECT *
FROM DEPARTMENTS
WHERE DEPARTMENT_ID = (SELECT DEPARTMENT_ID
FROM EMPLOYEES
WHERE FIRST_NAME = 'Adam'); --> Dynamic
SELECT *
FROM EMPLOYEES
WHERE DEPARTMENT_ID = (SELECT DEPARTMENT_ID FROM DEPARTMENTS
WHERE DEPARTMENT_NAME ILIKE 'ACCOUNTING');
Case-insensitive options:
ILIKE
Case-conversion functions (UPPER / LOWER)
COLLATE 'en-ci-trim'
SELECT *
FROM EMPLOYEES
WHERE FIRST_NAME ILIKE ANY ('DAVID', 'SARAH');
6. Correlated Subquery:
-----------------------
A correlated subquery references columns of the outer query, so it runs once per outer row.
Display the active departments:
SELECT *
FROM DEPARTMENTS;
SELECT *
FROM EMPLOYEES;
SELECT *
FROM DEPARTMENTS
WHERE DEPARTMENT_ID IN (SELECT DEPARTMENT_ID
FROM EMPLOYEES
WHERE EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID);
SELECT *
FROM DEPARTMENTS
WHERE DEPARTMENT_ID NOT IN (SELECT DEPARTMENT_ID
FROM EMPLOYEES
WHERE EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID);
-----------------------------------------------------------------
Day 19: 03-08-2023:
--------------------
Lv1:
----
Self Intro - 5 mins (3 mins)
Snowflake
Data Warehouse
SQL
Lv2:
----
Self Intro - 5 mins (3 mins)
Project Explanation
Snowflake
Data Warehouse
SQL
HR:
----
Data Warehouse Architecture:
----------------------------
Day 20: 04-08-2023:
--------------------
Data Warehouse Approaches:
---------------------------
SQL Commands:
--------------
DQL -> Data Query Language
DDL -> Data Definition Language
DML -> Data Manipulation Language
TCL -> Transaction Control Language
DCL -> Data Control Language
1. CREATE:
-----------
It is used to create an object.
Objects:
--------
TABLE
SCHEMA
DATABASE
VIEWS
STORED PROCEDURE
UDFs
Syntax:
-------
CREATE OBJECT OBJECT_NAME;
Database Creation:
-------------------
CREATE DATABASE DB;
Schema Creation:
-----------------
CREATE SCHEMA SC;
Table Create:
--------------
CREATE TABLE TABLE_NAME
(
COLUMN_NAME DATA_TYPE,
COLUMN_NAME DATA_TYPE,
COLUMN_NAME DATA_TYPE,
.
.
COLUMN_NAME DATA_TYPE -- up to 1000 columns (per the class notes)
);
-------------------------------
Day 22: 08-08-2023:
--------------------
DATA:
-----
1. STRUCTURED DATA
2. SEMI-STRUCTURED DATA
3. UNSTRUCTURED DATA
1. STRUCTURED DATA:
-------------------
number
character
date
2. SEMI-STRUCTURED DATA:
------------------------
json
xml
avro
parquet
3. UNSTRUCTURED DATA:
---------------------
mp3
mp4
files
image
DATA_TYPE:
----------
A data type controls what kind of data a column allows.
Snowflake accepts only UTF-8 characters.
Types:
-------
1. Numeric
2. String
3. Date & Time
4. Semi-Structured
1. Numeric:
------------
It accepts only numeric data.
Maximum precision - 38
NUMBER / NUMERIC
NUMBER(PRECISION)
NUMBER(PRECISION, SCALE)
INTEGER
CREATE TABLE T1
(
AGE NUMBER
);
CREATE TABLE T2
(
AGE NUMERIC
);
CREATE TABLE T3
(
PH_NUMBER NUMBER(10)
);
CREATE TABLE T4
(
AMOUNT NUMBER(5,2)
);
CREATE TABLE T5
(
AMOUNT INTEGER
);
2. String:
----------
It accepts character data; numbers and dates can also be stored as text.
1. Char | Character
2. String | Text | VARCHAR
1. Char | Character:
--------------------
It accepts only a single character by default (CHAR = CHAR(1)).
4. Semi-Structured:
---------------------
1. VARIANT
CREATE OR REPLACE TABLE T1
(
DOB VARIANT
);
2. ALTER:
---------
DATABASE RENAME:
----------------
ALTER DATABASE SFDBS
RENAME TO SFDBS2;
SCHEMA RENAME:
----------------
ALTER SCHEMA HR
RENAME TO HR2;
Table Alter:
------------
Alter with ADD
Alter with Modify
Alter with Rename
Alter with Drop
Table structure:
----------------
5. DESCRIBE | DESC:
--------------------
DESC TABLE T1;
ALTER TABLE T1
MODIFY COLUMN NAME SET DATA TYPE VARCHAR(10);
-------------------------------
Day 23: 09-08-2023:
--------------------
CREATE OR REPLACE TABLE T1
(
A NUMBER,
B NUMBER,
C NUMBER
);
3. DROP:
----------
DROP TABLE T1;
4. COMMENT:
-----------
CREATE OR REPLACE DATABASE DB1
COMMENT = 'This db for testing purpose';
Existing Database:
------------------
COMMENT ON DATABASE ODS IS 'This db for testing purpose';
5. DESCRIBE | DESC:
-------------------
DESC TABLE ODS.HR.EMPLOYEES;
6. SHOW:
--------
SHOW DATABASES;
SHOW TABLES;
7. USE:
---------
It is used to change the database in the current session.
USE DATABASE DB2;
1. INSERT:
----------
FILE INGESTION:
---------------
It loads data from outside sources into Snowflake.
Types:
----------
1. Limited Data
2. Bulk Data
3. Continuous Data
1. Limited Data:
-----------------
Manual Load
Snowsight Load (Upload , Copy)
SnowSQL Load (put)
INSERT INTO T1
VALUES(5, 'E', 9876543210);
1. Classic Console
2. Snowsight
-------------------------------
Day 24: 10-08-2023:
--------------------
Snowsight Load:
---------------
step 1: source file
step 2: create table
step 3: snowsight load
Home page -> Data -> Select the database, schema, Table -> Load Data -> Browse -> Select the source file -> Select the file format -> Load.
File Format:
------------
It is the communicator between the source file and Snowflake: it describes the file type, delimiter, header, and so on. (A sketch follows.)
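A minimal named file format for the CSV loads in these notes (the name FF_CSV is an assumption):
CREATE OR REPLACE FILE FORMAT FF_CSV
TYPE = CSV
FIELD_DELIMITER = ','
SKIP_HEADER = 1;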
Data:
-----
Good Data
Bad Data - Table Function
-------------------------------
Day 25: 11-08-2023:
--------------------
Snowsql Load:
--------------
Google -> snowsql package download -> Select your system OS -> Download the package -> Install the SnowSQL -> snowsql login from the command prompt.
https://fa43406.ap-southeast-1.snowflakecomputing.com
IDE Tool:
---------
1. GUI -> Graphical User Interface -> Snowsight
2. CLI -> Command Line Interface -> Snowsql
1. D drive
2. Stage
3. Snowflake DB
Stage:
-------
A staging area stores the data in a particular place before loading.
A stage is one of the Snowflake objects.
Types:
------
1. Internal Stage
2. External Stage
1. Internal Stage:
------------------
1. Table Stage
2. User Stage
3. Named Stage
3. Named Stage:
----------------
It is used to load data from the local drive into Snowflake. (A creation sketch follows.)
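Creating the named stage used in the examples below:
CREATE OR REPLACE STAGE STG;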
6. PUT:
-------
It is used to load data from the local drive into a stage.
The file keyword should be in lowercase letters.
It works only in SnowSQL. Prerequisites (a usage sketch follows the list):
1. Source file
2. Table
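A usage sketch, assuming a local file path like the one used later in these notes:
PUT file://D:\Snowflake\src.csv @STG;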
8. LIST:
----------
It shows all the files placed in a stage.
LIST @STG;
5. COPY:
---------
INSERT INTO SRC(ID, NAME)
VALUES(1, 2);
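The COPY command itself loads the staged file into the table; a sketch using the stage and file names from these notes:
COPY INTO SRC
FROM @STG/src.csv.gz
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);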
-------------------------------
Day 26: 16-08-2023:
--------------------
How to view the source file directly in the stage:
------------------------------------------------
SELECT $1, $2 FROM @STG/src.csv.gz;
Extract:
-------
PUT file://D:\Snowflake\Newfolder\emp2.csv @stg;
TRANSFORMATIONS:
-----------------
First_name & Last_name -> Merge 2 columns -> ||
EMAIL -> Merge email with own text -> ||
PHONE_NUMBER -> Remove the dots -> Replace
Hire_date -> Indian Date Format -> TO_CHAR -> X
Salary -> Salary with $ symbol -> ||
COMMISSION_PCT -> 0 instead of Null -> NVL/COALESCE
MANAGER_ID -> 0 instead of null -> NVL/COALESCE
DEPARTMENT_ID -> 0 instead of null -> NVL/COALESCE
SELECT
$1,
$2 || ' ' || $3,
$4||'@EMAIL.COM',
REPLACE($5,'.',''),
$6,
$7,
'$'||$8,
COALESCE($9,'0'),
NVL($10,'0'),
NVL($11,'0')
FROM
@stg/emp2.csv.gz;
LOAD:
-----
CREATE OR REPLACE TABLE EMP
(
EMPLOYEE_ID NUMBER,
NAME VARCHAR,
EMAIL VARCHAR,
PHONE_NUMBER NUMBER,
HIRE_DATE VARCHAR,
JOB_ID VARCHAR,
SALARY VARCHAR,
COMMISSION_PCT NUMBER,
MANAGER_ID NUMBER,
DEPARTMENT_ID NUMBER
);
-------------------------------
Day 27: 17-08-2023:
--------------------
1. src file
2. snowflake internal stage
3. internal stage - snowflake stg table
ls @stg;
1. Table Stage:
---------------
It is created by Snowflake automatically whenever we create a table (referenced as @%TABLE_NAME).
2. User Stage:
--------------
It is created by Snowflake for each user whenever we open a new worksheet (referenced as @~).
Force:
------
With FORCE = TRUE we can load the same file again and again into the table.
9. REMOVE:
----------
It is used to remove files from a stage.
remove @stg/emp2.csv.gz;
Purge:
------
With PURGE = TRUE the file is removed from the stage after it is loaded into the table.
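For example:
COPY INTO EMP
FROM @STG/emp2.csv.gz
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
PURGE = TRUE;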
7. GET:
-------
It is used to download a file from an internal stage to the local drive.
It works only in SnowSQL.
We can download data only from an internal stage.
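A sketch (the local folder is an assumption):
GET @STG/src.csv.gz file://D:\Snowflake\Downloads\;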
-------------------------------
Day 28: 18-08-2023:
--------------------
2. Bulk Data:
-------------
Services:
---------
S3 -> Store
IAM -> Permission provider
AWS Load:
----------
Step 1:
-------
Upload the file into S3:
-------------------------
S3 -> Create Bucket -> Open the bucket -> Create Folder -> Open Folder -> Upload -> Add Files -> Select your File -> Upload.
Step 2:
-------
Link the Snowflake and AWS accounts using a Storage Integration object.
IAM -> ROLES -> CREATE ROLE -> AWS ACCOUNT -> EXTERNAL ID: 0000 -> AMAZONS3FULLACCESS -> ROLE NAME -> Create Role.
Copy the STORAGE_AWS_IAM_USER_ARN & STORAGE_AWS_EXTERNAL_ID values and paste them into the IAM service.
IAM -> ROLES -> OPEN YOUR ROLE -> TRUST RELATIONSHIPS -> EDIT TRUST POLICY -> Paste the snowflake ids -> Update Policy.
Admin Team
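The two values come from describing the integration (INTG is the integration name used later in these notes):
DESC INTEGRATION INTG;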
Step 3:
-------
External Stage creation:
--------------------------
CREATE OR REPLACE STAGE EXSTG
URL = ('s3://18aug2023/csv/')
STORAGE_INTEGRATION = INTG;
list @exstg;
Admin Team
Step 4:
--------
Table creation:
----------------
CREATE OR REPLACE TABLE T1(ID NUMBER, NAME VARCHAR);
COPY INTO T1
FROM @EXSTG/src.csv
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
Our Work
-----------------------------------
Day 29: 19-08-2023:
--------------------
Handling the bad data:
----------------------
1. Validation_Mode
2. Validate
3. Validation_pipe_load
AWS Load:
----------
Step 1:
-------
Upload the file into S3:
-------------------------
S3 -> Create Bucket -> Open the bucket -> Create Folder -> Open Folder -> Upload -> Add Files -> Select your File -> Upload.
Step 2:
-------
Link the Snowflake and AWS accounts using a Storage Integration object.
IAM -> ROLES -> CREATE ROLE -> AWS ACCOUNT -> EXTERNAL ID: 0000 -> AMAZONS3FULLACCESS -> ROLE NAME -> Create Role.
Copy the STORAGE_AWS_IAM_USER_ARN & STORAGE_AWS_EXTERNAL_ID values and paste them into the IAM service.
IAM -> ROLES -> OPEN YOUR ROLE -> TRUST RELATIONSHIPS -> EDIT TRUST POLICY -> Paste the snowflake ids -> Update Policy.
Step 3:
-------
External Stage creation:
--------------------------
CREATE OR REPLACE STAGE STG
STORAGE_INTEGRATION = INTG
URL = ('s3://19aug2023/csv/');
LS @STG;
Step 4:
--------
Table creation:
----------------
CREATE OR REPLACE TABLE T1
(
ID INT,
NAME VARCHAR
);
COPY INTO T1
FROM @STG/bad_src_file.csv
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
COPY INTO T1
FROM @STG/bad_src_file.csv
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
ON_ERROR = ABORT_STATEMENT;
COPY INTO T1
FROM @STG/bad_src_file.csv
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
ON_ERROR = CONTINUE
FORCE = TRUE;
VALIDATION_MODE:
----------------
COPY INTO T1
FROM @STG/bad_src_file.csv
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
ON_ERROR = CONTINUE
VALIDATION_MODE = RETURN_ALL_ERRORS;
INSERT INTO T2
VALUES(1, 'A'),(2, 'B');
INSERT INTO T2
SELECT * FROM T1;
01ae6736-3200-d8aa-0005-6ce600077106
-----------------------------------
Day 30: 20-08-2023:
--------------------
Resume Preparation
-----------------------------------
Day 31: 21-08-2023:
--------------------
Validation, SnowPipe:
---------------------
1. Validation_Mode
2. Validation
3. Validation_Pipe_Load
Step 1:
-------
Upload the file into S3:
-------------------------
S3 -> Create Bucket -> Open the bucket -> Create Folder -> Open Folder -> Upload -> Add Files -> Select your File -> Upload.
Step 2:
-------
Link the Snowflake and AWS accounts using a Storage Integration object.
IAM -> ROLES -> CREATE ROLE -> AWS ACCOUNT -> EXTERNAL ID: 0000 -> AMAZONS3FULLACCESS -> ROLE NAME -> Create Role.
Copy the STORAGE_AWS_IAM_USER_ARN & STORAGE_AWS_EXTERNAL_ID values and paste them into the IAM service.
IAM -> ROLES -> OPEN YOUR ROLE -> TRUST RELATIONSHIPS -> EDIT TRUST POLICY -> Paste the snowflake ids -> Update Policy.
Step 3:
-------
External Stage creation:
--------------------------
CREATE OR REPLACE STAGE STG
STORAGE_INTEGRATION = INTG
URL = ('s3://21aug2023/');
LS @STG;
Step 4:
--------
Table creation:
----------------
CREATE OR REPLACE TABLE T1
(
ID INT,
NAME VARCHAR
);
2. Validation:
--------------
SELECT * FROM TABLE(VALIDATE(TABLE_NAME, JOB_ID => '<copy_query_id>'));
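For example, checking the most recent COPY into T1 ('_last' is a Snowflake shortcut for the last query ID in the session):
SELECT * FROM TABLE(VALIDATE(T1, JOB_ID => '_last'));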
Step 2:
-------
Link the Snowflake and AWS accounts using a Storage Integration object.
IAM -> ROLES -> CREATE ROLE -> AWS ACCOUNT -> EXTERNAL ID: 0000 -> AMAZONS3FULLACCESS -> ROLE NAME -> Create Role.
Copy the STORAGE_AWS_IAM_USER_ARN & STORAGE_AWS_EXTERNAL_ID values and paste them into the IAM service.
IAM -> ROLES -> OPEN YOUR ROLE -> TRUST RELATIONSHIPS -> EDIT TRUST POLICY -> Paste the snowflake ids -> Update Policy.
Step 3:
-------
External Stage creation:
--------------------------
CREATE OR REPLACE STAGE STG
STORAGE_INTEGRATION = INTG
URL = ('s3://21aug2023/CSV');
LS @STG;
Step 4:
--------
Table creation:
----------------
CREATE OR REPLACE TABLE T1
(
ID INT,
NAME VARCHAR
);
Snowpipe Creation:
-------------------
CREATE OR REPLACE PIPE PIPE1
AUTO_INGEST = TRUE
AS
COPY INTO T1
FROM @STG
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
FORCE = TRUE
ON_ERROR = CONTINUE;
-----------------------------------
Day 32: 22-08-2023:
--------------------
Step 1:
-------
Upload the file into S3:
-------------------------
S3 -> Create Bucket -> Open the bucket -> Create Folder -> Open Folder -> Upload -> Add Files -> Select your File -> Upload.
Step 2:
-------
Link the Snowflake and AWS accounts using a Storage Integration object.
IAM -> ROLES -> CREATE ROLE -> AWS ACCOUNT -> EXTERNAL ID: 0000 -> AMAZONS3FULLACCESS -> ROLE NAME -> Create Role.
Copy the STORAGE_AWS_IAM_USER_ARN & STORAGE_AWS_EXTERNAL_ID values and paste them into the IAM service.
IAM -> ROLES -> OPEN YOUR ROLE -> TRUST RELATIONSHIPS -> EDIT TRUST POLICY -> Paste the snowflake ids -> Update Policy.
Step 3:
-------
External Stage creation:
--------------------------
CREATE OR REPLACE STAGE STG
STORAGE_INTEGRATION = INTG
URL = ('s3://22aug2023/csv/');
LIST / LS
SELECT $1, $2 FROM @STG/src.csv;
STEP 4:
--------
Load the data from Stage to Table.
COPY INTO T1
FROM @STG/src.csv
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
ON_ERROR = CONTINUE;
TRUNCATE:
---------
It is used to remove all the data from the table.
Once we create the snowpipe object, we should link the snowpipe notification id to the AWS S3 bucket.
S3 -> Open your Bucket -> Properties -> Event notifications -> Create event notification -> Select the All object create events -> Choose SQS queue -> Paste the snowpipe notification id.
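After wiring the notification, the pipe state can be checked (PIPE1 is the pipe created earlier):
SELECT SYSTEM$PIPE_STATUS('PIPE1');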
DATEADD:
--------
It adds an interval to a date or timestamp (a negative value subtracts).
SELECT CURRENT_TIMESTAMP();
SELECT
CURRENT_TIMESTAMP(),
DATEADD(HOUR,-1,CURRENT_TIMESTAMP());
-----------------------------------
Day 33: 23-08-2023:
--------------------
CREATE OR REPLACE TABLE T1
(
ID NUMBER,
NAME VARCHAR
);
2. UPDATE:
----------
1. All Rows Update
2. Specific Row Update
3. Update with Another Table
SELECT * FROM T1
WHERE ID = 1;
UPDATE T1
SET NAME = 'B'
WHERE ID = 2;
SELECT * FROM T1
WHERE ID IN(1,3);
UPDATE T1
SET NAME = 'Z'
WHERE ID IN(1,3);
UPDATE TGT AS T
SET T.NAME = S.NAME
FROM SRC AS S
WHERE T.ID = S.ID;
UPDATE SRC
SET NAME = 'MILKY BAR'
WHERE ID = 1;
UPDATE SRC
SET NAME = '5 STAR'
WHERE ID = 2;
UPDATE TGT
SET TGT.NAME = SRC.NAME
FROM SRC
WHERE TGT.ID = SRC.ID;
3. DELETE:
----------
1. All Rows Delete
2. Single Row Delete
3. Multiple Row Delete
4. Duplicate Row Delete
INSERT INTO T1
VALUES(1, 'RAJINI'),(2, 'RAJINI'),(3, 'KAMAL'),(4,'VIJAY'),(5,'VIJAY');
SELECT NAME
FROM T1
GROUP BY NAME;
-----------------------------------
Day 34: 24-08-2023:
--------------------
4. Duplicate Row Delete:
-------------------------
1. Aggregate Function
2. Analytical function with Qualify -- NO DELETE
3. Common Table Expression
4. Inline View (Subquery)
1. Aggregate Function:
----------------------
SELECT NAME, COUNT(*), MAX(ID)
FROM T1
GROUP BY NAME
HAVING COUNT(*) > 1;
DELETE FROM T1
WHERE ID IN(SELECT MAX(ID)
FROM T1
GROUP BY NAME
HAVING COUNT(*) > 1 );
DELETE FROM T1
WHERE ID IN
(
SELECT ID
FROM T1
QUALIFY ROW_NUMBER() OVER(PARTITION BY NAME ORDER BY NAME) > 1
);
SELECT * FROM T1
WHERE ID IN (2, 5); -- the duplicates: 2 'RAJINI', 5 'VIJAY'
DELETE FROM T1
WHERE ID IN (
SELECT ID FROM (
WITH TMP
AS
(
SELECT ID, NAME, ROW_NUMBER() OVER(PARTITION BY NAME ORDER BY NAME) AS RN
FROM T1
)
SELECT * FROM TMP
WHERE RN > 1
)
);
WITH CTE
AS
(
SELECT ID, NAME, ROW_NUMBER() OVER(PARTITION BY NAME ORDER BY NAME) AS RN
FROM T1
)
SELECT * FROM CTE
WHERE RN > 1;
DELETE FROM T1
WHERE ID IN
(
WITH CTE
AS
(
SELECT ID, NAME, ROW_NUMBER() OVER(PARTITION BY NAME ORDER BY NAME) AS RN
FROM T1
)
SELECT ID FROM CTE
WHERE RN > 1
);
SELECT * FROM
(
SELECT
ID,
NAME,
ROW_NUMBER() OVER(PARTITION BY NAME ORDER BY NAME) AS RN
FROM T1
)
WHERE RN > 1;
DELETE FROM T1
WHERE ID IN
(
SELECT ID FROM
(
SELECT
ID,
NAME,
ROW_NUMBER() OVER(PARTITION BY NAME ORDER BY NAME) AS RN
FROM T1
)
WHERE RN > 1
);
2nd Scenario:
-------------
SELECT * FROM T1;
1
1
3
4
4
SELECT DISTINCT * FROM T1;
1
3
4
CREATE OR REPLACE TABLE T1_BKP
AS
SELECT DISTINCT * FROM T1;
TRUNCATE TABLE T1;
INSERT INTO T1
SELECT * FROM T1_BKP;
1
3
4
-----------------------------------
Day 35: 25-08-2023:
--------------------
4. TRUNCATE:
------------
It is used to remove all the data from a table.
DELETE:
---------
1. With DELETE we can do a partial delete.
DELETE FROM T1
WHERE ID = 1;--> SUCCESSFULLY EXECUTION
2. If we use the DELETE command, we cannot load the same file again and again.
3. If we use the DELETE command, we can roll back the data. --> Snowflake does not support this.
TRUNCATE:
----------
1. With TRUNCATE we cannot do a partial delete.
TRUNCATE TABLE T1
WHERE ID = 1;--> ERROR
2. If we use the TRUNCATE command, we can load the same file again and again.
3. If we use the TRUNCATE command, we cannot roll back the data. --> Snowflake does not support it.
Standard Login:
---------------
snowsql -a <account_name> -u <user_name>
Password:
Parameterized Login:
--------------------
snowsql -c dev
session = worksheet
Types:
-------
1. Permanent Transaction - DDL
2. Temporary Transaction - DML
1. COMMIT
2. ROLLBACK
3. BEGIN TRANSACTION
1. COMMIT:
----------
COMMIT saves the data from the temporary transaction state to the permanent state.
COMMIT;
2. ROLLBACK:
-------------
ROLLBACK restores the data while it is still in the temporary (uncommitted) state.
ROLLBACK;
SHOW PARAMETERS;
ALTER SESSION
SET AUTOCOMMIT = FALSE;
-----------------------------------
Day 36: 26-08-2023:
--------------------
3. BEGIN TRANSACTION:
-----------------------
It keeps the data in a temporary (uncommitted) state even though the worksheet has AUTOCOMMIT enabled.
BEGIN TRANSACTION;
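A minimal flow (assuming table T1 from the earlier examples):
BEGIN TRANSACTION;
INSERT INTO T1 VALUES(1, 'A');
ROLLBACK; -- the insert is undone
BEGIN TRANSACTION;
INSERT INTO T1 VALUES(1, 'A');
COMMIT; -- the insert is saved permanently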
Constraints:
------------
It is a set of rules for table data.
ID:
----
NULL - Not Accepted
DUPLICATE - Not Accepted
NAME:
------
NULL - Not Accepted
DUPLICATE - Accepted
PHONE_NUMBER:
-------------
NULL - Accepted
DUPLICATE - Not Accepted
INSERT INTO T1
VALUES(100, 'ALEX', 9876543210);
INSERT INTO T1
VALUES(100, 'DAVID', 9876543211);
INSERT INTO T1
VALUES(NULL, 'STEVE', 9876543213);
INSERT INTO T1
VALUES(101, NULL, 9876543212);
CONSTRAINTS:
------------
PRIMARY KEY
NOT NULL
UNIQUE
FOREIGN KEY
PRIMARY KEY:
-------------
DUPLICATE - Not Accepted
NULL - Not Accepted
EXAMPLE: ID
NOT NULL:
-----------
DUPLICATE - Accepted
NULL - Not Accepted
UNIQUE:
---------
DUPLICATE - Not Accepted
NULL - Accepted
INSERT INTO T2
VALUES(100, 'ALEX', 9876543210);--LOADED
INSERT INTO T2
VALUES(100, 'DAVID', 9876543211);--NOT LOADED(ID-DUPLICATE)
INSERT INTO T2
VALUES(NULL, 'STEVE', 9876543213);--NOT LOADED(ID-NULL)
INSERT INTO T2
VALUES(101, NULL, 9876543212);--NOT LOADED(NAME-NULL)
INSERT INTO T2
VALUES(102, 'ALEX', 9876543214);--LOADED
INSERT INTO T2
VALUES(103, 'LEX', 9876543210);--NOT LOADED(PHONE_NUMBER-DUPLICATE)
INSERT INTO T2
VALUES(104, 'DAVID' ,NULL);--LOADED
Snowflake:
----------
PRIMARY KEY:
-------------
DUPLICATE - Accepted
NULL - Not Accepted
EXAMPLE: ID
NOT NULL:
-----------
DUPLICATE - Accepted
NULL - Not Accepted
UNIQUE:
---------
DUPLICATE - Accepted
NULL - Accepted
FOREIGN KEY:
------------
It makes a relationship between two tables, as shown below.
CREATE TABLE LOC
(
ZIP INT PRIMARY KEY,
LOC VARCHAR(10) NOT NULL
);
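A child table referencing LOC (CUST is a hypothetical name; note that Snowflake does not enforce foreign keys):
CREATE TABLE CUST
(
ID INT PRIMARY KEY,
NAME VARCHAR(10),
ZIP INT REFERENCES LOC(ZIP)
);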
-----------------------------------
Day 37: 28-08-2023:
--------------------
Snowflake Architecture:
-----------------------
Warehouse:
-----------
1. Virtual Warehouse (C Drive)
2. Data Warehouse (D Drive)
-----------------------------------
Day 38: 29-08-2023:
--------------------
Snowflake Architecture Cont.:
-----------------------------
-----------------------------------
Day 39: 30-08-2023:
--------------------
Cache:
--------
Micro-Partitions:
-----------------
CREATE OR REPLACE TABLE LOG(
TYPE VARCHAR,
NAME VARCHAR,
COUNTRY VARCHAR,
DATE DATE
) cluster by(TYPE, NAME, COUNTRY, DATE);
-----------------------------------
Day 40: 31-08-2023:
--------------------
Time Travel:
-------------
Snowflake Time Travel enables access to historical data, which means deleted data or changed data.
It can restore data and database objects.
We can back up deleted data.
Current Storage
Time Travel
Fail Safe
INSERT:
---------
INSERT INTO T1 VALUES(1,'A'),(2,'B'),(3,'C');
UPDATE:
--------
UPDATE T1
SET ID = 300
WHERE ID = 3;
DELETE:
-------
DELETE FROM T1
WHERE ID = 300;
SELECT CURRENT_TIMESTAMP();
TIME TRAVEL:
-------------
CLAUSES:
---------
1. TIMESTAMP
2. OFFSET
3. STATEMENT
AT
BEFORE
1. TIMESTAMP:
--------------
SELECT * FROM T1
AT(TIMESTAMP => CAST('2023-08-30 21:12:13.247 -0700' AS TIMESTAMP_TZ));
SELECT * FROM T1
AT(TIMESTAMP => '2023-08-30 21:12:38.709 -0700'::TIMESTAMP_TZ);
SELECT * FROM T1
AT(TIMESTAMP => CAST('2023-08-30 21:13:27.374 -0700' AS TIMESTAMP_TZ));
SELECT * FROM T1
AT(TIMESTAMP => CAST('2023-08-30 21:14:20.124 -0700' AS TIMESTAMP_TZ));
SELECT * FROM T1
BEFORE(TIMESTAMP => CAST('2023-08-30 21:14:20.124 -0700' AS TIMESTAMP_TZ));
2. OFFSET:
----------
SELECT * FROM T1
AT(OFFSET => -60*17);--3
SELECT * FROM T1
AT(OFFSET => -60*19);--0
3. STATEMENT:
-------------
SELECT * FROM T1
AT(STATEMENT => '01aeaafd-3200-dbe8-0005-e94600019136');
SELECT * FROM T1
BEFORE(STATEMENT => '01aeaafe-3200-dbe8-0005-e94600019152');
Data Restore:
--------------
CREATE OR REPLACE TABLE T1_BKP
AS
(
SELECT * FROM T1
AT(TIMESTAMP => CAST('2023-08-30 21:13:27.374 -0700' AS TIMESTAMP_TZ))
);
TRUNCATE T1;
INSERT INTO T1
SELECT * FROM T1_BKP;
Object Restore:
----------------
UNDROP TABLE T1_BKP;
How to undrop the recreated same table:
----------------------------------------
CREATE OR REPLACE TABLE T1
(
ID INT
);
ALTER TABLE T1
RENAME TO T100;
UNDROP TABLE T1;
Fail Safe:
----------
Fail-safe provides a further 7-day period after Time Travel ends.
We do not have access to it.
It is handled directly by the Snowflake team.
UPDATE T1
SET NAME = 'X';
-----------------------------------
Day 41: 01-09-2023:
--------------------
Types Tables:
-------------
1. Permanent Table
2. Transient Table
3. Temporary Table
4. External Table
5. Dynamic Table
1. Permanent Table:
-------------------
CREATE OR REPLACE TABLE T1(A INT);
select * from t1
at(timestamp => '2023-08-31 21:02:30.176 -0700'::timestamp_tz);
2. Transient Table:
-------------------
CREATE OR REPLACE TRANSIENT TABLE T2(A INT);
select * from t2
at(timestamp => '2023-08-31 21:02:30.176 -0700'::timestamp_tz);
3. Temporary Table:
-------------------
CREATE OR REPLACE TEMPORARY TABLE T3(A INT);
select * from t3
at(timestamp => '2023-08-31 21:02:30.176 -0700'::timestamp_tz);
4. External Table:
--------------------
External tables are used to view data without loading it into a table.
STEP 1:
-------
Uploading the file into S3
Step 2:
-------
CREATE OR REPLACE STORAGE INTEGRATION INTG
TYPE = EXTERNAL_STAGE
ENABLED = TRUE
STORAGE_PROVIDER = S3
STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::359767273164:role/R5'
STORAGE_ALLOWED_LOCATIONS = ('s3://onenine23/abc/');
Step 3:
-------
CREATE OR REPLACE STAGE STG
URL = ('s3://onenine23/abc/')
STORAGE_INTEGRATION = INTG;
ls @stg;
Step 4:
-------
CREATE OR REPLACE EXTERNAL TABLE T4
WITH LOCATION = @STG
FILE_FORMAT = (type = csv Skip_header = 1);
SELECT
VALUE:c1::int as ID,
VALUE:c2::varchar as Name
FROM T4;
--------
CREATE OR REPLACE TABLE T4_BKP
AS
SELECT
VALUE:c1::int as ID,
VALUE:c2::varchar as Name
FROM T4;
STEP 4:
--------
CREATE OR REPLACE TABLE T5
(
JD VARIANT
);
COPY INTO T5
FROM @STG/Employees_Json.json
FILE_FORMAT = (TYPE = JSON);
1. Incremental Load:
-----------------------
1. Streams
2. Subquery
3. Merge
2. Full Load:
--------------
1. Insert
2. Dynamic
1. Streams:
------------
A stream is an object: if any DML action happens on the base table, the stream captures all the DML actions like INSERT, UPDATE, and DELETE.
This process is referred to as Change Data Capture (CDC).
Table Creation:
----------------
CREATE OR REPLACE TABLE SRC(ID INT, NAME STRING);
Streams Creation:
------------------
CREATE OR REPLACE STREAM STR ON TABLE SRC;
METADATA Columns:
-----------------
METADATA$ACTION
METADATA$ISUPDATE
METADATA$ROW_ID
METADATA$ACTION:
----------------
Capturing the DML action(INSERT or DELETE)
METADATA$ISUPDATE:
-------------------
Capturing the UPDATE action
METADATA$ROW_ID:
----------------
Generating the unique ROW ID
INSERT CDC:
-----------
insert into src values(1, 'Alex');
Table Consume:
--------------
INSERT INTO TGT
SELECT ID, NAME FROM SRC;
Stream Consume:
---------------
INSERT INTO TGT
SELECT ID, NAME FROM STR;
DELETE CDC:
-----------
SELECT * FROM SRC;
Once the stream has been consumed, the stream is truncated (emptied).
Update CDC:
-----------
SELECT * FROM SRC;
UPDATE SRC
SET NAME = 'Bruce'
WHERE ID = 2;
--CONSUME
UPDATE SRC
SET NAME = 'BRUCE'
WHERE ID = 1;
--CONSUME
-----------------------------------
Day 43: 05-09-2023:
--------------------
5. Dynamic Table:
-----------------
We can use this table type for an automatic full refresh of query results. (A sketch follows.)
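A minimal sketch (the warehouse name is an assumption; TARGET_LAG controls how fresh the result is kept):
CREATE OR REPLACE DYNAMIC TABLE DT
TARGET_LAG = '1 minute'
WAREHOUSE = COMPUTE_WH
AS
SELECT ID, NAME, AMOUNT FROM SRC;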
3. Merge:
---------
CREATE OR REPLACE TABLE SRC
(
ID INT,
NAME STRING,
AMOUNT INT
);
INSERT INTO SRC
VALUES
(1, 'MILK',20),
(2, 'MILKY BAR',30),
(3, 'DAIRY MILK',50);
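A sketch of the MERGE statement itself, assuming a TGT table with the same columns:
MERGE INTO TGT
USING SRC
ON TGT.ID = SRC.ID
WHEN MATCHED THEN
UPDATE SET TGT.NAME = SRC.NAME, TGT.AMOUNT = SRC.AMOUNT
WHEN NOT MATCHED THEN
INSERT (ID, NAME, AMOUNT) VALUES (SRC.ID, SRC.NAME, SRC.AMOUNT);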
Types:
------
1. Standalone Task
2. Tree Task
1. Standalone Task:
--------------------
CREATE OR REPLACE TABLE T1(A INT);
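A sketch of a standalone task that loads T1 every minute (the warehouse name is an assumption):
CREATE OR REPLACE TASK TSK1
WAREHOUSE = COMPUTE_WH
SCHEDULE = '1 MINUTE'
AS
INSERT INTO T1 VALUES(1);
ALTER TASK TSK1 RESUME; -- tasks are created in a suspended state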
2. Tree Task:
-------------
# __________ minute (0-59)
# | ________ hour (0-23)
# | | ______ day of month (1-31, or L)
# | | | ____ month (1-12, JAN-DEC)
# | | | | __ day of week (0-6, SUN-SAT, or L)
# | | | | | _ time zone
# | | | | | |
# * * * * * <tz>
0 - SUNDAY
1 - MONDAY
2 - TUESDAY
3 - WEDNESDAY
4 - THURSDAY
5 - FRIDAY
6 - SATURDAY
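A sketch of a child task chained with AFTER (names are assumptions; only the root task carries a schedule):
CREATE OR REPLACE TASK TSK2
WAREHOUSE = COMPUTE_WH
AFTER TSK1
AS
INSERT INTO T1 VALUES(2);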
SHOW PARAMETERS;
ALTER SESSION
SET TIMEZONE = 'Asia/Kolkata';
Start Task:
-----------
ALTER TASK TSK3 RESUME;
ALTER TASK TSK2 RESUME;
ALTER TASK TSK1 RESUME;
Stop Task:
-----------
ALTER TASK TSK1 SUSPEND;
ALTER TASK TSK2 SUSPEND;
ALTER TASK TSK3 SUSPEND;
ALTER TASK LOAD SUSPEND;
ALTER TASK LOAD2 SUSPEND;
RULES:
------
1. If we try to keep more than one SQL statement in a task, the task throws an error.
2. The SCHEDULE parameter only takes minutes or a CRON expression; it does not support seconds or hours.
3. The maximum value is 11520 minutes; we cannot go above this number.
------------------------------
ZERO COPY CLONING:
------------------
Cloning copies the data from a source table to a target table, similar to the backup concept.
Cloning does not store physical data; it stores only metadata. That is the reason it is called zero-copy cloning.
Cloning is much faster compared to a physical backup.
CLONING CREATION:
--------------------
CREATE OR REPLACE TABLE SRC_CL CLONE SRC;
-----------------------------------
Day 45: 07-09-2023:
--------------------
DCL -> Data Control Language:
------------------------------
1. GRANT
2. REVOKE
Access Control:
----------------
Users
Roles
Users Creation:
---------------
CREATE OR REPLACE USER U1
PASSWORD = '123';
Roles:
------
1. ACCOUNTADMIN
2. SECURITYADMIN
3. SYSADMIN
4. USERADMIN
5. ORGADMIN
6. PUBLIC
1. ACCOUNTADMIN:
----------------
It is the superior (top-level) role.
It provides access to other roles.
It works with objects directly (e.g., table creation).
It is used to create new users in the Snowflake system.
It monitors the billing activity.
2. SECURITYADMIN:
-----------------
It provides access to other roles.
3. SYSADMIN:
-------------
It works with objects directly (database/table creation).
4. USERADMIN:
-------------
It is used to create new users in the Snowflake system.
5. ORGADMIN:
-------------
Check the billing.
PERMISSIONS / PRIVILEGES:
--------------------------
USAGE
SELECT
MODIFY
2. REVOKE:
----------
It takes back access from a role/user, as shown below.
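Minimal examples (object and role names are assumptions):
GRANT USAGE ON DATABASE DB TO ROLE R1;
GRANT SELECT ON TABLE DB.SC.T1 TO ROLE R1;
REVOKE SELECT ON TABLE DB.SC.T1 FROM ROLE R1;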
-----------
Views:
------
In a view we can see all the data that is present in the base table.
If we provide the table to report users, they might change the data.
If we provide a view to report users, they cannot change the data, because a view does not have write access.
UPDATE STG
SET NAME = 'ABC'
WHERE ID = 2;
UPDATE STG_V
SET NAME = 'ABC'
WHERE ID = 2;
Types:
------
1. Normal View
2. Secure View
3. Materialized View
1. Normal View:
---------------
CREATE OR REPLACE VIEW V1
AS
SELECT * FROM STG
UNION ALL
SELECT * FROM STG;
2. Secure View:
---------------
Other roles cannot see the base query; only whoever created the secure view can see its definition.
-----------------
Data Governance:
-----------------
Dynamic Data Masking - Column Level Security:
---------------------------------------------
CREATE OR REPLACE MASKING POLICY MSK
AS
(PWD VARCHAR)
RETURNS VARCHAR
->
CASE
WHEN CURRENT_ROLE() = 'SYSADMIN' THEN PWD
ELSE
'*****'
END;
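Applying the policy to a column (table and column names are assumptions):
ALTER TABLE USERS
MODIFY COLUMN PASSWORD
SET MASKING POLICY MSK;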
Rules / Limitation:
-------------------
1. Prior to dropping a policy, we should unset it from the columns where it is applied.
2. Only one column can be set per ALTER TABLE statement.
3. The masking policy definition must have the same data type for the input and the output.
-----------------
Sequence:
----------
It is used to generate new unique numbers every time.
Instead of a primary key, we can use a sequence for unique number generation.
Sequence Creation:
------------------
CREATE OR REPLACE SEQUENCE SEQ;
INSERT INTO T1
VALUES(SEQ.NEXTVAL, 'A');
-----------------------------------
Day 47: 11-09-2023:
--------------------
Data Sharing:
-------------
It shares data with multiple Snowflake accounts.
Types:
------
1. Consumer Account (Inbound Shares - Receiver)
2. Provider Account (Outbound Shares - Sender)
Shareable Objects:
------------------
Database
Schema
Table
External Table
Secure View
Secure MView
DATABASE - TT
SCHEMA - PUBLIC
TABLE - SRC
ALTER SHARE SH
ADD ACCOUNTS = VPVLDEB.WT42058;
Rules:
-------
Shared data is read-only.
We cannot see the base query.
We cannot reshare it to another account.
We cannot do time travel on it.
We cannot clone the shared object.
---------------------------------------
User Defined Function (UDF):
----------------------------
CREATE OR REPLACE FUNCTION F1()
RETURNS NUMBER
AS
$$
100
$$;
SELECT F1();
SELECT UPPER(FIRST_NAME) FROM EMPLOYEES;
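The query below calls a BONUS UDF whose definition is not in these notes; a minimal sketch of such a function (an assumption):
CREATE OR REPLACE FUNCTION BONUS(SAL NUMBER, AMT NUMBER)
RETURNS NUMBER
AS
$$
SAL + AMT
$$;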
SELECT
FIRST_NAME,
UPPER(FIRST_NAME),
SALARY,
BONUS(SALARY, 1000)
FROM EMPLOYEES;
-----------------------------------
Day 48: 12-09-2023:
--------------------
Stored Procedure:
------------------
Stored procedures are used to store code in the database.
We can write a stored procedure in two languages.
A SQL procedure is not case sensitive, but JavaScript is case sensitive.
A procedure cannot use the NUMBER data type in RETURNS if it is written in JavaScript.
The LANGUAGE keyword is optional for SQL procedures, but for JavaScript procedures it is mandatory.
Language:
---------
1. JavaScript
2. SQL
CALL P1();
SELECT F1();
Procedure:
-----------
1. We use the CALL keyword to execute a procedure.
2. In a procedure we can write multiple statements.
3. In a procedure we should end each statement with a semicolon.
CALL P5(); --> 'Inserted'
CALL P6(5, 2, 'sub'); --> 3
(The procedure bodies were lost in these notes; a reconstructed sketch follows.)
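A minimal Snowflake Scripting sketch reconstructing the lost P5 body (an assumption based on the call result above; P6 would branch on its 3rd argument the same way):
CREATE OR REPLACE PROCEDURE P5()
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
BEGIN
INSERT INTO T1 VALUES(1, 'A');
RETURN 'Inserted';
END;
$$;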
Privileges:
--------------
1. Caller
2. Owner
CALLER:
--------
Anyone or any role can see the procedure definition.
OWNER:
------
Whoever creates the procedure is the owner, so only the owner can see the procedure's definition. (A sketch follows.)
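The rights model is chosen at creation time with EXECUTE AS (P7 is a hypothetical name):
CREATE OR REPLACE PROCEDURE P7()
RETURNS VARCHAR
LANGUAGE SQL
EXECUTE AS CALLER
AS
$$
BEGIN
RETURN CURRENT_ROLE();
END;
$$;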
-----------------------------------
Day 49: 13-09-2023:
--------------------