Unit 22: Add Node Lab
Agenda
• Configure the hardware
• Set up the operating system environment for the new node
• Extend the Oracle Grid Infrastructure home to the new node
• Extend the Oracle RAC home directory
• Add database instances using DBCA
Reconfigure hardware
• Public and private network connections should already be in place
• Provide access from the new node to the existing cluster's shared storage
Setup activities: Day 1 lab
• Conventions
• Set up tools on student laptop
• Synchronize time between nodes
• Verify and install required AIX filesets and fixes (see the sketch after this list)
• Configure tuning parameters
• Configure networks
• Create operating system users, groups, and shell environment
• Set up user equivalence (ssh)
• Configure shared storage
• Set up directories for Oracle binaries
• Run Oracle RDA / HCVE script
• Set up Oracle staging directories and final checks
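The exact fileset and fix list depends on the AIX level and the Oracle release, so treat the names below as examples rather than the required list; a minimal sketch of the check, run as root, might look like:
# oslevel -s
# lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat
# lslpp -l | grep -i xlC
Missing filesets are then installed from the AIX installation media with installp or smitty install.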
Setup notes
• Configure network
– Public and private networks on the new node
– Add entries in /etc/hosts on every node (see the example after this list)
• Entries in the existing nodes' /etc/hosts for the new node
• Entries in the new node's /etc/hosts for the existing nodes
• Don't forget the VIP for the new node
• Users
– Make sure the user IDs (grid, oracle) and group IDs (dba) are the same on all nodes
– Check and grant privileges
– Modify the users' shell limits
• ssh equivalence
– Merge /home/oracle/.ssh/authorized_keys
– Populate the known_hosts file by logging in (using ssh) between the existing nodes and the new node
• Short and fully qualified hostnames
• Include the private network
– If using GPFS, also set up host equivalence for root
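A minimal example of the /etc/hosts entries for the new node, using placeholder addresses and domain and the node names from this lab (rac32, rac32-vip, rac32-priv); the same three lines go into /etc/hosts on every node:
10.1.1.32     rac32       rac32.mydomain.com
10.1.1.42     rac32-vip   rac32-vip.mydomain.com
192.168.1.32  rac32-priv
After merging the authorized_keys files, confirm the equivalence by running a remote command in both directions, for example:
$ ssh rac32 date
$ ssh rac32-priv date
No password or host-key prompt should appear.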
Setup notes: Shared storage (1 of 4)
• To determine the size of the shared LUNs, use the bootinfo command (the size is reported in MB):
For example:
# bootinfo -s hdisk0
70006
A value of 70006 MB corresponds to a nominal 70 GB disk.
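To check every disk at once, a small loop over the hdisks can be run as the root user; this is just a convenience sketch:
# for d in $(lsdev -Cc disk -F name); do
>   echo "$d: $(bootinfo -s $d) MB"
> done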
Setup notes: Shared storage (2 of 4)
• CAUTION: Do not set or clear PVIDs on hdisks that are in use
• To check disk mapping:
– On LPARs physically attached to the SAN, run "pcmpath query device" (as the root user) and match up the serial IDs
– Otherwise, use lquerypv to dump and match up the hdisk header
Setup notes: Shared storage (3 of 4)
For ASM disks:
# lquerypv -h /dev/rhdisk2
00000000 00820101 00000000 80000000 D2193DF6 |..............=.|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 4F52434C 4449534B 00000000 00000000 |ORCLDISK........|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 0A100000 00000103 4447315F 30303030 |........DG1_0000|
00000050 00000000 00000000 00000000 00000000 |................|
00000060 00000000 00000000 44473100 00000000 |........DG1.....|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00000000 00000000 4447315F 30303030 |........DG1_0000|
00000090 00000000 00000000 00000000 00000000 |................|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 01F5D870 1DA82400 |...........p..$.|
000000D0 01F5D870 1E2DA400 02001000 00100000 |...p.-..........|
000000E0 0001BC80 00001400 00000002 00000001 |................|
000000F0 00000002 00000002 00000000 00000000 |................|
The ORCLDISK tag and the disk group name (DG1) in the header identify a disk that ASM has already stamped; matching headers across nodes confirms they see the same LUN.
Setup notes: Shared storage (4 of 4)
• On the new node
– Make grid the owner of the ASM disks and set permissions.
For example, if your shared LUNs are hdisk1 – hdisk6:
# chown grid:dba /dev/*hdisk[123456]
# chmod 660 /dev/*hdisk[123456]
– DO NOT use "dd" to write to any of the disks!!
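To confirm that the ownership and permissions took effect on the new node (hdisk1 – hdisk6 as in the example above):
# ls -l /dev/rhdisk[1-6] /dev/hdisk[1-6]
For a disk that ASM has already stamped, lquerypv -h should still show the ORCLDISK tag in the header, as on the previous page.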
GPFS add node (1 of 2)
• Check GPFS filesets on target node:
For example:
# lslpp -L |grep gpfs
[Link] [Link] C F GPFS File Manager
[Link] [Link] C F GPFS Server Manpages and
[Link] [Link] C F GPFS Native RAID
• Verify remote command execution (see the note after this list):
For example (with rsh):
# rsh lpnx date
Wed Jun 6 [Link] PDT 2007
• Check cluster membership on existing node:
For example:
# /usr/lpp/mmfs/bin/mmlsnode -a .
GPFS nodeset Node list
------------- ----------------------------------
rac30-priv rac30-priv rac31-priv
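Whether rsh or ssh has to work for the check above depends on how the GPFS cluster was defined; mmlscluster on an existing node shows the configured remote shell and remote file copy commands:
# /usr/lpp/mmfs/bin/mmlscluster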
GPFS add node (2 of 2)
• Accept license on the new node:
# /usr/lpp/mmfs/bin/mmchlicense server --accept -N rac32-priv
• Add the node:
For example:
# /usr/lpp/mmfs/bin/mmaddnode rac32-priv
Wed Jun 6 [Link] PDT 2007: 6027-1664 mmaddnode: Processing node
[Link]
mmaddnode: Command successfully completed
mmaddnode: 6027-1371 Propagating the changes to all affected nodes.
This is an asynchronous process.
# /usr/lpp/mmfs/bin/mmlsnode -C .
GPFS nodeset Node list
------------- -------------------------------------------------------
rac30-priv rac30-priv rac31-priv rac32-priv
• Check to make sure all nodes are active:
# mmgetstate -a
• If any nodes are down, start them:
# mmstartup -N <node name>
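Once the new node is active, the GPFS file system still has to be mounted there. A sketch, assuming the file system device is named gpfs1 (the name is an assumption, not taken from this lab):
# /usr/lpp/mmfs/bin/mmmount gpfs1 -N rac32-priv
# /usr/lpp/mmfs/bin/mmlsmount all -L
mmlsmount all -L lists, for each file system, the nodes that currently have it mounted.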
Extending the Oracle Grid Infrastructure Home
to the New Node
• At any existing node, as the grid user, run addNode.sh from
$GRID_HOME/oui/bin:
[grid]$ cd $GRID_HOME/oui/bin
[grid]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac32}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac32-vip}"
• When the script finishes, run the orainstRoot.sh and root.sh scripts as the root user on the new node
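After the root scripts complete, the node addition can be checked from an existing node with the cluster verification utility; a sketch, assuming the new node name rac32 used above:
[grid]$ cluvfy stage -post nodeadd -n rac32 -verbose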
Extending the Oracle RAC Home Directory
• At any existing node, as the oracle user, run addNode.sh from
$ORACLE_HOME/oui/bin:
[oracle]$ cd $ORACLE_HOME/oui/bin
[oracle]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac32}"
• When the script finishes, run the root.sh script as the root user on the new node
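A quick sanity check is to confirm that the oracle binary now exists on the new node; this assumes ORACLE_HOME points at the same path on all nodes, as is usual for a RAC installation:
[oracle]$ ssh rac32 ls -l $ORACLE_HOME/bin/oracle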
Check all nodes' resources
$ ./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....[Link] ora....[Link] ONLINE ONLINE rac30
ora....[Link] ora....[Link] ONLINE ONLINE rac30
ora....[Link] ora....[Link] ONLINE ONLINE rac30
ora....[Link] ora....[Link] ONLINE ONLINE rac30
ora....[Link] ora....[Link] ONLINE ONLINE rac30
[Link] [Link] ONLINE ONLINE rac30
[Link] ora....[Link] ONLINE OFFLINE
[Link] ora....[Link] ONLINE ONLINE rac31
[Link] [Link] ONLINE ONLINE rac30
[Link] ora....[Link] OFFLINE OFFLINE
[Link] [Link] OFFLINE OFFLINE
ora....network ora....[Link] ONLINE ONLINE rac30
ora.oc4j [Link] OFFLINE OFFLINE
[Link] [Link] ONLINE ONLINE rac30
ora....[Link] application ONLINE ONLINE rac30
ora....[Link] application ONLINE ONLINE rac30
[Link] application OFFLINE OFFLINE
[Link] application ONLINE ONLINE rac30
[Link] ora....[Link] ONLINE ONLINE rac30
ora....[Link] application ONLINE ONLINE rac31
ora....[Link] application ONLINE ONLINE rac31
[Link] application OFFLINE OFFLINE
[Link] application ONLINE ONLINE rac31
[Link] ora....[Link] ONLINE ONLINE rac31
ora....[Link] application ONLINE ONLINE rac32
ora....[Link] application ONLINE ONLINE rac32
[Link] application OFFLINE OFFLINE
[Link] application ONLINE ONLINE rac32
[Link] ora....[Link] ONLINE ONLINE rac32
[Link] ora....[Link] ONLINE ONLINE rac30
Add new instance for RAC database
[oracle]$ ./dbca
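DBCA can also add the instance without the graphical screens. A silent-mode sketch, assuming the global database name gpfsdb and the new instance name gpfsdb3 that appear later in this unit (the SYSDBA password argument is a placeholder):
[oracle]$ dbca -silent -addInstance -nodeList rac32 \
    -gdbName gpfsdb -instanceName gpfsdb3 \
    -sysDBAUserName sys -sysDBAPassword <password>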
Check the new instance
• On any node:
$ sqlplus / as sysdba
SQL> select INSTANCE_NAME, HOST_NAME, STATUS from gv$instance;
INSTANCE_NAME    HOST_NAME    STATUS
---------------- ------------ ------------
gpfsdb3          rac32        OPEN
gpfsdb1          rac31        OPEN
gpfsdb2          rac30        OPEN
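The same check can be made from the clusterware side with srvctl, assuming the database name is gpfsdb (matching the instance names above); it reports each instance and the node it is running on:
$ srvctl status database -d gpfsdb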
© Copyright IBM Corporation 2013