Donald K. Burleson

Oracle RAC Tips

Shared Storage Volumes

Shared disks are visible to all nodes in the cluster. On Linux, Real Application Clusters requires the database files to reside either on raw devices or on a cluster file system. When using raw devices, partition the disks carefully to ensure that each partition is adequately sized. The Logical Volume Manager (LVM) is very useful here and makes the management of raw devices more flexible.

Use the disk utility from the Red Hat system utilities menu to format the disk (the disk can be divided into multiple partitions or left as one large partition).
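
The same partitioning can also be done from the command line with fdisk; the device name below is illustrative and should be replaced with the actual shared disk:

# fdisk /dev/sda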

Use pvcreate to create a physical volume for use by the logical volume manager.

$ pvcreate /dev/sda

For a single partition on a multi-partition drive, use the partition designator such as /dev/sda1.
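
For example, to initialize just the first partition (the device name is illustrative):

$ pvcreate /dev/sda1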

Use vgcreate from a root session to create a volume group for the drive, or for the partition that will be used for the raw devices.

$ vgcreate -l 256 -p 256 -s 128k /dev/pv1 /dev/sda

The above command creates a volume group named pv1 that allows up to 256 logical volumes and 256 physical volumes, with a 128K extent size.
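
The new volume group can be verified with vgdisplay, which reports the extent size and the maximum number of logical and physical volumes (depending on the LVM version, the group may need to be referenced as pv1 rather than /dev/pv1):

$ vgdisplay /dev/pv1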

Use lvcreate to create the logical volumes inside the volume group. An example script is shown below.

pvcreate /dev/sda

vgcreate -l 256 -p 256 -s 128k /dev/pv1 /dev/sda

lvcreate -L 500m /dev/pv1

lvcreate -L 500m /dev/pv1

lvcreate -L 300m /dev/pv1

lvcreate -L 100m /dev/pv1

The above commands create logical volumes named /dev/pv1/lvol1 through /dev/pv1/lvoln, one for each lvcreate call.
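
The logical volumes that were created can be confirmed with lvscan, which lists every logical volume known to the system:

$ lvscan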

Next, bind the volumes to the raw devices. This is accomplished through the /usr/bin/raw command.

vgchange -a y /dev/pv1

/usr/bin/raw /dev/raw/raw1 /dev/pv1/lvol1

/usr/bin/raw /dev/raw/raw2 /dev/pv1/lvol2

/usr/bin/raw /dev/raw/raw3 /dev/pv1/lvol3

/usr/bin/raw /dev/raw/raw4 /dev/pv1/lvol4

/usr/bin/raw /dev/raw/raw5 /dev/pv1/lvol5

/usr/bin/raw /dev/raw/raw6 /dev/pv1/lvol6

/usr/bin/raw /dev/raw/raw7 /dev/pv1/lvol7

/usr/bin/raw /dev/raw/raw8 /dev/pv1/lvol8

/usr/bin/raw /dev/raw/raw9 /dev/pv1/lvol9
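
Note that bindings made with /usr/bin/raw do not survive a reboot. On Red Hat systems they are commonly listed in /etc/sysconfig/rawdevices so that the rawdevices service re-creates them at boot; the first two bindings above, for example, would appear as:

/dev/raw/raw1 /dev/pv1/lvol1

/dev/raw/raw2 /dev/pv1/lvol2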

Soft links can be created to the raw devices to give them easily recognized names.
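
For example, links with descriptive names can point to the bound raw devices (the directory and file names below are purely illustrative):

$ ln -s /dev/raw/raw1 /u01/oradata/rac/system_01.dbf

$ ln -s /dev/raw/raw2 /u01/oradata/rac/undotbs_01.dbf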

Using Network Attached Storage (Filers)

Besides the usual SCSI and Fibre Channel direct-attached and SAN-based shared storage, network attached storage (NAS) can be used to hold the RAC database files in an Oracle RAC on Linux cluster. However, support is very limited at this time: only the Network Appliance filers (8xx and 9xx series), the EMC Celerra, and the Fujitsu filers (NR1000 series) are currently supported as network-attached storage.

We will briefly examine the storage part of the implementation using a NetApp 880 filer.

Sample storage configuration steps include:

  1. Install the filer and connect the disk shelves to it.

  2. Install Data ONTAP 6.1.3R2 and configure the filer by giving it a host name such as Data1.

  3. Install the NFS and SnapRestore license keys.

  4. Ensure that the gigabit NIC is installed in the filer and configure the gigabit interfaces.

  5. Create and export volumes for storing Oracle database files on the filers:

  6. Create a volume on the Data1 filer for storing Oracle database files. This volume will be used to store all the data files, control files, and log files for the Oracle database. Specify the default RAID group size as 14.

  7. Use the following command at the filer console:

Data1> vol create ordata -r 14 16

  8. Edit the /etc/exports file on Data1 and add the following entry to that file:

/vol/ordata -anon=0

  9. Execute the following command at the filer console:

Data1> exportfs -a

  10. Create mount points on each of the nodes in the cluster. Update the /etc/fstab file on all the nodes and add the following entry:

data1:/vol/ordata  /oradb  nfs  rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,vers=3  0 0

Here, data1 is the name of the filer and /oradb is the mount point on the cluster nodes.

The mount options that are required for Oracle9i RAC are:

  • noac: This mount option disables client-side attribute caching, so all nodes see a consistent view of the files.

  • tcp: This option mounts the file system over TCP rather than UDP.

  11. Create the mount point:

#mkdir /oradb

  12. Mount the exported volume on the mount point created above:

#mount /oradb

  13. Create the following two shared files on /oradb. The "oracle" user on both cluster nodes should have read/write permission on these files.

#su - oracle

$touch /oradb/SharedConfigFile

$touch /oradb/CmDiskFile

Here, /oradb/SharedConfigFile will be used by cluster management utilities such as srvctl and Oracle Enterprise Manager (OEM) to store the cluster configuration, and /oradb/CmDiskFile will be used by the Oracle Cluster Manager to store the quorum disk information.
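
In a typical Oracle9i RAC installation these paths are then referenced in the Oracle configuration files; exact file locations vary by installation, so treat the lines below as an illustration only. The shared configuration file is usually named in srvConfig.loc, and the quorum file in the Cluster Manager configuration file cmcfg.ora:

srvconfig_loc=/oradb/SharedConfigFile

CmDiskFile=/oradb/CmDiskFile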

For more details, see the white paper "Oracle9i RAC Installation with a NetApp Filer in Red Hat Advanced Server Linux Environment" by Sunil Mahale, Vasu Subbiah, and Petros Xides (September 2002).

