
Step-By-Step Installation of RAC on Compaq Tru64 Unix Cluster


Note: This note was created for 9i RAC.  The 10g Oracle documentation provides installation instructions for 10g RAC.  These instructions can be found on OTN:

Oracle® Real Application Clusters Installation and Configuration Guide
10g Release 1 (10.1) for AIX-Based Systems, hp HP-UX PA-RISC (64-bit), hp Tru64 UNIX, Linux, Solaris Operating System (SPARC 64-bit)


Purpose 

This document will provide the reader with step-by-step instructions on how to install a cluster, install Oracle Real Application Clusters (RAC) and start a cluster database on Compaq Tru64 Cluster 5.1. For additional explanation or information on any of these steps, please see the references listed at the end of this document.

Disclaimer: If there are any errors or issues prior to step 3.5, please contact Compaq Support.
The information contained here is as accurate as possible at the time of writing.


1. Configuring the Cluster Hardware


1.1 Minimal Hardware list / System Requirements

First see the RAC/Tru64 Certification Matrix for supported configurations. For a two-node cluster, the following is a minimum recommended hardware list:

  • For requirements on Tru64 servers, third-party storage products, cluster interconnects, public networks, switch options, memory, swap
    and CPU, consult the operating system or hardware vendor and see the RAC/Tru64 Certification Matrix
  • Two Alpha servers capable of running OS 5.1A. To check the installed version, issue:
        /usr/sbin/sizer -v
  • At least 512 Meg of memory in each server
  • One disk to install the original OS on. This disk will be used as an emergency boot disk after the cluster is formed. This disk can be an internal disk on the server.
  • Four disks on a shared storage array available to each node in the cluster. This is not NFS but physically connected to each node and shared. These disks will be used to create the cluster.
  • Additional disks as needed on the shared storage array to store your Oracle data files and Oracle software.
  • Standard network cards for user access. This is separate from the Interconnect.
  • Memory Channel cards and cables, or another approved interconnect method such as Gigabit or Fast Ethernet. (Proprietary Compaq hardware parts: cards – CCMAB, cable – BN39B; see the RAC/Tru64 Certification Matrix for updated hardware components.) Note: Prior to OS 5.1A, Compaq Memory Channel was required for the interconnect. With OS 5.1A, other options are available; please contact your hardware vendor for approved interconnect configurations other than the standard Memory Channel.
  • Tru64 UNIX Operating System Minimum Patches Required
        For Tru64 UNIX version 5.0A:
            5.0A patchkit 4 or higher. 
        For Tru64 UNIX version 5.1:
            5.1 patchkit 4 or higher.
        For Tru64 UNIX version 5.1A:
            5.1A patchkit 1 or higher.
    To determine which patches have been installed, enter the following command (a combined pre-installation check sketch is shown after this list): 
        $ /usr/sbin/dupatch -track -type patch
    Further information regarding RAC certification can be found in the RAC/Tru64 Certification Matrix
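
The version, patch, memory and swap checks above can be run together on each prospective member before going any further. Below is a minimal sketch; the sizer and dupatch invocations are the ones given above, while the vmstat -P and swapon -s checks are assumptions based on standard Tru64 utilities, so their output format may differ on your release.

        #!/bin/sh
        # precheck.sh - run as root on EACH prospective cluster member
        # Report the operating system version (expect 5.1 or 5.1A)
        /usr/sbin/sizer -v
        # List installed patch kits (compare with the minimum patchkits above)
        /usr/sbin/dupatch -track -type patch
        # Physical memory (expect at least 512 MB) - assumed vmstat -P output format
        vmstat -P | grep -i "Total Physical Memory"
        # Configured swap devices and usage
        swapon -s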

1.2 Memory Channel / Interconnect configuration

This guide uses Memory Channel (CCMAB PCI cards) for the private interconnect. When running OS 5.1A you can use alternatives, but Memory Channel is a good choice for a production environment. In a two-node cluster configuration the Memory Channel cards are configured to run in virtual hub mode, meaning that no additional hub hardware is needed for two nodes.

The J1 jumpers on each Memory Channel card must be set to enable virtual hub mode: 

            Node 0: jumper pins 2 & 3 

            Node 1: no pins jumpered together     

With the servers powered off, physically install the interconnect PCI cards. To connect the servers together, the proprietary Compaq BN39B cable is needed; connect the cable to each node. (For updated and other approved hardware components, see the RAC/Tru64 Certification Matrix.)


Note: On OS 5.1A, approved network devices (for example Gigabit or Fast Ethernet) may be used for the interconnect instead of Memory Channel; see section 1.1 and the RAC/Tru64 Certification Matrix.

    1.3 Test the Interconnect channel 

    This section applies only if you are using Memory Channel for the interconnect. Test that your PCI Memory Channel cards are working: on each machine, run mc_diag at the firmware prompt. (See the RAC/Tru64 Certification Matrix for updated hardware components.)

    >>mc_diag 

    Testing MC_Adapter 
    Adapter mca0 passed test 

    Test the cable to verify that you can connect to the other node. Run mc_cable at the firmware prompt on both machines at the same time:

    Node 0                                  Node 1
    >>mc_cable                              >>mc_cable

    mca0 Node id0 is online                 mca0 Node id1 is online
    Response from Node 1 on mca0            Response from Node 0 on mca0

    If the adapter passed and the cable responded on each side, then you have a working connection. Do not attempt to configure the cluster until you pass these tests.

    1.4 Set up the minimal hard disks needed for a two node Cluster

    Generally each server comes with an internal disk that has a running operating system installed. The internal disk of one node is needed to form the cluster; once the cluster is formed, the internal disks are not part of it and can be used as emergency boot disks or additional swap space for your running cluster. Four disks in total are used to form the cluster, and they must be connected to a SCSI bus shared by both nodes. Apply the same configuration changes to any additional disks used specifically for Oracle data.

    This is basically like configuring a standard external SCSI disk: the disks are connected to each server's SCSI card via a Y cable connecting both nodes to the shared disk cabinet, and each cable is then properly terminated.

    Upon booting to the firmware or chevron prompt, the physical disks will be visible. Each disk is assigned a SCSI ID.

    On the second server you must change the SCSI host ID assigned to its adapters.
    At the firmware prompt:

        >>show dev - Displays all devices, including the SCSI disks. Change the IDs accordingly.
        >>set pka0_host_id 7 - The other node stays at its default of 6
        >>set pkb0_host_id 7 - The other node stays at its default of 6
        >>set pkd0_host_id 7 - The other node stays at its default of 6

    After this is complete, the disks will be seen by both servers and can be used without SCSI timeouts. Again, you only change the above on one of the two servers, because both default to the same SCSI ID, which would cause SCSI timeouts.
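
    Once the operating system is up on both servers, it is worth confirming that each node sees the same shared dskN devices before forming the cluster. A minimal sketch, assuming the standard Tru64 5.x hwmgr utility (verify the exact options on your release):

        # Run as root on EACH node after booting the operating system
        # List all known devices; the shared dskN disks should appear on both
        # nodes with matching identifiers
        /sbin/hwmgr -view devices | grep dsk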


    2. Create a cluster

    2.1 Install the OS on the Internal / Emergency boot disk

    Prepare the internal boot disk: 

    • Upgrade both servers to the latest firmware version; all instructions are available in the sleeve of the firmware CD-ROM. 
    • Choose one server and boot from the cdrom containing OS 5.1

                >>> boot dka400 

    • Select a Full install to the internal disk on server 1 with All options
    • After the OS has been installed and the server has booted from the internal disk, install the latest patch kit, which can be found at http://ftp.service.digital.com/public/unix. Refer to the section above for the minimum OS patch level. The readme that accompanies each patch kit will guide you through its installation. At this point the system is still a single node, not a cluster. 
    • After installing the latest patch kit, mount the CD-ROM labeled "Associated Products Disk 1" and install the TCS1 component onto the internal boot disk. 

            # cd /SD_CDROM/TCS1/kit 

            # setld -l .

    • A license for TCS1 must be obtained from Compaq to successfully use the clustering components. Obtain the license and, as root, use the lmfsetup command to register the required components. In addition to licensing the TCS1 component, you must also license at least the following in order to run the Oracle software successfully. Type lmf list at the unix command prompt to verify they are all licensed (a quick verification sketch follows the listing below). If any of the following do not have a status of Active, contact Compaq Support for assistance. 

            # lmf list
            Product              Status         Users: Total
            TCS1                 Active         unlimited
            OSF-BASE             Active         unlimited
            OSF-USR              Active         unlimited
            OSFCMPLRS520         Active         unlimited
            OSFLIBA520           Active         unlimited
            OSFPGMR520           Active         unlimited
            OSFINCLUDE520        Active         unlimited
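
    A quick way to confirm that the TruCluster subsets were loaded and that the licenses above are active is sketched below; setld -i and lmf list are standard Tru64 commands, while the grep patterns are only illustrative.

        # Run as root on the node that will form the cluster
        # List installed subsets and pick out the TruCluster (TCS) components
        /usr/sbin/setld -i | grep -i TCS
        # Confirm the required licenses show a status of Active
        lmf list | egrep -i 'TCS1|OSF-BASE|OSF-USR|OSFCMPLRS520|OSFLIBA520|OSFPGMR520|OSFINCLUDE520'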

    2.2 Form a One-Node Cluster

    The above steps allow you to boot Server 1 from its properly configured internal boot disk. Prior to forming a cluster you must diagram your disk layout and specify which disks on your shared storage device will be the cluster root disk, member one boot disk, quorum disk, and member two boot disk. 

    Fill in the following checklist and refer to it when prompted during cluster creation. Example values are shown after each item: 

    Cluster Name – deccluster

    Cluster IP Address – 138.138.138.190

    Clusterwide root partition – dsk5a

    Clusterwide usr partition – dsk5g

    Clusterwide var partition – dsk5h

    Quorum disk device (entire disk) – dsk3

    Number of votes assigned to quorum disk – 1

    First member’s member id – 1

    Number of votes assigned to this member – 1

    First member’s boot disk – dsk1

    First member’s cluster interconnect IP name – mcdec1

    First member’s cluster interconnect IP address - 10.0.0.1

    Use the disk configuration GUI or any other method to format the four shared disks with the minimum disk layout. Notice above that the cluster root disk has three partitions (/, usr and var). 

    For a two-node cluster, Oracle expects member id 1 for node 1 and member id 2 for node 2; Oracle utilities such as lsnodes do not expect gaps in the member id numbering. To avoid problems, number your nodes beginning from 1 and leave no gaps as you add nodes to the cluster.

    Absolute minimum cluster disk size layout:

    Clusterwide root disk         Absolute Minimum      Recommended
    /                             125 MB                at least 500 MB
    usr                           675 MB                at least 2000 MB
    var                           300 MB                at least 2000 MB

    Each member boot disk         Absolute Minimum
    /                             128 MB
    b (swap1)                     128 MB (make this as large as the disk allows)
    h (cnx)                       1 MB

    Quorum disk                   Absolute Minimum
    h (cnx)                       1 MB

    Unfortunately the quorum device uses an entire disk, i.e. nothing else can use that disk even though only 1 MB is actually used. This will likely change in the future so that an entire disk is not wasted on quorum.

    After the disks are configured as listed above, you can create the cluster with its first member.

    - Login as root and run clu_create.

    Answer the questions about your cluster configuration using the checklist, disk layout and cluster names above. After all questions are answered, a one-node cluster is formed, including configuration of the first member boot disk, the quorum disk and the cluster root disk. Once this completes, the server is rebooted from the new boot disk on the shared disk array. The whole process can take up to 4 hours. 


    2.3 Boot from the First Member Boot disk

    If the above reboot did not change your boot device at the hardware level to the first member's boot disk, set it manually as follows. The actual device name will vary based on how your SCSI configuration names the devices; you can see the device names by issuing show dev at the >>> prompt.

    - Shut down to the hardware or chevron prompt

        >>> show dev                    See which disk device is the first member's boot disk
        >>> set boot_osflags A          Allows multiuser mode on the next boot
        >>> set bootdef_dev dkb1xxxxx   Sets the default boot device to the first member's boot disk

    Set the boot device to the first member's boot disk; this is where the first member will always boot from in the future. If you ever need to boot from the internal disk, change bootdef_dev back; this should only be necessary in an emergency. A sketch for checking these console variables from the running system follows.
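
    If the member is already up and running, the console variables can also be inspected from the operating system instead of dropping back to the chevron prompt. A minimal sketch, assuming the Tru64 consvar utility is present on your release (treat the option names as assumptions and check the consvar reference page):

        # Run as root on the running member
        # Display the default boot device (should be the member boot disk)
        /sbin/consvar -g bootdef_dev
        # Display the boot flags (should allow a multiuser boot)
        /sbin/consvar -g boot_osflags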

    2.4 Create a second member boot disk

    Fill in the following checklist and refer to it when prompted while adding the second member. Example values are shown after each item: 

    Hostname – deccluster2

    Member ID – 2

    Member votes (total expected votes with the addition of this member) – 3

    Number of votes for this member – 1

    Boot disk for this member – dsk2 

    Cluster interconnect IP name – mcdec2

    Cluster interconnect IP address - 10.0.0.2

    After booting the first node in the cluster you then add members to the cluster by running the following command from the first member while it is up and running. New members can be added or deleted without interrupting the cluster. 

    Ensure that the second node is currently shutdown before running the add command. 

    clu_add_member 

    • At the prompts use the information from the above checklist and answer the questions.
    • The second node's boot disk has now been configured. 
    • Boot the second node to the Firmware or cheveron prompt. 
    • Run the following command from the second node at the >>> prompt to boot the new member using genvmunix. The exact boot command is listed on screen at the end of the clu_add_member run.

             >>>boot -file genvmunix  

    • As you did on the first node, change the bootdef_dev device at the >>> prompt to ensure the second node will always boot from its own member boot disk:

             >>> show dev                    See which disk device is the second member's boot disk
             >>> set boot_osflags A          Allows multiuser mode on the next boot
             >>> set bootdef_dev dkb2xxxxx   Sets the default boot device to the second member's boot disk

    The second node will now perform a kernel build and install all subsets. 
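
    Once the second member's kernel build finishes and it joins the cluster, the membership and quorum configuration can be confirmed from either node. A minimal sketch using the TruCluster utilities clu_get_info and clu_quorum (the output format varies by TruCluster release):

        # Run as root from either member
        # Summarize the cluster name, members and their state (expect 2 members UP)
        /usr/sbin/clu_get_info
        # Show the quorum disk and vote configuration (expect expected votes = 3)
        /usr/sbin/clu_quorum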


    3.0 Installation of RAC

    3.1 Kernel Parameters 

    At a minimum, set the following kernel parameters on EACH NODE of the cluster. Edit them via the GUI, or edit /etc/sysconfigtab with vi as the root user: 

    proc: 
    max_per_proc_address_space = set to the physical memory value, or at least 1 GB 
    per_proc_address_space = set to the physical memory value, or at least 1 GB 
    per_proc_stack_size = no larger than 500 MB 
    max_per_proc_stack_size = no larger than 500 MB 
     
    vm: 
    new_wire_method = 0 (for 5.1A patchkit 5 or earlier, or 5.1B patchkit 2 or earlier; not required for 5.1B patchkit 3 or greater. Note: the most recent patchkits are recommended)
    vm_swap_eager = 1 
    ubc_maxpercent = 70 
    vm_bigpg_enabled = 0
     
    ipc: 
    shm_max = 2139095040 
    shm_mni = 256 
    shm_seg = 1024 
    sem_mni = 1024 
    sem_msl = 2600 
    ssm_threshold = 0 
     
    rdg: 
    msg_size = 32768 
    max_objs = 5120 
    max_async_req = 256 
    max_sessions = Oracle processes + 20 
    rdg_max_auto_msg_wires = 0 
    rdg_auto_msg_wires = 0 
     
    rt: 
    aio_task_max_num = 1040  (this parameter must be greater than (max dbwr I/O’s * db_writer_processes + parallel_max_servers * db_file_multiblock_read_count + 10))

    inet:
     udp_sendspace = 66560
     udp_recvspace = 66560

    It is not necessary to change the udp_sendspace/udp_recvspace parameters from default since Oracle9i uses RDG by default for all GES/GCS, cache fusion and other cluster traffic. These parameter values are only required if you have a mixture of 8i and 9i OPS/RAC instances on the same cluster or you have enabled UDP as the IPC protocol for Oracle9i. 
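
    The values actually in effect can be checked against the list above with the sysconfig utility; /etc/sysconfigtab changes generally take effect after a reboot. A minimal sketch querying a few of the attributes listed above (run it on each node, and extend it to whichever attributes you care about):

        # Query the running value of selected kernel attributes (as root)
        /sbin/sysconfig -q ipc shm_max
        /sbin/sysconfig -q proc max_per_proc_address_space
        /sbin/sysconfig -q vm ubc_maxpercent
        /sbin/sysconfig -q rt aio_task_max_num
        /sbin/sysconfig -q rdg max_objs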

    NOTE: Since it is recommended to run 9i databases with TIMED_STATISTICS=TRUE, which increases the number of calls to get the system time, it is recommended to set up the /dev/timedev device on each node in the cluster (if it does not already exist):

    # mknod /dev/timedev c 15 0 

    # chmod ugo+r /dev/timedev (or chmod 644 /dev/timedev)

    Using the /dev/timedev also avoids potential instance evictions due to issues with the NTP (Network Time Protocol) on Tru64.

     

    3.2 Create the Oracle user and dba group

    Log in as root and use the adduser command to create an account named Oracle that belongs to the dba (Oracle database administrator) group. Create the dba group as needed: 

    # adduser
    Enter a login name for the new user (for example, john): Oracle
    Enter a UID for (Oracle) [15]: 100
    Enter a full name for (Oracle): Oracle
    Enter a login group for (Oracle) [users]: dba
    The group dba was not found.
    Do you want to add group dba to the /etc/group file ([y]/n)? y
    Adding group dba to the /etc/group file...
    Enter a new group number [200]: 200
    Group dba was added to the /etc/group file.
    Enter a parent directory for (Oracle) [/usr/users]: /usr/users
    Enter a login shell for (Oracle) [/bin/sh]: /bin/csh
    Finished adding user account for (Oracle).
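
    A quick check that the account and group were created as intended (the expected uid/gid values are the ones entered in the example above):

        # Confirm the new account and its group membership (as root)
        id Oracle             # expect uid=100(Oracle) gid=200(dba)
        grep dba /etc/group   # the dba group with gid 200 should be listed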

    3.3 Mount point for the Oracle Software 

    Create the mount point for the Oracle software as the root user. Select a different disk on your shared storage; in this example dsk9 is used: 

    # disklabel -z dsk9
    # disklabel -rw dsk9 RZ1CF-CF Oracle
    # mkfdmn /dev/disk/dsk9c Oracle
    # mkfset Oracle fs1
    # mkdir /u01
    # mount Oracle#fs1 /u01
    # chown Oracle:dba /u01

    3.4 Mount point for the Oracle datafiles 

    Create the file systems for the Oracle database datafiles. Alternatively, if there is sufficient room on the same disk as the Oracle software (/u01), you could simply create a directory named /u01/oradata and use that instead. 

    # disklabel -z dsk20
    # disklabel -rw dsk20 RZ1CF-CF Oracle
    # mkfdmn /dev/disk/dsk20c OracleDb
    # mkfset OracleDb fs1
    # mkdir /u02
    # chown Oracle:dba /u02
    # mount OracleDb#fs1 /u02
     
    Add the mount lines to /etc/fstab.
    Oracle#fs1 /u01 advfs rw 0 2
    OracleDb#fs1 /u02 advfs rw 0 2 
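
    The new AdvFS domains, filesets and mounts can be verified with the standard AdvFS utilities; a minimal sketch using the domain names created above:

        # Verify the AdvFS domains and filesets, then check the mounts (as root)
        showfdmn Oracle
        showfdmn OracleDb
        showfsets Oracle
        showfsets OracleDb
        df -k /u01 /u02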

    3.5 Install the Software 

    Ensure that the system has at least the following resources prior to installing Oracle. To verify that the system is properly configured for a successful installation, run the InstallPrep script from Metalink Note 189256.1:

    • At least 400 MB free in /tmp
    • At least 256 MB of physical memory
    • Swap space of at least three times the amount of physical memory (if you have more than 1 GB of physical memory, twice the amount of physical memory is sufficient) 

    After downloading the InstallPrep.sh script (Note 189256.1) in ASCII mode, run the script, review the output and resolve any issues listed in /tmp/Oracle_InstallPrep_Report before attempting the Oracle installation:

     ./InstallPrep.sh 
    You are currently logged on as oracle 
    Is oracle the unix user that will be installing Oracle Software? y or n 
    y
    Enter the unix group that will be used during the installation
    Default: dba
    dba 

    Enter Location where you will be installing Oracle
    Default: /u01/app/oracle/product/oracle9i
    /u01/app/oracle/product/9.2.0.1
    Your Operating System is OSF1
    Gathering information... Please wait 

    Checking unix user ...
    user test passed 

    Checking unix umask ... 
    umask test passed 

    Checking unix group ... 
    Unix Group test passed

    Checking Memory & Swap... 
    Memory test passed

    /tmp test passed 

    Checking for a cluster...

    Cluster has been detected You have 2 cluster members configured and 2 are currently up
    No cluster warnings detected
    Processing kernel parameters... Please wait

    Running Kernel Parameter Report...

    Check the report for Kernel parameter verification

    Completed.

    /tmp/Oracle_InstallPrep_Report has been generated

    Please review this report and resolve all issues before attempting to install the Oracle Database software. The following steps outline how to install the Oracle Database software.

    1. Mount the Oracle 9i cdrom as the root user onto /mnt. The Oracle Enterprise edition has multiple cdroms. During the installation you will be prompted to mount the second cdrom when needed. Unmount the first cdrom and repeat this step when required.

        # mount -r /dev/disk/cdrom0c /mnt 

    2. The Oracle installation process is a GUI only installation. Ensure that you have x-windowing capability. Run the following command in a unix window.

        xhost +

    3. Open a new window and log in as the oracle user. Ensure the following environment variables are set prior to beginning the installation. ORACLE_HOME must be writable by the oracle user and is the location where the Oracle software will be installed.

        ORACLE_HOME=/u01/app/oracle/product/9.0.1 

        ORACLE_BASE=/u01/app/oracle

        PATH=$ORACLE_HOME/bin:/usr/ccs/bin: followed by whatever else you want in the PATH.

        DISPLAY= the IP address of the machine where the X windows will be displayed. If you are running directly on the server, use the server's IP address; if you are running from an X-window client, use the client's IP address. 

    These environment variables should be placed in the oracle user's .profile so that they are set each time the oracle user logs in; a sample fragment is shown below. 
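
    A minimal .profile fragment using the example locations from this note (Bourne/Korn shell syntax; the DISPLAY value is a placeholder for your own X display, and note that the adduser example above chose /bin/csh as the login shell, in which case the equivalent setenv lines belong in .login instead):

        # Fragment for the oracle user's $HOME/.profile
        ORACLE_BASE=/u01/app/oracle
        ORACLE_HOME=$ORACLE_BASE/product/9.0.1
        PATH=$ORACLE_HOME/bin:/usr/ccs/bin:$PATH
        DISPLAY=<workstation-ip>:0.0   # placeholder - replace with your X display
        export ORACLE_BASE ORACLE_HOME PATH DISPLAY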

    4. Change to the /tmp directory and run the following command to begin the installation from the CD-ROM, answering the questions and following the directions for a Typical install: 

        cd /tmp
        /mnt/runInstaller

    The Oracle software will automatically install the RAC option as long as it is being installed in a valid cluster environment. The following common screen answers may help while installing the software:

    Screen name (upper left)              Common reply
    Inventory Location                    A location writable by the oracle user
    Unix Group Name                       dba
    File Locations                        Source is the CD-ROM; destination is the ORACLE_HOME where the software will be installed
    Available Products                    Oracle9i Database
    Installation Types                    Enterprise Edition (for the database software)
    Database Configuration                Software Only; after the software install succeeds, create a database separately
    Cluster Node Selection                Do not select any node in this screen
    Shared Configuration File Name        If this appears, provide a complete file name and location visible to every node of the cluster, e.g. /u01/app/Oracle/oradata/Sconfig
    All other screens                     Self-explanatory; simply follow the instructions until the software installation completes

    3.6 Create a RAC Database

    The Oracle Database Configuration Assistant (DBCA) will create a database for you (for an example of manual database creation see Database Creation in Oracle9i RAC). The DBCA creates your database using the optimal flexible architecture (OFA). This means the DBCA creates your database files, including the default server parameter file, using standard file naming and file placement practices. The primary phases of DBCA processing are:-

    • Create the database
    • Configure the Oracle network services
    • Start the database instances and listeners

    Oracle Corporation recommends that you use the DBCA to create your database. This is because the DBCA preconfigured databases optimize your environment to take advantage of Oracle9i features such as the server parameter file and automatic undo management. The DBCA also enables you to define arbitrary tablespaces as part of the database creation process. So even if you have datafile requirements that differ from those offered in one of the DBCA templates, use the DBCA. You can also execute user-specified scripts as part of the database creation process.

    The DBCA and the Oracle Net Configuration Assistant also accurately configure your Real Application Clusters environment for various Oracle high availability features and cluster administration tools.

    Note: Prior to running the DBCA it may be necessary to run the NETCA tool or to manually set up your network files. To run the NETCA tool execute the command netca from the $ORACLE_HOME/bin directory. This will configure the necessary listener names and protocol addresses, client naming methods, Net service names and Directory server usage. Also, it is recommended that the Global Services Daemon (GSD) is started on all nodes prior to running DBCA. To run the GSD execute the command gsd from the $ORACLE_HOME/bin directory.

    • DBCA will launch as part of the installation process, but can be run manually by executing the command dbca from the $ORACLE_HOME/bin directory on UNIX platforms. The RAC Welcome Page displays. Choose Oracle Cluster Database option and select Next.

    Note: When creating a database via DBCA it is common to receive one of the following error messages. If that occurs, follow the steps provided: 

    ERROR: “gsd daemon is not running on local node. Please start the daemon by executing gsd command from another window before proceeding”

    - Exit the DBCA utility and run the following command: 
    - gsd & 
    - Telnet to the second node and start the gsd daemon on that server as well.

    ERROR: If this is the first Oracle9i database created on this cluster, then you must initialize the clusterwide SRVM configuration. First, create or edit the file /var/opt/oracle/srvConfig.loc and add the entry srvconfig_loc=path_name, where path_name is the shared srvconfig location.

     vi /var/opt/oracle/srvConfig.loc
     srvconfig_loc=/u01/oradata/rac_srvconfig 

    Then execute the following command to initialize the config file (Note: this cannot be run while the gsd is running. Before v9.2 you will need to kill the .../jre/1.1.8/bin/... process to stop the gsd from running):- 

    srvconfig -init
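
    Taken together, the SRVM initialization steps above look like the sketch below, using the example path from this note (run with the GSD stopped on all nodes; ensure the oracle user can write to these locations, or perform the file creation as root):

        # Point every node at the shared SRVM configuration file
        mkdir -p /var/opt/oracle
        echo "srvconfig_loc=/u01/oradata/rac_srvconfig" > /var/opt/oracle/srvConfig.loc
        # Create the shared configuration file on the cluster file system
        touch /u01/oradata/rac_srvconfig
        # Initialize the clusterwide SRVM configuration
        srvconfig -init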


    ERROR: “Oracle Corporation does not support shared nothing Real Application Clusters databases. Please use either shared raw devices or a cluster filesystem files for your datafiles.” 

    - In the DBCA Node Selection window, select BOTH nodes in the cluster instead of just one node.

    • The Operations page is displayed. Choose the option Create a Database and click Next.
    • The Node Selection page appears. Select the nodes that you want to configure as part of the RAC database and click Next. If nodes are missing from the Node Selection then perform clusterware diagnostics by executing the $ORACLE_HOME/bin/lsnodes -v command and analyzing its output. Refer to your vendor's clusterware documentation if the output indicates that your clusterware is not properly installed. Resolve the problem and then restart the DBCA.
    • The Database Templates page is displayed. The templates other than New Database include datafiles. Choose New Database and then click Next.
    • The Show Details button provides information on the database template selected.
    • DBCA now displays the Database Identification page. Enter the Global Database Name and Oracle System Identifier (SID). The Global Database Name is typically of the form name.domain, for example mydb.us.oracle.com while the SID is used to uniquely identify an instance (DBCA should insert a suggested SID, equivalent to name1 where name was entered in the Database Name field). In the RAC case the SID specified will be used as a prefix for the instance number. For example, MYDB, would become MYDB1, MYDB2 for instance 1 and 2 respectively.
    • The Database Options page is displayed. Select the options you wish to configure and then choose Next. Note: If you did not choose New Database from the Database Template page, you will not see this screen.
    • The Additional database Configurations button displays additional database features. Make sure both are checked and click OK.
    • Select the connection options desired from the Database Connection Options page. Note: If you did not choose New Database from the Database Template page, you will not see this screen. Click Next.
    • DBCA now displays the Initialization Parameters page. This page comprises a number of Tab fields. Modify the Memory settings if desired and then select the File Locations tab to update information on the Initialization Parameters filename and location. Then click Next.
    • The option Create persistent initialization parameter file is selected by default. Enter a file system name then click Next.
    • The button File Location Variables… displays variable information. Click OK.
    • The button All Initialization Parameters… displays the Initialization Parameters dialog box. This box presents values for all initialization parameters and indicates whether they are to be included in the spfile to be created through the check box, included (Y/N). Instance specific parameters have an instance value in the instance column. Complete entries in the All Initialization Parameters page and select Close. Note: There are a few exceptions to what can be altered via this screen. Ensure all entries in the Initialization Parameters page are complete and select Next.
    • DBCA now displays the Database Storage Window. This page allows you to enter file names for each tablespace in your database.
    • The file names are displayed in the Datafiles folder, but are entered by selecting the Tablespaces icon, and then selecting the tablespace object from the expanded tree. Any names displayed here can be changed. Complete the database storage information and click Next.
    • The Database Creation Options page is displayed. Ensure that the option Create Database is checked and click Finish.
    • The DBCA Summary window is displayed. Review this information and then click OK.
    • Once the Summary screen is closed using the OK option, DBCA begins to create the database according to the values specified.

    A new database now exists. It can be accessed via Oracle SQL*PLUS or other applications designed to work with an Oracle RAC database.


    4.0 Administering Real Application Clusters Instances

    Oracle Corporation recommends that you use SRVCTL to administer your Real Application Clusters database environment. SRVCTL manages configuration information that is used by several Oracle tools. For example, Oracle Enterprise Manager and the Intelligent Agent use the configuration information that SRVCTL generates to discover and monitor nodes in your cluster. Before using SRVCTL, ensure that your Global Services Daemon (GSD) is running after you configure your database. To use SRVCTL, you must have already created the configuration information for the database that you want to administer.  You must have done this either by using the Oracle Database Configuration Assistant (DBCA), or by using the srvctl add command as described below.

    If this is the first Oracle9i database created on this cluster, then you must initialize the clusterwide SRVM configuration. First, create or edit the file /var/opt/oracle/srvConfig.loc and add the entry srvconfig_loc=path_name, where path_name is the shared srvconfig location.

    $ vi /var/opt/oracle/srvConfig.loc
    srvconfig_loc=/u01/oradata/rac_srvconfig

    Then execute the following command to initialize the config file (Note: This cannot be run while the gsd is running. Before v9.2 you will need to kill the .../jre/1.1.8/bin/... process to stop the gsd from running):-

    $ srvconfig -init

    The first time you use the SRVCTL Utility to create the configuration, start the Global Services Daemon (GSD) on all nodes so that SRVCTL can access your cluster's configuration information. Then execute the srvctl add command so that Real Application Clusters knows what instances belong to your cluster using the following syntax:-


    For Oracle RAC v9.0.1:- 

    $ gsd
    Successfully started the daemon on the local node.

    $ srvctl add db -p db_name -o oracle_home

    Then for each instance enter the command: 

    $ srvctl add instance -p db_name -i sid -n node
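
    For example, to register the two-instance database shown in the output below (the database, node and instance names are only illustrative):

    $ srvctl add db -p racdb1 -o /u01/app/oracle/product/9.0.1
    $ srvctl add instance -p racdb1 -i racinst1 -n racnode1
    $ srvctl add instance -p racdb1 -i racinst2 -n racnode2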

    To display the configuration details for, for example, databases racdb1/2 on nodes racnode1/2 with instances racinst1/2, run:- 

    $ srvctl config 
    racdb1
    racdb2

    $ srvctl config -p racdb1
    racnode1 racinst1
    racnode2 racinst2

    $ srvctl config -p racdb1 -n racnode1
    racnode1 racinst1

    Examples of starting and stopping RAC follow:- 

    $ srvctl start -p racdb1
    Instance successfully started on node: racnode2
    Listeners successfully started on node: racnode2
    Instance successfully started on node: racnode1
    Listeners successfully started on node: racnode1

    $ srvctl stop -p racdb2 
    Instance successfully stopped on node: racnode2
    Instance successfully stopped on node: racnode1
    Listener successfully stopped on node: racnode2
    Listener successfully stopped on node: racnode1

    $ srvctl stop -p racdb1 -i racinst2 -s inst 
    Instance successfully stopped on node: racnode2

    $ srvctl stop -p racdb1 -s inst
    PRKO-2035 : Instance is already stopped on node: racnode2
    Instance successfully stopped on node: racnode1


    For Oracle RAC v9.2.0+:- 

    $ gsdctl start
    Successfully started the daemon on the local node.

    $ srvctl add database -d db_name -o oracle_home [-m domain_name] [-s spfile]

    Then for each instance enter the command: 

    $ srvctl add instance -d db_name -i sid -n node
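
    For example, to register the same two-instance database with the 9.2 syntax (the names and ORACLE_HOME are only illustrative):

    $ srvctl add database -d racdb1 -o /u01/app/oracle/product/9.2.0.1
    $ srvctl add instance -d racdb1 -i racinst1 -n racnode1
    $ srvctl add instance -d racdb1 -i racinst2 -n racnode2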

    To display the configuration details for, for example, databases racdb1/2 on nodes racnode1/2 with instances racinst1/2, run:- 

    $ srvctl config 
    racdb1
    racdb2

    $ srvctl config -p racdb1 -n racnode1
    racnode1 racinst1 /u01/app/oracle/product/9.2.0.1

    $ srvctl status database -d racdb1
    Instance racinst1 is running on node racnode1
    Instance racinst2 is running on node racnode2

    Examples of starting and stopping RAC follow:- 

    $ srvctl start database -d racdb2

    $ srvctl stop database -d racdb2

    $ srvctl stop instance -d racdb1 -i racinst2 

    $ srvctl start instance -d racdb1 -i racinst2

    $ gsdctl stat
    GSD is running on local node

    $ gsdctl stop

    For further information on srvctl and gsdctl see the Oracle9i Real Application Clusters Administration manual.

     

    DBUA can be used to go from 9.0 to 9.2 for either RAC or non-RAC. If you are converting from 9.2 non-RAC to RAC, you can simply install the RAC option, add the required init.ora parameters, and start the instances (provided the cluster is already configured). 

    Init.ora parameters to include would be (a sample parameter-file fragment is shown after this list): 

    - cluster_database = true 

    - An undo tablespace for each node

    - instance_number = # (must be unique between instances) 

    - instance_name = must be unique per instance 

    - thread = # (must be unique between instances)
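
    A minimal parameter-file fragment for a two-instance database named racdb (all names, SIDs and undo tablespace names are illustrative; adjust them to your own environment):

        # Common to both instances
        *.cluster_database=true
        # Instance-specific settings (SID-prefixed entries)
        racdb1.instance_name=racdb1
        racdb1.instance_number=1
        racdb1.thread=1
        racdb1.undo_tablespace=UNDOTBS1
        racdb2.instance_name=racdb2
        racdb2.instance_number=2
        racdb2.thread=2
        racdb2.undo_tablespace=UNDOTBS2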


    5.0 References
    • RAC/Tru64 Certification Matrix
    • Note:177087.1 - Metalink COMPAQ: Quick Start Guide - 9.0.x RDBMS Installation
    • Note:137288.1 - Database Creation in Oracle9i RAC
    • Oracle9i Real Application Clusters on Compaq Tru64
    • Oracle9i Real Application Clusters Installation and Configuration Release 1 (9.0.1)
    • Oracle9i Real Application Clusters Concepts
    • Oracle9i Real Application Clusters Administration
    • Oracle9i Real Application Clusters Deployment and Performance
    • Oracle9i Installation Guide for Compaq Tru64, Hewlett-Packard HPUX, IBM-AIX, Linux, and Sun Solaris-based systems.
    • Oracle9i Release Notes