Storage for the database
Use standard procedures for creating, restoring, or importing a database with ASM storage. If using the default ASM disk groups, use DATA disk group for the database files and FRA disk group for Fast Recovery Area.
For critical or high-performance databases, avoid sharing disk groups between multiple databases. A separate disk group (or set of disk groups) per database ensures better failure isolation and minimizes performance interference.
Note: Do not use the GRID disk group as storage for a database. The GRID disk group is dedicated for OCR and voting files. It must not be used for any other purposes.
Creating a database using DBCA (cloud only)
To run DBCA in GUI mode, use X-forwarding or VNC. If running Oracle Database version 22.214.171.124, note that the DBCA utility has problems when working via VNC - use X-forwarding or silent mode instead.
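If using silent mode, the default DATA and FRA disk groups can be specified directly on the DBCA command line. The following is a minimal sketch using standard dbca flags; the database name, template, and node names are example values - review and extend the flags (passwords, character set, etc.) before running the printed command as the oracle user:

```shell
# Assemble a dbca silent-mode command for review before running it.
# gdbname/sid "orcl" and the node list are example values.
DBCA_CMD="dbca -silent -createDatabase \
 -templateName General_Purpose.dbc \
 -gdbname orcl -sid orcl \
 -storageType ASM -diskGroupName +DATA \
 -recoveryAreaDestination +FRA \
 -nodelist rac1,rac2"
echo "$DBCA_CMD"
```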
To run DBCA via VNC, perform the following steps on database node 1:
- Make sure VNC port 5901 (TCP) is either open via the security group settings or forwarded via SSH tunnel.
- Switch to user oracle:
sudo su - oracle
- Set the VNC password and start the VNC server. The set of commands depends on the operating system.
- RHEL7 or OL7: run
- RHEL8 or OL8: run the following commands:
$ vncpasswd
$ Xvnc -rfbauth .vnc/passwd :1 & sleep 2; DISPLAY=:1 mwm & DISPLAY=:1 xterm & wait
- Connect your VNC client (e.g. RealVNC) to the node. Use one of the following connection strings:
<Public IP>:5901 - if connecting via public IP
<Private IP>:5901 - if connecting via VPN to the VPC private IP
localhost:5901 - if using port forwarding via SSH
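If using the SSH port-forwarding option, the tunnel can be opened with a single ssh command from your workstation. A sketch, assuming login user opc and a placeholder host name (replace both with your values); keep the tunnel open while the VNC session is active:

```shell
# Forward local port 5901 to the VNC server on database node 1.
# -N keeps the tunnel open without starting a remote shell.
TUNNEL_CMD="ssh -N -L 5901:localhost:5901 opc@rac1.example.com"
echo "$TUNNEL_CMD"
```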
- Run dbca in the x-terminal inside the VNC client.
Note: DBCA is expected to show a warning about insufficient swap size. Swap is intentionally disabled and the warning should be ignored. If swap is enabled, the system may start swapping when it is low on memory, even while some physical memory is still available. Swapping makes the system unresponsive and can cause timeouts at various levels, including storage. While swapping may prevent or delay running out of memory, its consequences are likely to be worse than those of running out of memory.
Note: If DBCA fails with [FATAL] [DBT-06604] The location specified for 'Fast Recovery Area Location' has insufficient free space, you can omit the Fast Recovery Area parameter and configure it after the database creation is completed.
Note: With database version 12.1 on RHEL 8 / OL 8:
- the DBCA tool may fail "Cluster Validation Checks" with no output. Either ignore this error or run DBCA with
$ CV_ASSUME_DISTID=OL7 dbca
- the DBCA tool may change the NODEFQDN entries in the tnsnames.ora files incorrectly. Review tnsnames.ora after running DBCA and amend it if needed. Make sure that NODEFQDN on each node resolves to the node's hostname. Refer to Connecting Database Clients (cloud only) for details.
- the DBCA tool may fail the "Single Client Access Name (SCAN)" validation in "Prerequisite Checks" with PRVG-2002: Encountered error in copying file "/etc/nsswitch.conf". This error should be ignored.
- the DBCA tool may report packages compat-libcap1-1.10 and compat-libstdc++-33-3.2.3 as missing. These errors should be ignored.
Note: With database version 12.1 or 12.2 running on GI version 18.3, the DBCA tool will show a failed "CRS Integrity" check. This error should be ignored. DBCA versions 12.1/12.2 use the crs_stat utility, which is deprecated in GI version 18.3.
Configuring database memory settings
Note: Do not use Automatic Memory Management (AMM) because it is not designed for systems with more than 4 GB of memory. Do not configure AMM parameters MEMORY_TARGET and MEMORY_MAX_TARGET. Instead, use ASMM parameters SGA_MAX_SIZE and PGA_AGGREGATE_LIMIT.
Larger SGA and PGA allocations can help achieve higher database performance. However, it is critical to configure the settings correctly to avoid swapping or running out of memory. Running out of memory will result in processes being killed, with unpredictable consequences for system stability. A typical mistake leading to a low-memory condition is setting the PGA size parameters too high or configuring too many HugePages.
On systems with 60 GiB or larger physical memory FlashGrid recommends allocating 80% of the total memory for use by the database(s). The remaining 20% must be reserved for the OS, Grid Infrastructure, and FlashGrid software. On systems with less than 60 GiB of physical memory, 12 GiB must be reserved for the OS, Grid Infrastructure, and FlashGrid software. The optimal ratio of SGA and PGA is different for different types of databases. However, the sum of SGA and PGA allocations for all databases must not exceed the total database memory allocation (Database_Memory_GiB in the calculations below).
If configuring HugePages then the amount of memory allocated as HugePages must match the SGA_MAX_SIZE (or sum of all SGA max sizes for multiple databases) plus 2 GiB for GIMR. Note that PGA and other software cannot use HugePages. Allocating too many HugePages may result in running out of memory.
Note: when the configured SGA is greater than the memory allocated as HugePages, then Oracle allocates the rest of the SGA using regular sized pages. This behavior is undesirable and can lead to inconsistent performance. To guarantee that the whole SGA is allocated using HugePages, set
ALTER SYSTEM SET USE_LARGE_PAGES='ONLY' SID='*' SCOPE=SPFILE;
In this case, the DB instance will fail to start when there are not enough HugePages available:
ORA-27106: system pages not available to allocate memory
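Before setting USE_LARGE_PAGES='ONLY', the node's configured HugePages can be compared against what the SGA needs. A sketch, assuming the default 2 MiB HugePages size and an example SGA_MAX_SIZE of 61 GiB:

```shell
# Compare configured HugePages with the pages the SGA needs (2 MiB pages).
SGA_MAX_SIZE_GIB=61                         # example value; use your SGA_MAX_SIZE
NEEDED_PAGES=$(( SGA_MAX_SIZE_GIB * 512 ))  # 512 x 2 MiB pages per GiB of SGA
TOTAL_PAGES=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo 2>/dev/null)
if [ "${TOTAL_PAGES:-0}" -lt "$NEEDED_PAGES" ]; then
  echo "HugePages_Total=${TOTAL_PAGES:-0} is below ${NEEDED_PAGES} pages; expect ORA-27106"
else
  echo "HugePages_Total=${TOTAL_PAGES} covers the ${NEEDED_PAGES} pages needed"
fi
```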
Example of memory allocations calculation for a 40% PGA / 60% SGA ratio for a single database:
Calculate total database memory (for all databases)
- If Total_Memory_GiB >= 60 GiB then Database_Memory_GiB = 0.8 x Total_Memory_GiB
- If Total_Memory_GiB < 60 GiB then Database_Memory_GiB = Total_Memory_GiB - 12 GiB
Calculate PGA size parameters
- PGA_AGGREGATE_LIMIT_GiB = round(0.4 x Database_Memory_GiB)
- PGA_AGGREGATE_TARGET_GiB = round(0.5 x PGA_AGGREGATE_LIMIT_GiB)
Note: In database version 126.96.36.199 the PGA_AGGREGATE_LIMIT parameter is not available.
Calculate SGA max size
- SGA_MAX_SIZE_GiB = round(0.6 x Database_Memory_GiB)
Calculate number of HugePages
- Number_HugePages = (SGA_MAX_SIZE_GiB + 2) x 512
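The calculation above can be sketched as a small script. Total_Memory_GiB here is an example value, and the rounding is plain integer division; round the results as appropriate for your environment:

```shell
# Compute memory settings for a single database with a 40% PGA / 60% SGA split.
TOTAL_MEMORY_GIB=128   # example host size; replace with your node's memory

if [ "$TOTAL_MEMORY_GIB" -ge 60 ]; then
  DATABASE_MEMORY_GIB=$(( TOTAL_MEMORY_GIB * 80 / 100 ))   # 80% of total memory
else
  DATABASE_MEMORY_GIB=$(( TOTAL_MEMORY_GIB - 12 ))         # reserve 12 GiB
fi

PGA_AGGREGATE_LIMIT_GIB=$(( DATABASE_MEMORY_GIB * 40 / 100 ))
PGA_AGGREGATE_TARGET_GIB=$(( PGA_AGGREGATE_LIMIT_GIB / 2 ))
SGA_MAX_SIZE_GIB=$(( DATABASE_MEMORY_GIB * 60 / 100 ))
NUMBER_HUGEPAGES=$(( (SGA_MAX_SIZE_GIB + 2) * 512 ))       # +2 GiB for GIMR

echo "Database_Memory_GiB=$DATABASE_MEMORY_GIB"
echo "PGA_AGGREGATE_LIMIT=${PGA_AGGREGATE_LIMIT_GIB}G"
echo "PGA_AGGREGATE_TARGET=${PGA_AGGREGATE_TARGET_GIB}G"
echo "SGA_MAX_SIZE=${SGA_MAX_SIZE_GIB}G"
echo "Number_HugePages=$NUMBER_HUGEPAGES"
```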
In case of multiple databases sharing the same nodes, the sum of all PGA aggregate limit/target parameters must be equal to or lower than the values calculated using the formulas above.
Enabling Strict Read-Local mode for a new RAC database
It is recommended that Strict Read-Local mode is enabled for every new RAC database.
ASM does not allow reads from disks that are resynchronizing data (SYNCING state) after being offline. As a result, if a database instance is running on a node whose local disks are in SYNCING state, all of its reads are performed remotely over the network. This may lower the performance of the database instance on a node that has just rebooted and is still resynchronizing its data.
Strict Read-Local mode prevents such performance asymmetry between nodes. When the Strict Read-Local mode is enabled, a database instance start will be delayed until its local disks complete resynchronization.
Use the following commands to enable, disable, and show status of Strict Read-Local mode:
# flashgrid-cluster strict-read-local-enable
# flashgrid-cluster strict-read-local-disable
# flashgrid-cluster strict-read-local-show
Note: Enabling Strict Read-Local mode changes the setting for existing databases only. After creating new database(s), re-run flashgrid-cluster strict-read-local-enable.
Note: In order to unmount a disk group while Strict Read-Local mode is enabled, use the srvctl stop diskgroup command with the -force option.
Example of unmounting a disk group:
$ srvctl stop diskgroup -diskgroup DGNAME -node rac1,rac2 -force
Enabling Failover HA for a new Standard Edition database (non-RAC)
If using Oracle Database 19c SE2 then you can enable Standard Edition High Availability (SEHA) for automatic failover of the database instances.
To enable SEHA, after the database is created and registered with Clusterware, specify all database nodes on which the database can run:
$ srvctl modify database -db mydb -node node1,node2
The database will automatically restart on an available node in case the node it is running on fails.
Enabling Failover HA for a new Enterprise Edition database (non-RAC)
- The database must be registered with Oracle Clusterware. DBCA registers the database automatically. To check whether the database is registered, run the srvctl config database -db <db_name> command. If the database is not registered, use the srvctl add database command to register it.
- The server parameter file (SPFILE) must be on a shared location that is accessible from all cluster nodes, such as an ASM disk group, or an ACFS filesystem.
- If there are any other resources which are specific to a server hosting the database, such as audit file destinations or database directory objects, they should be either on a shared location or created on all cluster nodes locally.
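As a quick sanity check for the SPFILE prerequisite: an SPFILE path that starts with "+" is inside an ASM disk group and therefore shared. A sketch with a hypothetical example path (take the real one from the srvctl config database output):

```shell
# Check whether an SPFILE path points into ASM (ASM paths start with "+").
# The path below is a hypothetical example; take yours from:
#   srvctl config database -db <db_name>
SPFILE_PATH="+DATA/ORCL/PARAMETERFILE/spfileorcl.ora"
case "$SPFILE_PATH" in
  +*) echo "SPFILE is on shared ASM storage" ;;
  *)  echo "SPFILE is on local storage; move it to a shared location" ;;
esac
```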
To enable automatic failover for a database:
Stop the database:
$ srvctl stop database -db <db_name>
Run the following command as a user that has sudo privileges (e.g. fg or root) and specify all nodes available for running the database:
sudo flashgrid-db-failover-cfg -n <cluster_nodes> -d <db_name>
For example, to configure automatic failover for database orcl between cluster nodes rac1 and rac2:
sudo flashgrid-db-failover-cfg -n rac1 rac2 -d orcl
To relocate a running database to a different node:
$ srvctl relocate database -db <db_name> -n <target_node>