Prerequisites
- Please attach disks of the same size to all database nodes. It is strongly recommended to use the same disk name (AWS), LUN number (Azure), or device name (GCP) on all database nodes to ensure a symmetric configuration.
- Create and attach one 1 GB disk to each quorum node. Each disk group requires one 1 GB disk on each quorum node.
Please follow steps 1-2 to attach the disks:
Disk Group Compatibility
When you create a disk group, you need to specify its compatibility attribute settings. This section discusses two compatibility attributes: COMPATIBLE.ASM and COMPATIBLE.RDBMS.
These attributes determine the availability of certain ASM features, as described in Oracle ASM features enabled by disk group compatibility attribute settings. Once set, the attributes cannot be reverted to a lower value; they can only be advanced.
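To check which compatibility attributes are currently set on a mounted disk group, you can query the standard V$ASM_ATTRIBUTE view from the ASM instance. This is a sketch; the disk group name MYDG is an example:

```shell
# Connect to the ASM instance as the grid user and list the
# compatibility attributes of disk group MYDG (example name):
sqlplus -s / as sysasm <<'EOF'
SELECT a.name, a.value
FROM   v$asm_attribute a, v$asm_diskgroup g
WHERE  a.group_number = g.group_number
AND    g.name = 'MYDG'
AND    a.name LIKE 'compatible.%';
EOF
```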
ASM Compatibility
The COMPATIBLE.ASM attribute must be advanced before advancing other disk group compatibility attributes, and its value must be greater than or equal to the values of the other disk group compatibility attributes (Reference). Set this attribute to the version of the Grid Infrastructure (GI) stack, such as 19.0, 18.0, or 12.2.
RDBMS Compatibility
Set the RDBMS Compatibility attribute to match the database version(s) in use. If you use a single database version, set the attribute to that version. If you use two or more different database versions, set the attribute to the lowest of them. Note that the version must be 11.2 or higher. Additional information is available in the Oracle documentation.
If the disk group will be used for ACFS only, set RDBMS Compatibility equal to ASM Compatibility.
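If you later need to advance a compatibility attribute on an existing disk group, the standard asmcmd setattr command can do it. A minimal sketch, assuming a disk group named MYDG:

```shell
# Advance compatible.rdbms on disk group MYDG to 19.0.
# Remember: attributes can only be advanced, never lowered.
asmcmd setattr -G MYDG compatible.rdbms 19.0
```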
Creating a disk group using the flashgrid-dg CLI tool
Connect as the grid user to any database node to run the flashgrid-dg command. flashgrid-dg is a non-interactive CLI tool. The following command creates a new NORMAL redundancy disk group with two data disks and one quorum disk:
flashgrid-dg create --name MYDG \
  --normal \
  --asm-compat 19.0 \
  --db-compat 19.0 \
  --disks /dev/flashgrid/rac[12].lun10 \
  --quorum-disks /dev/flashgrid/racq.lun2 \
  --disk-repair-time 24000h \
  --failgroup-repair-time 24000h \
  --au-size 4M
Note: On AWS / Azure / GCP, set disk-repair-time and failgroup-repair-time to 24000h, as shown in the example above. This will prevent ASM from dropping disks unnecessarily during transient disk or node failures.
Note: A disk group created this way is mounted only on the node where the command was run. Make sure you log in to each of the remaining database nodes and mount the disk group manually:
$ asmcmd mount MYDG
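After mounting, you can confirm the disk group state on that node with the standard asmcmd lsdg command. A sketch, again using the example disk group name MYDG:

```shell
# List disk group details on the current node; the State column
# should show MOUNTED for MYDG.
asmcmd lsdg MYDG
```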
You can get help on the flashgrid-dg options by running:
$ flashgrid-dg create -h
Verification
Please make sure that the created disk group is mounted on all nodes. To verify, run the flashgrid-cluster command and check the following:
- Mounted should be AllNodes
- Status should be Good
[grid@rac1 ~]$ flashgrid-cluster
...
---------------------------------------------------------------------------------------------------------
GroupName  Status  Mounted   Type    TotalMiB  FreeMiB  OfflineDisks  LostDisks  Resync  ReadLocal  Vote
---------------------------------------------------------------------------------------------------------
MYDG       Good    AllNodes  NORMAL  8192      7880     0             0          No      Enabled    None
---------------------------------------------------------------------------------------------------------
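If you want to script this verification, a minimal sketch could filter the flashgrid-cluster output for the disk group row, assuming the column layout shown above (GroupName, Status, Mounted in the first three columns). The sample line below stands in for live output; in practice you would pipe `flashgrid-cluster` instead of `$sample`:

```shell
# Sample row for MYDG, taken from the listing above.
sample='MYDG Good AllNodes NORMAL 8192 7880 0 0 No Enabled None'
# Check that Status is Good and Mounted is AllNodes for MYDG.
echo "$sample" | awk '$1 == "MYDG" && $2 == "Good" && $3 == "AllNodes" { found = 1 }
                      END { exit found ? 0 : 1 }' && echo "MYDG OK"
```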