The gp3 storage type, launched by AWS in December 2020, is the default option on newer FlashGrid Cluster deployments.
While the previous generation (gp2) remains fully supported, gp3 volume performance is no longer tied to storage capacity: gp3 volumes offer the flexibility to provision IOPS and throughput independently of volume size.
Things to consider before migrating from gp2 to gp3:
gp2:
- baseline performance scales linearly at 3 IOPS per GiB of volume size, up to 16,000 IOPS (reached at a volume size of ~5,334 GiB)
- throughput is capped at 250 MiB/s
- the minimum size to achieve maximum throughput is 334 GiB
gp3:
- baseline performance is 3,000 IOPS and 125 MiB/s throughput, regardless of volume size
- additional IOPS (up to 16,000) and throughput (up to 1,000 MiB/s) can be provisioned
- the maximum ratio of provisioned throughput to provisioned IOPS is 0.25 MiB/s per IOPS
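For example, provisioning the maximum 1,000 MiB/s of throughput requires at least 4,000 provisioned IOPS (1,000 / 0.25).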
In most cases, migrating EBS volumes from gp2 to gp3 while keeping the default options (3,000 IOPS and 125 MiB/s throughput) will result in decreased performance.
Example for common volume sizes:
Volume size (GiB) | gp2 Max. IOPS | gp2 Throughput (MiB/s) | gp3 baseline IOPS | gp3 baseline Throughput (MiB/s)
512               | 3000*         | 250*                   | 3000              | 125
1024              | 3072          | 250                    | 3000              | 125
2048              | 6144          | 250                    | 3000              | 125
3072              | 9216          | 250                    | 3000              | 125
4096              | 12288         | 250                    | 3000              | 125
*burst
Conclusion: gp3 volumes need to have IOPS and throughput explicitly provisioned in order to offer at least the performance of an equivalently sized gp2 volume.
There are two options to migrate existing gp2 volumes to the gp3 type:
a) using the volume modification procedure provided by AWS (a minimal AWS CLI sketch is shown after this list)
b) using Oracle ASM rebalancing
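For option a), a minimal sketch with the AWS CLI could look like the following; the volume ID, IOPS, and throughput values are placeholders and should be adjusted to match the target performance from the table above:
$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type gp3 --iops 9216 --throughput 250
$ aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0
The second command can be used to monitor the progress of the modification; the volume stays online while it is being modified.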
For option b), the following procedure can be used for each diskgroup:
- create new gp3 volumes matching the number and size of the existing disks (with IOPS and throughput provisioned as needed) and attach them to each database node (an example AWS CLI sketch is shown after this procedure)
- confirm the disks are visible at cluster level:
$ flashgrid-cluster drives
- use the replace-all-disks option, e.g.:
$ flashgrid-dg replace-all-disks -G DATA -d /dev/flashgrid/rac[12].xvdb[k-p]
The above command replaces all existing disks in the DATA diskgroup with rac[12].xvdb[k-p].
Note: Starting with Storage Fabric version 23.04, the replace-all-disks operation executes the data rebalance in the background. While the rebalance is running in the background, the flashgrid-cluster and flashgrid-dg commands may show the old disks reported as "LostDisks". This will be cleared after the background operation is completed (an example of checking the rebalance status from ASM is shown after this procedure).
- when completed, remove the old gp2 volumes:
- prepare each (old) gp2 volume for removal
e.g. (on each node):
$ sudo flashgrid-node stop-target /dev/flashgrid/rac1.xvdbb
$ sudo flashgrid-node stop-target /dev/flashgrid/rac2.xvdbb
- detach the (old) EBS gp2 volumes used by DATA from the instances
- repeat the procedure for each diskgroup
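As an illustration of the first step above, the new gp3 volumes could be created and attached with the AWS CLI; the availability zone, size, IOPS, throughput, volume ID, instance ID, and device name are placeholders only:
$ aws ec2 create-volume --availability-zone us-east-1a --volume-type gp3 --size 2048 --iops 6144 --throughput 250
$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdk
The progress of the ASM rebalance triggered by replace-all-disks can also be checked from an ASM instance, for example as the grid user:
$ sqlplus / as sysasm
SQL> select group_number, operation, state, est_minutes from gv$asm_operation;
Once the old disks have been stopped with flashgrid-node stop-target, the corresponding gp2 volumes can be detached with aws ec2 detach-volume.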
In both cases it is recommended to perform these activities in a maintenance window.
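Regardless of the option chosen, the resulting volume type and provisioned performance can be verified with the AWS CLI (the volume ID is a placeholder):
$ aws ec2 describe-volumes --volume-ids vol-0123456789abcdef0 --query 'Volumes[].[VolumeId,VolumeType,Iops,Throughput]' --output table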
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html