Note: Adding a node to an existing FlashGrid Cluster is supported only in configurations with separate "database-only" and "storage-only" nodes. It is not supported with standard 2-node or 3-node clusters that have data disks attached to database nodes.
Before You Start
- FlashGrid Cluster software must be on version 24.1.108 or newer
- All cluster nodes must be online
- Cluster status reported by the flashgrid-cluster command must be Good
- Select a name for the new storage node. It will be used as the VM name and the host name. In the rest of this article it is referred to as <new storage node>.
- Request a new license file for the new node from FlashGrid Support.
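To verify the cluster status before starting, run the following command as the fg user on any node and confirm that the reported status is Good:
$ flashgrid-cluster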
Steps for Adding a New Storage Node
1) Create a new VM
- Create a snapshot of the <existing storage node>-root volume of another storage node. Choose the "Full" snapshot type and select "Disable public and private access" in the Networking options.
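If using the Azure CLI instead of the portal, the snapshot can be created roughly as follows (a sketch; the snapshot name is a placeholder and flag availability may vary with your az CLI version; omitting --incremental results in a full snapshot):
az snapshot create \
  --resource-group <resource group name> \
  --name <existing storage node>-root-snapshot \
  --source <existing storage node>-root \
  --network-access-policy DenyAll \
  --public-network-access Disabled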
- Locate the snapshot and create a disk named <new storage node>-root from it. If using availability zones, make sure to create the disk in the desired availability zone.
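The disk can also be created from the snapshot with the Azure CLI, for example (a sketch; the snapshot name is a placeholder, and --zone applies only when availability zones are used):
az disk create \
  --resource-group <resource group name> \
  --name <new storage node>-root \
  --source <existing storage node>-root-snapshot \
  --zone <zone id>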
- Create a new NIC with accelerated networking enabled, specifying the NIC name, VNet, subnet, NSG, resource group, and a statically allocated private IP address. The accelerated-networking option must be set to true.
az network nic create \
  --resource-group <resource group name> \
  --name <new storage node>-nic1 \
  --vnet-name <VNet name> \
  --subnet <subnet name> \
  --accelerated-networking true \
  --network-security-group <network security group> \
  --private-ip-address <private ip address for the new storage node>
- Create a VM from the newly created OS disk (<new storage node>-root). Specify the resource group, VM name, plan details, OS disk name, OS type, size, the previously created NIC name, and either the availability zone number or the availability set name.
az vm create \
  --resource-group <resource group name> \
  --name <new storage node> \
  --plan-name <plan name> \
  --plan-product flashgrid-skycluster \
  --plan-publisher flashgrid-inc \
  --attach-os-disk <new storage node>-root \
  --os-type Linux \
  --size <VM type, similar to others> \
  --nics <new storage node>-nic1 \
  {--zone <zone id> | --availability-set <availability set name>}
Use the same plan name that is used for other VMs in the cluster. To retrieve plan details from an existing VM:
az vm show --name <existing VM> --resource-group <resource group name> --query plan
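The query returns the plan object; its name, product, and publisher fields map to the --plan-name, --plan-product, and --plan-publisher parameters used above. The output looks roughly like this (values shown are illustrative):
{
  "name": "<plan name>",
  "product": "flashgrid-skycluster",
  "publisher": "flashgrid-inc"
}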
2) Create and attach disks to the new VM
- Create a set of new disks that matches the disks on the other storage nodes. Make sure that the same number, size, and performance settings are used for each diskgroup.
- If you're using Premium SSD v2 disks, follow this article to add them using Azure CLI.
- Attach the new disks to the new VM, starting with LUN 1.
- For Premium SSD disks, use read-only caching. If Performance Plus is enabled on the existing disks, enable it for the new disks as well. An example of creating and attaching a disk with the Azure CLI is shown below.
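As a sketch, a data disk matching the existing storage nodes can be created and attached with the Azure CLI as follows. The disk name, size, SKU, and zone are placeholders and must mirror the corresponding disk on the other storage nodes; increment --lun for each additional disk:
az disk create \
  --resource-group <resource group name> \
  --name <new storage node>-lun1 \
  --size-gb <size matching other storage nodes> \
  --sku Premium_LRS \
  --zone <zone id>
az vm disk attach \
  --resource-group <resource group name> \
  --vm-name <new storage node> \
  --name <new storage node>-lun1 \
  --lun 1 \
  --caching ReadOnly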
3) Add the new VM to the FlashGrid Cluster
Note: The new VM will temporarily have the same hostname as the original VM from which we cloned the OS disk.
Connect to the new node as the root user and run:
# flashgrid-add-node-from-clone <new storage node>
Wait until the command completes successfully.
4) Place the license file on the new node as /etc/flashgrid-license
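For example, assuming the license file is in the current directory and SSH access as root is available, it can be copied with scp:
scp <license file> root@<new storage node>:/etc/flashgrid-license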
5) Reboot the new node
# reboot
6) Add the disks attached to the new storage node to the corresponding diskgroups.
Connect to any database node as fg user and run the following command for each diskgroup (including GRID):
$ flashgrid-dg add-disks -G <DG NAME> -d /dev/flashgrid/<new storage node>.<disk lun(s)>
Example for adding a single lun2 disk to a diskgroup:
$ flashgrid-dg add-disks -G MYDG -d /dev/flashgrid/mynewhost.lun2
Example for adding multiple disks to a diskgroup:
$ flashgrid-dg add-disks -G MYDG -d /dev/flashgrid/mynewhost.lun[3-7]
7) Confirm status of the node and that its disks are correctly assigned to diskgroups.
On the new node run the following command as fg user:
$ flashgrid-node
8) Confirm status of the cluster.
On the new node run the following command as fg user:
$ flashgrid-cluster
Wait until resyncing operations complete on all disk groups. All disk groups must have zero offline disks and Resync = No.
9) Upload diags to FlashGrid Support.