VIP Manager is a service that updates the VPC route tables used by FlashGrid Cluster database nodes on AWS when a VIP moves between database nodes.
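Conceptually, when a VIP moves, the service performs the equivalent of repointing the VIP's /32 route at the node that currently owns the VIP. A minimal AWS CLI sketch of that operation (the instance ID below is a placeholder; the route table ID and VIP match the examples later in this document):
aws ec2 replace-route --route-table-id rtb-0121ac9ad214b58c1 --destination-cidr-block 192.168.1.201/32 --instance-id i-0123456789abcdef0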
Common use cases
Communication between two or more clusters using Virtual IPs when:
a) The clusters are deployed in the same region or in different regions.
b) On-premises clusters or standalone deployments access clusters deployed in AWS.
Prerequisites
a) Clusters are deployed in VPCs with non-overlapping CIDR blocks.
b) Identify all route tables used by the other clusters' nodes/database servers for connecting to the target cluster (e.g. in each VPC or each subnet).
c) For each cluster create a unique VIP in the 192.168.1.128/25 subnet.
(For example):
# appvipcfg create -network=1 -ip=192.168.1.201 -vipname=myrac-vip -user=root
# crsctl start resource myrac-vip -n rac1
d) For deployments in different VPCs, different regions, or on-premises, Transit Gateways are required.
Note: For multi-Region deployments it is recommended to use a unique Autonomous System Number (ASN) for each Transit Gateway (see the example below).
e) For clusters deployed in different regions, verify that Transit Gateway inter-Region peering is supported in those regions.
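For example, a Transit Gateway with a unique ASN can be created with the AWS CLI (the description and ASN value below are placeholders; choose a different ASN per Transit Gateway):
aws ec2 create-transit-gateway --description tgw-cluster1-region --options AmazonSideAsn=64513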
The following actions should be performed on the AWS infrastructure components:
Transit Gateway attachments
a) For each Transit Gateway create an attachment of type VPC pointing to the VPC where the cluster is deployed.
b) In case of multiple Transit Gateways, an attachment of type Peering Connection between them is required.
c) When the cluster(s) must be accessible through VPN, an attachment of type VPN is required.
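For example, using the AWS CLI (all IDs, the account number, and the region below are placeholders):
aws ec2 create-transit-gateway-vpc-attachment --transit-gateway-id tgw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0 --subnet-ids subnet-0123456789abcdef0
aws ec2 create-transit-gateway-peering-attachment --transit-gateway-id tgw-0123456789abcdef0 --peer-transit-gateway-id tgw-0fedcba9876543210 --peer-account-id 111111111111 --peer-region us-west-2
The peering attachment must then be accepted on the peer Transit Gateway (accept-transit-gateway-peering-attachment).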
Transit Gateway Route Tables
On each Transit Gateway created:
a) Add a static route to each remote VPC where the cluster(s) must be accessible.
b) Add a static route to each VIP (IP/32) through the corresponding attachment.
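For example, using the AWS CLI (the route table ID, attachment ID, remote VPC CIDR, and remote VIP below are placeholders):
aws ec2 create-transit-gateway-route --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 --destination-cidr-block 10.20.0.0/16 --transit-gateway-attachment-id tgw-attach-0123456789abcdef0
aws ec2 create-transit-gateway-route --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 --destination-cidr-block 192.168.1.202/32 --transit-gateway-attachment-id tgw-attach-0123456789abcdef0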
VPC Routing tables
For each subnet route table in the VPCs where the clusters are deployed:
a) Add a route to each remote VIP (IP/32, Destination) with the Transit Gateway as the Target.
b) Add a route to each remote VPC CIDR block (Destination) with the Transit Gateway as the Target.
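For example, using the AWS CLI (all IDs, the remote VIP, and the remote CIDR below are placeholders):
aws ec2 create-route --route-table-id rtb-0121ac9ad214b58c1 --destination-cidr-block 192.168.1.202/32 --transit-gateway-id tgw-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0121ac9ad214b58c1 --destination-cidr-block 10.20.0.0/16 --transit-gateway-id tgw-0123456789abcdef0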
Cluster nodes
a) Create an IAM Role and include all subnet route table IDs for the VPCs where the clusters are deployed (For example):
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "ec2:CreateRoute", "ec2:DeleteRoute", "ec2:ReplaceRoute" ], "Resource": [ "arn:aws:ec2:*:*:route-table/rtb-0121ac9ad214b58c1", "arn:aws:ec2:*:*:route-table/rtb-0b059d9e3feb32a50" ] }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": [ "ec2:DescribeRouteTables" ], "Resource": [ "*" ] } ] }
b) Attach the IAM Role to each database node (on both clusters) (AWS Portal: Actions => Security => Modify IAM Role, select the IAM Role, Save).
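Alternatively, the role can be attached with the AWS CLI, assuming an instance profile containing the role already exists (the instance ID and profile name below are placeholders):
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=vip-manager-profile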
c) On each security group associated with the cluster nodes, allow traffic from the remote database nodes' VPC IP addresses (/32).
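For example, assuming the Oracle listener on TCP port 1521 (the security group ID and remote node IP below are placeholders; repeat per remote node and adjust protocols/ports as required by your deployment):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 1521 --cidr 10.20.0.11/32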
d) On each database node install the AWS VIP Manager RPM package.
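For example (the package file name below is hypothetical; use the actual RPM provided by FlashGrid):
sudo yum install flashgrid-vip-manager-for-aws-<version>.rpm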
e) On each database node customize the /etc/flashgrid-vip-manager-for-aws.cfg file (YAML syntax), for example:
rt_tables:
- rtb-0121ac9ad214b58c1
local_vips:
- 192.168.1.201
main_nic: eth0
vip_nic: fg-pub
vip_subnet: 192.168.1.128/25
Notes:
On database nodes of the same cluster, the /etc/flashgrid-vip-manager-for-aws.cfg file is identical.
rt_tables: subnet route table ID(s) associated with the VPC where the cluster is deployed.
local_vips: VIP(s) created on the local cluster.
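For reference, the remote cluster's nodes would use their own route table ID(s) and VIP. A hypothetical example for the second cluster, reusing the second route table ID from the IAM policy above and a placeholder VIP:
rt_tables:
- rtb-0b059d9e3feb32a50
local_vips:
- 192.168.1.202
main_nic: eth0
vip_nic: fg-pub
vip_subnet: 192.168.1.128/25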
f) Enable and start the ec2routevip.service on all database nodes.
sudo systemctl enable --now ec2routevip.service
g) Check the ec2routevip.service status.
sudo systemctl status ec2routevip.service
h) Verify that all database nodes (local and remote) can communicate with each other using the virtual IP addresses.
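For example, from a database node on the first cluster (the remote VIP below is the placeholder used in the earlier examples):
ping -c 3 192.168.1.202
ip route get 192.168.1.202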
References:
Registering an Application as a Resource (section Creating an Application VIP Managed by Oracle Clusterware)