Be aware that the SUSE Linux cluster will change the network topology and start and stop instances. The privileges required for these operations allow changing the AWS network topology of an AWS account. Verify and test all entries very carefully; incorrect route entries may have a negative effect on the routing in a given VPC. Limit access to users working on the SUSE Linux cluster nodes to the required minimum.
The AWS cluster nodes will have to be able to communicate through a second IP address. The document IP Failover with Overlay IP Addresses on this site describes how to disable the source/destination check for AWS instances and how to host a second IP address on the same Linux system.
Create two instances for your cluster. The instances will most likely be located in two different availability zones. This implies that they will be instantiated in two different subnets which are able to communicate with each other.
- Use a SLES for SAP AMI. Search for "suse-sles-sap-12-sp1" in the list of public AMIs. There is currently (March 2016) a BYOS AMI available. Use this AMI to create an SAP HANA compliant configuration.
- Alternatively, use the AWS Marketplace AMI SUSE Linux Enterprise Server for SAP Applications 12 SP1, which already includes the required SUSE subscription.
The two instances will have to be created with the following AWS IAM policy:
IAM Policy: SAP-HA-INSTANCE-START-STOP-ROUTE-CHANGE
The following policy can be applied to both cluster nodes. It allows both nodes to stop themselves and the other node. Replace the strings i-node1 and i-node2 with the instance IDs of the two cluster nodes.
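A minimal sketch of such a policy is shown below. The exact action list is an assumption based on what the cluster needs (starting/stopping the nodes and changing routes); the region us-east-1, the account ID 123456789012, and the instance IDs i-node1 and i-node2 are placeholders to be replaced with your own values.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "StartStopClusterNodes",
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:123456789012:instance/i-node1",
                "arn:aws:ec2:us-east-1:123456789012:instance/i-node2"
            ]
        },
        {
            "Sid": "ChangeRoutes",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeRouteTables",
                "ec2:ReplaceRoute"
            ],
            "Resource": "*"
        }
    ]
}
```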
Tagging the Instances
The instances will have host names which are automatically generated. These host names tend to be too long. Pick two host names and configure the instances to use them.
The SLES agents will have to be able to identify the instances in the correct way. This happens through instance tags. Tag the two instances through the console or the AWS Command Line Interface (CLI) with an arbitrarily chosen tag like "pacemaker" as key and the host name, as shown by the uname command, as value. Use the same tag key (like "pacemaker") and the individual host names for both instances. The AWS documentation explains how to tag EC2 instances.
The screen shot on the left side has been created by selecting an EC2 instance in the console, clicking the last tab Tags, and then clicking the button Add/Edit Tags. A new tag with the key pacemaker and the host name as value has been created. The host name in this example is suse-node52.
Tag both of your instances this way.
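As an alternative to the console, the tag can be set with the AWS CLI. INSTANCE-ID and HOSTNAME below are placeholders for the instance ID and the host name of the respective node:

```
aws ec2 create-tags --resources INSTANCE-ID --tags Key=pacemaker,Value=HOSTNAME
```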
This section lists only the ports which need to be available for the SUSE cluster.
The following ports and protocols need to be configured to allow the two cluster nodes to communicate with each other:
- Port 5405 for inbound UDP: Used by the corosync communication layer. Port 5405 is used in common examples; a different port may be used depending on the corosync configuration.
- Port 7630 for inbound TCP: Used by the SUSE "hawk" web GUI.
- ICMP: Used by a ping command in the AWS IP-move agent of the SUSE cluster.
SAP HANA will require the following inbound ports to be open:
- TCP 30101-30107 and 30140 for HANA system replication, assuming the HANA instance number is 00 (adjust as needed)
Generic ports which should be kept open:
- TCP 22: ssh access
- TCP 5900-5910 for VNC
We assume that there are no restrictions for outbound network communication.
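The inbound rules above can also be added with the AWS CLI. The following commands are a sketch; SECURITY-GROUP and SOURCE-CIDR are placeholders for the security group of the cluster nodes and the source range you want to allow, and the ports should be adjusted to your setup:

```
aws ec2 authorize-security-group-ingress --group-id SECURITY-GROUP --protocol udp --port 5405 --cidr SOURCE-CIDR
aws ec2 authorize-security-group-ingress --group-id SECURITY-GROUP --protocol tcp --port 7630 --cidr SOURCE-CIDR
aws ec2 authorize-security-group-ingress --group-id SECURITY-GROUP --protocol icmp --port -1 --cidr SOURCE-CIDR
```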
Creating an AWS CLI Profile on both Instances
The SLES agents use the AWS Command Line Interface (CLI). They will use an AWS CLI profile which needs to be created for the superuser account on both instances. The SLES agents require a profile which creates output in text format. The name of the profile is arbitrary; the name chosen in this example is cluster. The region of the instance needs to be added as well. It is us-east-1 in the following example.
One way to create such a profile is to create a file /root/.aws/config with the following content:
[default]
region = us-east-1
[profile cluster]
region = us-east-1
output = text
The file above creates a default profile and a cluster profile which return output in text format. Replace the string us-east-1 with the region your instance is using!
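The file can also be generated non-interactively, for example with a here document. The snippet below writes to a temporary staging directory for illustration; on a cluster node the target would be /root/.aws/config, and the region us-east-1 is an assumption to be replaced with your own:

```shell
# Write both AWS CLI profiles in one step (staging directory for illustration;
# on the cluster node the file belongs in /root/.aws/config).
AWS_CFG_DIR="$(mktemp -d)"
cat > "$AWS_CFG_DIR/config" <<'EOF'
[default]
region = us-east-1
[profile cluster]
region = us-east-1
output = text
EOF
# Sanity check: the cluster profile section must be present.
grep '^\[profile cluster\]' "$AWS_CFG_DIR/config"
```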
The other way is to use the aws configure CLI command in the following way:
my-node1:~/.aws # aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-east-1
Default output format [None]:
my-node1:~/.aws # aws configure --profile cluster
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-east-1
Default output format [None]: text
This command sequence generates a default profile and a cluster profile.
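Both profiles can be inspected afterwards. The first command shows the settings the CLI will use; the second verifies that the credentials actually work (it requires network access to AWS):

```
aws configure list --profile cluster
aws sts get-caller-identity --profile cluster
```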
Disable the Source/Destination Check for the Cluster Instances
The source/destination check can be disabled through the EC2 console. Use the pull-down menu shown on the left in the console for both EC2 instances.
The same operation can be performed through scripts using the AWS command line interface (AWS-CLI). The following command needs to be executed one time for both instances, which are supposed to receive traffic from the Overlay IP address:
aws ec2 modify-instance-attribute --instance-id EC2-INSTANCE --no-source-dest-check
The system on which this command gets executed temporarily needs a role with the following policy:
"Action": [ "ec2:ModifyInstanceAttribute"],
Replace the placeholders shown in bold letters for the region, the account identifier and the two EC2 instance identifiers with your individual values.
Create Routing Entry
The cluster will have to use an overlay IP address. This overlay IP address has to be outside of the CIDR range of the VPC. The cluster expects a routing entry which points to one of the instances.
Identify the AWS routing table, which routes the traffic for all consumers of the IP address in a given VPC. The AWS command line interface (AWS-CLI) allows creating such a route with the command:
aws ec2 create-route --route-table-id ROUTE_TABLE --destination-cidr-block CIDR --instance-id INSTANCE
- ROUTE_TABLE is the identifier of the routing table which needs to be modified.
- CIDR is the overlay IP address with a netmask. The netmask has to be /32 since an individual overlay IP address is used. An example is 10.2.0.2/32, where 10.2.0.2 is the overlay IP address.
- INSTANCE is the node to which the traffic gets directed. Pick the AWS instance-id of one of your cluster nodes.
The cluster will modify this routing entry whenever it fails over a HANA instance.
The name of the routing table and the IP address will be required within the cluster configuration.
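A sketch with hypothetical values (VPC-ID, rtb-EXAMPLE, the overlay IP 10.2.0.2 and the instance IDs i-node1/i-node2 are placeholders): the first command lists the route tables of a VPC, the second creates the initial entry, and replace-route is the kind of change the cluster performs on failover.

```
aws ec2 describe-route-tables --filters Name=vpc-id,Values=VPC-ID
aws ec2 create-route --route-table-id rtb-EXAMPLE --destination-cidr-block 10.2.0.2/32 --instance-id i-node1
aws ec2 replace-route --route-table-id rtb-EXAMPLE --destination-cidr-block 10.2.0.2/32 --instance-id i-node2
```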
Important: The routing table which will contain the routing entry has to be associated with all subnets in the VPC which have consumers of the service.
Enable Cluster Instances to use the Overlay IP Address
The cluster instances need the overlay IP address to be configured as a secondary IP address on their standard interface eth0. This can be achieved with the command:
ip address add OVERLAY-IP dev eth0
Execute this command with super user privileges on both instances.
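To verify that the address is active, list the addresses of eth0; the overlay address should appear as an additional /32 entry:

```
ip address show dev eth0
```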
General SAP-AWS specific Changes to the SUSE Installations
- Hostname change: AWS automatically assigned hostnames tend to be too long for SAP installations. SUSE on AWS requires a few extra steps to change the hostname.
- Adding swap space: SAP installations need extra swap space.
HANA can be installed from the command line without a graphical user interface. Many other SAP products require a graphical user interface on the target system.
It is convenient to have an installation of a Graphical Desktop with RDP Access for SUSE SLES.
By the end of this phase you will have the following information, which will be required for the SUSE HAE and HANA installation:
|Cluster node 1| |
|Cluster node 2| |

Non-instance-specific information which will be required later on:

|Overlay IP address| |
|AWS routing table| |
|Instance tag|pacemaker (or individual one)|