AWS setup for CPSH extended capabilities
This section describes the AWS setup for CPSH extended capabilities.
Provisioning
Currently, the only supported Kubernetes cluster type is Amazon Elastic Kubernetes Service (EKS). Support for additional Kubernetes cluster types may be introduced in future releases.
To provision the environment, ensure the following:
- An available EKS cluster.
- The AWS Load Balancer Controller is installed in the cluster.
- The cluster includes well-defined node groups that meet your workload requirements.
We use eksctl to streamline the installation, as it handles most of the setup that is complex when creating the EKS environment manually. First, install eksctl by following the installation instructions for your operating system in the eksctl documentation.
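The commands in this section rely on several environment variables being exported up front. As a reference, a minimal sketch with placeholder values (every value below is an assumption; substitute your own):

```shell
# Placeholder values (assumptions) for the variables used throughout this section.
export CLUSTER_NAME="my-eks-cluster"   # name of your EKS cluster
export STACK_NAME="${CLUSTER_NAME}"    # used interchangeably with CLUSTER_NAME below
export REGION="us-east-1"              # AWS region hosting the cluster
export EKS_VERSION="1.31"              # Kubernetes version, used in the Bottlerocket AMI lookup
export AWS_ACCOUNT_ID="123456789012"   # your 12-digit AWS account ID
```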
Set up node group launch templates
- Retrieve the latest Bottlerocket AMI:
  aws ssm get-parameters \
    --names "/aws/service/bottlerocket/aws-k8s-${EKS_VERSION}-fips/x86_64/latest/image_id" \
    --region ${REGION} \
    --query "Parameters[0].Value" \
    --output text
- Collect the necessary EKS cluster information for the launch template:
  aws eks describe-cluster \
    --name ${CLUSTER_NAME} \
    --region ${REGION} \
    --query 'cluster.{ca:certificateAuthority.data,endpoint:endpoint}' \
    --output json
- Generate base64-encoded user data from the following content. If your cluster uses IPv6, you must run the following script first; if you use IPv4, you can skip this IPv6 cluster DNS discovery:
  SERVICE_IPV6_CIDR=$(aws eks describe-cluster \
    --name "${STACK_NAME}" \
    --region "${REGION}" \
    --query 'cluster.kubernetesNetworkConfig.serviceIpv6Cidr' \
    --output text)
  CLUSTER_DNS=$(echo "$SERVICE_IPV6_CIDR" | sed 's/::.*$/::a/')
- Base64 encode your user data and set it as the value of the corresponding environment variable.
- Set the Lineage user data using the following environment variable:
  export USER_DATA_LINEAGE=$(echo -n "[settings.kubernetes]
cluster-name = \"$STACK_NAME\"
authentication-mode = \"aws\"
api-server = \"$EKS_API_ENDPOINT\"
cluster-certificate = \"$EKS_CA_DATA\"
cloud-provider = \"external\"
cluster-domain = \"cluster.local\"
[settings.kubernetes.node-labels]
\"collibra.com/node-storage\" = \"local\"
\"collibra.com/team\" = \"techlin\"
[settings.kubernetes.node-taints]
\"collibra.com/node-storage\" = \"local:NoSchedule\"
\"collibra.com/team\" = \"techlin:NoSchedule\"
[settings.bootstrap-commands.k8s-ephemeral-storage]
commands = [
  [\"apiclient\", \"ephemeral-storage\", \"init\"],
  [\"apiclient\", \"ephemeral-storage\", \"bind\", \"--dirs\", \"/var/lib/containerd\", \"/var/lib/kubelet\", \"/var/log/pods\"]
]
essential = true
mode = \"always\"
" \
  $(if [ -n "$CLUSTER_DNS" ]; then
    echo "cluster-dns-ip = \"$CLUSTER_DNS\""
  fi) | base64)
- Create the default launch template:
  export USER_DATA=$(echo -n "[settings.kubernetes]
cluster-name = \"$STACK_NAME\"
authentication-mode = \"aws\"
api-server = \"$EKS_API_ENDPOINT\"
cluster-certificate = \"$EKS_CA_DATA\"
cloud-provider = \"external\"
cluster-domain = \"cluster.local\"" \
  $(if [ -n "$CLUSTER_DNS" ]; then
    echo "cluster-dns-ip = \"$CLUSTER_DNS\"";
  fi) | base64)

  aws ec2 create-launch-template \
    --launch-template-name ${CLUSTER_NAME}-default-ng \
    --launch-template-data file://launch-template-default-ng.json \
    --region ${REGION}
- Create the techlin launch template:
  aws ec2 create-launch-template \
    --launch-template-name ${CLUSTER_NAME}-techlin-local-ssd \
    --launch-template-data file://launch-template-techlin-local-ssd.json \
    --region ${REGION}
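The create-launch-template calls read their configuration from local JSON files that this guide does not show. As a rough, hypothetical sketch of what launch-template-default-ng.json might contain (the instance type, metadata options, and placeholder IDs below are assumptions, not prescribed values):

```shell
# Placeholder inputs (assumptions): in practice these come from the earlier
# SSM AMI lookup and the base64 user-data step.
AMI_ID="ami-0123456789abcdef0"
USER_DATA="W3NldHRpbmdzLmt1YmVybmV0ZXNd"

# Write a minimal launch template data file for the default node group.
cat > launch-template-default-ng.json <<EOF
{
  "ImageId": "${AMI_ID}",
  "InstanceType": "m5.xlarge",
  "UserData": "${USER_DATA}",
  "MetadataOptions": {
    "HttpTokens": "required",
    "HttpPutResponseHopLimit": 2
  }
}
EOF
```

The techlin template would follow the same shape but use USER_DATA_LINEAGE and an instance type with local SSD storage.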
Create EKS node groups
- Get the VPC ID from the current EKS cluster:
  export VPC_ID=$(aws eks describe-cluster \
    --name ${CLUSTER_NAME} \
    --query "cluster.resourcesVpcConfig.vpcId" \
    --output text --region ${REGION})
- Find the PRIVATE_SUBNET_IDS on the VPC:
  export PRIVATE_SUBNET_IDS=$(aws ec2 describe-subnets \
    --filters "Name=vpc-id,Values=${VPC_ID}" \
    --query "Subnets[?MapPublicIpOnLaunch==\`false\`].SubnetId" \
    --region ${REGION} \
    --output text)
- Create the node group role and attach policies:
  aws iam create-role \
    --role-name eksctl-nodegroup-role \
    --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Service": "ec2.amazonaws.com" },
          "Action": "sts:AssumeRole"
        }
      ]
    }'

  # Attach the managed policies to the role
  aws iam attach-role-policy \
    --role-name eksctl-nodegroup-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
  aws iam attach-role-policy \
    --role-name eksctl-nodegroup-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
  aws iam attach-role-policy \
    --role-name eksctl-nodegroup-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
  aws iam attach-role-policy \
    --role-name eksctl-nodegroup-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
- Create the default node group:
  aws eks create-nodegroup \
    --cluster-name "${CLUSTER_NAME}" \
    --nodegroup-name "default-ng" \
    --launch-template "name=${CLUSTER_NAME}-default-ng" \
    --subnets $(echo $PRIVATE_SUBNET_IDS | tr ' ' ',') \
    --node-role "arn:aws:iam::${AWS_ACCOUNT_ID}:role/eksctl-nodegroup-role" \
    --scaling-config minSize=2,maxSize=4,desiredSize=2 \
    --capacity-type ON_DEMAND \
    --region "${REGION}"
- Create the techlin node group:
  aws eks create-nodegroup \
    --cluster-name "${CLUSTER_NAME}" \
    --nodegroup-name "techlin-local-ssd" \
    --launch-template "name=${CLUSTER_NAME}-techlin-local-ssd" \
    --subnets $(echo $PRIVATE_SUBNET_IDS | tr ' ' ',') \
    --node-role "arn:aws:iam::${AWS_ACCOUNT_ID}:role/eksctl-nodegroup-role" \
    --scaling-config minSize=2,maxSize=4,desiredSize=2 \
    --capacity-type ON_DEMAND \
    --region "${REGION}"
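Note that the create-nodegroup calls convert the space-separated subnet list to a comma-separated one before passing it to --subnets. A quick illustration of that transformation, using placeholder subnet IDs (assumptions) in place of the real $PRIVATE_SUBNET_IDS:

```shell
# Placeholder subnet IDs (assumptions) standing in for $PRIVATE_SUBNET_IDS.
PRIVATE_SUBNET_IDS="subnet-aaa subnet-bbb subnet-ccc"
# Convert the space-separated list to the comma-separated form used above.
SUBNET_ARG=$(echo $PRIVATE_SUBNET_IDS | tr ' ' ',')
echo "$SUBNET_ARG"   # subnet-aaa,subnet-bbb,subnet-ccc
```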
Install AWS Load Balancer Controller
- Update kubeconfig with the EKS cluster:
  aws eks update-kubeconfig --name ${CLUSTER_NAME} --region ${REGION}
- Add the eks-charts Helm chart repository, which is maintained by AWS:
  helm repo add eks https://aws.github.io/eks-charts
- Update your local repository to ensure you have the latest charts:
  helm repo update eks
- Install the AWS Load Balancer Controller. Because serviceAccount.create=false, this assumes a service account named aws-lb-controller already exists in the kube-system namespace (typically created beforehand with IAM Roles for Service Accounts):
  helm install aws-lb-controller eks/aws-load-balancer-controller \
    -n kube-system \
    --set clusterName=${CLUSTER_NAME} \
    --set serviceAccount.create=false \
    --set serviceAccount.name=aws-lb-controller \
    --set region="${REGION}" \
    --set vpcId="${VPC_ID}" \
    --version 2.13.3
- Check to ensure the AWS Load Balancer Controller is installed:
  kubectl get deployment -n kube-system aws-lb-controller
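If you prefer to keep the controller configuration in a file rather than --set flags, a values-file sketch is shown below. The file name and the concrete clusterName, region, and vpcId values are assumptions; in practice they must match your ${CLUSTER_NAME}, ${REGION}, and ${VPC_ID}:

```yaml
# values-aws-lb-controller.yaml (hypothetical file name)
clusterName: my-eks-cluster        # must match your ${CLUSTER_NAME}
serviceAccount:
  create: false
  name: aws-lb-controller          # pre-existing service account in kube-system
region: us-east-1                  # your ${REGION}
vpcId: vpc-0123456789abcdef0       # your ${VPC_ID}
```

You would then install with: helm install aws-lb-controller eks/aws-load-balancer-controller -n kube-system -f values-aws-lb-controller.yaml --version 2.13.3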