Make sure an EFS server is available or create one before using this chart.
EFS server settings should be set in the file efs-env.
The chart requires the EFS name and EFS ID.
These can be found in the AWS Console EFS service or from the AWS CLI:
aws efs describe-file-systems --region <EFS region> --output text | grep "^FILESYSTEMS" | awk -F $'\t' '{print $8, $5}'
The output displays the EFS name and ID.
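For example, the output might look like this (illustrative name and ID):
myefs fs-0123456789abcdef0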
It is recommended to provide the "--managed" option and a security group id during EKS cluster creation. The EFS server can use the same security group, and if you know the security groups of your node pools, those are also fine to use as EFS security groups. Otherwise you need to provide the following:
EKS cluster name
kubectl config get-clusters | cut -d'.' -f1
Set the EKS cluster name in the variable "EKS_NAME".
EFS security groups
aws efs describe-mount-targets --file-system-id <EFS ID> --region <EFS region> --output text | awk -F $'\t' '{print $7}'
# For each file-system mount target:
aws efs describe-mount-target-security-groups --mount-target-id <mount target id> --region <EFS region> --output text | awk -F $'\t' '{print $2}'
Add each unique security group id to the variable "EFS_SECURITY_GROUPS"
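The two commands above can be combined into one loop. A minimal sketch, assuming the AWS CLI text output and the field positions shown above:
# Collect the unique security group ids across all mount targets
for mt in $(aws efs describe-mount-targets --file-system-id <EFS ID> --region <EFS region> --output text | awk -F $'\t' '{print $7}'); do
    aws efs describe-mount-target-security-groups --mount-target-id $mt --region <EFS region> --output text | awk -F $'\t' '{print $2}'
done | sort -u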
The setting "EFS_CSI_DRIVER" defaults to "true", and it is recommended to leave it that way.
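Putting it together, a filled-in efs-env might look like the following sketch. "EFS_NAME" and "EFS_REGION" are assumed variable names for illustration; check the comments in efs-env itself for the exact names it expects:
# Sketch of efs-env with illustrative values
EFS_NAME=myefs                              # assumed variable name for the EFS name
EFS_REGION=us-east-1                        # assumed variable name for the EFS region
EKS_NAME=my-eks-cluster
EFS_SECURITY_GROUPS="sg-0a5a005489115aac6"
EFS_CSI_DRIVER=true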
The efs-provisioner pod should be started first:
./install-efs-provisioner.sh
# To check the pod:
kubectl get pod
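The provisioner pod should reach the "Running" state, for example (the pod name will vary):
NAME                               READY   STATUS    RESTARTS   AGE
efs-provisioner-5d65d9cd68-x2x7q   1/1     Running   0          1m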
There may be some warnings, which can be ignored:
An error occurred (InvalidPermission.Duplicate) when calling the AuthorizeSecurityGroupIngress operation: the specified rule "peer: sg-0a5a005489115aac6, TCP, from port: 2049, to port: 2049, ALLOW" already exists
Warning: storage.k8s.io/v1beta1 CSIDriver is deprecated in v1.19+, unavailable in v1.22+; use storage.k8s.io/v1 CSIDriver
csidriver.storage.k8s.io/efs.csi.aws.com configured
"v1beta" will be replaced with "v1" when it is available.
Using values-auto-efs.yaml, the HPCC chart will automatically create Persistent Volume Claims (PVCs) and delete them when the HPCC cluster is deleted.
Under the helm directory:
helm install myhpcc ./hpcc --set global.image.version=<HPCC Platform version> -f examples/efs/values-auto-efs.yaml
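Optionally verify that the PVCs were created (the names depend on the storage planes defined in the values file):
# List the PVCs created for the storage planes
kubectl get pvc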
Using values-retained-efs.yaml requires deploying the HPCC PVCs first; the PVCs will persist after the HPCC cluster is deleted.
Under the helm directory:
helm install awsstorage examples/efs/hpcc-efs
# To start HPCC cluster:
helm install myhpcc hpcc/ --set global.image.version=latest -f examples/efs/values-retained-efs.yaml
An example values file to be supplied when installing the HPCC chart. NB: either use the output auto-generated when installing the "hpcc-efs" helm chart, or ensure the names in your values file for the storage types match the PVC names created. "values-retained-efs.yaml" expects the helm chart installation name to be "awsstorage"; change the PVC names accordingly if another name is used.
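As a rough sketch, a storage plane entry in such a values file looks like the following; the plane name, prefix, and PVC name here are illustrative, so compare against the shipped values-retained-efs.yaml for the exact schema and names:
storage:
  planes:
  - name: dali
    prefix: "/var/lib/HPCCSystems/dalistorage"
    pvc: dali-awsstorage-pvc              # must match a PVC created by the "awsstorage" install
    category: dali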
helm uninstall will not delete EFS persistent volume claims (PVCs). You can delete them individually with "kubectl delete pvc <PVC name>" or all at once with "kubectl delete pvc --all".
Reference: creating an EFS server with the AWS CLI
We will go through the steps of creating an EFS server with the AWS CLI. If you do not have the AWS CLI, refer to the AWS CLI documentation for installation instructions.
For a simple setup we recommend using the same VPC, subnets, and security group for both EFS and EKS.
aws efs create-file-system --throughput-mode bursting --tags "Key=Name,Value=<EFS Name>" --region <REGION>
To get the EFS ID:
aws efs describe-file-systems --region <REGION>
The output shows the EFS name and "FileSystemId".
The EFS server FQDN will be:
<EFS ID>.efs.<REGION>.amazonaws.com
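For example, with an illustrative ID and region:
fs-0123456789abcdef0.efs.us-east-1.amazonaws.com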
Pick a VPC in the same region as the EFS server:
aws ec2 describe-vpcs --region <REGION>
The output shows the VPC IDs.
Get all the subnets of the VPC:
aws ec2 describe-subnets --region <REGION> --filters "Name=vpc-id,Values=<VPC ID>"
The output shows "AvailabilityZone" and "SubnetId" values.
We recommend using all, or a subset, of these "AvailabilityZone" and "SubnetId" values to create the EKS cluster with the "--managed" option; it will make configuring EFS and EKS easier.
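A hedged eksctl sketch, assuming public subnets (the cluster name and subnet ids are placeholders to adjust):
eksctl create cluster --name <EKS name> --region <REGION> --managed \
    --vpc-public-subnets=<Subnet id 1>,<Subnet id 2>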
To create the mount target:
aws efs create-mount-target --region <REGION> --file-system-id <EFS ID> --subnet-id <Subnet id>
Repeat this for all subnets. Usually an AWS EKS cluster needs at least two availability zones (subnets). If you are not sure, create mount targets for all zones.
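A minimal sketch that loops over every subnet of the VPC, using the AWS CLI --query option:
# Create a mount target in each subnet of the VPC
for subnet in $(aws ec2 describe-subnets --region <REGION> --filters "Name=vpc-id,Values=<VPC ID>" --query "Subnets[].SubnetId" --output text); do
    aws efs create-mount-target --region <REGION> --file-system-id <EFS ID> --subnet-id $subnet
done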
To display the mount targets:
aws efs describe-mount-targets --region <REGION> --file-system-id <EFS ID>
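Mount targets take a short time to come up; a quick check that each one reports the "available" LifeCycleState:
aws efs describe-mount-targets --region <REGION> --file-system-id <EFS ID> --query "MountTargets[].LifeCycleState" --output text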