
How to deploy an Openshift cluster in AWS

21 November 2022

By Lionel Gurret.

Context

Thanks to the Openshift platform (OCP), thousands of companies allow their developers to build, deploy, and run applications in containers.

This platform also offers advanced features such as:

  • User, rights and authentication management
  • A user interface
  • CI / CD tools
  • The integration of a private registry
  • Monitoring and logging

This platform can be hosted both on a private cloud (on-premise) and on a public cloud.

It is possible to install Openshift in two different ways:

  • IPI (installer-provisioned infrastructure): the installation is automated from start to finish. For example, infrastructure resources such as DNS records or servers are provisioned automatically by the installer in the cloud.
  • UPI (user-provisioned infrastructure): the installation is more manual, offering more freedom and configuration options. In particular, it will be necessary to provision the infrastructure (instances, network components, etc.) before using the files generated by the installer to configure the nodes of our cluster.

In this blog post, we will first detail the different steps of the Openshift UPI installation process. We will then list the prerequisites and associated configurations needed to install Openshift on AWS with Terraform (IaC), and discuss the specifics of the AWS public cloud.

Installation process

General

The UPI installation is based on ignition files.

These ignition files will be used by the EC2 instances (master and worker nodes) to configure the Red Hat CoreOS operating system at startup.

In order to generate these ignition files, it will be necessary to provide the OCP installer with the configuration of our cluster via an install-config.yaml file. The installer will then generate first the manifests and then the ignition files.

We will host these files in an S3 bucket and our instances will use them to perform their configuration. An additional bootstrap instance will also need to be provisioned to start the cluster installation and configuration process.
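
As a rough illustration with Terraform (which we will use later for the IaC part), hosting the ignition files in a bucket could look like the following sketch; the bucket name and local paths are assumptions:

# Bucket hosting the ignition files generated by the OCP installer
resource "aws_s3_bucket" "ignition" {
  bucket = "my-ocp-ignition-bucket" # hypothetical bucket name
}

# Upload the bootstrap ignition file (use aws_s3_bucket_object on AWS provider < 4.0)
resource "aws_s3_object" "bootstrap_ign" {
  bucket = aws_s3_bucket.ignition.id
  key    = "bootstrap.ign"
  source = "${path.module}/installation_dir/bootstrap.ign"
  etag   = filemd5("${path.module}/installation_dir/bootstrap.ign")
}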

Process

Strictly speaking, it is not necessary to understand the installation process in detail to administer an Openshift cluster.

However, it can be useful in case you need to debug a problem during the installation.

The process can be divided into 5 steps.

First, the ignition files generated by the OCP installer will be retrieved by the bootstrap machine. A dedicated etcd instance will be provisioned on it, used in particular to store the configuration of the cluster.

Then, the master nodes will retrieve their configurations and the bootstrap machine will deploy a temporary control plane on port 22623.

Through this temporary control plane, an etcd cluster will be provisioned on the 3 master nodes and a production control plane will become accessible on port 6443.

From this point, the bootstrap server is no longer needed and can be removed once the installation is complete. In the case of an IPI installation, this deletion is automatic.

Finally, the worker nodes are configured and the OCP cluster is installed.

Architecture

Here is an example of a possible architecture for our OCP cluster on the AWS cloud.

We can observe:

  • A VPC to host our infrastructure.
  • An S3 bucket which will be used to host the ignition files.
  • Cluster instances and bootstrap server in private networks.
  • A bastion server in the public network to monitor and debug the installation.
  • 3 load balancers:
    • A classic load balancer, provisioned automatically by the cluster for the application workload (applications).
    • A network load balancer for the temporary control plane mentioned above (port 22623), which will allow the bootstrap server to configure the master nodes.
    • A network load balancer for the production control plane (port 6443), used to configure the worker nodes and to access the OCP cluster once the installation is complete.

To provision these resources, using Terraform is a sensible choice in order to stay in an IaC context.

It is advisable to first provision the AWS infrastructure, with the exception of the cluster nodes, and then to provision the EC2 master and worker instances once all the DNS entries have been created.
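
As an illustration, a minimal Terraform sketch of the network load balancer for the production control plane (port 6443) could look like this; the resource names, VPC and subnet references are assumptions:

# Internal network load balancer in front of the control plane API (port 6443)
resource "aws_lb" "api" {
  name               = "ocp-api"
  internal           = true
  load_balancer_type = "network"
  subnets            = aws_subnet.private[*].id # private subnets defined elsewhere
}

resource "aws_lb_target_group" "api" {
  name     = "ocp-api"
  port     = 6443
  protocol = "TCP"
  vpc_id   = aws_vpc.ocp.id # VPC defined elsewhere
}

resource "aws_lb_listener" "api" {
  load_balancer_arn = aws_lb.api.arn
  port              = 6443
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.api.arn
  }
}

The master (and bootstrap) instances are then registered in the target group with aws_lb_target_group_attachment resources; the load balancer for port 22623 follows the same pattern.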

Prerequisites

Amazon Web Services

In order to provision our infrastructure, certain considerations must be taken into account.

To get started, the EC2 instance types must meet the hardware requirements documented for the control plane and compute nodes (for example, m5.xlarge for the master nodes and c5.4xlarge for the workers, as in the sample install-config.yaml below).

Next, it is important to note that the RHCOS AMI to use depends on the AWS Region (see the amiID field in the install-config.yaml below).

Moreover, the following prerequisites will be required:

  • An AWS account
  • An IAM user with the SystemAdministrator role
  • An AWS access key in order to use Terraform through the AWS API
  • A domain name
  • An SSL certificate for the Openshift console
  • A bastion server (EC2 t2.micro) to follow the installation process (see the architecture diagram and the sketch below)
  • An S3 bucket (see the architecture diagram)
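
For the bastion mentioned above, a minimal Terraform sketch could be the following; the AMI, key pair and subnet references are assumptions:

# Small bastion host in the public subnet, used to follow the installation
resource "aws_instance" "bastion" {
  ami                         = var.bastion_ami_id # e.g. an Amazon Linux 2 AMI
  instance_type               = "t2.micro"
  subnet_id                   = aws_subnet.public.id
  key_name                    = var.ssh_key_name
  associate_public_ip_address = true

  tags = {
    Name = "ocp-bastion"
  }
}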

Laptop

In order to launch the creation of the ignition files, of the infrastructure and of the OCP cluster, the following prerequisites will have to be met in your working environment:

  • AWS CLI
  • The AWS access key and its associated ID
  • Terraform CLI
  • Openshift CLI
  • The OCP installer and the associated pull secret
  • Your Terraform manifests (infrastructure and OCP cluster)
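
On the Terraform side, these manifests typically start with an AWS provider block and, as recommended in the IaC section below, a remote state backend. A minimal sketch, where the bucket, key and region are assumptions:

terraform {
  # Remote state stored in S3 (bucket and key are hypothetical)
  backend "s3" {
    bucket = "my-terraform-states"
    key    = "ocp/mycluster/terraform.tfstate"
    region = "us-west-2"
  }
}

# Credentials come from the AWS access key configured for the AWS CLI
provider "aws" {
  region = "us-west-2"
}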

Configuration

Generation of ignition files

On your laptop, create an install-config.yaml file in a specific folder (example: installation_dir) from the following template provided by the Openshift documentation:

apiVersion: v1
baseDomain: example.com 
credentialsMode: Mint 
controlPlane:   
  hyperthreading: Enabled 
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 
      type: m5.xlarge
  replicas: 3
compute: 
- hyperthreading: Enabled 
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 
    userTags:
      adminContact: jdoe
      costCenter: 7536
    amiID: ami-96c6f8f7 
    serviceEndpoints: 
      - name: ec2
        url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
fips: false 
sshKey: ssh-ed25519 AAAA... 
pullSecret: '{"auths": ...}'

Modify and complete this file according to the desired configuration then run the command:

    openshift-install create manifests --dir=installation_dir/

You will find that your install-config.yaml file has been replaced with Openshift manifests.

Then run the following command to create the ignition files:

    openshift-install create ignition-configs --dir=installation_dir/

Your installation folder should now contain the following files:

  • An auth directory containing the kubeconfig and the password of the kubeadmin user
  • The ignition files (bootstrap.ign, master.ign and worker.ign)
  • A metadata.json file

This metadata.json file contains important information about your infrastructure.

In particular, the clusterID and the infraID must be used to tag your resources so that the installation runs smoothly.
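
A simple way to reuse these values in Terraform is to read metadata.json directly; the sketch below assumes the file was kept in the installation_dir folder next to your manifests:

# Read the clusterID and infraID generated by the OCP installer
locals {
  ocp_metadata = jsondecode(file("${path.module}/installation_dir/metadata.json"))
  cluster_id   = local.ocp_metadata.clusterID
  infra_id     = local.ocp_metadata.infraID
}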

IaC configuration and AWS specifics

The Terraform code will not be covered in detail in this blog post but here are the main key points:

  • It may be a good idea to create Terraform modules to provision the infrastructure and the OCP cluster.
  • Use a remote state for obvious security reasons.
  • Remember to use the terraform.tfvars file to override the variables specific to the installation of your cluster and make your code reusable for several clusters.
  • Ignition files must be stored in an S3 bucket (or on an HTTP or FTP server).
  • Don’t forget to manage DNS resources, network load balancers and networking (security groups, VPC, subnets) directly in your manifests.
  • It is also necessary, via the IaC, to create the public and reverse DNS zones. Remember to disable the automatic management of reverse DNS records:
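
A minimal sketch of these zones in Terraform, assuming a hypothetical mycluster.example.com domain, a 10.0.0.0/16 machine network and a VPC defined elsewhere:

# Public zone for the cluster domain (api and *.apps records go here)
resource "aws_route53_zone" "public" {
  name = "mycluster.example.com"
}

# Private reverse zone associated with the VPC, for the nodes' PTR records
resource "aws_route53_zone" "reverse" {
  name = "0.10.in-addr.arpa"

  vpc {
    vpc_id = aws_vpc.ocp.id
  }
}

# Example PTR record for a master node with the private IP 10.0.1.10
resource "aws_route53_record" "master0_ptr" {
  zone_id = aws_route53_zone.reverse.zone_id
  name    = "10.1.0.10.in-addr.arpa"
  type    = "PTR"
  ttl     = 300
  records = ["master0.mycluster.example.com"]
}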

  • For EC2 instances to retrieve ignition files, the use of EC2 user data will be necessary:
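
The idea is to pass each instance a tiny ignition snippet that simply points to the full ignition file hosted on S3. A sketch for the bootstrap instance, where the bucket name, AMI variable and ignition spec version are assumptions to adapt to your OCP version:

resource "aws_instance" "bootstrap" {
  ami                  = var.rhcos_ami_id # RHCOS AMI for your region
  instance_type        = "m5.xlarge"
  subnet_id            = aws_subnet.private[0].id
  iam_instance_profile = aws_iam_instance_profile.ocp_nodes.name # defined in the IAM sketch below

  # Minimal ignition config that fetches the real bootstrap.ign from S3
  user_data = jsonencode({
    ignition = {
      version = "3.1.0"
      config = {
        replace = {
          source = "s3://my-ocp-ignition-bucket/bootstrap.ign"
        }
      }
    }
  })

  tags = {
    Name = "ocp-bootstrap"
  }
}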

  • Tag the EC2 instances and the primary DNS zone with the values retrieved from the metadata.json file, referring to the documentation:
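
Reusing the infra_id local read from metadata.json earlier, the cluster identification tag takes the form kubernetes.io/cluster/<infraID>; for example (user_data omitted, see the previous sketch):

resource "aws_instance" "master" {
  count         = 3
  ami           = var.rhcos_ami_id
  instance_type = "m5.xlarge"
  subnet_id     = aws_subnet.private[count.index].id

  tags = {
    Name = "${local.infra_id}-master-${count.index}"
    # Cluster identification tag; whether the value is "owned" or "shared" depends
    # on who owns the resource, check the official documentation
    "kubernetes.io/cluster/${local.infra_id}" = "owned"
  }
}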

  • Attach an IAM role with the AdministratorAccess policy to the EC2 instances:
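
A sketch of such a role attached to the nodes through an instance profile (role and profile names are assumptions):

# IAM role assumable by EC2
resource "aws_iam_role" "ocp_nodes" {
  name = "ocp-nodes"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Attach the AWS managed AdministratorAccess policy
resource "aws_iam_role_policy_attachment" "ocp_nodes_admin" {
  role       = aws_iam_role.ocp_nodes.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}

# Instance profile referenced by the EC2 instances
resource "aws_iam_instance_profile" "ocp_nodes" {
  name = "ocp-nodes"
  role = aws_iam_role.ocp_nodes.name
}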

Provisioning

Once your ignition files are hosted on the S3 bucket and your Terraform manifests have been prepared, you can start provisioning the infrastructure and then the cluster with Terraform (terraform plan, terraform apply).
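
If you split your code into the modules suggested earlier, the root configuration can simply chain the two calls; the module paths, variables and outputs below are assumptions:

# Infrastructure first (VPC, subnets, load balancers, DNS, S3...)
module "infra" {
  source   = "./modules/infra"
  infra_id = local.infra_id
}

# Then the cluster nodes, once the infrastructure outputs are available
module "cluster" {
  source             = "./modules/cluster"
  infra_id           = local.infra_id
  private_subnet_ids = module.infra.private_subnet_ids
  ignition_bucket    = module.infra.ignition_bucket
}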

To follow the installation of your cluster, connect to the bastion server then to the bootstrap server and run the following command:

    journalctl --unit=bootkube.service

These logs will allow you to easily debug any possible problem during the installation.

Once the installation is complete, there are still two steps to access your cluster securely.

  • Accept worker certificates to join the cluster:
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
watch oc get nodes
  • Set up SSL certificates for your Openshift console:
oc create configmap custom-ca \
   --from-file=ca-bundle.crt=/home/ec2-user/certificates/mycluster/mycluster.crt \
   -n openshift-config

oc get configmap custom-ca -n openshift-config -oyaml

oc create secret tls certificate \
  --cert=/home/ec2-user/certificates/mycluster/mycluster.crt \
  --key=/home/ec2-user/certificates/mycluster/mycluster.key \
  -n openshift-ingress

oc patch ingresscontroller.operator default \
     --type=merge -p \
     '{"spec":{"defaultCertificate": {"name": "certificate"}}}' \
     -n openshift-ingress-operator

watch oc -n openshift-ingress get pods

You should now be able to reach your console and log in with the kubeadmin account!
