Solution Patterns: Connect, Secure and Protect with Red Hat Connectivity Link

See the Solution in Action

This solution walks you through the activities from the perspective of a Platform Engineer and a Developer. But first, a few prerequisites need to be in place before getting started.

1. Prerequisites

To provision the demo, you will perform the following steps, each of which is explained in detail in the next sections:

  • You will need an OpenShift cluster with cluster-admin privileges. This solution pattern has been tested on OpenShift 4.15 and 4.16.

  • Ensure you have the oc and ansible CLI tools installed in your local environment (such as your laptop).

  • Access to AWS Route53 or Google Cloud DNS so that you can create new domain names.

1.1. CLI tools

To check whether you have the CLI tools, open your terminal and run the following commands:

oc version #openshift cli client
ansible --version
ansible-galaxy --version
ansible-galaxy collection list #the list should include the kubernetes.core and amazon.aws (version 8.1.0) collections; amazon.aws provides the route53 module

Run these commands to ensure you have the right dependencies:

pip install openshift pyyaml kubernetes botocore boto3
ansible-galaxy collection install kubernetes.core amazon.aws community.general
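
If you want a quick end-to-end check that the Kubernetes dependencies are usable from Ansible, the following sanity-check play is a minimal, hypothetical sketch; it is not part of the solution pattern's scripts and assumes you are already logged in to a cluster:

# sanity-check.yml (hypothetical): confirms kubernetes.core can reach the cluster
- hosts: localhost
  gather_facts: false
  tasks:
    - name: List namespaces to confirm kubernetes.core and the Python client work
      kubernetes.core.k8s_info:
        kind: Namespace
      register: ns

    - name: Report how many namespaces were found
      ansible.builtin.debug:
        msg: "Connected; found {{ ns.resources | length }} namespaces"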

1.2. Managed Zone on AWS

Note that you need a hosted zone as a subdomain in AWS Route53 (for example, managed.myrootdomain.com) for the applications that you want to manage and secure with Connectivity Link.

This subdomain is automatically set up by the deployment scripts, but you will need to update the inventory file with the root domain of your AWS Route53 zone when instructed to in the next sections.

Refer to this article to learn more about how a Route53 subdomain can be created.
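
For illustration, a delegated subdomain zone could be created with the amazon.aws collection along the following lines. This is a hedged sketch with hypothetical domain names, not the pattern's actual automation (the deployment scripts handle this for you):

# create-managed-zone.yml (illustrative only; domain names are hypothetical)
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a hosted zone for the managed subdomain
      amazon.aws.route53_zone:
        zone: managed.myrootdomain.com
        comment: Managed zone for Connectivity Link
      register: managed_zone

    # The new zone's NS records must then be added as an NS record set in the
    # root zone (myrootdomain.com) so that DNS queries are delegated to it.

Delegating the subdomain's NS records into the root zone is what allows the DNS records that Connectivity Link writes into the managed zone to resolve publicly.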

1.3. Personalize the instructions

To personalize the rest of the instructions to your OpenShift environment:

  • At the top right of this page, enter the following information under the Your Workshop Environment section:

    • AWSROOTZONE is the root Route53 domain of your AWS environment.

      The AWSROOTZONE would look something like this: mycluster.abc.com or sandbox100.opentlc.com

    • Set OPENSHIFTSUBDOMAIN to match your OpenShift cluster.

      The OPENSHIFTSUBDOMAIN would look something like this: apps.mycluster.myopenshift.com

  • Press Enter or click the Set button.

    (Screenshot: setup instructions)

  • The menu bar and the rest of this walkthrough guide will be updated with the Managed Zone name and the subdomain, as shown below.

    (Screenshot: setup instructions complete)

2. Platform Setup

This section is typically performed by a Platform Engineer persona.

The primary goal of the Platform Engineer is to deploy a Gateway that provides secure communication and is ready for application development teams to deploy their service endpoints or APIs behind it. The Gateway should be protected with global rate-limiting and auth policies.
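
As an illustration of what such gateway-wide protection looks like, here is a minimal sketch of a Kuadrant RateLimitPolicy attached to a Gateway. The resource names are hypothetical and the API version varies between Kuadrant releases:

# Illustrative only: a Gateway-wide limit of 100 requests per 10 seconds.
apiVersion: kuadrant.io/v1beta2
kind: RateLimitPolicy
metadata:
  name: gateway-rlp
  namespace: ingress-gateway          # hypothetical namespace
spec:
  targetRef:                          # attach the policy to the whole Gateway
    group: gateway.networking.k8s.io
    kind: Gateway
    name: prod-web                    # hypothetical Gateway name
  limits:
    global:
      rates:
        - limit: 100
          duration: 10
          unit: second

Because the policy targets the Gateway itself rather than an individual route, the limit applies to every application attached to the Gateway.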

In this demo, the deployment script uses ArgoCD to:

  • Install the Red Hat Connectivity Link (Kuadrant) operator.

  • Set up a ManagedZone for DNS configuration.

  • Define a TLS issuer that provisions TLS certificates for secure communication to the Gateways.

  • Create a Gateway (based on an Istio gateway) with a wildcard hostname based on the root domain.

  • Create Kuadrant Custom Resources (CRs), including the DNS and TLS policies (see the sketch after this list).
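
The resources below are a hedged sketch of what this amounts to. Names, namespaces, and hostnames are illustrative, and the exact API versions depend on the Kuadrant release in use:

# Illustrative only: a wildcard HTTPS Gateway plus the policies that
# automate its DNS records and TLS certificate.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: prod-web
  namespace: ingress-gateway
spec:
  gatewayClassName: istio
  listeners:
    - name: api
      hostname: "*.managed.myrootdomain.com"  # wildcard over the managed zone
      port: 443
      protocol: HTTPS
      tls:
        mode: Terminate
        certificateRefs:
          - name: prod-web-tls                # created by the TLSPolicy below
            kind: Secret
---
apiVersion: kuadrant.io/v1alpha1
kind: DNSPolicy
metadata:
  name: prod-web-dnspolicy
  namespace: ingress-gateway
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: prod-web
  routingStrategy: loadbalanced
---
apiVersion: kuadrant.io/v1alpha1
kind: TLSPolicy
metadata:
  name: prod-web-tlspolicy
  namespace: ingress-gateway
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: prod-web
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: letsencrypt                         # hypothetical issuer name

Attaching the DNSPolicy and TLSPolicy to the Gateway means the DNS records in the managed zone and the listener certificate are reconciled automatically whenever the Gateway changes.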

2.1. Get the deployment scripts

  • Log in to your OpenShift cluster as cluster-admin (a number of operators will need to be installed).

  • Click on your username at the top right, then click Copy login command. This will open another tab, where you will need to log in again.

  • Click the Display token link and copy the command under Log in with this token. It will look like this:

oc login --token=<token> --server=<server>

  • Clone the Ansible scripts:

    git clone https://github.com/rh-soln-pattern-connectivity-link/connectivity-link-ansible
  • Open the inventories/inventory.template file and update the variables. Save the file.

    Details of the inventory.template file:
    ocp4_workload_connectivity_link_aws_access_key=<AWS_ACCESS_KEY_ID>
    ocp4_workload_connectivity_link_aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
    
    # E.g.: sandbox902.opentlc.com
    ocp4_workload_connectivity_link_main_domain=<AWS ROUTE53 ROOT DOMAIN>
    
    ocp4_workload_connectivity_link_aws_managed_zone_region=<Managed Zone region - default region of your AWS setup>
    # E.g.: eu-central-1
    
    ocp4_workload_connectivity_link_ingress_gateway_tls_issuer_email=<your email address for letsencrypt>
    
    ocp4_workload_connectivity_link_gateway_geo_code=<gateway geo code>
    # E.g.: EU or US

2.2. Run the deployment scripts

Prerequisites checklist

Before running the following Ansible script, check that you have completed these prerequisites:

  • The inventory file reflects the correct AWS credentials, root zone details, region, and so on.

Run the Ansible script, which will set up the RHCL Operator, Istio, and the Kuadrant system workloads:

cd operator-setup
ansible-playbook playbooks/ocp4_workload_connectivity_link.yml -e ACTION=create -i inventories/inventory.template
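
Once the playbook completes, you may want to verify that the Kuadrant components came up. The following is a hypothetical check, not part of the pattern's playbooks; it assumes the components land in a kuadrant-system namespace:

# verify-kuadrant.yml (hypothetical post-install check)
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Look for pods in the assumed Kuadrant system namespace
      kubernetes.core.k8s_info:
        kind: Pod
        namespace: kuadrant-system   # assumed namespace name
      register: kuadrant_pods

    - name: Fail if no Kuadrant pods were found
      ansible.builtin.assert:
        that: kuadrant_pods.resources | length > 0
        fail_msg: No pods found; check the playbook output for errors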

2.3. What’s next

In the next section, we’ll go through the Platform Engineer’s Workflow.