See the Solution in Action

Concepts and Commands Used in the Demonstration

Please make sure to explain what each Skupper command does the first time you use it, especially for people unfamiliar with Skupper. The following commands should be explained:

  • skupper init: Initializes the Skupper network, setting up the necessary components to enable secure communication between services.

  • skupper expose: Exposes a local service through the Skupper network, allowing it to be accessed from other Skupper-connected sites.

  • skupper token: Generates a connection token that can be used by other sites to link to the Skupper network, ensuring secure communication.

  • skupper link: Establishes a secure link between two Skupper sites using the token created by skupper token.

  • skupper service status: Displays the status of services exposed through Skupper, showing what is accessible and how it’s connected within the network.

  • ilab model download: Downloads the model to be used by the chatbot.

  • ilab model serve: Starts the server that will be responsible for receiving the user input, sending it to the LLaMA3 model, and sending the response back to the user.

  • ilab model chat: Starts the chatbot, allowing the user to interact with it.

Run the demonstration

Before getting started

To set up the demonstration, you need to have the following prerequisites:

  • Access to an OpenShift cluster with Skupper and InstructLab installed.

  • A server running the InstructLab chat model.

  • Access to a terminal to run the commands.

  • Access to a web browser to interact with the chatbot.

  • oc client installed and configured to access the OpenShift cluster.

  • skupper client installed and configured to access the OpenShift cluster.

  • Podman installed to run the private Skupper.

AI Model Deployment with InstructLab

The first step is to deploy the InstructLab chat model in the InstructLab site. The InstructLab chat model will be responsible for receiving the user input and sending it to the LLaMA3 model. The response from the LLaMA3 model will be sent back to the user. This is based on the article: Getting Started with InstructLab for Generative AI Model Tuning.

mkdir instructlab && cd instructlab
sudo dnf install gcc gcc-c++ make git python3.11 python3.11-devel
python3.11 -m venv --upgrade-deps venv
source venv/bin/activate
pip install instructlab
  • The mkdir instructlab && cd instructlab command is used to create a directory called instructlab and navigate to it.

  • The sudo dnf install gcc gcc-c++ make git python3.11 python3.11-devel command is used to install the necessary dependencies for InstructLab.

  • The python3.11 -m venv --upgrade-deps venv command is used to create a virtual environment called venv for InstructLab.

  • The source venv/bin/activate command is used to activate the virtual environment.

  • The pip install instructlab command is used to install InstructLab in the virtual environment.
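
To confirm that the installation succeeded, you can check the CLI version (a quick sanity check; the exact output varies by release):

ilab --version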

Initialize the InstructLab Configuration

Now it’s time to initialize the InstructLab configuration. The ilab config init command is used to initialize the InstructLab configuration, creating the config.yaml file with the default configuration. Run the following command:

ilab config init

The output will be similar to the following, with the user being prompted to provide the necessary values to initialize the environment:

Welcome to InstructLab CLI. This guide will help you to setup your environment.
Please provide the following values to initiate the environment [press Enter for defaults]:
Path to taxonomy repo [/home/user/.local/share/instructlab/taxonomy]:
Path to your model [/home/user/.cache/instructlab/models/merlinite-7b-lab-Q4_K_M.gguf]:
Generating `/home/user/.config/instructlab/config.yaml` and `/home/user/.local/share/instructlab/internal/train_configuration/profiles`...
Please choose a train profile to use.
Train profiles assist with the complexity of configuring specific GPU hardware with the InstructLab Training library.
You can still take advantage of hardware acceleration for training even if your hardware is not listed.
[0] No profile (CPU, Apple Metal, AMD ROCm)
[1] Nvidia A100/H100 x2 (A100_H100_x2.yaml)
[2] Nvidia A100/H100 x4 (A100_H100_x4.yaml)
[3] Nvidia A100/H100 x8 (A100_H100_x8.yaml)
[4] Nvidia L40 x4 (L40_x4.yaml)
[5] Nvidia L40 x8 (L40_x8.yaml)
[6] Nvidia L4 x8 (L4_x8.yaml)
Enter the number of your choice [hit enter for no profile] [0]:
No profile selected - any hardware acceleration for training must be configured manually.
Initialization completed successfully, you're ready to start using `ilab`. Enjoy!
  • The ilab config init command is used to initialize the InstructLab configuration, creating the config.yaml file with the default configuration.

  • The user is prompted to provide the necessary values to initialize the environment, such as the path to the taxonomy repo and the path to the model.

  • After running ilab config init, your directories will look like the following on a Linux system:

    • ~/.cache/instructlab/models/ (1)

    • ~/.local/share/instructlab/datasets (2)

    • ~/.local/share/instructlab/taxonomy (3)

    • ~/.local/share/instructlab/checkpoints (4)

(1) Directory where the models are stored. (2) Directory where the datasets are stored. (3) Directory where the taxonomy is stored. (4) Directory where the checkpoints are stored.

To enable external access to your model, modify the config.yaml file located at ~/.config/instructlab/config.yaml. This change needs to be made under the serve section, as shown below:

serve:
  host_port: 0.0.0.0:8000
  • host_port: The IP address and port where the model will be exposed. Setting it to 0.0.0.0:8000 exposes the model on all interfaces.

Before starting the server, download the model to be used by the chatbot. Run the following command:

ilab model download
  • The ilab model download command is used to download the model to be used by the chatbot.

Now, start the server that will receive the user input, send it to the model, and return the response to the user. Run the following command:

ilab model serve
  • The ilab model serve command is used to start the server that will be responsible for receiving the user input, sending it to the model, and sending the response back to the user.
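
Before exposing the server through Skupper, you can verify that it is reachable locally. This is a minimal check that assumes the default OpenAI-compatible endpoint layout of the underlying llama.cpp server:

curl http://localhost:8000/v1/models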

Public Skupper Deployment

Deploy the public Skupper in the OpenShift cluster. The public Skupper will receive the connection from the private Skupper and create a secure connection between the two sites.

Creating the project and deploying the public Skupper:

This is the step where you create the project and deploy the public Skupper. Open a new terminal and run the following commands:

export SKUPPER_PLATFORM=kubernetes
oc new-project ilab-pilot
skupper init --enable-console --enable-flow-collector --console-user admin --console-password admin
  • Run these commands in a new terminal and keep it open, because this terminal uses the kubernetes platform while the terminal for the private site uses podman.

  • SKUPPER_PLATFORM=kubernetes is used to set the platform to Kubernetes. This is necessary because the public Skupper will be running on a Kubernetes cluster.

  • oc new-project ilab-pilot is used to create a new project called ilab-pilot in the OpenShift cluster.

  • skupper init is used to initialize the Skupper network, setting up the necessary components to enable secure communication between services.

  • The --enable-console flag is used to enable the Skupper console, which provides a web interface for managing the Skupper network.

  • The --enable-flow-collector flag is used to enable the flow collector, which collects and displays information about the traffic flowing through the Skupper network.

  • The --console-user admin flag is used to set the username for the Skupper console to admin.

  • The --console-password admin flag is used to set the password for the Skupper console to admin.
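
Once the site is initialized, you can confirm it is running and find the console URL (the exact output format varies between Skupper versions):

skupper status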

Creating the token to allow the private Skupper to connect to the public Skupper:

This is the step where you create the token that allows the private Skupper to connect to the public Skupper. In the same terminal, run the following command:

skupper token create token.yaml
  • skupper token create token.yaml is used to generate a connection token that can be used by other sites to link to the Skupper network, ensuring secure communication.

  • The token.yaml file will contain the token to connect the two sites.

Now, you’ll have a token.yaml file with the token to connect the two sites.
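
If you want to limit how long or how many times the token can be used, the Skupper v1 CLI accepts expiry and usage flags. The following is a hedged example; verify the flag names with skupper token create --help for your version:

skupper token create token.yaml --expiry 30m --uses 1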

Private Skupper Deployment

The second step is to deploy the private Skupper in your private local environment. The private Skupper will be responsible for creating a secure connection between the two sites, allowing the frontend application to send requests to the InstructLab chat model.

Install Skupper

To install Skupper on the private site, with Podman as the platform, open a new terminal to handle all the commands related to the private Skupper. Because we are creating a Skupper site that uses Podman as the platform, we need to enable the Podman socket before running the skupper init command:

systemctl --user enable --now podman.socket
  • systemctl --user enable --now podman.socket is used to enable and start the Podman API socket at the user level.

Now, run the following commands to install Skupper:

export SKUPPER_PLATFORM=podman
skupper init --ingress none
  • SKUPPER_PLATFORM=podman is used to set the platform to Podman. This is necessary because the private Skupper will be running in Podman containers.

  • skupper init is used to initialize the Skupper network, setting up the necessary components to enable secure communication between services.

  • The --ingress none flag is used to disable ingress on this site, so it does not accept incoming links. This is fine because the private site initiates the connection, and the public Skupper is responsible for exposing the service.
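
As a quick sanity check, you can confirm that the site containers are running under Podman (container names differ between Skupper versions):

podman ps --format '{{.Names}}'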

Exposing the InstructLab Chat Model

To bind the local service running the InstructLab chat model to the Skupper service:

skupper expose host <YOUR MACHINE IP> --address instructlab --port 8000
  • skupper expose is used to expose a local service through the Skupper network, allowing it to be accessed from other Skupper-connected sites.

  • host <YOUR MACHINE IP> is used to specify the IP address of the machine where the service is running.

  • --address instructlab is used to specify the address of the service.

  • --port 8000 is used to specify the port of the service.

  • Port 8000 needs to be open in the firewall of your internal network; see the example after this list.
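
On Fedora or RHEL hosts that use firewalld, opening the port might look like the following; adjust for whatever firewall your environment uses:

# firewalld example; adapt to your firewall of choice
sudo firewall-cmd --permanent --add-port=8000/tcp
sudo firewall-cmd --reload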

Check the status of the Skupper service:

skupper service status

Services exposed through Skupper:
╰─ instructlab:8000 (tcp)
  • skupper service status is used to display the status of services exposed through Skupper, showing what is accessible and how it’s connected within the network.

Secure Communication Between the Two Sites with Skupper

Now it’s time to establish a secure connection between the two sites. Using the token created by the public Skupper, run the following command in the terminal where the private Skupper is running:

skupper link create token.yaml --name instructlab
  • skupper link create token.yaml --name instructlab is used to establish a secure link between two Skupper sites using the token created by skupper token.

Check the status of the Skupper link:

skupper link status

Links created from this site:

        Link instructlab is connected

Current links from other sites that are connected:

        There are no connected links
  • skupper link status is used to display the status of the links created by the Skupper network, showing which sites are connected and how they are connected.

Check the status on the public Skupper terminal:

skupper link status

Links created from this site:

       There are no links configured or connected

Current links from other sites that are connected:

       Incoming link from site b8ad86d5-9680-4fea-9c07-ea7ee394e0bd
  • skupper link status is used to display the status of the links created by the Skupper network, showing which sites are connected and how they are connected.

Chatbot with Protected Data

The last step is to expose the service in the public Skupper and deploy the frontend application.

  • Still in the terminal where the public Skupper is running, run the following command to expose the service:

skupper service create instructlab 8000
  • skupper service create instructlab 8000 is used to create the instructlab service in the public Skupper site, making the model exposed by the private site reachable by applications running in the public cluster.
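
You can confirm that the service is now visible on the public site by re-running the status command in the same terminal; it should list instructlab:8000, just as it did on the private site:

skupper service status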

Deploying the InstructLab Chatbot

Before running the chatbot, let’s understand the final part of this solution: the frontend application.

This application will be deployed in an OpenShift cluster and will be responsible for sending the user input to the InstructLab chat model and displaying the response to the user. The application will be deployed in the same namespace where the public Skupper is running.

  • The frontend application, called ILAB Frontend chatbot, will use the local service in the public cluster to send the user input to the InstructLab chat model and display the response to the user. See line 23 of the ilab-client-deployment.yaml file.

Deploy ILAB Frontend chatbot

To deploy the ILAB Frontend chatbot, let’s use the following YAML deployment file, located at ~/instructlab/ilab-client-deployment.yaml:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: ilab-client
spec:
  replicas: 1
  selector:
    app: ilab-client
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ilab-client
    spec:
      containers:
      - name: ilab-client-container
        image: quay.io/rzago/ilab-client:latest
        ports:
        - containerPort: 5000
        env:
        - name: ADDRESS
          value: "http://instructlab:8000" # The address of the InstructLab chat model connected to the private Skupper
  triggers:
  - type: ConfigChange

Apply the deployment file:

oc apply -f ~/instructlab/ilab-client-deployment.yaml
  • The ilab-client-deployment.yaml file is used to deploy the ILAB Frontend chatbot, which will be responsible for sending the user input to the InstructLab chat model and displaying the response to the user.

  • The ADDRESS environment variable is used to specify the address of the InstructLab chat model connected to the private Skupper.

  • The oc apply -f ~/instructlab/ilab-client-deployment.yaml command is used to apply the deployment file and deploy the ILAB Frontend chatbot.
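
To watch the rollout and confirm that the pod is running, you can use the standard oc commands (the names match the DeploymentConfig above):

oc rollout status dc/ilab-client
oc get pods -l app=ilab-client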

Creating the service of the ILAB Frontend chatbot deployment

Now, let’s create the service for the ILAB Frontend chatbot deployment; the file is located at ~/instructlab/ilab-client-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: ilab-client-service
spec:
  selector:
    app: ilab-client
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000

Apply the service file:

oc apply -f ~/instructlab/ilab-client-service.yaml
  • The ilab-client-service.yaml file is used to create the service for the ILAB Frontend chatbot deployment.

  • The oc apply -f ~/instructlab/ilab-client-service.yaml command is used to apply the service file and create the service for the ILAB Frontend chatbot deployment.
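
To confirm that the service is selecting the pod, you can check its endpoints; a non-empty ENDPOINTS column means the selector matched:

oc get endpoints ilab-client-service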

Exposing the service of the ILAB Frontend chatbot deployment

We are almost there. Now let’s expose the service of the ILAB Frontend chatbot deployment; the file is located at ~/instructlab/ilab-client-route.yaml:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: ilab-client-route
spec:
  to:
    kind: Service
    name: ilab-client-service
  port:
    targetPort: 5000

Apply the route file:

oc apply -f ~/instructlab/ilab-client-route.yaml
  • The ilab-client-route.yaml file is used to expose the service of the ILAB Frontend chatbot deployment.

  • The oc apply -f ~/instructlab/ilab-client-route.yaml command is used to apply the route file and expose the service of the ILAB Frontend chatbot deployment.

Accessing the ILAB Frontend chatbot

Finally, to access the ILAB Frontend chatbot, you can use the following command to get the public URL:

oc get route ilab-client-route
  • The oc get route ilab-client-route command is used to get the public URL of the ILAB Frontend chatbot, which you will use to access the chatbot from a web browser.
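
To grab just the hostname and test it from the command line, a jsonpath query works. This sketch assumes the route serves plain HTTP:

# fetch the route hostname and probe the frontend
ROUTE_HOST=$(oc get route ilab-client-route -o jsonpath='{.spec.host}')
curl -s "http://$ROUTE_HOST"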

Interacting with the chatbot

To interact with the chatbot, you can access the public URL of the ILAB Frontend chatbot using a web browser. The chatbot will be displayed on the screen, and you can start interacting with it by typing messages in the chat window.

Chatbot

Considerations

  • If your machine has an Nvidia GPU, you can use it to accelerate the InstructLab chat model when generating responses to user input. The chat model used here, merlinite-7b-lab, is a large language model capable of generating human-like responses. By following the steps outlined in this demonstration, you can deploy the InstructLab chat model in your environment, interact with it through the ILAB Frontend chatbot, and see how generative AI models can be used to create engaging, interactive applications.

  • Adjust the model temperature to control the randomness of the responses generated by the chatbot. A lower temperature will result in more deterministic responses, while a higher temperature will result in more random responses. Experiment with different temperature values to find the right balance between coherence and creativity in the chatbot’s responses; see the request sketch after this list.

  • Image repository for ilab-client: rafaelvzago/ilab-client
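
If you call the model’s OpenAI-compatible endpoint directly, temperature is a request parameter. The following is a hedged sketch of such a request; <MODEL-NAME> is a placeholder and must match a model id returned by the server’s /v1/models endpoint:

# <MODEL-NAME> is a placeholder: use a model id reported by /v1/models
curl http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "<MODEL-NAME>", "messages": [{"role": "user", "content": "Hello!"}], "temperature": 0.2}'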