Create Workbench

Prerequisites

  • Ensure you have kubectl configured and connected to your cluster.
  • Ensure you have created a PersistentVolumeClaim (PVC).

Create PVC

  1. Log in and go to the Alauda Container Platform page.
  2. Click Storage > PersistentVolumeClaims to enter the PVC list page.
  3. Click Create PVC and fill in the required information.
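If you prefer the CLI, you can create an equivalent PVC with a manifest. The name, namespace, size, and storageClassName below are placeholders, not values mandated by the platform; adjust them to your environment:

```shell
# Example PVC manifest; name, namespace, size, and storageClassName
# are placeholders -- replace them with values for your cluster.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workbench-data
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
```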

Create Workbench by using the web console

Procedure

  1. Log in and go to the Alauda AI page.

  2. Click Workbench to enter the Workbench list page.

  3. Click Create to open the creation form, fill in the required information, and create the workbench.

Connect to Workbench

After you create a workbench instance, click Workbench in the left navigation bar; the instance appears in the list. When its status becomes Running, click Connect to open the workbench.
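You can also watch the status from the CLI. This assumes workbench instances are backed by Workspace resources in your namespace (the namespace below is a placeholder):

```shell
# Watch workbench (Workspace) instances until they reach Running.
# Replace <your-namespace> with the namespace of your workbench.
kubectl get workspaces -n <your-namespace> -w
```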

INFO

The platform ships with built-in WorkspaceKind resources that work out of the box; the two provided options appear in the dropdown menu.

The following additional workbench images are available but are not built into the platform by default:

  • alaudadockerhub/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9
  • alaudadockerhub/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9
  • alaudadockerhub/odh-workbench-jupyter-pytorch-cuda-py312-ubi9

If you want to use these images, you must first manually synchronize them to your own image registry (for example, by using a tool such as skopeo). After the image is available in your registry, you also need to add the corresponding configuration to the imageConfig field of the WorkspaceKind resource that you plan to use.
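For example, one of the images can be synchronized with skopeo along these lines. The tag and the destination registry and repository path are placeholders; use the tag you need and your own registry:

```shell
# Copy a workbench image from Docker Hub into your own registry.
# <tag> and registry.example.com/mlops/... are placeholders.
skopeo copy \
  docker://docker.io/alaudadockerhub/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9:<tag> \
  docker://registry.example.com/mlops/workbench-images/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9:<tag>
```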

Below is an example patch YAML that adds a new image configuration to an existing WorkspaceKind:

add-llmcompressor-image-patch.json
[
  {
    "op": "add",
    "path": "/spec/podTemplate/options/imageConfig/values/-",
    "value": {
      "id": "jupyter-pytorch-llmcompressor-cuda-py312",
      "spawner": {
        "displayName": "Jupyter | PyTorch LLM Compressor | CUDA | Python 3.12",
        "description": "JupyterLab with PyTorch and LLM Compressor for CUDA",
        "labels": [
          {
            "key": "python_version",
            "value": "3.12"
          },
          {
            "key": "framework",
            "value": "pytorch"
          },
          {
            "key": "accelerator",
            "value": "cuda"
          }
        ]
      },
      "spec": {
        "image": "mlops/workbench-images/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9:3.4_ea1-v1.41",
        "imagePullPolicy": "IfNotPresent",
        "ports": [
          {
            "id": "jupyterlab",
            "displayName": "JupyterLab",
            "port": 8888,
            "protocol": "HTTP"
          }
        ]
      }
    }
  }
]

You can apply the patch to the WorkspaceKind you are using with a command similar to the following:

kubectl patch workspacekind jupyterlab-internal-3-4-ea1-v1-41 \
  --type=json \
  --patch-file add-llmcompressor-image-patch.json \
  -o yaml

This command applies the JSON patch file to the specified WorkspaceKind and updates its imageConfig so the new workbench image becomes available in the workbench creation UI.

In practice, adapt the id, displayName, description, and image fields to match the image you synchronized and the naming conventions used in your cluster.
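To confirm the patch took effect, you can list the image-config ids the WorkspaceKind now offers; the path below matches the patch above:

```shell
# List the ids under spec.podTemplate.options.imageConfig.values;
# the new id should appear alongside the built-in ones.
kubectl get workspacekind jupyterlab-internal-3-4-ea1-v1-41 \
  -o jsonpath='{.spec.podTemplate.options.imageConfig.values[*].id}'
```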

INFO

The platform also provides built-in resource options, which appear in the dropdown menu.