Engineers like to play and learn locally. It does not matter which tool is under investigation: a high-end storage solution, a workflow orchestration engine, or the latest thing in distributed computing. The best way to learn a new technology is to find a way to cram it all on a single machine so that you can put your hands on everything.
Kubeflow Pipelines is a core component of Kubeflow. You can install the full Kubeflow distribution or a standalone installation containing just Kubeflow Pipelines. In this post, I’ll show how to set up a development machine with the standalone installation of Kubeflow Pipelines (KFP) and a standalone installation of MinIO. KFP and MinIO are better together: using KFP, you can build and run pipelines for acquiring data and training models, and as you do, you will need a storage solution. This is where MinIO can help.
MinIO is a great way to store your ML data and models. Using MinIO, you can save training sets, validation sets, test sets, and models without worrying about scale or performance. Also, someday AI will be regulated; when this day comes, you will need MinIO’s enterprise features (object locking, versioning, encryption, and legal holds) to secure your data at rest and to make sure you do not accidentally delete something that a regulatory agency may request. We could have used the instance of MinIO that KFP installs; however, this is not the best design for an ML data pipeline. You will want a storage solution that is totally under your control. Below is a diagram of our Kubeflow and MinIO deployments that illustrates the purpose of each MinIO instance.
What We Will Install
Below is a list of everything that needs to be installed. This list includes core components (MinIO and KFP), as well as dependencies and SDKs. It is my hope that this post serves as a recipe that can be followed exactly to configure a KFP Pipeline development machine. If any of these instructions do not work, then please let us know.
- Docker Desktop
- kubectl (the Kubernetes command line tool)
- Kubeflow Pipeline Resources
- KFP SDK
- MinIO Access Key and Secret Key
- MinIO SDK
You can find the appropriate installation for your operating system on Docker’s site, located here. If you are installing Docker Desktop on a Mac, then you need to know the chip your Mac is using — Apple or Intel. You can determine this by clicking the Apple icon in the upper left corner of your Mac and clicking the “About This Mac” menu option.
Kubeflow runs on Kubernetes – consequently, you will need a running Kubernetes cluster. Also, you must be familiar with the Kubernetes command line tool to install and manage Kubeflow. The fastest way to get both Kubernetes and its command line tool is to enable the Kubernetes capabilities that come with Docker Desktop. To do this, start the Docker Desktop application, and in the upper right corner, click on the “Settings” icon.
This will take you to the Docker Desktop’s settings page, as shown below.
Click on the Kubernetes tab on the left, and you should see the Kubernetes setup page.
Click the Enable Kubernetes check box to start a Kubernetes cluster on your machine. Once you check this box, it will take a few minutes for Docker Desktop to get a cluster ready for you, so go get a cup of coffee if you wish.
If at any point you wish to remove all the deployments you have installed in your Kubernetes cluster, then click the “Reset Kubernetes Cluster” button. This will remove all resources and give you a brand-new cluster. You will do this often when you are experimenting with prerelease software.
Enabling Kubernetes also installs the Kubernetes command line tool (`kubectl`) for you. Type the following command in a terminal window to make sure `kubectl` is working.
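The original command is not shown here; a common way to verify that `kubectl` can talk to the Docker Desktop cluster is to list the cluster’s nodes:

```shell
kubectl get nodes
```

On Docker Desktop you should see a single node in the `Ready` state.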
You should see output similar to what is shown below.
Once Kubernetes is installed and the `kubectl` command line tool works, you can install Kubeflow Pipelines.
Setting up Kubeflow Pipelines takes four simple steps. First, we need to specify the version of KFP we would like to install. We will set the environment variable below, which will be used by the subsequent `kubectl apply` commands. These instructions are for Kubeflow Pipelines 2.0.0. You can check the latest version here.
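The environment variable can be set as shown below (2.0.0 per these instructions; substitute a newer release if one is available):

```shell
# Version of Kubeflow Pipelines to install.
export PIPELINE_VERSION=2.0.0
```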
KFP prefers cluster-scoped resources to be installed separately from namespace-scoped resources. Depending on the environment, cluster-scoped resources may need the admin role. Namespace-scoped resources can be deployed by individual teams managing a namespace. The command below installs the cluster-scoped resources.
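A sketch of the cluster-scoped install, following the pattern in the KFP standalone deployment manifests and using the `PIPELINE_VERSION` variable set earlier:

```shell
# Install KFP's cluster-scoped resources (CRDs, cluster roles, etc.).
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION"
```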
You should see output indicating the creation of various resources. It is omitted here for brevity, but you should scan it and ensure no errors occurred.
The next command is a wait command that will check the status of the previous command.
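The wait command follows the pattern used in the KFP standalone install instructions; it blocks until the CRD created in the previous step is established:

```shell
# Wait for the Application CRD to be established before continuing.
kubectl wait --for condition=established --timeout=60s crd/applications.app.k8s.io
```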
Keep running the wait command until you get a message indicating success, as shown below.
The apply command for namespace-scoped resources is below. It will also show output as resources are created in your cluster. Make sure there are no errors.
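A sketch of the namespace-scoped apply, again following the KFP standalone manifests and the `PIPELINE_VERSION` variable:

```shell
# Install KFP's namespace-scoped resources into the kubeflow namespace.
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/dev?ref=$PIPELINE_VERSION"
```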
Check all the pods that our two `kubectl` apply commands created by running the command below.
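KFP installs its resources into the `kubeflow` namespace, so the pods can be listed with:

```shell
# List all pods created by the two apply commands.
kubectl get pods -n kubeflow
```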
If you check the pods right after installing KFP, you will notice that many are still starting. Wait until all pods are running before moving on to the next section. Once you start creating pipelines, run this `get pods` command while a pipeline runs. You will see how KFP creates pods based on the tasks in your pipeline.
Starting the KFP UI
To use the KFP UI on a local machine, we must forward an unused port on our local machine to port 80 of KFP’s UI service. This is done using kubectl’s port-forward command. This command will not return. You need to keep it running until you are done using the KFP UI.
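A sketch of the port-forward, assuming the UI service is named `ml-pipeline-ui` in the `kubeflow` namespace (as in the standalone manifests) and that local port 8080 is free:

```shell
# Forward local port 8080 to port 80 of the KFP UI service.
# This command blocks; leave it running while you use the UI.
kubectl port-forward -n kubeflow svc/ml-pipeline-ui 8080:80
```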
Navigate to localhost:8080. You should see the Kubeflow Pipelines home page.
Take some time to explore all the tabs. If you are new to Kubeflow, then get familiar with Pipelines, Runs, and Experiments. A detailed description of these three concepts is beyond the scope of this post but here is the short story:
- Pipelines are the descriptions you create in code. Pipelines are analogous to classes in object-oriented programming.
- A Run is an instance of a pipeline much like an object is an instance of a class.
- Experiments are a way to tag related runs so that you can see them grouped together in the KFP UI. For example, you may have multiple runs of a pipeline as you iron out the kinks. Tagging these runs with the same experiment name will group them together in the Experiments tab.
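As a preview of what a pipeline looks like in code, here is a minimal sketch using the KFP v2 SDK (installed later in this post); the component and pipeline names are illustrative, not from the original post:

```python
from kfp import dsl


# A component is a single step in a pipeline.
@dsl.component
def say_hello(name: str) -> str:
    return f'Hello, {name}!'


# A pipeline wires components together; each run of it appears
# in the KFP UI and can be tagged with an experiment.
@dsl.pipeline(name='hello-pipeline')
def hello_pipeline(name: str = 'MinIO'):
    say_hello(name=name)
```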
I like to use Docker Compose to install MinIO as the configuration is in a YAML file, and the command is simple. Below is the Docker Compose YAML. Name this file `docker-compose.yml`.
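A sketch of such a Compose file, assuming the default MinIO ports (9000 for the API, 9001 for the console) and placeholder root credentials that you should change:

```yaml
version: '3.7'

services:
  minio:
    image: quay.io/minio/minio
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # MinIO console
    environment:
      MINIO_ROOT_USER: minioadmin       # change me
      MINIO_ROOT_PASSWORD: minioadmin   # change me
    volumes:
      - minio-data:/data

volumes:
  minio-data:
```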
Run the following command in the same directory as the `docker-compose.yml` file.
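The command starts the services defined in the Compose file in detached mode:

```shell
# Start MinIO in the background.
docker-compose up -d
```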
This installs MinIO in a Docker container outside of the Kubernetes cluster. If you do not want to use Docker Compose to install MinIO, then this document will show you how to install MinIO using the Docker command line.
MinIO Access Key and Secret Key
To use the MinIO SDK, you will need a new access key and secret key. You can get these keys in the MinIO UI. From your browser, go to localhost:9001. If you specified a different port for the MinIO console address in the docker-compose file, use that port instead.
Once you sign in, navigate to the Access Keys tab and click the Create access key button.
This will take you to the Create Access Key page.
Your access key and secret key are not saved until you click the Create button. So do not navigate away from this page until this is done. Don’t worry about copying the keys from this screen. Once you click the Create button, you will be given an option to download the keys to your file system (in a JSON file).
You are now ready to start using the MinIO SDK. In the next two sections we will install both the KFP SDK and the MinIO SDK.
Install the KFP Python package
The KFP Python package is a simple `pip` install. I recommend installing it in a Python virtual environment, especially when you are testing prerelease versions of KFP. You can check PyPI for the latest version of the KFP package, or you can install the latest prerelease version as shown in the command below.
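```shell
# Install the latest KFP release, including prereleases.
pip install kfp --pre
```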
Remove the `--pre` switch once KFP 2.0 is generally available.
Double check the installation by listing out the KFP libraries.
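A simple way to list the installed KFP libraries (assuming a Unix-like shell):

```shell
# Show every installed package whose name contains "kfp".
pip list | grep kfp
```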
You should see the three libraries below.
Install the MinIO Python Package
If you installed the KFP Python package in a virtual environment, install MinIO in the same environment.
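```shell
# Install the MinIO Python SDK.
pip install minio
```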
Double check the installation.
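Again, assuming a Unix-like shell:

```shell
# Show the installed minio package and its version.
pip list | grep minio
```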
This will confirm that the Minio library was installed and display the version you are using.
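As a quick smoke test, the sketch below connects to the standalone MinIO instance and creates a bucket. The endpoint, bucket name, and placeholder credentials are assumptions; substitute the access key and secret key you downloaded earlier:

```python
from minio import Minio

# Connect to the standalone MinIO instance started with Docker Compose.
# Replace the placeholder credentials with your downloaded keys.
client = Minio(
    'localhost:9000',
    access_key='YOUR_ACCESS_KEY',
    secret_key='YOUR_SECRET_KEY',
    secure=False,  # the local instance serves plain HTTP
)

# Create a bucket for pipeline data if it does not already exist.
if not client.bucket_exists('ml-data'):
    client.make_bucket('ml-data')

print(client.list_buckets())
```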
This post provided an easy-to-follow recipe for creating a development machine with Kubeflow Pipelines 2.0 and MinIO. The goal was to save you the time and effort of researching Kubeflow dependencies, installation commands, and SDK setup for the new version.
You are ready to start coding and building pipelines with Kubeflow and MinIO. Check out Building an ML Data Pipeline with MinIO and Kubeflow v2.0 where we use Kubeflow and MinIO to build a data pipeline.