Constructing a CI/CD Build Pipeline

So far, in our CI/CD series, we’ve talked about some of the high-level concepts and terminology. In this installment, we’ll go into the practical phase of the build pipeline to show how it looks in implementation.

We’ll go deeper into baking and frying to show how it looks in the different stages of the MinIO distributed cluster build process. Along the way, we’ll use Infrastructure as Code to ensure everything is version-controlled and automated using Packer and Jenkins.

In order to make it easy for developers to use our distributed MinIO setup without launching resources in the cloud, we’ll test it using Vagrant. Most of our laptops are powerful enough to try something out, yet we’re so accustomed to the cloud that we tend to look there first. We’ll change that perception and show you just how easy it is to develop in a cloud-like environment, complete with MinIO object storage, right on your laptop.

MinIO Image

This is the baking phase. To automate as much of this as possible, we’ll use build tools to create an L2 image of MinIO that can be used to launch any type of MinIO configuration. This not only allows us to follow Infrastructure as Code best practices but also provides version control for any future changes.

MinIO build template

We’ll use Packer to build machine images in a consistent, repeatable manner. Packer allows you to source from multiple inputs such as an ISO, a Docker image, or an AMI, among others, and post-process the built images by uploading them to Amazon Elastic Container Registry (ECR), Google Container Registry (GCR), Docker Hub, Vagrant Cloud, and several other destinations, or even writing them locally to disk.

Installing is pretty straightforward; you can either use brew

$ brew install packer

or if you are looking for other distributions, head over to the Packer downloads page.

When Packer was first introduced, it used only JSON as the template language, but as you may know, JSON is not the most human-friendly format to write. It's easy for machines to parse, but if you have to write a huge JSON blob by hand, the process is error-prone. Even a misplaced comma can cause an error, not to mention that you cannot add comments in JSON.

Due to the aforementioned concerns, Packer transitioned to the HashiCorp Configuration Language, or HCL (which is also used by Terraform), as its template language of choice. We wrote an HCL Packer template that builds a MinIO image – let's go through it in detail.

For all our code samples, we’ll paste the crux of the code here, and we have the entire end-to-end working example available for download on GitHub. The primary template will go in main.pkr.hcl and the variables (anything that starts with var) will go in variables.pkr.hcl.

Git clone the repo to your local setup, where you’ll run the Packer build

$ git clone https://github.com/minio/blog-assets.git
$ cd blog-assets/ci-cd-build/packer/

We’ll use VirtualBox to build the MinIO image. Install VirtualBox

$ brew install virtualbox

then define it as a source at the top of main.pkr.hcl

source "virtualbox-iso" "minio-vbox" {
...
}

main.pkr.hcl#L1

Note: Throughout the post, we’ll link to the exact lines in the GitHub repo where you can see the code in its entirety, similar to above. You can click on these links to jump to the exact line in the code.

Let’s set some base parameters for the MinIO image that will be built. We’ll define the name, CPU, disk, memory, and a couple of other parameters related to the image configuration.

  vm_name = var.vm_name

  cpus      = var.cpus
  disk_size = var.disk_size
  memory    = var.memory

  hard_drive_interface = "sata"
  headless             = var.headless

main.pkr.hcl#L3-L10

An ISO source needs to be provided in order to build our custom image. The source can be a local path relative to where you run the packer command or an online URL.

The sources are listed in the order Packer will try them. In the example below, we provide a local source and an online source; if the local source is available, Packer skips the online one. To speed up the build, I’ve pre-downloaded the ISO and placed it in the `${var.iso_path}` directory.

  iso_checksum = "${var.iso_checksum_type}:${var.iso_checksum}"
  iso_urls     = [
        "${var.iso_path}/${var.iso_name}",
        var.iso_url
      ]

main.pkr.hcl#L12-L16

The image we create needs a few defaults, such as the hostname, SSH user, and password, set during the provisioning process. These values are passed to the preseed.cfg kickstart process, which Packer serves via its built-in HTTP server.

        " hostname=${var.hostname}",
        " passwd/user-fullname=${var.ssh_fullname}",
        " passwd/username=${var.ssh_username}",
        " passwd/user-password=${var.ssh_password}",
        " passwd/user-password-again=${var.ssh_password}",

main.pkr.hcl#L30-L34

The image will be customized using a few scripts we wrote located in the scripts/ directory. We’ll use the shell provisioner to run these scripts.

provisioner "shell" {
...
}

main.pkr.hcl#L61

Next, let's flex some concepts we learned in the previous blog post, specifically baking and frying the image. In this tutorial, we’ll bake the MinIO binary and the service dependencies, such as the username and group required for MinIO. We are baking this as opposed to frying because these will remain the same no matter how we configure and launch our image.

We’ll set the MinIO version to install along with the user and group the service will run under.

        "MINIO_VERSION=${var.minio_version}",
        "MINIO_SERVICE_USER=${var.minio_service_user}",
        "MINIO_SERVICE_GROUP=${var.minio_service_group}",

main.pkr.hcl#L67-L69

Most of the install steps are tucked away in dedicated bash scripts that are separate from the template but called from it. This keeps the overall template clean and simple, making it easier to manage. Below is the list of scripts we are going to use:

    scripts = [
        "scripts/setup.sh",
        "scripts/vagrant.sh",
        "scripts/minio.sh",
        "scripts/cleanup.sh"
      ]

main.pkr.hcl#L75-L80

I’m not going to go into much more detail about what each of these scripts does because most of them are basic boilerplate code that is required to set up the base Linux image – think of this as the L1 phase of our install.

The file I do want to talk about is minio.sh, where we bake the MinIO binary.

The MinIO binary is downloaded from upstream; the `MINIO_VERSION` variable was set a couple of steps ago as an environment variable available at install time.

wget https://dl.min.io/server/minio/release/linux-amd64/archive/minio_${MINIO_VERSION}_amd64.deb -O minio.deb
dpkg -i minio.deb

minio.sh#L8-L9

Once the binary is installed, we create the dedicated user and group that the MinIO service will run as

groupadd -r ${MINIO_SERVICE_GROUP}
useradd -M -r -g ${MINIO_SERVICE_GROUP} ${MINIO_SERVICE_USER}

minio.sh#L12-L13

Generally, for every new version of MinIO, we should build a new image, but in a pinch, you can upgrade the binary without rebuilding the entire image. I’ll show you how to do that in the frying stage.

Once the image is built, we must tell Packer what to do with it. This is where post-processors come into play. There are several of them that you can define, but we’ll go over three of them here:

  • The vagrant post-processor builds an image that can be used by Vagrant. It creates a VirtualBox .box that can be launched with a Vagrantfile (more on the Vagrantfile later). The .box file is stored in the location defined in output and can be imported locally.

    post-processor "vagrant" {
      output = "box/{{.Provider}}/${var.vm_name}-${var.box_version}.box"
      keep_input_artifact  = true
      provider_override    = "virtualbox"
      vagrantfile_template = var.vagrantfile_template
    }

main.pkr.hcl#L88-L93

  • The shell-local post-processor allows us to run shell commands locally, on the machine where the packer command is executed, to perform cleanup and other post-build operations. In this case, we remove the output directory so it doesn’t conflict with the next packer build run.

    post-processor "shell-local" {
      inline = ["rm -rf output-${var.vm_name}-virtualbox-iso"]
    }

main.pkr.hcl#L101-L103

  • Having the image locally is great, but what if you want to share your awesome MinIO image with the rest of your team or even with developers outside your organization? You could manually share the box image generated above, but that would be cumbersome. Instead, we’ll upload the image to the Vagrant Cloud registry, where anyone who wants to use your MinIO image can pull it.

Another advantage of uploading the image to the registry is that you can version your images, so even if you upgrade them in the future, folks can pin to a specific version and upgrade at their leisure.

    post-processor "vagrant-cloud" {
      access_token = "${var.vagrant_cloud_token}"
      box_tag      = "${var.vagrant_cloud_username}/${var.vm_name}"
      version      = var.box_version
    }

main.pkr.hcl#L95-L99

In order to upload your MinIO image to Vagrant Cloud, we need an access token and the username of your Vagrant Cloud account. We’ll show you how to get this info in the next steps.

Registry for MinIO image

Go to https://app.vagrantup.com to create a Vagrant Cloud account. Make sure to follow the instructions below.

In order for us to use Vagrant Cloud, we need to create a repo beforehand with the same name as the vm_name in our Packer configuration.

variable "vm_name" {
  type = string
  default = "minio-ubuntu-20-04"
}

variables.pkr.hcl#L108-L111

Follow the steps below to create the repo where our image will be uploaded.

One more thing we need to create in Vagrant Cloud is the token we need to authenticate prior to uploading the built image. Follow the instructions below.

Build the MinIO image

At this point, you should have a Packer template to build the custom MinIO image and a registry to upload the image. These are all the prerequisites that are required before we actually start the build.

Go into the directory where the main.pkr.hcl template file is located, and you will see the other files we’ve discussed in the previous steps.

├── http
│   └── preseed.cfg
├── iso
│   └── ubuntu-20.04.1-legacy-server-amd64.iso
├── main.pkr.hcl
├── scripts
│   ├── cleanup.sh
│   ├── minio.sh
│   ├── setup.sh
│   └── vagrant.sh
└── variables.pkr.hcl

The build needs valid values for its variables to be passed in during the build process; otherwise it will fall back to the default values and fail. Specifically, in this case, we need to pass the Vagrant Cloud token and username. You can edit these values directly in variables.pkr.hcl for now, but do not commit the file with these values to the repo, for security reasons. Later, during the automation phase, we’ll show you another way to set these variables that doesn’t involve editing any files and is a safer alternative.
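If you’d rather not touch variables.pkr.hcl at all right now, Packer can also accept values on the command line via the -var flag. Here is a quick sketch with placeholder values (substitute your own token and username):

$ packer build \
    -var "vagrant_cloud_token=<your-token>" \
    -var "vagrant_cloud_username=<your-username>" \
    .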

You created the values for these variables earlier, as part of setting up the Vagrant Cloud account. You can add those values to the default key.

variable "vagrant_cloud_token" {
  type = string
  sensitive = true
  default = "abc123"
}

variable "vagrant_cloud_username" {
  type = string
  default = "minio"
}

variables.pkr.hcl#L113-L122

As you build images, you also need to bump the image version so that each build gets a unique version. We use a versioning scheme called SemVer, which sets the MAJOR.MINOR.PATCH numbers based on the type of release we are making. To begin with, we start at 0.1.0, which needs to be incremented for every release.

variable "box_version" {
  type = string
  default = "0.1.0"
}

variables.pkr.hcl#L124-L127
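As a quick reference, SemVer increments typically map to change types like this (illustrative only):

# PATCH – bug fix or small tweak:           0.1.0 -> 0.1.1
# MINOR – new, backwards-compatible change: 0.1.0 -> 0.2.0
# MAJOR – breaking change:                  0.1.0 -> 1.0.0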

After setting valid values for the variables and other settings, let's inspect and validate the templates to make sure they work properly.

$ packer inspect .
$ packer validate .

The configuration is valid.

Once everything is confirmed to be correct, run the build command in Packer

$ packer build .

This will start the build process, which can take anywhere from 15 to 20 minutes, depending on your machine and internet speed, to build and upload the entire image.

Open the VirtualBox Manager to see the VM launched by Packer. It will have the name assigned to the vm_name variable. Double-click on the VM to open its console, a live version of the preview thumbnail, where you can watch the OS install process.

Below is a snippet of the beginning and ending, with some MinIO bits sprinkled in between.

+ packer build .

virtualbox-iso.minio-vbox: output will be in this color.

==> virtualbox-iso.minio-vbox: Retrieving Guest additions
==> virtualbox-iso.minio-vbox: Trying /Applications/VirtualBox.app/Contents/MacOS/VBoxGuestAdditions.iso
==> virtualbox-iso.minio-vbox: Trying /Applications/VirtualBox.app/Contents/MacOS/VBoxGuestAdditions.iso
==> virtualbox-iso.minio-vbox: /Applications/VirtualBox.app/Contents/MacOS/VBoxGuestAdditions.iso => /Applications/VirtualBox.app/Contents/MacOS/VBoxGuestAdditions.iso
==> virtualbox-iso.minio-vbox: Retrieving ISO
==> virtualbox-iso.minio-vbox: Trying iso/ubuntu-20.04.1-legacy-server-amd64.iso
==> virtualbox-iso.minio-vbox: Trying iso/ubuntu-20.04.1-legacy-server-amd64.iso?


…TRUNCATED…


    virtualbox-iso.minio-vbox: ==> Downloading MinIO version 20221005145827.0.0
==> virtualbox-iso.minio-vbox: --2022-10-10 15:01:07--  https://dl.min.io/server/minio/release/linux-amd64/archive/minio_20221005145827.0.0_amd64.deb
==> virtualbox-iso.minio-vbox: Resolving dl.min.io (dl.min.io)... 178.128.69.202, 138.68.11.125
==> virtualbox-iso.minio-vbox: Connecting to dl.min.io (dl.min.io)|178.128.69.202|:443... connected.
==> virtualbox-iso.minio-vbox: HTTP request sent, awaiting response... 200 OK
==> virtualbox-iso.minio-vbox: Length: 31806114 (30M) [application/vnd.debian.binary-package]
==> virtualbox-iso.minio-vbox: Saving to: ‘minio.deb’
==> virtualbox-iso.minio-vbox:
==> virtualbox-iso.minio-vbox:      0K .......... .......... .......... .......... ..........  0% 98.5K 5m15s
==> virtualbox-iso.minio-vbox:     50K .......... .......... .......... .......... ..........  0%  199K 3m55s
==> virtualbox-iso.minio-vbox:    100K .......... .......... .......... .......... ..........  0% 15.2M 2m37s


…TRUNCATED…


Build 'virtualbox-iso.minio-vbox' finished after 18 minutes 41 seconds.

==> Wait completed after 18 minutes 41 seconds

==> Builds finished. The artifacts of successful builds are:
--> virtualbox-iso.minio-vbox: VM files in directory: output-minio-ubuntu-20-04-virtualbox-iso
--> virtualbox-iso.minio-vbox: 'virtualbox' provider box: box/virtualbox/minio-ubuntu-20-04-0.1.0.box
--> virtualbox-iso.minio-vbox: 'virtualbox': minioaj/minio-ubuntu-20-04
--> virtualbox-iso.minio-vbox: 'virtualbox': minioaj/minio-ubuntu-20-04

Finished: SUCCESS

Head on over to the https://app.vagrantup.com dashboard to see the newly uploaded image.

MinIO distributed cluster

This is the frying phase. We have now published an image that can be consumed by our team and anyone who wants to run MinIO. But how do we actually use this published image? We’ll use Vagrant to launch VMs locally, similar to how we use Terraform for cloud instances.

MinIO Vagrantfile

The good news is we’ve written the Vagrantfile for you, and we’ll go through it step-by-step so you can understand how to deploy MinIO in distributed mode. You can use these same steps in a production environment on bare metal, Kubernetes, Docker and others. The only difference is that you would probably want to use something like Terraform or CDK to launch these because they manage more than just VMs or Containers like DNS, CDN, and Managed Services, among other cloud resources.

Go into the vagrant directory inside the same ci-cd-build directory where packer is located. If you are still in the packer directory, you can run the following command

$ cd ../vagrant


ci-cd-build
├── packer
│   ├── main.pkr.hcl
│   └── variables.pkr.hcl
└── vagrant
    └── Vagrantfile

Install Vagrant using the following commands, along with the vagrant-hosts plugin, which allows the VMs to communicate with each other using their hostnames.

$ brew install vagrant

$ vagrant plugin install vagrant-hosts
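
To double-check that the plugin installed correctly, you can list the installed plugins; vagrant-hosts should appear in the output:

$ vagrant plugin list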

The way name resolution works here is pretty rudimentary: the plugin simply edits the /etc/hosts file on each VM with the details of the other VMs launched from the Vagrantfile. This could be done manually, but why would we when we can automate it? Please note that all VMs must be launched from the same Vagrantfile, because `vagrant-hosts` does not keep track of hosts across two discrete Vagrantfiles.
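For example, with the default values used later in this Vagrantfile (4 nodes on the 192.168.60.x host-only network), each VM's /etc/hosts ends up with entries roughly like the following (exact formatting may vary):

192.168.60.11 minio-1
192.168.60.12 minio-2
192.168.60.13 minio-3
192.168.60.14 minio-4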

You have to be a little familiar with Ruby in order to understand the Vagrantfile, but if you’re not, that’s OK – it's very similar to Python, so we’ll walk you through every part of it. Below you can see how we define variables for some of the basic parameters needed to bring the MinIO cluster up.

Most of these can remain the default setting, but I would recommend paying attention to BOX_IMAGE. The box image username is currently set to minioaj and should be updated with the username you chose when you built the Packer image.

MINIO_SERVER_COUNT  = 4
MINIO_DRIVE_COUNT   = 4
MINIO_CPU_COUNT     = 2
MINIO_MEMORY_SIZE   = 4096 # MB
MINIO_DRIVE_SIZE    = 2 # GB
MINIO_ROOT_USER     = "minioadmin"
MINIO_ROOT_PASSWORD = "minioadmin"

BOX_IMAGE     = "minioaj/minio-ubuntu-20-04"
BOX_VERSION   = "0.1.0"
SUBNET_PREFIX = "192.168.60.1"
NAME_PREFIX   = "minio-"

Vagrantfile#L5-L16

You don’t need to edit anything else besides the above variables. They are pretty self-explanatory and follow the MinIO distributed setup guide to the letter. However, let’s go through the entire file anyway, so we have a better understanding of the concepts we’ve learned so far.

The following loop will create as many VMs as we specify in MINIO_SERVER_COUNT. In this case, it will loop 4 times, defining the VM settings, such as hostname and IP, for each of the 4 VMs, along with the image they will use.

  (1..MINIO_SERVER_COUNT).each do |i_node|

    config.vm.define (NAME_PREFIX + i_node.to_s) do |node|

      node.vm.box         = BOX_IMAGE
      node.vm.box_version = BOX_VERSION
      node.vm.hostname    = NAME_PREFIX + i_node.to_s

      node.vm.network :private_network, :ip => "#{SUBNET_PREFIX}#{i_node}"

Vagrantfile#L20-L28

Next, we define the number of drives per server using MINIO_DRIVE_COUNT. You may have noticed below that we are not only looping numerically but also incrementing drive_letter, starting at b and advancing (c, d, e...) with every iteration. The reason is that drives in Linux are named with letters, like sda, sdb, sdc, and so on. We start from sdb because sda is taken by the / root disk where the operating system is installed.

In this case, each of the 4 servers will have 4 disks, for a total of 16 disks across the cluster. If you change the settings in the future, the total number of disks is simply MINIO_SERVER_COUNT x MINIO_DRIVE_COUNT.

      drive_letter = "b"

      (1..MINIO_DRIVE_COUNT).each do |i_drive|
        node.vm.disk :disk, size: "#{MINIO_DRIVE_SIZE}GB", name: "data-#{i_drive}"


…TRUNCATED…


        drive_letter.next!
      end

Vagrantfile#L30-L48

The previous step only creates the virtual drives, as if we had added physical drives to a bare metal machine. You still need to partition, format, and mount them so Linux can use them.

Use parted to create a primary /dev/sd*1 partition (to be formatted as ext4) on each of the 4 disks (b, c, d, e).

node.vm.provision "shell", inline: <<-SHELL
  parted /dev/sd#{drive_letter} mklabel msdos
  parted -a opt /dev/sd#{drive_letter} mkpart primary ext4 0% 100%
SHELL

Vagrantfile#L35-L38

Format the created partitions as ext4 and add them to /etc/fstab so the 4 disks get mounted automatically each time the VM is rebooted.

node.vm.provision "shell", inline: <<-SHELL
  mkfs.ext4 -L minio-data-#{i_drive} /dev/sd#{drive_letter}1
  mkdir -p /mnt/minio/data-#{i_drive}
  echo "LABEL=minio-data-#{i_drive} /mnt/minio/data-#{i_drive} ext4 defaults 0 2" >> /etc/fstab
SHELL

Vagrantfile#L40-L45

This is where we mount the disks we created and set the permissions needed for the MinIO service to start. We add the volume and credential settings to /etc/default/minio after all the disk components have been configured. We’ll enable MinIO but not start it yet; after all the VMs are up, we’ll start them at the same time to avoid the error condition that occurs when MinIO times out because it cannot find the other nodes within a certain period of time.

node.vm.provision "shell", inline: <<-SHELL
  mount -a
  chown minio-user:minio-user /mnt/minio/data-*
  echo "MINIO_VOLUMES=\"http://minio-{1...#{MINIO_SERVER_COUNT}}:9000/mnt/minio/data-{1...#{MINIO_DRIVE_COUNT}}\"" >> /etc/default/minio
  echo "MINIO_OPTS=\"--console-address :9001\"" >> /etc/default/minio
  echo "MINIO_ROOT_USER=\"#{MINIO_ROOT_USER}\"" >> /etc/default/minio
  echo "MINIO_ROOT_PASSWORD=\"#{MINIO_ROOT_PASSWORD}\"" >> /etc/default/minio


  systemctl enable minio.service
SHELL

Vagrantfile#L50-L60
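With the default variables at the top of the Vagrantfile, the resulting /etc/default/minio on every node should look roughly like this:

MINIO_VOLUMES="http://minio-{1...4}:9000/mnt/minio/data-{1...4}"
MINIO_OPTS="--console-address :9001"
MINIO_ROOT_USER="minioadmin"
MINIO_ROOT_PASSWORD="minioadmin"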

We set a couple more VM-related settings, but most importantly, the last line in this snippet uses the vagrant-hosts plugin to sync the /etc/hosts file across all VMs.

node.vm.provider "virtualbox" do |vb|
  vb.name   = NAME_PREFIX + i_node.to_s
  vb.cpus   = MINIO_CPU_COUNT
  vb.memory = MINIO_MEMORY_SIZE
end

node.vm.provision :hosts, :sync_hosts => true

Vagrantfile#L62-L68

By now, you must be wondering why we did not just bake all these shell commands in the Packer build process in the minio.sh script. Why did we instead fry these settings as part of the Vagrantfile provisioning process?

That is an excellent question. The reason is that we will want to modify the disk settings based on each use case – sometimes, you’d want more nodes in the MinIO cluster, and other times you might not want 4 disks per node. You don’t want to create a unique image for each drive configuration as you could end up with thousands of similar images, with the only difference being the drive configuration.

You also don’t want to use the default root username and password for MinIO, and we’ve built a process that allows you to modify that at provisioning time.

What if you want to upgrade the MinIO binary version but don’t want to build a new Packer image each time? It's as simple as adding a shell block with commands to download and install the MinIO binary, just as we did in the minio.sh script for Packer. Below is some sample pseudocode to get you started

node.vm.provision "shell", inline: <<-SHELL
  # Download MinIO Binary
  # Install MinIO Binary
SHELL
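
For illustration, here is a minimal sketch of what could go inside that SHELL block, mirroring the commands from minio.sh. The version string is a hypothetical placeholder, so pin whichever release you want to fry in:

  # Hypothetical target release, replace with the version you want
  UPGRADE_MINIO_VERSION="<release-version>"
  # Download and install the MinIO binary, same as minio.sh does at bake time
  wget https://dl.min.io/server/minio/release/linux-amd64/archive/minio_${UPGRADE_MINIO_VERSION}_amd64.deb -O minio.deb
  dpkg -i minio.deb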

This is where you, as a DevOps engineer, need to use your best judgment to find the right balance between baking and frying. You can either bake all the configurations during the Packer build process or have the flexibility to fry the configuration during the provisioning process. We wanted to leave some food for thought as you develop your own setup.

Launch MinIO cluster

By now, we have a very good understanding of the internals of how the cluster will be deployed. Now all that is left is deploying the cluster on Virtualbox using Vagrant.

Set the following environment variable, which is needed for the VMs to automagically detect our extra disks

$ export VAGRANT_EXPERIMENTAL=disks

Be sure you are in the same directory as the Vagrantfile and run the following command

$ vagrant status

Current machine states:

minio-1                   not created (virtualbox)
minio-2                   not created (virtualbox)
minio-3                   not created (virtualbox)
minio-4                   not created (virtualbox)

Your output should look like the above, with the states set to not created. It makes sense because we haven’t provisioned the nodes yet.

Finally the pièce de résistance! Provision the 4 nodes in our MinIO cluster

$ vagrant up
==> vagrant: You have requested to enabled the experimental flag with the following features:
==> vagrant:
==> vagrant: Features:  disks
==> vagrant:
==> vagrant: Please use with caution, as some of the features may not be fully
==> vagrant: functional yet.
Bringing machine 'minio-1' up with 'virtualbox' provider...
Bringing machine 'minio-2' up with 'virtualbox' provider...
Bringing machine 'minio-3' up with 'virtualbox' provider...
Bringing machine 'minio-4' up with 'virtualbox' provider...

Be sure you see this output ==> vagrant: Features:  disks as this verifies the environment variable VAGRANT_EXPERIMENTAL=disks has been set properly. As each VM comes up, its output will be prefixed with the hostname. If there are any issues during the provisioning process, look at this hostname prefix to see which VM the message was from.

==> minio-1: …
==> minio-2: …
==> minio-3: …
==> minio-4: …

Once the command is done executing, run vagrant status again to verify all the nodes are running

$ vagrant status

Current machine states:

minio-1                   running (virtualbox)
minio-2                   running (virtualbox)
minio-3                   running (virtualbox)
minio-4                   running (virtualbox)

All the nodes should be in the running state. This means the nodes are running, and if you don’t see any errors in the output earlier from the provisioning process, then all our shell commands in Vagrantfile ran successfully.

Last but not least, let's bring up the actual MinIO service. If you recall, we enabled the service but didn’t start it because we wanted all the nodes to come up first. Now we’ll bring all the services up at almost the same time using the vagrant ssh command, which will SSH into all 4 nodes via this bash loop.

$ for i in {1..4}; do vagrant ssh "minio-${i}" -c "sudo systemctl start minio"; done

Use the journalctl command to check the logs and verify that the MinIO service started properly on all 4 nodes

$ for i in {1..4}; do vagrant ssh "minio-${i}" -c "sudo journalctl -u minio -n 10 --no-pager"; done

You should see output similar to the example below from minio-4. The 16 Online status means all 16 of our drives across the 4 nodes are online.

Oct 17 09:34:47 minio-4 minio[1616]: Status:         16 Online, 0 Offline.
Oct 17 09:34:47 minio-4 minio[1616]: API: http://10.0.2.15:9000  http://192.168.60.14:9000  http://127.0.0.1:9000
Oct 17 09:34:47 minio-4 minio[1616]: Console: http://10.0.2.15:9001 http://192.168.60.14:9001 http://127.0.0.1:9001
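
As an optional extra check from your laptop, you can hit MinIO's health endpoint on any node over the host-only network (the address below assumes the default SUBNET_PREFIX, so minio-1 is 192.168.60.11):

$ curl -I http://192.168.60.11:9000/minio/health/live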

If you ever want to bring up your own custom MinIO cluster in the future, you now have all the necessary tools to do so.

Automate MinIO build

Technically you could stop here and go on your merry way. But we know you always want more MinIO, so we’ll take this one step further to show you how to automate this process, so you don’t have to manually run the Packer build commands every time.

Install Jenkins

Install and start Jenkins using the following commands

$ brew install jenkins-lts
$ brew services start jenkins-lts
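
With the stock Homebrew configuration, the Jenkins UI should now be reachable on port 8080 of your machine:

$ open http://localhost:8080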

Once Jenkins is started, it will ask you to set up a couple of credentials, which should be pretty straightforward. After the credentials are set up, ensure the Git plugin is installed using the following steps.

Configure MinIO build

Let’s create a build configuration to automate the Packer build of the MinIO image. Before you get started, recall that we mentioned earlier there are other ways to set values for Packer variables. Using environment variables, the values in variables.pkr.hcl can be overridden by setting a variable named PKR_VAR_<varname>. Instead of editing the file directly, we’ll use environment variables to override some of the variables in the next step.
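For example, the sensitive Vagrant Cloud values could be injected this way instead of living in variables.pkr.hcl (placeholder values shown; in Jenkins you would source these from its credentials store rather than hard-coding them):

export PKR_VAR_vagrant_cloud_token="<your-token>"
export PKR_VAR_vagrant_cloud_username="<your-username>"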

Follow the steps below to configure the build.

Under the “Build Steps” section, we added some commands. The text for that is below so you can copy and paste it into your configuration.

export PATH=$PATH:/usr/local/bin
export PKR_VAR_box_version="0.1.1"

cd ci-cd-build/packer/
packer inspect .
packer validate .
packer build .

Execute MinIO build

Once we have the job configured, we can execute the build. Follow the steps below.

Once the build is successful, head over to https://app.vagrantup.com to verify the image has, in fact, been uploaded.

Final Thoughts

In this blog post, we showed you how to take MinIO's S3-compatible object store in distributed mode all the way from building an image to provisioning a cluster on your laptop. This will help you and your fellow developers get up and running quickly with a production-grade MinIO cluster locally, without compromising on the full feature set. MinIO is S3-compatible, lightweight, and can be installed and deployed on almost any platform, such as bare metal, VMs, Kubernetes, and edge IoT, among others, in a cloud-native fashion.

Let’s briefly recap what we did:

  • We created a Packer template to create a MinIO image from scratch.
  • We used Vagrant as the provisioner so we could develop locally using the image.
  • We leveraged Jenkins to automate uploading the image to the Vagrant cloud for future builds.

We demonstrated that you must strike a balance between baking and frying. Each organization's requirements are different, so we didn’t want to draw a line in the sand to say where baking should stop and frying should start. You, as a DevOps engineer, will need to evaluate each application to determine the best course. We built MinIO to provide the flexibility and extensibility you need to run your own object storage.

If you implement this on your own or have added additional features to any of our processes, please let us know on Slack!
