When we started MinIO, we set out to define a new standard for object storage. We built software-defined, S3-API-compatible object storage that is the fastest and most scalable solution available outside of S3 itself. But we didn’t stop there. We also crafted an entirely different support experience: SUBNET, which combines a commercial license with our legendary direct-to-engineer support program.
We don’t play games when it comes to supporting your mission-critical object storage, so we’ll never ask you to first raise a level 1 ticket just to gather basic information and then wait while it escalates to a “support engineer.” With a MinIO SUBNET subscription, you cut out the middle communication layers and reach the same engineers who work on the codebase directly. SUBNET is THE place to interact with us, and many of our customers have told us they love using the browser-based support portal to message engineers and share files. SUBNET makes communication seamless with a Slack-style chat experience: you type and send messages as you think through the problem, while engineers guide you through fixes in the back-and-forth. The process is more than resolving an issue; it’s an ongoing relationship with MinIO engineers, and these are some smart people who enjoy solving problems.
It’s not uncommon for SUBNET users to reach out to us with a request unique to their environment. This is where the collective problem-solving starts in the SUBNET portal. Several Fortune 100 customers asked us how to back up very large objects - databases from 10+ TB up to several PB - faster and more efficiently. The problem boiled down to this: what can we do to overcome the limitations of database vendor tools, which push a single extremely large file to a MinIO bucket using a single stream? While MinIO does not reinvent the wheel, we do strive to make the wheel more efficient. In that spirit, we wrote Jumbo to back up objects of any size by creating parallel streams that upload segments of large objects. Jumbo supports any type of massive upload, reduced backup time by up to 15x in our testing, and is being put to work backing up MariaDB, PostgreSQL, MongoDB and just about any other database.
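Jumbo’s source isn’t shown here, but the core idea - splitting one huge stream into fixed-size segments and uploading them over parallel connections, as in an S3 multipart upload - can be sketched in a few lines of Python. This is a minimal illustration, not Jumbo’s actual implementation; `upload_part` is a hypothetical stand-in for a real S3 `UploadPart` call.

```python
import concurrent.futures
import io

PART_SIZE = 5 * 1024 * 1024  # 5 MiB, the S3 minimum part size


def upload_part(part_number: int, data: bytes) -> dict:
    """Hypothetical stand-in for an S3 UploadPart call.

    A real implementation would PUT `data` to the object store and
    return the ETag the server sends back; here we just simulate it.
    """
    return {"PartNumber": part_number, "ETag": f"etag-{part_number}", "Size": len(data)}


def parallel_upload(stream: io.BufferedIOBase, workers: int = 8) -> list:
    """Read a single large stream in fixed-size parts and upload the
    parts concurrently, preserving their order for the final assembly."""
    futures = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        part_number = 1
        while True:
            chunk = stream.read(PART_SIZE)
            if not chunk:
                break
            futures.append(pool.submit(upload_part, part_number, chunk))
            part_number += 1
    # Completing a multipart upload requires the parts listed in order.
    return sorted((f.result() for f in futures), key=lambda p: p["PartNumber"])


# Simulate a 15 MiB + 100 byte object: three full parts plus a tail.
parts = parallel_upload(io.BytesIO(b"x" * (3 * PART_SIZE + 100)))
print(len(parts))  # 4
```

Because the segments travel over independent connections, the upload is no longer bottlenecked by a single stream’s throughput, which is where the dramatic speedups come from.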
Our engineers are driven by a need to solve complex technical problems and make their lives easier, along with the lives of everyone around them, from non-engineering co-workers to customers and open-source community users. They build tools to solve problems, and one in particular stands out because of the tangible impact it has had on us: it makes datacenter resources much easier to use, so engineers can keep up high-velocity coding and help SUBNET users faster.
There are automated support tools built into MinIO, such as detailed Health Check and Performance reports and Call Home, so engineers can get up to speed on a customer’s environment and quickly respond to a SUBNET request. To help reproduce issues and run the same applications as customers, we built a sandboxed on-prem environment where everyone in the company could deploy anything they wanted. We thought that total freedom would enable us to build anything and explore any new open-source application in order to help customers, but user adoption was very low. Total freedom is nice, but we needed something simpler and more elegant, with a basic set of features that gets the average user up and running quickly, without manual steps and time-consuming tinkering.
With that in mind, our engineers took it upon themselves to build the VM Broker. We used Linux Containers (LXC) as the core backend to launch the VMs, and designed a console much like the MinIO Console that everyone knows and loves. The result is an easy yet powerful GUI for deploying VMs for testing and support that follows the same principles - high performance, cloud native, Kubernetes native, and simple - as our award-winning object storage.
During the course of resolving customer issues on SUBNET, our engineers inevitably run into problems that they need to understand better. It could be something like “The new KMS update is taking up a significant amount of memory.” To diagnose an issue like this, they have to create an instance where they can install MinIO and try to reproduce it.
Now let’s look at some of the features we’ve implemented in VM Broker to debug issues like this.
We’ll start with the basics and show you how to create an instance. Once logged in, you’ll see a portal with various options, such as:
- Creating an Instance
- Creating a Load Balancer and Proxy
- Adding Images such as Ubuntu, CentOS, etc. to boot VMs from
- Adding Hypervisor nodes for more capacity
In this case we’ll create a simple instance. Click the Create Instances + button at the top right of the page.
Once you click the button, you are presented with several options for launching your VM. Let’s go through each one.
- Name: The name the VM will launch with; it also sets the hostname.
- Image: Choose between several uploaded images, in this case, we’ll choose Ubuntu.
- Count: Launches multiple identical instances with the same configuration; VM Broker ensures any naming collisions are resolved automagically.
- CPU, Memory and Disk: Note that VM Broker automatically places the instance onto the node with the most resources available (CPU, Disk, Memory).
- Public Key: The SSH public key used to log in to the instance once it has launched.
- Node: You can also select the specific hypervisor you want to launch the instances on. Generally, this is left blank so we let VM Broker do its magic and deploy the VM on a node with enough resources.
Once the instance is launched, wait a few minutes for the OS to install and for it to come fully online. But how do you log in to the instance? Again, our engineers thought with a do-it-yourself mindset, even for the non-engineering folks: the portal shows you the exact command to log in.
Just enter this in your favorite terminal and, voila, you can log in to your VM instance without the need for an SSH tunnel or a bastion host.
ssh -p 30005 firstname.lastname@example.org
Let’s go through some of the other basic functionality that we might need once in a while. Some of the most common tasks are:
- Removing an instance
- Starting and stopping an instance
- Adding and removing SSH public keys
To delete an instance, log in and navigate to Instances.
Click the Columns icon to add or remove columns that provide more information about the instance, such as the number of CPUs, memory, etc.
Select the instance and click the Delete icon. You will be prompted to confirm the deletion, and the instance will be deleted.
To stop and start the instance, click on the instance record. On the left, you will see several tabs:
We went through some of these previously, but let’s go ahead and click on the Edit tab. There you will find a toggle to start or stop the instance; we can use the switch to stop it.
To add or remove public keys for SSH access, click on the Access tab.
The Audit tab lets us see all the actions taken on the instance over the course of its lifetime. This is especially useful for tracking changes such as SSH public keys being added and removed, or the instance being stopped and started, among other important things.
As a company at the forefront of data storage and backups, we practice what we preach. Even for internal tools such as VM Broker, we highly recommend that the folks launching instances back them up, whether to guard against a hypervisor failure or simply as a best practice. I recall once accidentally deleting the /usr directory by issuing the wrong rm -rf command. Needless to say, I had a bad day. I restored it from another VM with a similar configuration and hoped everything would be back to normal, but it wasn’t, and I unfortunately had to start over. So having a good set of backups is critical. Below is a screenshot of the backup tab.
I learned very early in my DevOps career that a backup is only as good as its ability to restore. You can have the best backup software and process in the world, but if you cannot restore data to its original state quickly and confidently, then you’ve put all of your eggs in one fragile basket and the entire effort was wasted. You must verify that the backups you take can actually be restored. Below is a screenshot of the restore process.
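One simple, generic way to verify a restore is to compare checksums of the source and the restored copy. The sketch below is an illustration of that principle, not VM Broker’s restore mechanism; streaming the file through SHA-256 keeps memory usage flat even for huge backups.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 in 1 MiB blocks so even very
    large backup files never need to fit in memory at once."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()


def verify_restore(original: Path, restored: Path) -> bool:
    """A restore is only good if it is byte-identical to the source."""
    return sha256_of(original) == sha256_of(restored)


# Quick demonstration with throwaway files:
import tempfile
with tempfile.TemporaryDirectory() as d:
    src = Path(d) / "db.backup"
    dst = Path(d) / "db.restored"
    src.write_bytes(b"precious data" * 1000)
    dst.write_bytes(src.read_bytes())
    print(verify_restore(src, dst))  # True
```

Running a check like this after every restore drill turns “we think the backups work” into “we know the backups work.”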
Building Tools and Solving Problems
While VM Broker is currently an internal tool, we shared it with you today to give you an idea of how we attack problems at MinIO and a behind-the-scenes look at SUBNET. Before we built our own solution, we did look into alternatives such as VMware. For the majority of our use cases we needed something simple yet feature-rich, and VMware’s enterprise feature set made it cost-prohibitive. So after carefully weighing build vs. buy, we decided to build our own - plus, we think it’s a cool tool.
Everyone at MinIO is here because we enjoy what we do. We enjoy attacking real-world problems from a different angle than the status quo and we really enjoy it when the answer is cloud-native and elegant. MinIO engineers lead the way and we operate as a team. With this approach, we’ve developed tools such as Jumbo for large object backups and VM Broker as an internal tool that simplifies building and deploying instances needed for support and testing.
What we shared about VM Broker in this blog post is just the tip of the iceberg. There’s more to VM Broker, and there’s more to SUBNET. We’ve written other code that solves problems too, like Mint, an S3 API compatibility checker, and DirectPV, which simplifies the use of persistent volumes.
If you would like to know more about any of these tools or SUBNET give us a ping on Slack and we’ll get you going!