AIStor on ROSA

AIStor is a pioneer in high-performance, interoperable, cloud-native object storage that is versatile and agile. It runs on a myriad of platforms, including Kubernetes, AWS, GCP, Azure, bare metal Linux, and a host of other environments.

Lately, there has been a trend in the industry to bring data “closer” to home: organizations now want to keep their data on servers they own, in their own datacenter or at a colocation provider. The primary reason is the out-of-control cost of the cloud, coupled with the current economic climate. For most applications, as long as the workload is well understood, it is possible to achieve the same level of scalability and performance on-prem as in the cloud, at a fraction of the cost.

But this brings up a conundrum: one of the benefits of the cloud is that the infrastructure is more or less managed by the cloud provider. Managed Kubernetes services such as EKS and GKE take care of upgrades, downgrades, adding and removing nodes, and other backend operations for you. On-prem Kubernetes, at the scale of perhaps dozens or hundreds of clusters, can add a good chunk of tech debt to your engineering and operations teams: you need to bootstrap your own Kubernetes infrastructure and take ownership of management, maintenance, and day-to-day operations. This can be quite a challenge. Wouldn’t it be great if something made our lives a little easier in the process?

This is where Red Hat OpenShift, and its managed offering ROSA (Red Hat OpenShift Service on AWS), is a game changer. It gives you a consistent way to run and manage your own Kubernetes clusters, whether as a managed service on AWS or on your own private cloud. You still need to keep the various components up to date on a regular basis, but the OpenShift platform, with its DevOps-centric yet developer-oriented approach to architecture, makes an excellent choice for on-prem Kubernetes clusters as well.

The AIStor Operator and Kubernetes plugin are certified for use with OpenShift, making it easy to incorporate AIStor into existing workflows. Our customers frequently run OpenShift in a multi-cloud configuration that leverages both on-premises and public cloud resources. Running AIStor on OpenShift enables enterprises to achieve cloud-native elasticity on their hardware or cloud instance of choice, balancing cost, capacity, and performance.
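Because AIStor is managed by an operator, it slots into the same Kubernetes tooling your teams already use. As a small illustration, the Go sketch below uses client-go to list the operator's pods and their status from an existing workflow; the namespace name "aistor-operator" is an assumption for this example, so substitute whatever namespace the operator was actually installed into on your cluster.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig the same way kubectl/oc does (~/.kube/config by default).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("creating Kubernetes client: %v", err)
	}

	// "aistor-operator" is a placeholder namespace for this sketch -- use the
	// namespace the AIStor Operator was installed into on your cluster.
	const ns = "aistor-operator"
	pods, err := clientset.CoreV1().Pods(ns).List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("listing pods in %s: %v", ns, err)
	}

	// Print each operator pod and its current phase (Running, Pending, ...).
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}

The same client configuration works against ROSA or an on-prem OpenShift cluster, which is what makes this kind of check easy to fold into existing CI or operations scripts.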

By running AIStor on OpenShift, you gain software-defined scalability and automation, with Kubernetes handling orchestration and AIStor providing the object storage. With storage part of the software-defined infrastructure, you can unify the deployment and management of AI/ML, analytics, and other modern data workloads. Instead of building multiple data silos and duplicating data between them, these applications can share the same on-premises AIStor deployment, which aids in the security and resiliency of data. Finally, this approach avoids cloud lock-in. You want to be able to move your data to whatever environment best fits your workloads and business objectives. By deploying AIStor on top of OpenShift in multiple locations and clouds, you can move data seamlessly using site-to-site replication. This ensures you always have access to the best tools for the job, whether that be the public cloud, a private cloud, or a colo.
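To make the shared-deployment idea concrete, here is a minimal Go sketch of an application writing to an AIStor tenant over its S3-compatible API using the minio-go SDK. The endpoint, credentials, bucket, and object key are placeholders for illustration; any S3-compatible SDK pointed at the service or route exposed by your tenant would work the same way.

package main

import (
	"context"
	"log"
	"strings"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// Placeholder endpoint and credentials -- substitute the route/service
	// exposed by your AIStor tenant and its access keys.
	endpoint := "aistor.example.com"
	accessKey := "YOUR-ACCESS-KEY"
	secretKey := "YOUR-SECRET-KEY"

	// AIStor speaks the S3 API, so a standard S3-compatible client is all an
	// application needs to share the deployment with other workloads.
	client, err := minio.New(endpoint, &minio.Options{
		Creds:  credentials.NewStaticV4(accessKey, secretKey, ""),
		Secure: true,
	})
	if err != nil {
		log.Fatalf("creating client: %v", err)
	}

	ctx := context.Background()
	bucket := "analytics" // example bucket shared by multiple workloads

	// Create the bucket if it does not already exist.
	exists, err := client.BucketExists(ctx, bucket)
	if err != nil {
		log.Fatalf("checking bucket: %v", err)
	}
	if !exists {
		if err := client.MakeBucket(ctx, bucket, minio.MakeBucketOptions{}); err != nil {
			log.Fatalf("creating bucket: %v", err)
		}
	}

	// Write a small object, as an AI/ML or analytics job might do.
	body := strings.NewReader(`{"run": 1, "status": "ok"}`)
	if _, err := client.PutObject(ctx, bucket, "runs/run-1.json", body, int64(body.Len()),
		minio.PutObjectOptions{ContentType: "application/json"}); err != nil {
		log.Fatalf("uploading object: %v", err)
	}
	log.Println("object written to shared AIStor bucket")
}

An analytics job, a model-training pipeline, and a backup tool can all point at the same endpoint and bucket layout instead of each maintaining its own copy of the data, and site-to-site replication keeps that layout consistent across the OpenShift clusters you choose to run.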

Flexibility to deploy AIStor to Red Hat OpenShift

The AIStor Operator comes with a Swiss Army knife of features to get you started quickly in your local environment, though it does have some caveats worth keeping in mind.

Our recommendation is to use AIStor with OpenShift so that you can quickly get started with your on-prem Kubernetes cluster. Don’t take our word for it, though: build it yourself, and join our Slack channel here so we can help you along the way.