OpenSearchCon 2023 Talk

We wrote a blog post about the searchable snapshots feature in OpenSearch back in June because it's a cool project designed to fill the need for a fast search backend that is truly open source. At the time, we focused on setting up OpenSearch with a MinIO backend and left the rest for a future post.

The time has come to revisit OpenSearch and MinIO. While we were looking through the OpenSearch docs, the CFP for OpenSearchCon 2023 in Seattle caught our eye. We like OpenSearch because, much like MinIO, it has a distributed design that stores your data and processes requests in parallel, while MinIO itself is very simple to get up and running with just a single small binary. Not only can you build a distributed OpenSearch cluster, but you can also subdivide the responsibilities of the various nodes as the cluster grows: nodes with large disks to store data, nodes with a lot of RAM for indexing, and nodes with a lot of CPU but less disk to manage the state of the cluster.

With so many similarities, we thought "why not" and sent a proposal to speak at the conference about the rest of the combined feature set of OpenSearch and MinIO: Backup and Restore Snapshot, tiering data via ISM (Index State Management) and, of course, Searchable Snapshots. Apparently, the OpenSearch folks love MinIO as much as we love OpenSearch; the talk got accepted and we were invited to present it in Seattle. It was very cold in the Northwest, but we were very excited nonetheless.

So what did we talk about? Here is a brief synopsis of the talk at OpenSearchCon 2023.

How to Create, Restore, Tier, and Search Snapshot Indices Stored in Object Storage

The talk covered the various ways MinIO can be used to store OpenSearch data over the course of the data storage lifecycle. I will share the deck and video in a future blog post once the conference folks release them.

I first showed how MinIO can be installed via Docker, and also touched on the other ways it can be deployed, such as Kubernetes and other on-prem methods. The command below maps the MinIO S3 API to port 20091 and the console to port 20092:

docker run -d \
  -p 20091:9000 \
  -p 20092:9001 \
  -v /home/aj/minio/disk-1:/mnt/disk1 \
  -v /home/aj/minio/disk-2:/mnt/disk2 \
  -v /home/aj/minio/disk-3:/mnt/disk3 \
  -v /home/aj/minio/disk-4:/mnt/disk4 \
  --name minio \
  --hostname minio \
  quay.io/minio/minio server /mnt/disk{1...4}/minio --console-address ":9001"
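Before wiring up OpenSearch, a quick way to confirm the server is up is MinIO's liveness endpoint; the port here matches the S3 API mapping above:

curl -I http://localhost:20091/minio/health/live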

Once the MinIO backend was up and running, I logged in and created a bucket called testbucket123.
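I did this through the console, but the same thing can be done from the command line with the MinIO client. This is just a sketch: the alias name local is arbitrary, and the port matches the API mapping in the Docker command above.

mc alias set local http://localhost:20091 minioadmin minioadmin
mc mb local/testbucket123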

Next, I set up the required credentials in OpenSearch to access MinIO. First, I added the access and secret keys to the OpenSearch keystore.

echo minioadmin | ./bin/opensearch-keystore add --stdin s3.client.default.access_key
echo minioadmin | ./bin/opensearch-keystore add --stdin s3.client.default.secret_key
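These s3.client settings belong to the repository-s3 plugin, so if it isn't already present it needs to be installed (followed by a node restart) before the keystore entries take effect:

./bin/opensearch-plugin install repository-s3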

In opensearch.yml, update the following settings:

s3.client.default.endpoint: localhost:20091
s3.client.default.protocol: http

In production it's recommended to use HTTPS, but for this demo I kept it simple and used HTTP.
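One MinIO-specific detail worth mentioning: because we address MinIO by host and port rather than with AWS-style bucket subdomains, the S3 client should use path-style access. If repository registration fails with bucket-resolution errors, this additional opensearch.yml client setting is the usual fix:

s3.client.default.path_style_access: true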

Then I made an API call to register a snapshot repository for the bucket I created earlier, which uses the credentials I set up to access the MinIO instance.

PUT _snapshot/object-storage-repository
{
  "type": "s3",
  "settings": {
    "bucket": "testbucket123",
    "base_path": "openseasrch/snapshot"
  }
}

Yup, there was a typo in the base path but I went with it.
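Typo aside, it's worth confirming that OpenSearch can actually write to the bucket before relying on the repository; the verify API performs a quick check from each node:

POST _snapshot/object-storage-repository/_verify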

Backup, Restore and Searchable Snapshots are pretty straightforward; the most interesting bits are the ISM, or Index State Management, policies. If you recall, in our Elasticsearch Frozen Tier blog post I talked about Index Lifecycle Management, or ILM, and ISM is very similar to that, but a lot easier to configure.
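Since I didn't spend much time on them in the talk, here is roughly what those straightforward parts look like against the repository registered above; the snapshot name demo-snapshot and index name my-index are just placeholder examples:

PUT _snapshot/object-storage-repository/demo-snapshot
{
  "indices": "my-index"
}

POST _snapshot/object-storage-repository/demo-snapshot/_restore
{
  "indices": "my-index",
  "rename_pattern": "(.+)",
  "rename_replacement": "restored_$1"
}

POST _snapshot/object-storage-repository/demo-snapshot/_restore
{
  "indices": "my-index",
  "storage_type": "remote_snapshot"
}

The first call takes the snapshot, the second restores it as a regular local index under a new name, and the third instead mounts it as a searchable snapshot that is read directly from MinIO (OpenSearch requires at least one node with the search role for this).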

I went through a simple use case, one of the most common ways an ISM policy is used. The following is the workflow I had in mind:

  • Roll over the data from the Hot tier to the Warm tier after 14 days.
  • Make the Warm tier read-only and move to the ObjectStorage tier after 30 days.
  • Take a snapshot in the ObjectStorage tier and move to the Delete tier after 60 days.
  • From the Delete tier, delete the index.

The policy below rolls the data over from the Hot tier to the Warm tier after 14 days. Basically, any new index that gets created goes into the Hot tier first. Once it gets rolled over and a new index is created, the old one moves to the Warm tier.

{
  "name": "hot",
  "actions": [
    {
      "rollover": {
        "min_index_age": "14d"
      }
    }
  ],
  "transitions": [
    {
      "state_name": "warm"
    }
  ]
},

We made the Warm tier read-only so no additional writes happen. Later, after 30 days, it will be moved to the `objectstorage` tier where MinIO comes into play.

{
  "name": "warm",
  "actions": [
    {
      "read_only": {}
    }
  ],
  "transitions": [
    {
      "state_name": "objectstorage",
      "conditions": {
        "min_index_age": "30d"
      }
    }
  ]
},

The actual snapshot is taken in the objectstorage state. The snapshot action references the MinIO repository I configured earlier, along with a name for the snapshots it creates; ism-snapshot below is just a placeholder name. Once a snapshot is taken, the index moves from the ObjectStorage tier to the Delete tier after 60 days.

{
  "name": "objectstorage",
  "actions": [
    {
      "snapshot": {
        "repository": "object-storage-repository",
        "snapshot": "ism-snapshot"
      }
    }
  ],
  "transitions": [
    {
      "state_name": "delete",
      "conditions": {
        "min_index_age": "60d"
      }
    }
  ]
},

Finally, once it's moved to the Delete tier, the index will be deleted to make room for future indices.

{
  "name": "delete",
  "actions": [
    {
      "delete": {}
    }
  ]
}
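For completeness, here is how those four state fragments fit together in the full ISM policy body. This is a sketch: the policy ID and index pattern are example values, and the ism_template block automatically applies the policy to newly created indices that match the pattern.

PUT _plugins/_ism/policies/hot-warm-objectstorage-delete
{
  "policy": {
    "description": "Hot to warm to object storage to delete",
    "default_state": "hot",
    "states": [
      ...the four states shown above...
    ],
    "ism_template": {
      "index_patterns": ["log-*"],
      "priority": 100
    }
  }
}

One thing to keep in mind: the rollover action in the hot state also needs a rollover alias configured on the index, via the plugins.index_state_management.rollover_alias index setting, or the action will fail.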

Ideally, I would have demonstrated all of this at the conference, but I quickly ruled that out because the Wi-Fi connection was so…very…slow. I tried to do a live demo, but it was fruitless. It's a good reminder to anyone presenting at a conference to never expect demo-class performance from conference Wi-Fi. Always test the Wi-Fi, during peak usage, before doing a live demo, and prepare a backup like a recorded demo or deep technical content in the slides themselves.

Talks, Tracks, Going forward…

There were many interesting talks in the Analytics, Observability and Security tracks, which cover my favorite topics when it comes to monitoring and metrics. I was surprised that there were not many talks on how to actually operate an OpenSearch cluster and its components. The majority focused on higher-level topics, such as analytics, which seems to be how most of the community uses the project. Talks about the infrastructure topics I love, such as how to maintain OpenSearch infrastructure, how to set it up in production, or how and when to back it up, were few and far between. Mine was one of the few that discussed how to operate and maintain OpenSearch while meeting DR, SOC and PCI DSS compliance requirements, and it also described a methodology for long-term tiering of data. You can see the full list of the talks and tracks here.

If you have any questions on OpenSearch and how to configure it be sure to reach out to us on Slack!
