
OpenEverest in the Real World: Running Databases on Kubernetes Without Cloud Lock-In

Davide Maggi
A cartoon monkey orchestra conductor leads three database musicians: MongoDB playing saxophone, PostgreSQL playing violin, and MySQL playing piano, each reading sheet music labeled orchestration, backup, and restore.
Three databases, one conductor, zero tolerance for out-of-tune backups. Credit: MonkeyProof internal artwork

TL;DR

Running databases on Kubernetes is no longer the scary part. Running them well in production is.

OpenEverest gives platform teams a control plane to provision and operate MySQL, PostgreSQL, and MongoDB on Kubernetes using operators, while keeping the deployment portable across clouds and on-prem environments. You still own your infrastructure choices, but you avoid rebuilding every Day 2 workflow from scratch.

This article walks through a realistic setup and operations flow, including install, provisioning, backups, PITR, monitoring, RBAC, and failure-mode thinking.



Why This Problem Still Hurts in 2026

Most teams are already comfortable shipping stateless services on Kubernetes. Databases are where the smooth story gets messy.

Day 1 is usually manageable: install an operator, provision a cluster, point an application at it.

Day 2 is where the real bill arrives: backup schedules, point-in-time recovery, version upgrades, monitoring, scaling, and access control, each multiplied by every engine and every cluster you run.

Cloud DBaaS can hide a lot of this. It can also hardwire you into one provider’s APIs, pricing model, and operational constraints. OpenEverest sits in the middle: DBaaS-like workflows, but on infrastructure you control.


What OpenEverest Actually Adds

From the official docs, OpenEverest is positioned as an open source platform for automated database provisioning and management on Kubernetes.

In practical terms, it gives you:

  - a single UI and API for provisioning MySQL, PostgreSQL, and MongoDB clusters
  - operator-driven Day 2 workflows: scheduled backups, PITR, scaling, and upgrades
  - monitoring integration (PMM) and RBAC layered on top of the operators

At the time of writing, OpenEverest 1.14.0 (released March 10, 2026) had migrated its charts to openeverest/helm-charts and updated operator support.


Real-Life Example: A Platform Team Under Pressure

Let’s use a realistic (fictional) case.

Your company, BananaRama, runs:

  - a checkout service backed by PostgreSQL
  - additional internal services on MySQL and MongoDB
  - workloads split across AWS and on-prem Kubernetes clusters

Data requirements:

  - nightly backups to off-cluster storage
  - point-in-time recovery for the checkout database
  - scoped access so product teams cannot touch each other's clusters

The team picks OpenEverest to standardize DB operations across clusters in AWS and on-prem environments.

Music-sheet style flow diagram showing BananaRama using OpenEverest to provision PostgreSQL, schedule backups, enable PITR, and recover after a bad deployment.

Implementation Steps

Step 1: Install OpenEverest

You can install via Helm directly (or use everestctl, which uses Helm under the hood in recent versions).

helm repo add openeverest https://openeverest.github.io/helm-charts/
helm repo update

helm install everest openeverest/openeverest \
  --namespace everest-system \
  --create-namespace

Then expose the service for UI/API access (for quick lab access, port-forward is fine):

kubectl port-forward svc/everest 8080:8080 -n everest-system

At this point, your control plane exists, but you still need namespaces, engines, and policies to make it production-safe.
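Before moving on, it is worth confirming the control plane actually came up. A quick sketch; the deployment name `everest` is an assumption based on the Helm release name above, so adjust to what your chart version creates:

```shell
# Check that the OpenEverest control-plane pods are running
kubectl get pods -n everest-system

# Wait for the main deployment to finish rolling out
# (deployment name is assumed from the Helm release name)
kubectl rollout status deployment/everest -n everest-system --timeout=120s
```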


Step 2: Prepare Database Namespaces and Operator Scope

One useful operational pattern is to separate environments early (dev, staging, prod) and keep permissions scoped.

Example:

everestctl namespaces add prod \
  --operator.postgresql=true \
  --operator.mongodb=false \
  --operator.xtradb-cluster=false

That gives your prod namespace only the operator stack you actually need there.

Small detail, big impact: this reduces “everything everywhere” drift and lowers blast radius.
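The same pattern extends to the other environments. For example, a staging namespace that also needs MongoDB might look like this (the flags mirror the documented prod command; verify the exact operator flags against your everestctl version):

```shell
# Staging gets PostgreSQL and MongoDB, still no XtraDB Cluster
everestctl namespaces add staging \
  --operator.postgresql=true \
  --operator.mongodb=true \
  --operator.xtradb-cluster=false
```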


Step 3: Provision a Production PostgreSQL Cluster with Backups + PITR

OpenEverest can be driven from UI, API, or CRDs. For repeatability, here’s a CRD-based example inspired by the official DatabaseCluster docs:

apiVersion: everest.percona.com/v1alpha1
kind: DatabaseCluster
metadata:
  name: checkout-pg-prod
  labels:
    clusterName: checkout-pg-prod
spec:
  backup:
    pitr:
      enabled: true
      backupStorageName: s3-backups-prod
    schedules:
      - name: nightly-backup
        enabled: true
        backupStorageName: s3-backups-prod
        schedule: "0 2 * * *"
  engine:
    type: postgresql
    version: "17.4"
    replicas: 3
    resources:
      cpu: "4"
      memory: 8G
    storage:
      class: gp3
      size: 200Gi
    userSecretsName: everest-secrets-checkout-pg-prod
  proxy:
    type: pgbouncer
    replicas: 2
    expose:
      type: internal
  monitoring:
    resources: {}

Apply:

kubectl apply -f checkout-pg-prod.yaml -n prod

Now you have:

  - a 3-replica PostgreSQL cluster fronted by PgBouncer
  - nightly scheduled backups to s3-backups-prod
  - PITR enabled against the same backup storage
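It pays to confirm the cluster has reconciled before pointing traffic at it. A hedged sketch using standard kubectl verbs against the CRD:

```shell
# List DatabaseCluster objects and their current state
kubectl get databaseclusters -n prod

# Inspect details, including backup schedules and PITR configuration
kubectl describe databasecluster checkout-pg-prod -n prod
```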


Step 4: Add Monitoring and API Automation

OpenEverest integrates with PMM (currently PMM v2.x in the docs), so database teams can monitor slow queries and cluster health without building another custom monitoring sidecar maze.

For automation, grab a JWT from the API and drive workflows from your internal platform tooling:

curl --location -s 'http://127.0.0.1:8080/v1/session' \
  --header 'Content-Type: application/json' \
  --data '{"username":"admin","password":"<YOUR_PASSWORD>"}' | jq -r .token

This is where teams usually connect OpenEverest to:

  - internal developer portals and self-service catalogs
  - CI/CD pipelines that provision ephemeral databases for testing
  - the automation that already manages the rest of the platform


Step 5: Enforce RBAC Before Everyone Is “Temporarily Admin”

By default, RBAC is disabled. In real environments, enable it early.

Minimal activation example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: everest-rbac
  namespace: everest-system
data:
  enabled: "true"
  policy.csv: |
    g, platform-admin, role:admin
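From there, policies can be scoped per namespace and resource. The exact policy grammar is version-dependent; the sketch below assumes a Casbin-style `p, subject, resource, action, object` format with a hypothetical `checkout-dba` role, so verify it against the RBAC docs for your release:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: everest-rbac
  namespace: everest-system
data:
  enabled: "true"
  policy.csv: |
    # Platform admins keep full access
    g, platform-admin, role:admin
    # Hypothetical role: checkout team manages only prod database clusters
    p, role:checkout-dba, database-clusters, *, prod/*
    g, checkout-team, role:checkout-dba
```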

Then evolve policies by namespace and resource type so product teams can run their DB lifecycle without owning platform-wide settings.


How This Helps on Day 2

Back to BananaRama.

One Friday night deploy introduces a query regression in checkout. CPU spikes. Latency climbs. Then a monkey-error update corrupts part of a pricing table.

Without a platform: pager storm, ad hoc scripts, and five Slack channels debating restore strategy.

With OpenEverest patterns already in place:

  1. PMM dashboards reveal the query bottleneck fast.
  2. Team scales resources horizontally and vertically from the cluster management workflow.
  3. PITR restores to the correct timestamp after the bad change.
  4. RBAC keeps incident actions limited to the right namespace and team scope.
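Step 3 above maps to a restore object in practice. A hedged sketch modeled on the DatabaseClusterRestore CRD; field names and the timestamp format may differ between versions, so check the restore documentation before relying on it:

```yaml
apiVersion: everest.percona.com/v1alpha1
kind: DatabaseClusterRestore
metadata:
  name: checkout-pg-prod-restore
  namespace: prod
spec:
  dbClusterName: checkout-pg-prod
  dataSource:
    pitr:
      # Restore to just before the bad pricing-table update landed
      date: "2026-03-13T21:55:00Z"
```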

You do not magically eliminate incidents. You make incident response more deterministic.


Tradeoffs and Limitations You Should Know

OpenEverest is strong, but not magic. A few constraints from the docs are worth planning around:

  - monitoring integration currently targets PMM v2.x, so plan your observability stack around that
  - RBAC ships disabled, so least-privilege access is opt-in work you must do early
  - each namespace runs only the operators you enable, so engine coverage has to be planned per environment

The good news is these are documented and predictable. The bad news is you still need operational discipline.


Practical Takeaways

  - Treat database operations as a platform product: one control plane, consistent workflows across engines and clusters.
  - Scope operators per namespace from the start to limit drift and blast radius.
  - Enable RBAC before production, not after the first incident.
  - A backup you have never restored is a hypothesis; rehearse PITR before you need it.

Conclusion

If your platform strategy is “Kubernetes everywhere,” then database operations need the same level of consistency as app deployments.

OpenEverest is interesting because it closes the gap between operator-level raw power and DBaaS-level usability. You keep portability, gain automation, and avoid building every Day 2 workflow from zero.

That is not just convenience. It is leverage.



Practical Next Steps

  1. Install OpenEverest in a non-production cluster and provision one PostgreSQL cluster via UI and one via CRD.
  2. Run a restore game day: simulate bad data writes and validate PITR workflow.
  3. Enable RBAC and remove broad admin access from day-to-day developer accounts.
  4. Define backup retention + storage quota policy per namespace before production rollout.