
A secrets manager that requires manual intervention every time a pod restarts is not a secrets manager; it is a liability. Every node drain, every OCP upgrade, every unexpected crash becomes a human bottleneck at the worst possible time. This guide walks you through deploying OpenBao on OpenShift with a three-node HA Raft cluster, end-to-end TLS using cert-manager, and static key auto-unseal: no AWS KMS, no GCP Cloud KMS, no Azure Key Vault, and no second OpenBao instance required.
Deploy OpenBao on OpenShift with HA Raft, TLS, and Static Key Auto-Unseal
What Is OpenBao and Why Does It Exist?
OpenBao is an open-source secrets management platform and a community fork of HashiCorp Vault. In August 2023, HashiCorp relicensed Vault from the Mozilla Public License 2.0 (MPL 2.0) to the Business Source License 1.1 (BSL 1.1). Because the BSL 1.1 restricts competitive use, organisations offering managed services or building products on top of Vault faced new legal exposure, and the broader open-source community lost the freedom to fork and commercialise. In response, the Linux Foundation announced the OpenBao project in December 2023. OpenBao is released under MPL 2.0 and is governed by an independent Technical Steering Committee.
If you are currently running HashiCorp Vault on a self-hosted Kubernetes or OpenShift cluster for internal use, OpenBao is a drop-in replacement. The CLI binary is bao instead of vault, and the API paths are identical. Everything else, including authentication methods, secrets engines, audit backends, and Raft storage, works the same way.
OpenBao Core Concepts
It’s important to understand the key components of OpenBao. Knowing how each part works will make troubleshooting easier and help you maintain the system effectively.
The Barrier
OpenBao uses a security barrier for all requests made to the backend. The security barrier automatically encrypts all data leaving OpenBao using a 256-bit AES cipher in Galois Counter Mode (GCM) with 96-bit nonces. Think of the barrier as a one-way checkpoint: everything that leaves OpenBao going to storage is encrypted, and everything coming back in is verified and decrypted. The storage backend, whether Raft, PostgreSQL, or a file, is considered untrusted. Even if an attacker gets direct access to the Raft data directory, they get encrypted blobs that are useless without the encryption key.
The Encryption Key Chain
Most OpenBao data is encrypted using the encryption key in the keyring; the keyring is encrypted by the root key and the root key is encrypted by the unseal key.
Written out as a chain:
Unseal Key
└── decrypts > Root Key
└── decrypts > Keyring (encryption key)
└── decrypts > All stored secrets
This three-layer design matters because no single key does everything. The unseal key’s only job is protecting the root key. The root key’s only job is protecting the keyring. The keyring does the actual work of encrypting your secrets. Rotating any layer does not expose the layers below it.
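The layering can be illustrated with a small openssl sketch. This is NOT OpenBao's real on-disk format (OpenBao uses a binary keyring and AES-256-GCM); it only demonstrates the decryption order: holding just the top-level key, you walk down the chain one layer at a time.

```shell
# Toy illustration of the three-layer chain (not OpenBao's actual format)
unseal_key=$(openssl rand -hex 32)    # layer 1: protects only the root key
root_key=$(openssl rand -hex 32)      # layer 2: protects only the keyring
keyring_key=$(openssl rand -hex 32)   # layer 3: encrypts the actual secrets

enc() { openssl enc -aes-256-ctr -pbkdf2 -pass pass:"$1" -base64 -A; }
dec() { openssl enc -d -aes-256-ctr -pbkdf2 -pass pass:"$1" -base64 -A; }

# Each layer encrypts the one below it
root_enc=$(printf '%s' "$root_key" | enc "$unseal_key")
keyring_enc=$(printf '%s' "$keyring_key" | enc "$root_key")
secret_enc=$(printf '%s' "my-secret" | enc "$keyring_key")

# "Unsealing" = walking the chain back down with only the unseal key in hand
r=$(printf '%s' "$root_enc" | dec "$unseal_key")
k=$(printf '%s' "$keyring_enc" | dec "$r")
plaintext=$(printf '%s' "$secret_enc" | dec "$k")
echo "$plaintext"   # my-secret
```

Note that rotating the keyring key would require re-encrypting only the secrets, and rekeying the unseal key would require re-encrypting only the root key, which is exactly why the layers exist.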
With static key auto-unseal, your AES-256 OCP Secret takes the role of the unseal key. OpenBao reads it at startup to decrypt the root key; that is the entire unseal operation.
Sealed vs Unsealed
When OpenBao starts, it is sealed: the encryption key is not in memory, no requests can be served, no authentication works, no secrets can be read or written. The only operation possible is unsealing itself or checking seal status.
Once unsealed, OpenBao loads the keyring into memory and becomes fully operational. It stays unsealed until a pod restart, an explicit bao operator seal command, or a storage-layer failure.
This is why auto-unseal matters operationally. Without it, every pod restart, planned or unplanned, requires a human to provide unseal key shares before the pod becomes Ready. In an HA cluster, each pod must be individually unsealed. With static key auto-unseal, the pod reads the key from the mounted OCP Secret and unseals within seconds of starting, with no human involvement.
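You can check the sealed/unsealed state at any time via the sys/seal-status endpoint, which requires no authentication. The live curl call is shown as a comment; the runnable part parses a sample response body (field names match the seal-status API; the values are illustrative):

```shell
# Live check against a running cluster (no token required):
#   curl -s --cacert "$BAO_CACERT" "$BAO_ADDR/v1/sys/seal-status"
# Sample response body, evaluated locally for illustration:
resp='{"type":"static","initialized":true,"sealed":false,"t":1,"n":1}'
case "$resp" in
  *'"sealed":false'*) echo "unsealed and serving requests" ;;
  *'"sealed":true'*)  echo "sealed: no secrets can be read until unsealed" ;;
esac
```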
Secrets Engines
A secrets engine is responsible for managing secrets. Simple secrets engines, such as the KV secrets engine, return the same secret every time they are queried. Other secrets engines dynamically generate a new secret on each query, so every client receives a unique credential, which lets OpenBao perform fine-grained revocation and policy updates.
Secrets engines are mounted at paths. Think of them like filesystems: each one is mounted at a path prefix, and all operations under that prefix go to that engine:
| Engine | Mount path | What it does |
|---|---|---|
| KV v2 | secret/ | Static key-value secret storage with versioning |
| Database | database/ | Dynamic credentials for PostgreSQL, MySQL, etc. |
| PKI | pki/ | Certificate authority, issue and revoke TLS certs |
| Transit | transit/ | Encryption-as-a-service, encrypt/decrypt data without storing it |
| Kubernetes | kubernetes/ | Dynamic Kubernetes service account tokens |
Secrets engines receive a barrier view to the configured OpenBao physical storage, similar to a chroot. When a secrets engine is enabled, a random UUID is generated which becomes the data root for that engine. Whenever that engine writes to the physical storage layer, it is prefixed with that UUID. This means secrets engines are fully isolated from each other at the storage level.
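As a sketch, enabling and using engines looks like the commented bao commands below (they require an authenticated CLI against your cluster). The runnable part only illustrates the UUID prefix idea; the UUID and path shape are examples, not values from a real cluster:

```shell
# Enabling engines at chosen mount paths (requires an authenticated bao CLI):
#   bao secrets enable -path=secret kv-v2
#   bao secrets enable -path=database database
# Requests are routed by path prefix:
#   bao kv put secret/infrawatch/db password=s3cret   # handled by the KV v2 engine
#   bao read database/creds/readonly                  # handled by the database engine
# At the storage layer, each engine's data root is a random UUID assigned when
# the engine is enabled, so engines cannot see each other's data:
uuid="4a9e1b2c-7f3d-4e5a-9c8b-1d2e3f4a5b6c"   # example UUID
echo "logical/$uuid/infrawatch/db"            # what a KV write looks like to raw storage
```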
Auth Methods
Auth methods are how clients prove their identity to OpenBao. OpenBao does not store usernames and passwords internally (with the exception of the userpass method). It delegates identity verification to external systems.
The auth methods you will use most in an OCP environment:
- Kubernetes auth: a pod presents its ServiceAccount JWT token; OpenBao verifies it against the Kubernetes API. No static credentials needed in the pod. This is what the next post in this series covers.
- Token auth: built-in, always enabled. The root token from initialisation uses this. Tokens have TTLs, policies, and can be revoked.
- AppRole: machine-oriented auth for CI/CD pipelines. A Role ID (not secret) and a Secret ID (single-use, short-lived) combine to generate a token. Used in GitLab CI integration.
When a client successfully authenticates, OpenBao generates a token and attaches the policies associated with that auth method role to it. Every subsequent request uses that token.
Policies
Each policy is path-based and policy rules constrain the actions and accessibility to the paths for each client. Policies are written in HCL (HashiCorp Configuration Language) and follow a simple pattern:
# Allow reading secrets under the infrawatch path
path "secret/data/infrawatch/*" {
capabilities = ["read", "list"]
}
# Allow the deploy job to read OCP credentials
path "secret/data/ocp/deploy-token" {
capabilities = ["read"]
}
The six capabilities are create, read, update, delete, list, and sudo. Deny is the default: if a path is not explicitly permitted in a policy, access is denied. This is the principle of least privilege enforced at the API level.
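In practice you save the rules to a file and register them as a named policy. The bao commands below are illustrative (they require an authenticated CLI); the policy file itself is written locally:

```shell
# Write the policy rules to a file
cat > infrawatch-read.hcl <<'EOF'
path "secret/data/infrawatch/*" {
  capabilities = ["read", "list"]
}
EOF
# Register it and mint a token carrying only that policy (illustrative):
#   bao policy write infrawatch-read infrawatch-read.hcl
#   bao token create -policy=infrawatch-read -ttl=1h
grep -c 'capabilities' infrawatch-read.hcl
```

A token created this way can read under secret/data/infrawatch/ and nothing else; every other path falls through to the default deny.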
Leases and TTLs
Every dynamic secret OpenBao generates comes with a lease, a TTL after which the secret is automatically revoked. If a database credential is generated with a 1-hour TTL and the application crashes without revoking it, OpenBao revokes it automatically at the hour mark. No stale credentials left alive.
Static secrets stored in KV do not have leases; they persist until deleted. Tokens do have TTLs and can be renewed up to a maximum TTL.
Raft Storage
Raft is OpenBao’s built-in distributed consensus storage, based on the Raft consensus algorithm. It requires no external database or Consul cluster. Each OpenBao node maintains a full copy of the data. Write operations go to the active (leader) node, which replicates to followers before acknowledging the write. Reads can be served by any node.
For a three-node cluster, Raft requires at least two nodes to agree (quorum = N/2 + 1) before any write is committed. This means:
- One node down, cluster continues operating normally
- Two nodes down, cluster loses quorum, writes are blocked, reads may serve stale data
- All three nodes down, cluster is unavailable; pods auto-unseal on restart and Raft elects a new leader automatically
This is why three replicas is the minimum for a production Raft cluster. One replica has no fault tolerance. Two replicas lose quorum the moment either node fails.
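The quorum arithmetic above is worth internalising; a quick sketch:

```shell
# quorum = floor(N/2) + 1; fault tolerance = N - quorum
for n in 1 2 3 5; do
  q=$(( n / 2 + 1 ))
  echo "N=$n quorum=$q tolerates=$(( n - q )) node failure(s)"
done
```

Even numbers buy nothing: a 4-node cluster needs 3 for quorum and still tolerates only 1 failure, which is why Raft clusters are sized 3, 5, 7.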
The Secret Read Request Flow
Understanding how a secret read works end-to-end helps you troubleshoot failures:
Client (pod / pipeline)
│
│ 1. Authenticate (Kubernetes SA token / AppRole / Token)
▼
OpenBao (active node)
│
│ 2. Validate identity against auth method
│ 3. Generate token with attached policies
│ 4. Client presents token + read request
│ 5. Policy check: does this token's policy allow read on this path?
│ 6. Route request to the correct secrets engine
│ 7. Engine reads encrypted data from Raft storage
│ 8. Decrypt through the barrier (keyring > root key)
│ 9. Return plaintext secret to client
│ 10. Log request + response to audit backend
▼
Client receives secret
If any step fails (token expired, policy too restrictive, secrets engine not mounted, pod sealed), you know exactly which layer to check. The audit log records every step.
Understanding the Unseal Problem
Before touching a single YAML file, it’s important to understand why unsealing exists and why doing it wrong can make your system unavailable.
OpenBao encrypts all its data. The main encryption key used to encrypt the data (called the barrier key) is itself encrypted using a root key. This root key never sits on disk unencrypted; it is protected by a seal mechanism (a system that keeps the root key safe, usually by splitting it into multiple shares and requiring a threshold of those shares to unlock it). When OpenBao starts up, it is sealed, meaning it cannot access the root key and cannot read or write any secrets. Unsealing is the process of giving OpenBao the root key so it can start working.
Why does this matter? Every time an OpenBao pod restarts, whether due to a crash, an upgrade, or maintenance, it starts sealed. Without auto-unseal, a person must manually provide enough unseal key shares (pieces/shares of the root key, created using Shamir’s Secret Sharing) to reach the required threshold before the pod can become ready. In a three-node HA cluster, each node must be unsealed individually after a restart.
OpenBao Auto-Unseal Options
OpenBao supports multiple auto-unseal mechanisms.
1. Shamir’s Secret Sharing (Manual Unseal, Default)
This is the default option if no seal stanza is configured. OpenBao generates a root key and splits it into N shares using Shamir’s Secret Sharing algorithm. To unseal, the operators must provide a threshold number of shares (e.g., 3 of 5). No single person holds the full root key, making this secure but operationally manual. Every pod restart requires human input.
2. Cloud KMS (AWS KMS, GCP Cloud KMS, Azure Key Vault)
The root key is encrypted using a cloud provider’s KMS. On startup, OpenBao calls the KMS API to decrypt the root key and unseals automatically. Key usage is auditable through the cloud provider (CloudTrail, GCP Audit Logs, Azure Monitor). This is a strong auto-unseal option when running in the corresponding cloud.
3. Transit Auto-Unseal (OpenBao A unseals OpenBao B)
A dedicated OpenBao instance (the unsealer) runs with the Transit secrets engine. Production OpenBao nodes call the unsealer to decrypt their root keys at startup. The unsealer itself is initialized with Shamir key shares held offline.
This is architecturally correct and is a legitimate production pattern but the unsealer must be a separate, independent instance outside the production cluster’s failure domain. If both the unsealer and the production cluster run on the same OCP cluster, a full cluster outage seals both simultaneously, leaving you with the same manual intervention problem you were trying to avoid. You now have two OpenBao instances to operate instead of one, without gaining availability.
4. PKCS#11 / HSM
OpenBao can integrate with a Hardware Security Module (HSM) via PKCS#11. The root key is wrapped by a key that never leaves the HSM hardware. This is the highest-security option and meets compliance requirements such as PCI-DSS or FIPS 140-2, but it requires dedicated HSM infrastructure.
5. Static Key Auto-Unseal
Introduced in OpenBao 2.4.0, this method stores a 32-byte AES-256-GCM-96 key in an OCP Secret. OpenBao mounts the Secret into each pod, reads the key on startup, and unseals automatically; no cloud KMS, HSM, or second OpenBao instance required.
OpenBao’s RFC #1303 explains that in Kubernetes environments lacking a widely-deployed KMS or HSM, using a securely stored static secret provides equivalent security. The effectiveness of this method, however, depends on the cluster’s security:
- etcd encryption at rest must be enabled. Note that it is not on by default in OCP 4.x; you must enable it explicitly and verify it before relying on this seal method.
- RBAC must restrict access to the OpenBao namespace and the unseal Secret.
- The unseal key must never be committed to Git or stored in plaintext.
This method is pragmatic for self-hosted OCP clusters without cloud KMS or HSM infrastructure and aligns with OpenBao’s design for such scenarios.
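Before relying on static key auto-unseal, verify the first prerequisite. The oc query below is the standard way to read the cluster's configured etcd encryption type; the runnable part evaluates a sample value (substitute the real output from your cluster):

```shell
# Read the configured etcd encryption type (requires cluster-admin):
#   oc get apiserver cluster -o jsonpath='{.spec.encryption.type}{"\n"}'
# "aescbc" or "aesgcm" means encryption at rest is on; empty output means it is NOT.
enc="aesgcm"   # sample value; substitute the real command output
case "$enc" in
  aescbc|aesgcm) echo "etcd encryption at rest: enabled ($enc)" ;;
  *)             echo "WARNING: etcd encryption at rest is NOT enabled" ;;
esac
```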
Environment and Prerequisites
This guide is tested on the following:
- OpenShift Container Platform 4.20 (OVN-Kubernetes networking)
- OpenBao 2.5.x (the current stable release at time of writing)
- OpenBao Helm chart (official, from https://openbao.github.io/openbao-helm)
- cert-manager operator (for TLS certificate management)
- ODF/Ceph storage backend (for persistent volumes)
Before starting, confirm the following are in place:
- cert-manager operator is installed and running: oc get pods -n cert-manager
- Helm is available on the bastion: helm version
If cert-manager is not installed, install it from the OCP console under Ecosystem > Software Catalog (the Software Catalog replaces OperatorHub in OCP 4.20 and later). Search for the Red Hat-provided cert-manager for OpenShift and install it.
Architecture Overview
What we will build:
OpenShift Cluster
└── Namespace: openbao
├── openbao-0 (StatefulSet pod, Raft leader)
├── openbao-1 (StatefulSet pod, Raft follower)
├── openbao-2 (StatefulSet pod, Raft follower)
├── Service: openbao (active node)
├── Service: openbao-internal (Raft peer communication)
├── Route: openbao (external TLS access)
├── Secret: openbao-unseal-key (the AES-256 static key, file-mounted)
├── Secret: openbao-server-tls (cert-manager-managed TLS cert)
└── PVC: openbao-data-{0,1,2} (Raft storage, ODF RBD)
Each pod holds a copy of the unseal key file, mounted from the same OCP Secret. When a pod restarts for any reason, it reads the key and unseals automatically within seconds.
Deploy OpenBao on OpenShift with HA Raft
Step 1: Create the Namespace with Correct Labels
OpenShift enforces Pod Security Admission (PSA) at the namespace level. OpenBao pods do not need privileged SCC, but they do need to run as a specific UID defined by the Helm chart. As such, we will create a namespace with the appropriate PSA labels:
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: openbao
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
EOF
OpenBao does not need host-level access, so enforcing restricted PSA at the namespace level is enough to prevent any future workload from accidentally running with elevated privileges in this namespace.
Step 2: Generate the Static Unseal Key
Generate a 32-byte random key. This is your AES-256-GCM-96 unseal key. Treat it like a root credential, back it up immediately, offline, in at least two physically separate locations.
You can use openssl command to generate the key (binary format, 32 bytes = 256 bits):
openssl rand -out unseal.key 32
The command writes 32 random raw bytes; the file is binary and cannot be viewed with a text editor.
You can verify that it is exactly 32 bytes:
wc -c unseal.key
Expected output: 32 unseal.key
Next, store it as an OCP Secret in the openbao namespace:
oc create secret generic openbao-unseal-key \
--from-file=unseal.key=unseal.key -n openbao
Verify the Secret contains actual key bytes, not a filename string. The xxd output must show binary bytes, not ASCII text.
oc get secret openbao-unseal-key -n openbao \
-o jsonpath='{.data.unseal\.key}' | base64 -d | xxd | head -3
Expected (correct):
00000000: 44a1 dc0c bf8c db52 4af3 ee25 2397 12a2 D......RJ..%#...
00000010: 134a cf62 801b 163f 2859 8da3 c94f 1b5b .J.b...?(Y...O.[
Immediately back up the key to an offline location, then remove the local copy; from this point the key lives only in the OCP Secret:
shred -u ./unseal.key
Why shred and not rm? rm only removes the directory entry and leaves the actual bytes on disk, so the data remains recoverable with forensic tools until the blocks happen to be overwritten by something else. For ordinary files that is acceptable. For a 32-byte AES key that protects your entire secrets store, it is not.
shred overwrites the file contents with random data multiple times before unlinking it, making recovery from the raw storage blocks impractical.
Caveat: shred is reliable on local block devices (ext4, xfs on your RHEL bastion). On SSDs with wear levelling, the controller may redirect overwrites to different physical cells, leaving the original bytes in the old cells. On copy-on-write filesystems (Btrfs, ZFS), shred is ineffective entirely. The key's true security rests on OCP Secret RBAC and etcd encryption at rest; this step is a defence-in-depth measure, not your primary control.
Verify the Secret was created:
oc get secret -n openbao
You should be able to see your secret.
Step 3: Set Up TLS with cert-manager
OpenBao’s documentation is explicit: OpenBao should always be used with TLS in production environments. Traffic must be encrypted end-to-end between clients and the OpenBao server.
Intermediate load balancers or reverse proxies must not terminate TLS and send plaintext traffic to OpenBao. If TLS is terminated, it must be re-established so that traffic remains encrypted when forwarded to the OpenBao pods.
Let’s first create a self-signed CA Issuer for the openbao namespace.
This step creates a self-signed CA using cert-manager. This means:
- cert-manager generates a private key and a self-signed CA certificate that is not signed by any public Certificate Authority such as Let’s Encrypt, DigiCert, or your organisation’s PKI.
- This CA is then used to issue the OpenBao server TLS certificate.
The result is encrypted TLS communication between clients and OpenBao. The key difference is trust: this CA is not trusted by default, because it is not present in any OS trust store or browser.
What this means for each client:
- bao CLI on the bastion: export the CA certificate and set BAO_CACERT pointing to it.
- curl / HTTP clients: pass --cacert ~/openbao-ca.crt. For quick one-off tests only, curl -k skips verification; never use -k in automation.
- Browsers hitting the UI: you will see a certificate warning. For a home lab, import the CA certificate into your OS/browser trust store. For production external access, use a publicly trusted certificate via cert-manager's ACME (Let's Encrypt) integration.
- Application pods inside OCP: the CA certificate must be mounted and passed to the client SDK explicitly, or added to the cluster-wide trusted CA bundle.
Why not use Public CAs here?
Let’s Encrypt and other public CAs require proof of domain ownership using HTTP-01 or DNS-01 challenges. Internal cluster DNS names such as openbao-0.openbao-internal.openbao.svc are not publicly resolvable or verifiable, so a public CA cannot issue certificates for them. A self-signed internal CA is therefore the correct and practical choice for internal TLS communication (including Raft peer-to-peer traffic). For external access (for example, via an OpenShift Route), a publicly trusted certificate can still be used alongside the internal self-signed CA.
Copy the command below, adjust it for your environment if needed, and apply it to create the CA and Issuer.
cat <<EOF | oc apply -f -
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: openbao-selfsigned-issuer
  namespace: openbao
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: openbao-ca
  namespace: openbao
spec:
  isCA: true
  commonName: openbao-ca
  secretName: openbao-ca-secret
  privateKey:
    algorithm: ECDSA
    size: 256
    rotationPolicy: Always
  issuerRef:
    name: openbao-selfsigned-issuer
    kind: Issuer
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: openbao-ca-issuer
  namespace: openbao
spec:
  ca:
    secretName: openbao-ca-secret
EOF
This configuration bootstraps an internal certificate authority (CA) and configures cert-manager to issue certificates signed by that CA. In essence, it creates three resources that work together:
- Self-signed Issuer (openbao-selfsigned-issuer): a bootstrap issuer used to generate the initial root CA certificate.
- CA Certificate (openbao-ca): defines a root CA (isCA: true). cert-manager generates a CA private key and self-signed CA certificate, storing them in the openbao-ca-secret Secret.
- CA Issuer (openbao-ca-issuer): references the CA keypair in openbao-ca-secret and is used to sign new certificates requested via Certificate resources (e.g., the OpenBao server TLS certificate generated below).
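Conceptually, the bootstrap Issuer plus CA Certificate do roughly what this openssl sketch does by hand; cert-manager additionally stores the keypair in openbao-ca-secret and handles renewal for you. The filenames here are local illustrations only:

```shell
# Generate an ECDSA P-256 key and a self-signed CA cert with CN=openbao-ca,
# mirroring the algorithm/size/commonName in the Certificate resource above
openssl ecparam -name prime256v1 -genkey -noout -out ca.key
openssl req -x509 -new -key ca.key -sha256 -days 365 \
  -subj "/CN=openbao-ca" -out ca.crt
openssl x509 -noout -subject -in ca.crt   # subject contains CN=openbao-ca
```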
Now create the server TLS certificate. The DNS SANs must cover every name OpenBao pods will use including the active service, the internal peer service, and all pod FQDNs:
cat <<EOF | oc apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: openbao-server-tls
  namespace: openbao
spec:
  secretName: openbao-server-tls
  duration: 8760h
  renewBefore: 720h
  privateKey:
    algorithm: ECDSA
    size: 256
    rotationPolicy: Always
  dnsNames:
    - openbao
    - openbao.openbao
    - openbao.openbao.svc
    - openbao.openbao.svc.cluster.local
    - openbao-internal
    - openbao-internal.openbao.svc
    - openbao-internal.openbao.svc.cluster.local
    - "*.openbao-internal"
    - "*.openbao-internal.openbao"
    - "*.openbao-internal.openbao.svc"
    - "*.openbao-internal.openbao.svc.cluster.local"
  ipAddresses:
    - 127.0.0.1
  issuerRef:
    name: openbao-ca-issuer
    kind: Issuer
EOF
This creates a TLS certificate for OpenBao, signed by the internal CA, and stores it in a Kubernetes Secret called openbao-server-tls, for use by the OpenBao service.
Verify the certificate is issued before proceeding:
oc get certificate openbao-server-tls -n openbao
NAME READY SECRET AGE
openbao-server-tls True openbao-server-tls 2m6s
oc get secret openbao-server-tls -n openbao
NAME TYPE DATA AGE
openbao-server-tls kubernetes.io/tls 3 2m41s
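It is also worth confirming that the SANs actually made it into the issued certificate. The oc pipeline below runs against the cluster; the openssl inspection itself is demonstrated on a locally generated stand-in certificate (demo.key/demo.crt are throwaway illustration files with an abbreviated SAN list):

```shell
# Against the cluster:
#   oc get secret openbao-server-tls -n openbao -o jsonpath='{.data.tls\.crt}' \
#     | base64 -d | openssl x509 -noout -ext subjectAltName
# The same inspection on a locally generated stand-in cert:
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes \
  -keyout demo.key -out demo.crt -days 1 -subj "/CN=openbao" \
  -addext "subjectAltName=DNS:openbao,DNS:openbao.openbao.svc,IP:127.0.0.1"
openssl x509 -noout -ext subjectAltName -in demo.crt
```

If a SAN is missing, Raft peers will refuse each other's TLS handshakes, so fix the Certificate resource before deploying.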
Step 4: Add the OpenBao Helm Repository
OpenBao is deployed using its official Helm chart, maintained by the OpenBao project. Helm charts package all the Kubernetes manifests (StatefulSet, Services, ServiceAccount, RBAC, ConfigMap) into a single versioned, configurable release. Using the official chart means you get a deployment that is tested by the OpenBao team and follows their recommended patterns for HA Raft on Kubernetes.
Before proceeding, you must have Helm installed. Refer to the guide below if you need to install it.
Installing Helm on Kubernetes Cluster
You must add the OpenBao chart repository to your local Helm before you can install from it.
Hence, add the official OpenBao Helm chart repository:
helm repo add openbao https://openbao.github.io/openbao-helm
Fetch the latest chart index from the repository:
helm repo update
List available chart versions. Use this to choose a specific chart version to pin in the deployment. The chart version and the OpenBao app version are separate; check both columns:
helm search repo openbao/openbao --versions | head -5
Sample output:
NAME CHART VERSION APP VERSION DESCRIPTION
openbao/openbao 0.26.2 v2.5.2 Official OpenBao Chart
openbao/openbao 0.26.1 v2.5.1 Official OpenBao Chart
openbao/openbao 0.26.0 v2.5.1 Official OpenBao Chart
openbao/openbao 0.25.7 v2.5.1 Official OpenBao Chart
Step 5: Create the Helm Values File
The OpenBao Helm chart ships with hundreds of lines of Kubernetes manifest templates, including the StatefulSet, Services, ServiceAccount, RBAC, PodDisruptionBudget, and ConfigMap. These templates contain placeholder configuration options rather than hardcoded values. The values.yaml file is the centralised configuration that supplies values for those placeholders. Every configurable setting, such as the number of replicas, image tag, TLS behaviour, storage class, resource limits, volume mounts, the Raft configuration block, and the seal stanza, is driven from this single file.
When you run helm install, Helm merges your values file with the chart's default values, renders the templates, and applies the resulting manifests to the cluster. When you run helm upgrade, you change the values file and Helm works out what changed and applies only the diff. You never touch the raw Kubernetes YAML directly; the chart handles that.
This design has two practical consequences for you:
- Everything is in one place. Your entire OpenBao deployment configuration (HA settings, TLS, storage, auto-unseal) is captured in openbao-values.yaml. You can commit this file to Git (without the unseal key, which lives in an OCP Secret) and reproduce the exact same deployment on any cluster.
- Every parameter you do not set falls back to the chart default. You need to understand which defaults are acceptable and which must be overridden. The values below override only what is necessary for a production-grade HA deployment.
As such, here is my sample OpenBao Helm values file:
cat openbao-values.yaml
Every parameter is explained inline so you know exactly what each one does and why it is set the way it is.
global:
  # tlsDisable controls whether OpenBao pods listen on HTTP or HTTPS.
  # This is a double negative: tlsDisable: false means "do NOT disable TLS",
  # meaning TLS is ENABLED. Setting this to true would make every pod listen
  # on plain HTTP, which is never acceptable for a secrets manager.
  # Always set this to false in any environment that is not a throwaway dev cluster.
  tlsDisable: false

  # global.openshift: true activates OpenShift-specific behaviour in the chart
  # templates. Specifically it changes how the chart handles securityContext:
  # on non-OpenShift Kubernetes, the chart sets pod/container securityContext
  # fields itself. On OpenShift, it leaves securityContext empty and relies on
  # OCP's SCC (Security Context Constraints) system to assign the correct
  # security profile, which is the correct approach on OCP. Without this flag
  # the chart generates manifests that conflict with OCP's SCC admission.
  openshift: true

server:
  image:
    # Use the UBI (Red Hat Universal Base Image) variant for OCP.
    # The UBI image is built on RHEL UBI9, is Red Hat certified, and has
    # better compatibility with OCP's security model than the Alpine-based image.
    # The chart requires registry and repository as SEPARATE fields.
    # Combining them into repository alone produces an invalid image reference.
    # This matches the structure in the official values.openshift.yaml.
    registry: "quay.io"
    repository: "openbao/openbao-ubi"
    # Pin to a specific stable version; never use 'latest' in production.
    # Chart version 0.26.2 ships OpenBao v2.5.2 (confirmed via helm search).
    # Chart version and app version are independent; always verify both.
    tag: "2.5.2"
    pullPolicy: IfNotPresent

  # Override the readiness probe path.
  # The default probe path returns HTTP 501 when OpenBao is uninitialised,
  # which Kubernetes treats as unhealthy and causes the pod to restart in
  # a loop before you can run 'bao operator init'.
  # Adding uninitcode=204 tells OpenBao to return 204 (healthy) when
  # uninitialised; pods stay Running and you can initialise them cleanly.
  # This override comes directly from the official values.openshift.yaml.
  readinessProbe:
    path: "/v1/sys/health?uninitcode=204"

  # Environment variables injected into every OpenBao pod
  extraEnvironmentVars:
    # Point OpenBao at its own CA cert for verifying Raft peer TLS
    BAO_CACERT: /openbao/tls/ca.crt

  # Mount the unseal key Secret and TLS cert Secret as files into each pod
  volumes:
    - name: unseal-key
      secret:
        secretName: openbao-unseal-key
        defaultMode: 0400   # read-only, owner only
    - name: tls
      secret:
        secretName: openbao-server-tls

  volumeMounts:
    - name: unseal-key
      mountPath: /openbao/unseal
      readOnly: true
    - name: tls
      mountPath: /openbao/tls
      readOnly: true

  # Resource requests and limits.
  # OpenBao is memory-sensitive: the barrier key and secret cache live in RAM.
  resources:
    requests:
      memory: "256Mi"
      cpu: "250m"
    limits:
      memory: "512Mi"
      cpu: "500m"

  # HA configuration: 3 replicas for quorum (requires 2 of 3 for writes)
  ha:
    enabled: true
    replicas: 3

    # Raft integrated storage: no external Consul or etcd required
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true

        # TLS listener: full end-to-end encryption, not edge-terminated
        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/openbao/tls/tls.crt"
          tls_key_file = "/openbao/tls/tls.key"
          tls_min_version = "tls12"
          telemetry {
            unauthenticated_metrics_access = "true"
          }
        }

        # Raft storage with peer auto-join using TLS
        storage "raft" {
          path = "/openbao/data"
          retry_join {
            leader_api_addr = "https://openbao-0.openbao-internal.openbao.svc:8200"
            leader_ca_cert_file = "/openbao/tls/ca.crt"
            leader_tls_servername = "openbao-0.openbao-internal.openbao.svc"
          }
          retry_join {
            leader_api_addr = "https://openbao-1.openbao-internal.openbao.svc:8200"
            leader_ca_cert_file = "/openbao/tls/ca.crt"
            leader_tls_servername = "openbao-1.openbao-internal.openbao.svc"
          }
          retry_join {
            leader_api_addr = "https://openbao-2.openbao-internal.openbao.svc:8200"
            leader_ca_cert_file = "/openbao/tls/ca.crt"
            leader_tls_servername = "openbao-2.openbao-internal.openbao.svc"
          }
        }

        # Static key auto-unseal.
        # current_key_id: a permanent identifier for this key; use a date-based
        #   label so you can track which physical key is in use.
        # current_key: the file:// prefix tells OpenBao to read the key from disk
        #   (the file is mounted from the openbao-unseal-key Secret above).
        # This stanza is what makes the pod unseal automatically on every restart.
        seal "static" {
          current_key_id = "20250404-1"
          current_key = "file:///openbao/unseal/unseal.key"
        }

        # Register with Kubernetes service discovery; enables the
        # active/standby labels on pods that OpenShift routes use
        service_registration "kubernetes" {}

  # Persist Raft data; losing this means losing all stored secrets
  dataStorage:
    enabled: true
    size: 10Gi
    # Use ODF RBD storage class, which provides ReadWriteOnce PVCs.
    # Each Raft node needs its own independent PVC.
    storageClass: "ocs-storagecluster-ceph-rbd"
    accessMode: ReadWriteOnce

  # Audit log storage; strongly recommended for any production deployment
  auditStorage:
    enabled: true
    size: 5Gi
    storageClass: "ocs-storagecluster-ceph-rbd"
    accessMode: ReadWriteOnce

  # Anti-affinity: spread pods across different OCP nodes.
  # This ensures a single node failure does not take down quorum.
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: openbao
              component: server
          topologyKey: kubernetes.io/hostname

# OpenBao UI, accessible via the OCP Route
ui:
  enabled: true
  serviceType: ClusterIP

# Injector (agent sidecar): set to false for now.
# The injector webhook enables automatic secret injection into pods.
# We will enable it in the next post when we configure application integration.
injector:
  enabled: false
You can read more on the OpenBao Helm chart values page.
One thing to note is that the static seal stanza requires a current_key_id, a permanent identifier tied to this specific key. Use a date-based format such as YYYYMMDD-1 (for example, 20250404-1). If you rotate the key in the future, the old ID goes into previous_key_id and you introduce a new current_key_id. The ID must never change for the same key material.
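When the day comes to rotate, the stanza grows a previous-key pair. The sketch below follows the naming pattern described above; the previous_key field name and the unseal-new.key mount path are assumptions for illustration, so check the static seal documentation for your OpenBao version before rotating:

```shell
# Sketch only: what a rotated seal stanza could look like
# (previous_key field name and unseal-new.key path are assumed, not verified)
stanza=$(cat <<'EOF'
seal "static" {
  current_key_id  = "20260101-1"
  current_key     = "file:///openbao/unseal/unseal-new.key"
  previous_key_id = "20250404-1"
  previous_key    = "file:///openbao/unseal/unseal.key"
}
EOF
)
echo "$stanza"
```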
Step 6: Deploy OpenBao with Helm
You can now install OpenBao with Helm. Pin the chart version explicitly using --version; this ensures you deploy exactly what you tested against. Without --version, Helm installs the latest available chart, which may have changed since you last ran helm repo update, a silent difference that can cause unexpected behaviour.
helm install openbao openbao/openbao \
--namespace openbao \
--version 0.26.2 \
--values openbao-values.yaml \
--wait \
--timeout 5m
Chart version 0.26.2 ships OpenBao v2.5.2. If you want a different OpenBao version, run helm search repo openbao/openbao --versions and pick the chart version that corresponds to the app version you need.
Sample installation output:
NAME: openbao
LAST DEPLOYED: Sat Apr 4 22:41:34 2026
NAMESPACE: openbao
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing OpenBao!
Now that you have deployed OpenBao, you should look over the docs on using
OpenBao with Kubernetes available here:
https://openbao.org/docs/
Your release is named openbao. To learn more about the release, try:
$ helm status openbao
$ helm get manifest openbao
List the chart:
helm list -n openbao
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
openbao openbao 1 2026-04-04 22:41:34.390165016 +0200 CEST deployed openbao-0.26.2 v2.5.2
Watch the pods come up:
oc get pods -n openbao -w
With the readinessProbe override (uninitcode=204) from the official values.openshift.yaml, pods will show 1/1 Running immediately, even before initialisation. This is intentional. Without that override, the default health endpoint returns HTTP 501 when uninitialised, which Kubernetes treats as unhealthy; the pods stay 0/1 in a restart loop, making it impossible to initialise them with bao operator init. With the override on, OpenBao returns 204 (healthy) when uninitialised, so the pods pass the readiness check and stay stable, waiting for you to initialise them.
NAME READY STATUS RESTARTS AGE
openbao-0 1/1 Running 0 6m
openbao-1 1/1 Running 0 5m
openbao-2 1/1 Running 0 4m
Verify the actual state. 1/1 Running does not mean initialised:
oc exec -n openbao openbao-0 -- bao status
Expected output at this point:
Key Value
--- -----
Seal Type static
Recovery Seal Type shamir
Initialized false
Sealed true
Total Recovery Shares 0
Threshold 0
Version 2.5.2
HA Enabled true
Initialized: false and Sealed: true is the correct state here. The cluster is healthy and waiting for initialisation. The exit code 2 returned by bao status when sealed is expected; it is not an error.
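Because bao status signals state through its exit code (0 = unsealed, 2 = sealed, anything else = a genuine error), any wrapper script should branch on the code rather than abort on every non-zero value. A minimal sketch; the stand-in commands below take the place of the real oc exec -n openbao openbao-0 -- bao status call:

```shell
check_seal_state() {
  # Run whatever status command was passed in and classify its exit code
  "$@"
  case $? in
    0) echo "unsealed" ;;
    2) echo "sealed" ;;
    *) echo "error" ;;
  esac
}

# Stand-ins for: oc exec -n openbao openbao-0 -- bao status
check_seal_state true               # a command exiting 0
check_seal_state sh -c 'exit 2'     # a command exiting 2
```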
Step 7: Initialise OpenBao
Initialisation is a one-time operation. It generates the root key (which is immediately wrapped by your static unseal key and stored encrypted in Raft storage) and returns recovery keys. With auto-unseal you do not get unseal key shares; you get recovery keys instead.
Recovery keys are not used to unseal OpenBao. They are used to authorise root token generation and administrative operations if the auto-unseal mechanism itself needs reconfiguration. Store them securely offline.
Let’s initialise OpenBao via the openbao-0 pod. The > openbao-init.json redirect writes the initialisation output, including the recovery key shares and the root token, onto the bastion filesystem.
oc exec -n openbao openbao-0 -- bao operator init \
-recovery-shares=5 \
-recovery-threshold=3 \
-format=json > openbao-init.json
View the output:
cat openbao-init.json
{
"unseal_keys_b64": [],
"unseal_keys_hex": [],
"unseal_shares": 1,
"unseal_threshold": 1,
"recovery_keys_b64": [
"<key-1>",
"<key-2>",
"<key-3>",
"<key-4>",
"<key-5>"
],
"recovery_keys_hex": [
"<key-1>",
"<key-2>",
"<key-3>",
"<key-4>",
"<key-5>"
],
"recovery_keys_shares": 5,
"recovery_keys_threshold": 3,
"root_token": "<masked>"
}
The output contains:
- recovery_keys_b64: five recovery key shares (you need any 3 to reconstruct the recovery key)
- root_token: the initial root token for first-time configuration
Also notice that unseal_keys_b64 and unseal_keys_hex are empty arrays. With auto-unseal there are no unseal key shares; the static key handles unsealing automatically. What you get instead are recovery keys: five shares, any three of which reconstruct the recovery key used for administrative operations such as generating a new root token.
Back up openbao-init.json immediately to a secure, offline location. Then delete it from the bastion.
Verify the seal status on openbao-0; it should have auto-unsealed:
oc exec -n openbao openbao-0 -- bao status
Expected output:
Key Value
--- -----
Seal Type static
Recovery Seal Type shamir
Initialized true
Sealed false
Total Recovery Shares 5
Threshold 3
Version 2.5.2
Build Date 2026-03-25T16:16:27Z
Storage Type raft
Cluster Name vault-cluster-09b1789d
Cluster ID 53dd5b19-b3ac-84c5-d5d8-c14b92f6042b
HA Enabled true
HA Cluster https://openbao-0.openbao-internal:8201
HA Mode active
Active Since 2026-04-04T21:23:07.976804314Z
Raft Committed Index 34
Raft Applied Index 34
The other pods (openbao-1, openbao-2) will join the Raft cluster automatically via retry_join and also auto-unseal. Check them:
oc exec -n openbao openbao-1 -- bao status | grep -E "Init|Sealed|HA Mode"
oc exec -n openbao openbao-2 -- bao status | grep -E "Init|Sealed|HA Mode"
All three should show Sealed: false. Only openbao-0 will be active; the other two are on standby.
Initialized true
Sealed false
HA Mode standby
At this point, however, openbao-1 and openbao-2 have dropped to 0/1.
oc get pods -n openbao
NAME READY STATUS RESTARTS AGE
openbao-0 1/1 Running 0 15m
openbao-1 0/1 Running 0 15m
openbao-2 0/1 Running 0 14m
This is expected, and here is exactly why: before initialisation, openbao-1 and openbao-2 were running and waiting to join Raft, but the cluster did not exist yet. The moment bao operator init ran on openbao-0, the following sequence happened:
openbao-0 > initialised > auto-unsealed using static key > became Raft leader > stayed 1/1
openbao-1 > joined Raft cluster > fetched stored unseal key from Raft storage >
auto-unsealed > entered standby mode > readiness probe fires > probe hits /v1/sys/health > standby returns HTTP 429 (default) > probe fails > 0/1
openbao-2 > same sequence as openbao-1 > 0/1
The pods are fully functional, Sealed: false, HA Mode: standby, but the readiness probe fails because by default OpenBao returns HTTP 429 for standby nodes, and Kubernetes requires a 2xx response to mark a pod Ready.
Some of the default OpenBao status codes on the /sys/health endpoint include:
- 200 if initialized, unsealed, and active
- 429 if unsealed and standby
- 501 if not initialized
- 503 if sealed
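In monitoring scripts those codes map naturally onto a health classification. A minimal sketch; the function takes whatever HTTP code a probe received (for example from curl -s -o /dev/null -w '%{http_code}' against /v1/sys/health):

```shell
# Classify an HTTP code from the /v1/sys/health endpoint
health_state() {
  case "$1" in
    200) echo "active" ;;
    429) echo "standby" ;;
    501) echo "uninitialised" ;;
    503) echo "sealed" ;;
    *)   echo "unknown" ;;
  esac
}

health_state 429   # a standby node
```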
You can verify the key is actually mounted and the pods are genuinely unsealed, not just claiming to be:
for i in 1 2; do
echo "=== openbao-$i ==="
oc exec openbao-$i -n openbao -- ls -la /openbao/unseal/unseal.key
done
Sample output:
=== openbao-1 ===
lrwxrwxrwx. 1 root 1000990000 17 Apr 5 15:56 /openbao/unseal/unseal.key -> ..data/unseal.key
=== openbao-2 ===
lrwxrwxrwx. 1 root 1000990000 17 Apr 5 15:57 /openbao/unseal/unseal.key -> ..data/unseal.key
The symlink to ..data/unseal.key is the standard Kubernetes Secret volume projection; the key is mounted correctly. Confirm the logs show successful unsealing:
oc logs openbao-1 -n openbao | grep -E "unseal|standby|sealed"
oc logs openbao-2 -n openbao | grep -E "unseal|standby|sealed"
You will see lines confirming:
core: vault is unsealed
core: entering standby mode
core: post-unseal setup complete
Everything is working correctly. The 0/1 is purely the readiness probe issue, not a seal problem.
So, how do we get openbao-1 and openbao-2 back to Ready? The fix is simple: add the standbycode=200 parameter to the readiness probe path in the chart values file.
Therefore, edit your openbao-values.yaml on the bastion and change the readinessProbe path from:
readinessProbe:
  path: "/v1/sys/health?uninitcode=204"

to:

readinessProbe:
  path: "/v1/sys/health?uninitcode=204&standbycode=200"
Then save the file and run helm upgrade:
helm upgrade openbao openbao/openbao \
--namespace openbao \
--version 0.26.2 \
--values openbao-values.yaml
Important! The StatefulSet update strategy is OnDelete:
oc get sts openbao -n openbao -o yaml | grep -A2 updateStrategy
updateStrategy:
type: OnDelete
OnDelete means pods do not restart automatically when the StatefulSet spec changes. You must manually delete each standby pod for it to pick up the new probe configuration. Delete one at a time, wait for 1/1 before deleting the next:
Delete openbao-1 first — it is a standby, safe to restart
oc delete pod openbao-1 -n openbao
Watch it come back with the new probe:
oc get pods -n openbao -w
Wait until openbao-1 shows 1/1 Running before proceeding
Then delete openbao-2:
oc delete pod openbao-2 -n openbao
Watch it come back:
oc get pods -n openbao -w
Never delete openbao-0 while it is the active leader unless necessary; doing so triggers a Raft leader election and causes a brief write-unavailability window. If you do need to restart the active node, you can first hand leadership to a standby in a controlled way with bao operator step-down.
After both restarts the output should be:
NAME READY STATUS RESTARTS AGE
openbao-0 1/1 Running 0 1h
openbao-1 1/1 Running 0 59m
openbao-2 1/1 Running 0 58m
All three nodes 1/1 Running. The cluster is fully operational.
Step 8: Install the bao CLI on the Bastion and Create an OCP Route
Up to this point every bao command ran inside a pod via oc exec. From here on you will interact with OpenBao directly from the bastion over the external Route URL. That requires the bao CLI installed locally.
Install the bao CLI
My bastion is RHEL-based. OpenBao publishes official RPM packages through EPEL (Extra Packages for Enterprise Linux):
Enable EPEL if not already present
sudo dnf install -y epel-release
Install the bao CLI
sudo dnf install -y openbao
Verify
bao version
OpenBao v2.5.2-1.el9, built 2026-03-25 (cgo)
Alternative: precompiled binary
You can download the binary from GitHub releases on a connected machine and copy it across:
BAO_VERSION="2.5.2"
ARCH="x86_64"
curl -LO "https://github.com/openbao/openbao/releases/download/v${BAO_VERSION}/bao_${BAO_VERSION}_linux_${ARCH}.tar.gz"
Verify the checksum against the checksums file on the same release page
sha256sum bao_${BAO_VERSION}_linux_${ARCH}.tar.gz
Then extract the archive and move the binary to a local bin directory:
tar xf bao_${BAO_VERSION}_linux_${ARCH}.tar.gz
sudo mv bao /usr/local/bin/
sudo chmod +x /usr/local/bin/bao
Check the version:
bao version
Create the OCP Route
OpenBao is now running inside the cluster and accessible only via its internal Service (openbao.openbao.svc:8200). Before creating the Route, look at what the Helm chart actually created:
oc get svc -n openbao
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
openbao ClusterIP 172.30.137.77 <none> 8200/TCP,8201/TCP 1h
openbao-active ClusterIP 172.30.231.192 <none> 8200/TCP,8201/TCP 1h
openbao-internal ClusterIP None <none> 8200/TCP,8201/TCP 1h
openbao-standby ClusterIP 172.30.20.125 <none> 8200/TCP,8201/TCP 1h
openbao-ui ClusterIP 172.30.110.134 <none> 8200/TCP 1h
When deployed in HA mode on OpenShift with the UI enabled, five services are created:
- openbao: the main ClusterIP service exposing the OpenBao API on port 8200. This is the primary endpoint clients use to interact with OpenBao, and it can route traffic to any pod in the cluster regardless of its role.
- openbao-internal: a headless service used exclusively for intra-cluster communication. It exposes ports 8200 (API) and 8201 (cluster coordination), enabling Raft peers to discover and communicate with each other.
- openbao-active: a ClusterIP service that targets only the current active (leader) pod on port 8200. Any operation that must hit the leader directly, such as writes, should use this service.
- openbao-standby: a ClusterIP service that targets only standby pods on port 8200. Useful for isolating read traffic or for health checks against non-leader nodes.
- openbao-ui: a ClusterIP service exposing the web UI on port 8200. On OpenShift, this service acts as the backend for an OCP Route, which is the mechanism used to expose the UI externally.
openbao-internal has no ClusterIP by design. As a headless service, DNS resolves directly to individual pod IPs, which is required for Raft peers to address each other directly for leader election and data replication. The Route we are about to create points at the openbao service. Standby nodes transparently forward write requests to the active leader, so if the active node fails and a standby is elected, the Route continues working without any manual intervention.
An OCP Route is OpenShift’s ingress mechanism that exposes an internal Service through the cluster’s HAProxy router on the wildcard apps domain (*.apps.ocp.domain.com). Unlike a standard Kubernetes Ingress, Routes have native TLS termination options built in.
There are three TLS termination modes in OCP Routes:
- edge: TLS terminates at the router; traffic from router to pod is plain HTTP. Never use this for a secrets manager, as credentials would travel unencrypted inside the cluster network.
- passthrough: TLS passes through the router untouched to the pod. The pod handles TLS entirely. Requires SNI-capable clients and does not support path-based routing.
- reencrypt: TLS terminates at the router and is re-established to the pod using a separate certificate. The destinationCACertificate field tells the router which CA to trust when connecting to the pod. This is the choice we will use, as it keeps traffic encrypted end-to-end while allowing the router to inspect the host header for routing decisions.
The destinationCACertificate is populated inline from the openbao-ca-secret, the same CA cert that cert-manager created in Step 3 and that OpenBao pods use for their TLS listener. The router uses it to verify the pod’s certificate when establishing the re-encrypted connection.
Copy the command below, update it to suit your environment, and execute it to create a Route for the OpenBao service.
cat <<EOF | oc apply -f -
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: openbao
  namespace: openbao
spec:
  host: openbao.apps.ocp.comfythings.com
  port:
    targetPort: 8200
  tls:
    termination: reencrypt
    destinationCACertificate: |
$(oc get secret openbao-ca-secret -n openbao \
  -o jsonpath='{.data.ca\.crt}' | base64 -d | sed 's/^/      /')
  to:
    kind: Service
    name: openbao
    weight: 100
EOF
Confirm:
oc get routes -n openbao
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
openbao openbao.apps.ocp.comfythings.com openbao 8200 reencrypt None
The reencrypt Route has two separate TLS hops:
Client (bastion)
│
│ TLS #1 - Client to OCP Router
│ Certificate: OCP Ingress wildcard cert (*.apps.ocp.comfythings.com)
│ Signed by: OCP Ingress CA
▼
OCP Router (HAProxy)
│
│ TLS #2 - Router to OpenBao pod (the "reencrypt" part)
│ Certificate: openbao-server-tls (signed by your openbao-ca)
│ Verified using: destinationCACertificate in the Route spec
▼
OpenBao pod
Configure the bastion CLI environment
Extract the OCP Ingress router CA cert from the cluster, needed for TLS verification between client and the route endpoint:
oc get secret router-ca \
-n openshift-ingress-operator \
-o jsonpath='{.data.tls\.crt}' | base64 -d > ~/ocp-ingress-ca.crt
Create two variables for the OpenBao Route host and CA cert. These tell the bao CLI where OpenBao is and how to verify its TLS certificate.
You can add them to ~/.bashrc (or your shell's equivalent) for persistence:
cat >> ~/.bashrc <<'EOF'
export BAO_ADDR="https://openbao.apps.ocp.comfythings.com"
export BAO_CACERT="$HOME/ocp-ingress-ca.crt"
EOF
source ~/.bashrc
BAO_ADDR and BAO_CACERT are the OpenBao equivalents of Vault’s VAULT_ADDR and VAULT_CACERT. If you are migrating from Vault, only the variable names changed, the behaviour is identical.
Test external access from the bastion
Export the root token from the init output captured in Step 7
export BAO_TOKEN=$(cat openbao-init.json | grep root_token | cut -d'"' -f4)
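The grep/cut pipeline works against the output shown above but breaks if the JSON layout ever changes; parsing the file as JSON is more robust. A sketch using python3, demonstrated against a stand-in file with the same shape so it can be run anywhere; in practice, point it at openbao-init.json:

```shell
# Stand-in file with the same shape as openbao-init.json (Step 7 output)
printf '%s\n' '{"root_token": "s.example", "recovery_keys_b64": []}' > sample-init.json

# Extract the token by parsing JSON instead of scraping text
BAO_TOKEN=$(python3 -c 'import json, sys; print(json.load(open(sys.argv[1]))["root_token"])' sample-init.json)
export BAO_TOKEN
echo "$BAO_TOKEN"
```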
Confirm the CLI can reach OpenBao over the Route
bao status
List secrets engines; confirms authentication works
bao secrets list
Check the peers:
bao operator raft list-peers
Step 9: Enable the Audit Backend
Audit logging records every request and response to OpenBao:
- Who accessed it: the token, identity, and source IP
- What they accessed: the secret path and operation (read, write, delete)
- When it happened: a precise timestamp for every request and response
This is non-negotiable for any production deployment and is how you detect compromise or accidental secret exposure.
Sensitive values like tokens and secret paths are HMAC-hashed in the audit log, so the actual values are never exposed.
Running bao audit enable via CLI or API fails with:
cannot enable audit device via API; use declarative, config-based audit device management instead

This is a deliberate security change after CVE-2025-6000 (HCSEC-2025-14). The correct approach is to define audit devices declaratively in the server configuration using the audit stanza, applied on startup and SIGHUP.
While the OpenBao Helm chart includes a configurable auditStorage option that provisions a persistent volume for audit logs, we will configure our file audit device with file_path set to stdout so we can process the logs with a cluster-level log collector such as Loki or Elasticsearch. With this approach, audit logs flow into whatever log aggregation you already have, with no PVC sizing concerns and no log-rotation complexity. If you choose this approach, configure the audit device with a prefix so your log collector can separate audit entries from OpenBao’s operational server logs, since both share the same stdout stream.
If you choose to use Helm’s auditStorage PVC instead, be aware that:
- Log rotation is mandatory. A full PVC causes OpenBao to stop serving requests. It treats a non-writable audit device as a fatal condition. There is no built-in rotation in OpenBao; you need a sidecar container running logrotate sending SIGHUP to the OpenBao process after each rotation.
- PVC sizing has no fixed formula. It depends on your secrets operation volume, average log entry size, and compliance retention requirements. SOC 2 and PCI-DSS typically require at least 12 months of audit log retention. You must monitor PVC usage proactively.
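To make the scale concrete, here is a back-of-envelope calculation with assumed numbers (illustrative, not measurements): at roughly 100 audit entries per second and about 1 KiB per entry, a year of retention is on the order of 3 TiB:

```shell
# 100 entries/sec x 1024 bytes/entry x 86400 sec/day x 365 days, in whole GiB
python3 -c 'print(100 * 1024 * 86400 * 365 // 2**30, "GiB per year")'
```

Plug in your own numbers; note that audit devices log both a request entry and a response entry, so the entry rate is roughly twice your request rate.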
To enable auditing, add the audit stanza to the server.ha.config HCL block in openbao-values.yaml:
vim openbao-values.yaml
...
# Declarative audit device, stdout for Kubernetes-native log collection.
# Audit entries are written to container stdout and collected by the
# cluster logging stack (Loki, Elasticsearch, etc.).
# prefix="AUDIT:" separates audit entries from OpenBao operational logs.
# Source: https://openbao.org/docs/configuration/audit/
# https://developer.hashicorp.com/vault/docs/audit/best-practices
# NOTE: As of OpenBao v2.3.2, audit log prefixing requires
# `allow_audit_log_prefixing = true` in the server configuration block
# before an audit device with a prefix can be enabled.
allow_audit_log_prefixing = true
audit "file" "file-audit" {
options {
file_path = "stdout"
prefix = "AUDIT:"
}
}
# Register with Kubernetes service discovery — enables the
# active/standby labels on pods that OpenShift routes use
service_registration "kubernetes" {}
Save and exit the file.
Apply the changes via helm upgrade:
helm upgrade openbao openbao/openbao \
--namespace openbao \
--version 0.26.2 \
--values openbao-values.yaml
and restart the pods one at a time. Remember, the StatefulSet uses the OnDelete update strategy instead of the default RollingUpdate, so OpenShift will not automatically restart pods when their configuration changes; each pod only picks up the new configuration when you delete it.
Since OpenBao (like Vault) uses a Raft consensus cluster, if you deleted all three pods at once, you’d lose quorum and the cluster would become unavailable. One-at-a-time pod deletion keeps the Raft cluster healthy the whole time.
oc delete pod openbao-0 -n openbao
Wait for 1/1 Running, then:
oc delete pod openbao-1 -n openbao
Wait for 1/1 Running, then:
oc delete pod openbao-2 -n openbao
At the end, all pods should be auto-unsealed and running:
NAME READY STATUS RESTARTS AGE
openbao-0 1/1 Running 0 2m47s
openbao-1 1/1 Running 0 2m
openbao-2 1/1 Running 0 36s
Verify the audit device is active:
bao audit list
Path Type Description
---- ---- -----------
file-audit/ file n/a
Verify audit entries are appearing in pod logs:
oc logs openbao-0 -n openbao | grep "^AUDIT:" | tail -5
Sample logs:
AUDIT:{"time":"2026-04-06T08:31:34.436630306Z","type":"request","auth":{"token_type":"default"},"request":{"id":"b26e884d-3a20-ab9b-dc5e-4452d1119d03","operation":"update","namespace":{"id":"root"},"path":"sys/audit/test"}}
AUDIT:{"time":"2026-04-06T08:36:08.751334741Z","type":"request","auth":{"token_type":"default"},"request":{"id":"47f4beee-d6ef-a499-e597-ec32dcaf61a8","operation":"read","mount_point":"sys/","mount_type":"system","mount_running_version":"v2.5.2+builtin.bao","mount_class":"secret","namespace":{"id":"root"},"path":"sys/audit","remote_address":"127.0.0.1","remote_port":35984},"error":"permission denied"}
AUDIT:{"time":"2026-04-06T08:36:08.753058489Z","type":"response","auth":{"token_type":"default"},"request":{"id":"47f4beee-d6ef-a499-e597-ec32dcaf61a8","operation":"read","mount_point":"sys/","mount_type":"system","mount_running_version":"v2.5.2+builtin.bao","mount_class":"secret","namespace":{"id":"root"},"path":"sys/audit","remote_address":"127.0.0.1","remote_port":35984},"response":{"mount_point":"sys/","mount_type":"system","mount_running_plugin_version":"v2.5.2+builtin.bao","mount_class":"secret","data":{"error":"hmac-sha256:bfb8eba5f854dc5e0e9909599778be02d04029c7b2305da1bfdb86de979e4524"}},"error":"1 error occurred:\n\t* permission denied\n\n"}
AUDIT:{"time":"2026-04-06T08:37:53.838343087Z","type":"request","auth":{"client_token":"hmac-sha256:30b8f02499e8659aab8f8e7d41e6e7ad12cca88050cdbd572f2b165a5793fb73","accessor":"hmac-sha256:b1c62331f1381ffef592769e89294200688a2795e6885e8f0d12534891da9312","display_name":"root","policies":["root"],"token_policies":["root"],"policy_results":{"allowed":true,"granting_policies":[{"name":"root","namespace_id":"root","type":"acl"}]},"token_type":"service","token_issue_time":"2026-04-05T16:00:32Z"},"request":{"id":"0560b4ec-a1e5-ab39-2e3a-400bf5e7fca4","client_id":"0DHqvq2D77kL2/JTPSZkTMJbkFVmUu0TzMi0jiXcFy8=","operation":"read","mount_point":"sys/","mount_type":"system","mount_running_version":"v2.5.2+builtin.bao","mount_class":"secret","client_token":"hmac-sha256:30b8f02499e8659aab8f8e7d41e6e7ad12cca88050cdbd572f2b165a5793fb73","client_token_accessor":"hmac-sha256:b1c62331f1381ffef592769e89294200688a2795e6885e8f0d12534891da9312","namespace":{"id":"root"},"path":"sys/audit","remote_address":"127.0.0.1","remote_port":54802}}
AUDIT:{"time":"2026-04-06T08:37:53.838615691Z","type":"response","auth":{"client_token":"hmac-sha256:30b8f02499e8659aab8f8e7d41e6e7ad12cca88050cdbd572f2b165a5793fb73","accessor":"hmac-sha256:b1c62331f1381ffef592769e89294200688a2795e6885e8f0d12534891da9312","display_name":"root","policies":["root"],"token_policies":["root"],"policy_results":{"allowed":true,"granting_policies":[{"name":"root","namespace_id":"root","type":"acl"}]},"token_type":"service","token_issue_time":"2026-04-05T16:00:32Z"},"request":{"id":"0560b4ec-a1e5-ab39-2e3a-400bf5e7fca4","client_id":"0DHqvq2D77kL2/JTPSZkTMJbkFVmUu0TzMi0jiXcFy8=","operation":"read","mount_point":"sys/","mount_type":"system","mount_accessor":"system_e1f168c8","mount_running_version":"v2.5.2+builtin.bao","mount_class":"secret","client_token":"hmac-sha256:30b8f02499e8659aab8f8e7d41e6e7ad12cca88050cdbd572f2b165a5793fb73","client_token_accessor":"hmac-sha256:b1c62331f1381ffef592769e89294200688a2795e6885e8f0d12534891da9312","namespace":{"id":"root"},"path":"sys/audit","remote_address":"127.0.0.1","remote_port":54802},"response":{"mount_point":"sys/","mount_type":"system","mount_accessor":"system_e1f168c8","mount_running_plugin_version":"v2.5.2+builtin.bao","mount_class":"secret","data":{"file-audit/":{"description":"hmac-sha256:ae571d631ca1cd3ddec023505935a0cddd9285e68037b2417e620da63f38e9bb","local":false,"options":{"file_path":"hmac-sha256:8f2430830b5e41d707f9be77395a6acc7b2b57389e58bd6744df1e1deb11eac3","prefix":"hmac-sha256:39bcaa8224c20565832b1a51d88c471f175678d953c35099f14fd98b2c7dc39e"},"path":"hmac-sha256:20f1af9f9f115c20f8451e25ff8c78265f400921f8125e0443c93a5fe2828c13","type":"hmac-sha256:07e4734089b98b9445e9f6759aee828a380347c0e6e191231890ce6b891848f8"}}}}
From the sample logs, the 08:37:53 entry confirms all three:
- Who: the root token (policies: ["root"])
- What: read on sys/audit
- When: 2026-04-06T08:37:53
Step 10: Lock Down RBAC
The default root token should not be used for day-to-day operations. Instead, create a dedicated policy and token for administrative use.
A policy defines what an identity is allowed to do, which paths they can access and what operations they can perform (create, read, update, delete, list, sudo). For example, an admin policy that grants full access to all paths:
Note: you can exec into an OpenBao pod and run the bao CLI commands there, or run them directly on your bastion if you have installed the bao CLI and configured the endpoint and a token for login.
- Inline via CLI (stdin):
bao policy write admin - <<'EOF'
path "*" {
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
EOF

- From an HCL file. Sample configuration:

cat admin-policy.hcl
path "*" {
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}

Then create the policy from the file:

bao policy write admin admin-policy.hcl

- Or via the API:

curl -X PUT \
  -H "X-Vault-Token: $BAO_TOKEN" \
  --cacert $BAO_CACERT \
  -d '{"policy":"path \"*\" { capabilities = [\"create\",\"read\",\"update\",\"delete\",\"list\",\"sudo\"] }"}' \
  $BAO_ADDR/v1/sys/policies/acl/admin
You can also restrict access to the unseal key Secret at the OCP level by creating a Role that grants only the openbao ServiceAccount permission to read it.
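A minimal sketch of such a Role and RoleBinding, assuming the openbao ServiceAccount that the Helm chart creates (the Role name is illustrative):

```yaml
# Grants the openbao ServiceAccount read access to the unseal-key Secret only
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: openbao-unseal-key-reader
  namespace: openbao
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["openbao-unseal-key"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: openbao-unseal-key-reader
  namespace: openbao
subjects:
  - kind: ServiceAccount
    name: openbao
    namespace: openbao
roleRef:
  kind: Role
  name: openbao-unseal-key-reader
  apiGroup: rbac.authorization.k8s.io
```

Keep in mind that RBAC is additive: this pair grants the openbao ServiceAccount access, but the actual lockdown comes from auditing the namespace to ensure no other, broader RoleBindings grant secrets access to other subjects.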
Verify Auto-Unseal Works: The Restart Test
This is the test that proves everything works. Restart all three pods and confirm they come back sealed=false without any manual intervention.
Delete all pods. Kubernetes/OCP will recreate them from the StatefulSet:
oc delete pods -n openbao -l app.kubernetes.io/name=openbao
Watch them come back:
oc get pods -n openbao -w
Once the pods are 1/1 Running, check seal status:
for pod in openbao-0 openbao-1 openbao-2; do
echo "=== $pod ==="
oc exec -n openbao $pod -- bao status 2>/dev/null | grep -E "Sealed|HA Mode"
done
Expected:
=== openbao-0 ===
Sealed false
HA Mode active
=== openbao-1 ===
Sealed false
HA Mode standby
=== openbao-2 ===
Sealed false
HA Mode standby
All three nodes unsealed automatically. No human intervention. That is the goal.
Auto-Unseal Key Rotation
The static seal mechanism supports key rotation using the previous_key and previous_key_id fields.
When you need to rotate the unseal key:
Generate a new 32-byte key
openssl rand -out ./openbao-unseal.key 32
Update the OCP Secret to include both keys, the new key as unseal.key and the old key as unseal-prev.key (here ./unseal.key is assumed to be the original key file from the initial setup; adjust the paths to match your files):
oc create secret generic openbao-unseal-key \
  --from-file=unseal.key=./openbao-unseal.key \
  --from-file=unseal-prev.key=./unseal.key \
  -n openbao \
  --dry-run=client -o yaml | oc apply -f -
Update the seal "static" stanza in your openbao-values.yaml to include both current_key_id/current_key (new) and previous_key_id/previous_key (old):
...
seal "static" {
current_key_id = "20250505-1"
current_key = "file:///openbao/unseal/unseal.key"
previous_key_id = "20250404-1"
previous_key = "file:///openbao/unseal/unseal-prev.key"
}
...
Save the values file and then helm upgrade.
helm upgrade openbao openbao/openbao \
--namespace openbao \
--version 0.26.2 \
--values openbao-values.yaml
Then delete the pods one by one until they have all restarted.
Once all pods have restarted and come up healthy with the new key:
- Remove the previous_key and previous_key_id fields from the values file.
- Run helm upgrade again.
- Delete the pods one at a time so they restart with the updated configuration.
- Update the OCP Secret to remove the old key.
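After that cleanup, the seal stanza in the values file is back to single-key form, now describing only the rotated key (IDs as used in this example):

```hcl
seal "static" {
  current_key_id = "20250505-1"
  current_key    = "file:///openbao/unseal/unseal.key"
}
```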
What Comes Next
This post covered deploying OpenBao with HA Raft, TLS, and static key auto-unseal. This is the foundation. The next posts in this series will build on it:
- Configuring Kubernetes Auth in OpenBao, allowing OCP workloads to authenticate using their ServiceAccount tokens, so no static credentials are needed in pods
- Integrating OpenBao with GitLab CI pipelines, replacing masked CI/CD variables with dynamic, short-lived tokens. At pipeline runtime, the pipeline authenticates to OpenBao using AppRole (Role ID + Secret ID), receives a short-lived token, and uses it to fetch the actual credentials it needs.
- External Secrets Operator + OpenBao, syncing secrets from OpenBao into Kubernetes Secrets so applications consume them natively without any OpenBao SDK
