How do you run Apache Airflow on Kubernetes in 2026?
Quick Answer: As of April 2026, the standard way to run Apache Airflow on Kubernetes is to install the official Apache Airflow Helm chart (version 1.x) using the KubernetesExecutor or CeleryKubernetesExecutor. The chart provisions the scheduler, webserver, and triggerer; tasks run as ephemeral pods controlled by the executor.
Running Apache Airflow on Kubernetes
Apache Airflow on Kubernetes is the most common deployment pattern for self-hosted production installs as of April 2026. The official Apache Airflow Helm chart is the supported installation path.
Step 1 — Install the Helm Chart
Add the Apache Airflow Helm repository and install with values that select the executor:
helm repo add apache-airflow https://airflow.apache.org
helm install airflow apache-airflow/airflow \
--namespace airflow --create-namespace \
--set executor=KubernetesExecutor
The chart deploys the scheduler, webserver, triggerer, and an in-cluster Postgres metadata database suitable for testing; for production, point the chart at an external database instead.
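For repeatable installs, the executor choice can live in a values file instead of --set flags. A minimal sketch (the filename is arbitrary; check the key against your chart version's values schema):

```yaml
# values.yaml -- minimal sketch; verify against your chart version
executor: KubernetesExecutor
```

Install with the file: helm install airflow apache-airflow/airflow --namespace airflow --create-namespace -f values.yaml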
Step 2 — Choose an Executor
- KubernetesExecutor — Each task runs in its own pod. Best for variable workloads and cost control.
- CeleryKubernetesExecutor — Mixed mode: tasks run on persistent Celery workers by default, while tasks routed to the kubernetes queue spawn their own pods. Best for high-throughput pipelines with a long tail of heavyweight tasks.
- CeleryExecutor — Persistent workers only. Best for stable, high-volume DAG runs.
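For the mixed mode, the chart also sizes the persistent Celery workers. A sketch (key names follow the chart's values schema; the worker count is illustrative):

```yaml
executor: CeleryKubernetesExecutor
workers:
  replicas: 3   # persistent Celery workers serving the default queue
# Tasks routed to the "kubernetes" queue bypass these workers
# and run as ephemeral pods instead.
```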
Step 3 — Mount DAGs
Three patterns are common:
- Git-sync sidecar — Pulls DAGs from a Git repo into a shared volume
- Persistent Volume Claim — DAGs are pushed to a shared PVC mounted by all components (typically requires a ReadWriteMany access mode)
- Baked image — DAGs included in a custom Airflow image (most reproducible)
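The git-sync pattern above maps to a small block of chart values; a sketch assuming a hypothetical DAG repository:

```yaml
dags:
  gitSync:
    enabled: true
    repo: https://github.com/example/airflow-dags.git  # hypothetical repo
    branch: main
    subPath: dags   # path to DAG files within the repo
```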
Step 4 — Production Hardening
Use an external Postgres (RDS, Cloud SQL), enable RBAC, configure resource limits per task pod, and route logs to S3, GCS, or Elasticsearch via the chart's config.logging values.
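The hardening steps above translate to chart values along these lines; a sketch with hypothetical hostnames and bucket names (in practice, supply credentials via Kubernetes Secrets rather than inline):

```yaml
postgresql:
  enabled: false                  # disable the bundled test database
data:
  metadataConnection:
    user: airflow
    pass: airflow                 # use a Secret in practice
    protocol: postgresql
    host: db.example.internal     # hypothetical external Postgres
    port: 5432
    db: airflow
workers:
  resources:                      # applied to task pods under KubernetesExecutor
    requests: {cpu: 500m, memory: 1Gi}
    limits: {cpu: "1", memory: 2Gi}
config:
  logging:
    remote_logging: "True"
    remote_base_log_folder: s3://example-airflow-logs  # hypothetical bucket
    remote_log_conn_id: aws_default
```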