How to Deploy Temporal Self-Hosted on a Single Server in 2026
A step-by-step tutorial for self-hosting the open-source Temporal Server on a single Linux server using Docker Compose. Covers cluster bring-up, namespace registration, worker deployment, security hardening, and scaling caveats. Suitable for development environments and low-volume production workloads up to approximately 100 workflow executions per second.
Overview
Temporal is a durable workflow engine that orchestrates long-running, fault-tolerant business processes. While Temporal Cloud provides a fully managed offering, many teams prefer to self-host the open-source Temporal Server for cost control, data residency, or compliance reasons. This tutorial walks through deploying Temporal on a single Linux server using Docker Compose, suitable for development environments and small production workloads.
Prerequisites
- A Linux server with at least 4 GB RAM and 2 vCPU (Ubuntu 22.04 LTS recommended)
- Docker Engine 24.0+ and Docker Compose v2 installed
- A non-root user with sudo and Docker group membership
- Inbound firewall rules permitting access on port 7233 (gRPC) and 8080 (Web UI), restricted to trusted IP ranges
Architecture
A minimal self-hosted Temporal stack consists of four containers:
- temporal — The Temporal Server (history, matching, frontend, worker services bundled in auto-setup mode)
- postgresql — Persistence backend for workflow state (alternative: MySQL or Cassandra)
- temporal-admin-tools — tctl and temporal CLI utilities
- temporal-ui — Web UI for inspecting workflow executions
Step 1: Create the docker-compose.yml
Create a directory /opt/temporal and inside it a docker-compose.yml:
version: "3.5"
services:
  postgresql:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: temporal
      POSTGRES_USER: temporal
    volumes:
      - temporal-pg:/var/lib/postgresql/data
    restart: unless-stopped
  temporal:
    image: temporalio/auto-setup:1.24
    environment:
      DB: postgres12
      DB_PORT: 5432
      POSTGRES_USER: temporal
      POSTGRES_PWD: temporal
      POSTGRES_SEEDS: postgresql
    depends_on:
      - postgresql
    ports:
      - "7233:7233"
    restart: unless-stopped
  temporal-admin-tools:
    image: temporalio/admin-tools:1.24
    depends_on:
      - temporal
    stdin_open: true
    tty: true
  temporal-ui:
    image: temporalio/ui:2.27.0
    environment:
      TEMPORAL_ADDRESS: temporal:7233
      TEMPORAL_CORS_ORIGINS: http://localhost:3000
    depends_on:
      - temporal
    ports:
      - "8080:8080"
    restart: unless-stopped
volumes:
  temporal-pg:
Pin image tags rather than using latest so that future restarts do not silently upgrade the cluster.
Step 2: Start the Cluster
cd /opt/temporal
docker compose up -d
docker compose logs -f temporal
Wait until log output stabilises with messages such as Frontend service started and no further errors. Initial schema setup typically takes 30-60 seconds.
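In scripted deployments it is more reliable to wait for the frontend port than to eyeball the logs. A minimal stdlib sketch (the throwaway local listener below only makes the demo self-contained; in practice point wait_for_port at your server and port 7233):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 90.0) -> bool:
    """Poll until a TCP port accepts connections; return False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)  # schema setup can take 30-60 s; keep polling
    return False

# Demo against a throwaway local listener; replace with ("localhost", 7233).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
ready = wait_for_port("127.0.0.1", srv.getsockname()[1], timeout=5)
print("ready:", ready)
srv.close()
```

Note that an open port only proves the gRPC listener is up; the cluster health check in the next step is still worth running.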
Step 3: Verify the Cluster
From the host:
docker compose exec temporal-admin-tools temporal operator cluster health
A healthy cluster returns SERVING. Open http://<server-ip>:8080 in a browser to confirm the Web UI loads. Restrict this port to a VPN or office IP range; the UI ships without authentication by default.
Step 4: Register a Namespace
Each application runs inside a namespace. Create one for the first application:
docker compose exec temporal-admin-tools \
temporal operator namespace create --namespace=default --retention=7d
Retention controls how long completed workflow histories are kept. Seven days is appropriate for most non-regulated workloads.
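Scripts that create namespaces programmatically may want to sanity-check the retention value first. A small hypothetical helper (parse_retention is not part of any Temporal SDK) that converts flag-style durations such as 7d into a timedelta:

```python
from datetime import timedelta

def parse_retention(value: str) -> timedelta:
    """Parse a duration flag like '7d', '24h', or '30m' into a timedelta."""
    units = {"d": "days", "h": "hours", "m": "minutes"}
    suffix = value[-1]
    if suffix not in units:
        raise ValueError(f"unsupported retention unit: {value!r}")
    return timedelta(**{units[suffix]: int(value[:-1])})

print(parse_retention("7d"))  # the tutorial's --retention=7d, i.e. 7 days
```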
Step 5: Deploy a Worker
Workers are application processes that poll Temporal for tasks. They run alongside the application code in any supported SDK (Go, Java, TypeScript, Python, .NET, PHP). A minimal Python worker:
import asyncio

from temporalio import activity
from temporalio.client import Client
from temporalio.worker import Worker

@activity.defn
async def say_hello(name: str) -> str:
    return f"Hello, {name}!"

async def main():
    client = await Client.connect("temporal:7233", namespace="default")
    # A worker must register at least one workflow or activity.
    worker = Worker(client, task_queue="my-task-queue", activities=[say_hello])
    await worker.run()

asyncio.run(main())
Run the worker as a separate container or systemd unit. Workers connect outbound to port 7233; they do not need to be exposed externally.
Security Considerations
- Authentication: The default Temporal Server has no auth. For production, configure mTLS using the --tls-cert-path and --tls-key-path flags or front the gRPC port with an authenticating proxy.
- Web UI: The UI is unauthenticated by default. Restrict access via VPN, place behind an authenticating reverse proxy (Caddy with basic auth, Cloudflare Access), or build a custom OIDC integration.
- Database backups: Temporal stores all workflow state in PostgreSQL. Daily pg_dump backups (or WAL archiving) are essential; losing the database loses all workflow history.
- Encryption: Use the data converter pattern in the SDK to encrypt sensitive payloads before they reach the server.
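The backup point above is easy to automate from cron. A sketch that only builds the pg_dump command rather than running it (the container, user, and database names are assumptions matching this compose file, and the dump lands inside the container, so it still needs a docker cp or a mounted volume afterwards):

```python
import datetime
import shlex

def pg_dump_command(container: str = "temporal-postgresql-1",
                    user: str = "temporal",
                    db: str = "temporal") -> list[str]:
    """Build (but do not run) a docker-exec pg_dump command with a timestamped file."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    return [
        "docker", "exec", container,
        "pg_dump", "-U", user, "-d", db,
        "-f", f"/tmp/temporal-{stamp}.sql",
    ]

cmd = pg_dump_command()
print(shlex.join(cmd))  # paste into cron, or run via subprocess.run(cmd, check=True)
```

Plain nightly dumps are the floor; WAL archiving (as in the editor's note below) gives point-in-time recovery.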
Scaling Caveats
A single-server deployment is suitable for development and workloads under approximately 100 workflow executions per second. Production deployments at higher throughput should:
- Run separate frontend, history, matching, and worker services rather than the bundled auto-setup image
- Use a managed PostgreSQL cluster (RDS, Cloud SQL) or migrate to Cassandra for horizontal write scaling
- Place workers in a separate cluster from the Temporal Server cluster
- Monitor the service_pending_requests, persistence_latency, and workflow_task_schedule_to_start_latency Prometheus metrics
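Once a Prometheus listener is enabled in the server configuration, those metrics arrive in Prometheus text exposition format. A stdlib sketch of grouping scraped samples by metric name; the sample payload is illustrative, not real server output:

```python
def parse_metrics(text: str) -> dict[str, list[float]]:
    """Group Prometheus text-format samples by metric name."""
    out: dict[str, list[float]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments, HELP/TYPE lines, and blanks
        name_and_labels, _, value = line.rpartition(" ")
        name = name_and_labels.split("{", 1)[0]
        out.setdefault(name, []).append(float(value))
    return out

# Illustrative sample, not real Temporal output.
sample = """\
# TYPE service_pending_requests gauge
service_pending_requests{service="frontend"} 3
service_pending_requests{service="history"} 1
"""
metrics = parse_metrics(sample)
print(max(metrics["service_pending_requests"]))
```

In practice a Prometheus server with alerting rules does this job; the sketch only shows what the scraped data looks like.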
Common Errors
- connection refused on port 7233 — server not yet ready or DB schema setup still running
- namespace not found — register the namespace before connecting workers
- task queue not configured — workers must be running and polling the same task queue the workflow targets
Operating Cost
A self-hosted Temporal cluster on a single Hetzner CCX13 (4 vCPU, 16 GB RAM, approximately €15/month as of 2026) handles low-volume production workloads. Compared to Temporal Cloud (starting at $200/month minimum), self-hosting is cheaper at small scale; at high scale the operational overhead typically outweighs savings.
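Whether self-hosting stays cheaper depends mostly on engineer time, not the server bill. A back-of-envelope sketch using figures from this article (the €80/hour rate is an assumption to replace with your own):

```python
def monthly_cost(server_eur: float, ops_hours: float, hourly_rate_eur: float) -> float:
    """Total monthly cost of self-hosting: infrastructure plus operations time."""
    return server_eur + ops_hours * hourly_rate_eur

# €15 server from this section; 4-6 ops hours/month from the editor's note below... 
# the hourly rate is an assumption.
low = monthly_cost(15, 4, 80)
high = monthly_cost(15, 6, 80)
print(f"self-hosted: roughly €{low:.0f}-€{high:.0f}/month, vs Temporal Cloud from $200/month")
```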
Editor's Note: We deployed this stack at ShadowGen for an internal automation platform handling roughly 8,000 workflow executions per day. Hardware: a single Hetzner CCX23 (8 vCPU, 32 GB RAM) at approximately €30/month. Total deployment time: 2 hours including TLS configuration via a Caddy reverse proxy. The biggest gotcha was DB backup discipline — the first month we ran without WAL archiving, and a single docker compose down -v would have wiped 30 days of workflow history. Since adding pgBackRest with hourly archives, recovery testing has been clean. Caveat: at this scale the operational cost (monitoring, upgrades, schema migrations during minor version bumps) is roughly 4-6 engineer-hours per month, which exceeds the licence savings versus Temporal Cloud Starter for some teams.