Kubernetes Basics
Quick start guide for deploying Elsa Workflows to Kubernetes with PostgreSQL persistence, including configuration, troubleshooting, and production best practices.
This guide provides a practical introduction to deploying Elsa Workflows on Kubernetes with PostgreSQL persistence. It focuses on common deployment patterns and configuration challenges, helping you move from SQLite (development) to PostgreSQL (production).
Overview
This guide covers:
Using Kubernetes manifests from the elsa-core repository
Switching from SQLite to PostgreSQL for production
Common configuration pitfalls and how to avoid them
Troubleshooting database connectivity and persistence
Prerequisites
Kubernetes cluster (v1.24+) - Minikube, k3s, or cloud provider (EKS, AKS, GKE)
kubectl CLI configured to access your cluster
Basic understanding of Kubernetes concepts (Pods, Services, ConfigMaps, Secrets)
PostgreSQL database (managed or self-hosted)
Understanding the elsa-core Kubernetes Manifests
The elsa-core repository includes sample Kubernetes manifests in the scripts/ directory (if available). These manifests provide a starting point for deploying Elsa Server and Studio to Kubernetes.
Typical Manifest Structure
Note: The exact structure may vary by version. Always refer to the latest elsa-core repository for current manifest examples. If manifests are not present, use the examples in this guide as a starting point.
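A typical layout might look like this (illustrative only, assembled from the files described below):

```text
scripts/
├── elsa-server-deployment.yaml
├── elsa-studio-deployment.yaml
├── services.yaml
├── configmap.yaml
└── secrets.yaml
```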
What the Manifests Provide
elsa-server-deployment.yaml: Deployment for Elsa Server (workflow runtime and API)
elsa-studio-deployment.yaml: Deployment for Elsa Studio (designer UI)
services.yaml: ClusterIP or LoadBalancer services for external access
configmap.yaml: Application configuration (connection strings, feature flags)
secrets.yaml: Sensitive data (database passwords, API keys)
Default Configuration: SQLite
By default, Elsa Server deployments often use SQLite for simplicity:
Default ConfigMap:
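A minimal sketch of such a ConfigMap (the resource name and key are assumptions, not taken from the repository):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: elsa-server-config
data:
  # SQLite: a local file inside the pod's filesystem
  ConnectionStrings__Default: "Data Source=/app/elsa.db;Cache=Shared"
```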
Why SQLite is Used:
Zero configuration required
Works out of the box
Suitable for demos and development
Why SQLite is NOT Suitable for Production:
Single-file database: Does not support multiple pods (no horizontal scaling)
No concurrent writes: Workflow execution errors under load
Data loss risk: Data is lost if the pod restarts (unless using PersistentVolume)
Limited performance: Not optimized for high-throughput scenarios
Never use SQLite in production Kubernetes deployments. Always use PostgreSQL, SQL Server, or MySQL for production workloads.
Switching to PostgreSQL
To use PostgreSQL in Kubernetes, you need to:
Deploy or connect to a PostgreSQL database
Update connection strings and environment variables
Configure Elsa modules to use the PostgreSQL provider
Apply database migrations
Step 1: Deploy PostgreSQL (Optional)
If you don't have an external PostgreSQL instance, deploy one in Kubernetes:
postgres-deployment.yaml:
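A sketch of a single-instance PostgreSQL suitable for testing. For production, prefer a managed database or a PostgreSQL operator:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: elsa
            - name: POSTGRES_USER
              value: elsa
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: POSTGRES_PASSWORD
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
```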
postgres-secret.yaml:
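An illustrative Secret. In practice, create it with `kubectl create secret` rather than committing plaintext values to source control:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
stringData:
  POSTGRES_PASSWORD: change-me
```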
Apply the manifests:
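```bash
kubectl apply -f postgres-secret.yaml
kubectl apply -f postgres-deployment.yaml

# Wait for the database pod to become ready
kubectl rollout status deployment/postgres
```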
Step 2: Update Elsa Server Configuration
Changing the connection string alone is not enough. You must also configure the persistence provider in the Elsa modules.
Option A: Environment Variables
Update the Elsa Server deployment to use PostgreSQL:
elsa-server-deployment.yaml:
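A sketch of the deployment. The image name is a placeholder, and the `DatabaseProvider` setting assumes your server image selects its persistence provider from configuration — adjust the keys to match your image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elsa-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elsa-server
  template:
    metadata:
      labels:
        app: elsa-server
    spec:
      containers:
        - name: elsa-server
          image: your-registry/elsa-server:latest   # placeholder
          ports:
            - containerPort: 8080
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: Production
            - name: DatabaseProvider   # assumed setting name
              value: PostgreSql
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: POSTGRES_PASSWORD
            # Kubernetes expands $(POSTGRES_PASSWORD) only because the
            # variable is declared earlier in this env list.
            - name: ConnectionStrings__Default
              value: "Host=postgres;Port=5432;Database=elsa;Username=elsa;Password=$(POSTGRES_PASSWORD)"
```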
Option B: ConfigMap and appsettings.json
Mount a ConfigMap as appsettings.Production.json:
elsa-configmap.yaml:
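A sketch of the ConfigMap. Avoid embedding real passwords in a ConfigMap — keep them in a Secret and combine them at the application level where possible:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: elsa-server-config
data:
  appsettings.Production.json: |
    {
      "ConnectionStrings": {
        "Default": "Host=postgres;Port=5432;Database=elsa;Username=elsa;Password=<from-secret>"
      }
    }
```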
Deployment with ConfigMap:
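The relevant fragment of the server Deployment, mounting the file into the app's working directory (assumed to be `/app` here) as `appsettings.Production.json`:

```yaml
spec:
  template:
    spec:
      containers:
        - name: elsa-server
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: Production
          volumeMounts:
            - name: config
              mountPath: /app/appsettings.Production.json
              subPath: appsettings.Production.json
      volumes:
        - name: config
          configMap:
            name: elsa-server-config
```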
Step 3: Configure Persistence in Program.cs
If you're building a custom Elsa Server image, configure PostgreSQL persistence in Program.cs:
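A minimal sketch based on the Elsa 3 module API; namespaces and method names may differ between versions. It requires the Elsa, Elsa.EntityFrameworkCore, and Elsa.EntityFrameworkCore.PostgreSql packages:

```csharp
using Elsa.EntityFrameworkCore.Extensions;
using Elsa.EntityFrameworkCore.Modules.Management;
using Elsa.EntityFrameworkCore.Modules.Runtime;
using Elsa.Extensions;

var builder = WebApplication.CreateBuilder(args);

// One connection string, read from configuration (env var or appsettings)
var connectionString = builder.Configuration.GetConnectionString("Default")
    ?? throw new InvalidOperationException("Connection string 'Default' not found.");

builder.Services.AddElsa(elsa =>
{
    // Workflow definitions and instances -> PostgreSQL
    elsa.UseWorkflowManagement(management =>
        management.UseEntityFrameworkCore(ef => ef.UsePostgreSql(connectionString)));

    // Runtime state (bookmarks, triggers, execution log) -> PostgreSQL
    elsa.UseWorkflowRuntime(runtime =>
        runtime.UseEntityFrameworkCore(ef => ef.UsePostgreSql(connectionString)));

    elsa.UseWorkflowsApi();
});

var app = builder.Build();
app.UseWorkflowsApi();
app.Run();
```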
Why Changing Only the Connection String Isn't Enough
A common mistake is to update the connection string but forget to configure the persistence provider. This leads to:
Symptoms:
Elsa still creates an elsa.db file (SQLite)
Connection string is ignored
Data not persisted to PostgreSQL
Root Cause:
Elsa modules have default persistence providers built into the code. Simply changing the connection string doesn't change the provider. You must explicitly configure each module:
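For example, excerpted from the Program.cs sketch above:

```csharp
// Both modules must name the provider explicitly — the connection string
// alone does not select PostgreSQL.
elsa.UseWorkflowManagement(m => m.UseEntityFrameworkCore(ef => ef.UsePostgreSql(connectionString)));
elsa.UseWorkflowRuntime(r => r.UseEntityFrameworkCore(ef => ef.UsePostgreSql(connectionString)));
```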
Configuration Points:
Connection String: Specifies where to connect
Provider: Specifies how to connect (SQLite, PostgreSQL, SQL Server, etc.)
Module Configuration: Each module (Management, Runtime) needs its own provider configuration
All three must be aligned for PostgreSQL to work.
Running Database Migrations
Before Elsa Server can use PostgreSQL, the database schema must be created.
Option 1: Init Container
Use a Kubernetes init container to run migrations before the main app starts:
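A fragment of the server Deployment showing the idea. As the note below explains, this assumes the image ships the EF Core tooling, which most production images do not:

```yaml
spec:
  template:
    spec:
      initContainers:
        - name: run-migrations
          image: your-registry/elsa-server:latest   # placeholder
          command: ["dotnet", "ef", "database", "update"]
          env:
            - name: ConnectionStrings__Default
              valueFrom:
                secretKeyRef:
                  name: elsa-db-secret   # hypothetical secret
                  key: connection-string
      containers:
        - name: elsa-server
          image: your-registry/elsa-server:latest
```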
Important: The above init container example assumes the Elsa Server image includes the EF Core tooling (dotnet-ef). Most production images do not include this tool for security and size reasons.
To run migrations reliably, use a dedicated migration image that includes dotnet-ef, or build a custom image for this purpose. Alternatively, ensure your main image has the necessary tooling, though this is not recommended for production.
Option 2: Auto-Migration on Startup
Enable auto-migration in the application (simpler but not ideal for production):
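A sketch, placed inside the AddElsa callback from the earlier example. Recent Elsa 3 releases expose a RunMigrations flag on the EF Core persistence feature — verify against your version. `ELSA_AUTO_MIGRATE` is an assumed setting name, not a built-in:

```csharp
// Gate automatic schema migration behind a configuration value.
var autoMigrate = builder.Configuration.GetValue<bool>("ELSA_AUTO_MIGRATE");

elsa.UseWorkflowManagement(management =>
    management.UseEntityFrameworkCore(ef =>
    {
        ef.UsePostgreSql(connectionString);
        ef.RunMigrations = autoMigrate;   // apply pending migrations at startup
    }));
```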
Set the environment variable in the deployment:
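```yaml
env:
  - name: ELSA_AUTO_MIGRATE   # matches the assumed setting name above
    value: "true"
```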
Production Best Practice: Run migrations as a separate Job or init container, not on every pod startup. This prevents race conditions when multiple pods start simultaneously.
Option 3: Kubernetes Job
Create a one-time migration Job:
elsa-migration-job.yaml:
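A sketch of the Job, with the same tooling caveat noted below:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: elsa-db-migrate
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: your-registry/elsa-server:latest   # placeholder
          command: ["dotnet", "ef", "database", "update"]
          env:
            - name: ConnectionStrings__Default
              valueFrom:
                secretKeyRef:
                  name: elsa-db-secret
                  key: connection-string
```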
Important: As with the init container example, this Job assumes the Elsa Server image includes the EF Core tooling (dotnet-ef). Most production images do not, for security and size reasons — use a dedicated migration image or build a custom image for this purpose.
Run the job before deploying Elsa Server:
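```bash
kubectl apply -f elsa-migration-job.yaml
kubectl wait --for=condition=complete job/elsa-db-migrate --timeout=300s
kubectl apply -f elsa-server-deployment.yaml
```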
Troubleshooting
Problem: Elsa Still Uses SQLite
Symptoms:
elsa.db file created in the pod
PostgreSQL connection string appears in logs but isn't used
No tables created in PostgreSQL
Diagnosis:
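Quick checks from outside the pod. These examples assume a Deployment named `elsa-server` — adjust to your manifests:

```bash
# Look for a SQLite file inside the running pod (path assumes WORKDIR /app)
kubectl exec deploy/elsa-server -- ls -la /app

# Check which provider the app logged at startup
kubectl logs deploy/elsa-server | grep -i -E "sqlite|postgres"
```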
Fix:
Verify provider configuration in environment variables:
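```bash
kubectl exec deploy/elsa-server -- printenv | grep -i -E "provider|connection"
```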
Or ensure appsettings.Production.json is mounted correctly, and check that the PostgreSQL provider package is included in your Docker image:
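```bash
# Confirm the mounted settings file is what you expect
kubectl exec deploy/elsa-server -- cat /app/appsettings.Production.json

# In the project, confirm the provider package is referenced, e.g.
# <PackageReference Include="Elsa.EntityFrameworkCore.PostgreSql" Version="3.*" />
grep -r "Elsa.EntityFrameworkCore.PostgreSql" --include=*.csproj .
```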
Problem: Connection Refused or Timeout
Symptoms:
Connection attempts to PostgreSQL time out or are refused
Elsa Server pods crash-loop or log database connection errors at startup
Diagnosis:
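Assuming the PostgreSQL Service and credentials from Step 1:

```bash
# Does the Service exist and have endpoints?
kubectl get svc,endpoints postgres

# Can another pod reach it? (enter the password from the Secret when prompted)
kubectl run psql-test --rm -it --image=postgres:16 --restart=Never -- \
  psql -h postgres -U elsa -d elsa -c "SELECT 1;"
```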
Fix:
Verify PostgreSQL service name matches connection string:
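If the connection string says `Host=postgres`, a Service with exactly that name must exist in the same namespace:

```bash
kubectl get svc postgres   # must match Host= in the connection string
```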
Ensure PostgreSQL is ready before Elsa Server starts:
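One option is a small init container that blocks until the database accepts connections (a sketch; `pg_isready` ships with the postgres image):

```yaml
initContainers:
  - name: wait-for-postgres
    image: postgres:16
    command: ["sh", "-c", "until pg_isready -h postgres -p 5432; do sleep 2; done"]
```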
Check namespace - services in different namespaces require FQDN:
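```text
Host=postgres                                # Service in the same namespace
Host=postgres.database.svc.cluster.local     # Service "postgres" in namespace "database"
```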
Problem: Tables Not Created
Symptoms:
PostgreSQL connection succeeds
No error messages in logs
Queries fail: "relation 'Elsa_WorkflowDefinitions' does not exist"
Diagnosis:
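List the tables from inside the PostgreSQL pod (assuming the deployment and credentials from Step 1):

```bash
kubectl exec deploy/postgres -- psql -U elsa -d elsa -c "\dt"
```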
Fix:
Ensure migrations ran successfully:
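```bash
kubectl get jobs
kubectl logs job/elsa-db-migrate   # name from the migration Job above
```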
Manually run migrations if needed:
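For example, a one-off pod using a hypothetical migration image:

```bash
kubectl run elsa-migrate --rm -it --restart=Never \
  --image=your-registry/elsa-migrations:latest \
  --env='ConnectionStrings__Default=Host=postgres;Database=elsa;Username=elsa;Password=<password>' \
  -- dotnet ef database update
```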
Check migration history:
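EF Core records applied migrations in the __EFMigrationsHistory table:

```bash
kubectl exec deploy/postgres -- psql -U elsa -d elsa \
  -c 'SELECT * FROM "__EFMigrationsHistory";'
```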
Problem: Missing Environment Variables
Symptoms:
Connection string contains the literal $(POSTGRES_PASSWORD) instead of the actual password
Authentication failures
Diagnosis:
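Print the rendered value inside the pod:

```bash
kubectl exec deploy/elsa-server -- printenv ConnectionStrings__Default
```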
Fix:
Kubernetes expands $(VAR) references in an env value only when the referenced variable is declared earlier in the same container's env list, and never inside mounted files. Use one of these approaches:
Option 1: Reference secret directly in each field:
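```yaml
# Declare the password first, then reference it — Kubernetes only expands
# $(VAR) for variables defined earlier in the same env list.
env:
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-secret
        key: POSTGRES_PASSWORD
  - name: ConnectionStrings__Default
    value: "Host=postgres;Database=elsa;Username=elsa;Password=$(POSTGRES_PASSWORD)"
```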
Option 2: Build connection string in code:
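A sketch that assembles the connection string from discrete environment variables, avoiding any reliance on Kubernetes-side substitution. The variable names are assumptions:

```csharp
// Build the connection string safely with Npgsql's builder.
var csb = new Npgsql.NpgsqlConnectionStringBuilder
{
    Host = Environment.GetEnvironmentVariable("DB_HOST") ?? "postgres",
    Database = Environment.GetEnvironmentVariable("DB_NAME") ?? "elsa",
    Username = Environment.GetEnvironmentVariable("DB_USER") ?? "elsa",
    Password = Environment.GetEnvironmentVariable("DB_PASSWORD")
};
var connectionString = csb.ConnectionString;
```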
Problem: ConfigMap Not Mounted
Symptoms:
Settings in ConfigMap not applied
App uses default configuration
Diagnosis:
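```bash
# Is the volume mounted, and does the file exist where the app expects it?
kubectl describe pod -l app=elsa-server | grep -A 5 "Mounts:"
kubectl exec deploy/elsa-server -- ls -l /app/appsettings.Production.json
```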
Fix:
Ensure volume mount path is correct:
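```yaml
volumeMounts:
  - name: config
    mountPath: /app/appsettings.Production.json   # must match the app's working directory
    subPath: appsettings.Production.json
```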
Set ASPNETCORE_ENVIRONMENT to load the file (see the snippet below).
Verify the ConfigMap is in the same namespace as the deployment.
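```yaml
env:
  - name: ASPNETCORE_ENVIRONMENT
    value: Production   # the host then loads appsettings.Production.json
```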
Verifying PostgreSQL is Being Used
After deploying, confirm that Elsa is using PostgreSQL:
1. Check Logs
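Startup logs should reference the PostgreSQL host rather than an elsa.db file:

```bash
kubectl logs deploy/elsa-server | grep -i -E "postgres|npgsql|sqlite"
```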
2. Check Database Tables
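Elsa's tables (workflow definitions, instances, bookmarks, and so on) should now exist:

```bash
kubectl exec deploy/postgres -- psql -U elsa -d elsa -c "\dt"
```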
3. Create a Test Workflow
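Create and publish a simple workflow in Elsa Studio, then confirm a row appears. The table name below follows the error message quoted in the troubleshooting section; actual names vary by version:

```bash
kubectl exec deploy/postgres -- psql -U elsa -d elsa \
  -c 'SELECT COUNT(*) FROM "Elsa_WorkflowDefinitions";'
```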
Production Best Practices
1. Use Managed PostgreSQL
Amazon RDS for PostgreSQL: Automated backups, point-in-time recovery, Multi-AZ
Azure Database for PostgreSQL: High availability, automatic patching, geo-replication
Google Cloud SQL: Automated backups, read replicas, automatic failover
2. Separate Database User Permissions
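A sketch of a least-privilege application role (names and schema are placeholders):

```sql
-- Dedicated application role: data access only, no DDL rights
CREATE USER elsa_app WITH PASSWORD 'change-me';
GRANT CONNECT ON DATABASE elsa TO elsa_app;
GRANT USAGE ON SCHEMA public TO elsa_app;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO elsa_app;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO elsa_app;
```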
3. Use Connection Pooling
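Npgsql pools connections by default; tune it through connection string parameters, for example:

```text
Host=postgres;Database=elsa;Username=elsa;Password=...;Maximum Pool Size=50;Connection Idle Lifetime=300
```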
4. Enable TLS for Database Connections
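With Npgsql, TLS is controlled from the connection string; the exact certificate options depend on your server setup:

```text
Host=postgres.example.com;Database=elsa;Username=elsa;Password=...;SSL Mode=Require;Trust Server Certificate=false
```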
5. Implement Health Checks
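A sketch of liveness/readiness probes, assuming the app exposes an ASP.NET Core health endpoint at /health on port 8080 (the .NET 8+ container default):

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```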
6. Set Resource Limits
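For example (tune the values to your workload):

```yaml
resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi
```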
7. Production Scaling Considerations
For production deployments, run at least 3 replicas:
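```yaml
spec:
  replicas: 3
```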
Why 3+ replicas?
High Availability: If one pod fails, others continue serving traffic
Rolling Updates: Allows zero-downtime deployments (update one pod at a time)
Load Distribution: Better distribution of workflow execution across pods
Pod Disruption Budget: Can configure PDB to maintain minimum available pods
Scaling Strategy:
Development/Staging: 1-2 replicas
Production (low traffic): 3 replicas
Production (high traffic): 5-10+ replicas with HPA
Enterprise: 10+ replicas with node affinity and pod anti-affinity rules
For automatic scaling based on CPU/memory usage, see the Full Kubernetes Deployment Guide.
Next Steps
Scale Your Deployment: Configure Horizontal Pod Autoscaling
Add Monitoring: Set up Prometheus and Grafana
Secure Your Cluster: Configure authentication and authorization
Integrate with Studio: Set up Blazor Dashboard
Production Hardening: Follow the Production Checklist
Related Documentation
Full Kubernetes Deployment Guide - Complete reference with Helm charts, autoscaling, and monitoring
Database Configuration - Detailed persistence setup
Clustering Guide - Multi-node deployment patterns
Security & Authentication - Securing your Kubernetes deployment
Troubleshooting - Common issues and solutions
Last Updated: 2025-12-02 Addresses Issues: #75