Kubernetes on DigitalOcean: Getting Started
A practical guide to DigitalOcean Kubernetes (DOKS) covering cluster creation, deploying Node.js applications, services, ingress, scaling, and monitoring.
Kubernetes orchestrates containers across multiple servers. It handles deployment, scaling, networking, and self-healing — if a container crashes, Kubernetes replaces it automatically. If traffic spikes, Kubernetes scales up. If a server fails, Kubernetes redistributes workloads to healthy servers.
DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service. DigitalOcean runs the control plane (the API server, scheduler, and etcd database), manages upgrades, and handles high availability. You manage the worker nodes and the applications running on them. This guide covers deploying a Node.js application on DOKS from cluster creation through production-ready configuration.
Prerequisites
- A DigitalOcean account
- `kubectl` installed locally
- `doctl` CLI installed and authenticated
- Docker installed (for building images)
- A Node.js application with a Dockerfile
Core Concepts
Before creating a cluster, understand the key components:
- Cluster — a set of servers running Kubernetes. Includes control plane (managed by DigitalOcean) and worker nodes (your Droplets).
- Node — a worker server in the cluster. Each node runs containers.
- Pod — the smallest deployable unit. Contains one or more containers. Usually one container per pod for web applications.
- Deployment — declares how many replicas of a pod should run. Handles rolling updates and rollbacks.
- Service — a stable network endpoint for accessing pods. Pods come and go; services provide a consistent address.
- Ingress — routes external HTTP/HTTPS traffic to services based on hostnames and paths.
- Namespace — a logical partition within a cluster for organizing resources.
Creating a Kubernetes Cluster
Via CLI
# Create a cluster with 3 worker nodes
doctl kubernetes cluster create my-cluster \
--region nyc3 \
--version latest \
--size s-2vcpu-4gb \
--count 3 \
--tag k8s-cluster
# This takes 5-10 minutes
Node Pool Sizing
| Size | vCPU | RAM | Monthly/Node | Best For |
|---|---|---|---|---|
| s-1vcpu-2gb | 1 | 2GB | $12 | Development, small apps |
| s-2vcpu-4gb | 2 | 4GB | $24 | Production workloads |
| s-4vcpu-8gb | 4 | 8GB | $48 | CPU-intensive apps |
| c-4vcpu-8gb | 4 | 8GB | $84 | Dedicated CPU |
Start with 3 nodes of s-2vcpu-4gb for a production cluster. Three nodes provide redundancy — if one node fails, the other two handle the workload.
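As a quick sanity check on that recommendation, worker-pool cost is just price per node times node count, using the prices from the table above (a sketch; prices are hard-coded from the table, not fetched from the DigitalOcean API):

```javascript
// Monthly worker-pool cost, using the per-node prices from the table above.
const PRICES = {
  's-1vcpu-2gb': 12,
  's-2vcpu-4gb': 24,
  's-4vcpu-8gb': 48,
  'c-4vcpu-8gb': 84,
};

function monthlyPoolCost(size, count) {
  if (!(size in PRICES)) throw new Error(`unknown size: ${size}`);
  return PRICES[size] * count;
}

console.log(monthlyPoolCost('s-2vcpu-4gb', 3)); // 72
```

So the recommended starting pool runs $72/month for the nodes themselves.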
Connecting kubectl
# Download the cluster config
doctl kubernetes cluster kubeconfig save my-cluster
# Verify connection
kubectl cluster-info
kubectl get nodes
The `kubeconfig save` command adds the cluster credentials to your local `~/.kube/config`. All subsequent kubectl commands target this cluster.
Containerizing Your Node.js Application
Dockerfile
FROM node:20-alpine
WORKDIR /app
# Install dependencies first (layer caching)
COPY package*.json ./
RUN npm ci --omit=dev
# Copy application code
COPY . .
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
adduser -S nodejs -u 1001
USER nodejs
EXPOSE 3000
CMD ["node", "server.js"]
Building and Pushing the Image
DOKS works with any container registry. DigitalOcean has its own:
# Create a container registry
doctl registry create my-registry
# Log Docker into the registry
doctl registry login
# Build and push
docker build -t registry.digitalocean.com/my-registry/myapp:1.0.0 .
docker push registry.digitalocean.com/my-registry/myapp:1.0.0
Connect the Registry to Your Cluster
doctl kubernetes cluster registry add my-cluster
This allows the cluster to pull images from your private registry without additional authentication.
Deploying Your Application
Deployment Manifest
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.digitalocean.com/my-registry/myapp:1.0.0
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
            - name: PORT
              value: "3000"
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: database-url
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 15
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
            failureThreshold: 3
Key sections:
- replicas: 3 — runs three copies of your application across the cluster
- resources — CPU and memory requests (guaranteed) and limits (maximum)
- livenessProbe — Kubernetes restarts the container if this fails
- readinessProbe — Kubernetes stops sending traffic if this fails
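The manifest points both probes at the same /health route, which works for simple apps. A common refinement, sketched here with a hypothetical /ready route, is to split them: liveness only says the process is alive, while readiness returns 503 until dependencies (such as the database) are up, so Kubernetes holds traffic back without restarting the pod:

```javascript
// Illustrative split: livenessProbe would target /health, readinessProbe /ready.
let dbConnected = false; // flip this in your real DB client's connect callback

// Returns true if the request was a probe route and has been answered.
function probeRoutes(req, res) {
  if (req.url === '/health') {          // liveness: process is running
    res.writeHead(200);
    res.end('ok');
    return true;
  }
  if (req.url === '/ready') {           // readiness: dependencies are up
    res.writeHead(dbConnected ? 200 : 503);
    res.end(dbConnected ? 'ready' : 'not ready');
    return true;
  }
  return false; // not a probe; fall through to application routes
}
```

With this split, a lost database connection takes the pod out of the load-balancing rotation instead of putting it into a restart loop.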
Creating Secrets
# Create a secret for sensitive environment variables
kubectl create secret generic myapp-secrets \
--from-literal=database-url="postgresql://user:pass@host:25060/db?sslmode=require" \
--from-literal=session-secret="your-session-secret"
Applying the Deployment
kubectl apply -f k8s/deployment.yaml
# Watch the rollout
kubectl rollout status deployment/myapp
# Check running pods
kubectl get pods -l app=myapp
Exposing Your Application
Service
A Service provides a stable network endpoint for your pods:
# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: ClusterIP
kubectl apply -f k8s/service.yaml
Service types:
- ClusterIP — internal only. Other pods can reach it, but not the internet.
- NodePort — exposes the service on each node's IP at a static port.
- LoadBalancer — creates a DigitalOcean Load Balancer automatically.
Quick Exposure with LoadBalancer
For a simple setup, change the service type:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-size-slug: "lb-small"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "YOUR_CERT_ID"
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
spec:
  selector:
    app: myapp
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 3000
    - name: http
      protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
This automatically creates a DigitalOcean Load Balancer with SSL termination.
Ingress Controller
For multiple services behind one load balancer, use an Ingress controller. Install the Nginx Ingress Controller:
# Install via the DigitalOcean marketplace
doctl kubernetes 1-click install my-cluster --1-clicks nginx_ingress_controller
Or via Helm:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
--set controller.publishService.enabled=true
Ingress Resource
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
This routes traffic based on the hostname and path. All requests to myapp.example.com go to myapp-service, while requests to myapp.example.com/api go to api-service.
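The longer prefix wins, and pathType: Prefix matches on /-separated path elements rather than raw string prefixes: /api captures /api/users but not /apiary. A small sketch of that matching rule (my own illustration of the spec's behavior, not code from the Ingress controller):

```javascript
// Sketch of Ingress "Prefix" pathType semantics: matching is element-wise
// on "/"-separated segments, not a plain string startsWith().
function prefixMatches(rulePath, requestPath) {
  if (rulePath === '/') return true; // "/" matches every path
  const rule = rulePath.endsWith('/') ? rulePath.slice(0, -1) : rulePath;
  return requestPath === rule || requestPath.startsWith(rule + '/');
}

console.log(prefixMatches('/api', '/api/users')); // true
console.log(prefixMatches('/api', '/apiary'));    // false
```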
SSL with cert-manager
cert-manager automates SSL certificate management inside Kubernetes:
# Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml
# k8s/cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
kubectl apply -f k8s/cluster-issuer.yaml
With the cert-manager.io/cluster-issuer annotation on your Ingress, cert-manager automatically provisions and renews Let's Encrypt certificates.
Scaling
Manual Scaling
# Scale to 5 replicas
kubectl scale deployment myapp --replicas=5
# Scale down
kubectl scale deployment myapp --replicas=2
Horizontal Pod Autoscaler
Automatically scale based on CPU or memory usage:
# k8s/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
kubectl apply -f k8s/hpa.yaml
# Watch scaling decisions
kubectl get hpa myapp-hpa --watch
The HPA checks metrics every 15 seconds. When average CPU exceeds 70%, it adds pods. When usage drops, it removes pods (down to the minimum of 2).
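The core of those decisions is the scaling formula from the Kubernetes HPA documentation, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to minReplicas and maxReplicas. A sketch of it as a function (the clamping here mirrors the min/max fields in the manifest above):

```javascript
// HPA scaling formula: ceil(current * observed/target), clamped to [min, max].
function desiredReplicas(current, observedUtil, targetUtil, min, max) {
  const desired = Math.ceil(current * (observedUtil / targetUtil));
  return Math.min(max, Math.max(min, desired));
}

// 3 pods averaging 140% CPU against a 70% target -> scale to 6
console.log(desiredReplicas(3, 140, 70, 2, 10)); // 6
// Usage well below target -> scale down, but never below minReplicas
console.log(desiredReplicas(3, 20, 70, 2, 10));  // 2
```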
Node Auto-Scaling
Scale the cluster nodes themselves:
# Enable auto-scaling on the node pool
doctl kubernetes cluster node-pool update my-cluster default-pool \
--auto-scale \
--min-nodes 2 \
--max-nodes 5
When pods cannot be scheduled (not enough CPU or memory on existing nodes), DOKS adds a new node automatically. When nodes are underutilized, it removes them.
Rolling Updates
Updating Your Application
# Build and push a new version
docker build -t registry.digitalocean.com/my-registry/myapp:1.1.0 .
docker push registry.digitalocean.com/my-registry/myapp:1.1.0
# Update the deployment
kubectl set image deployment/myapp myapp=registry.digitalocean.com/my-registry/myapp:1.1.0
# Watch the rollout
kubectl rollout status deployment/myapp
Kubernetes creates new pods with the updated image, waits for them to pass readiness checks, then terminates old pods — one at a time by default.
Rollback
# View rollout history
kubectl rollout history deployment/myapp
# Rollback to the previous version
kubectl rollout undo deployment/myapp
# Rollback to a specific revision
kubectl rollout undo deployment/myapp --to-revision=3
Deployment Strategy
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # Create 1 extra pod during update
      maxUnavailable: 0   # Never have fewer than desired replicas
With maxUnavailable: 0, Kubernetes always maintains the full replica count during deployments. The trade-off is slower deployments (must wait for new pods to be ready before terminating old ones).
ConfigMaps and Secrets
ConfigMap for Non-Sensitive Configuration
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  NODE_ENV: "production"
  LOG_LEVEL: "info"
  CACHE_TTL: "3600"
  MAX_UPLOAD_SIZE: "10485760"
Using ConfigMaps in Deployments
spec:
  containers:
    - name: myapp
      envFrom:
        - configMapRef:
            name: myapp-config
        - secretRef:
            name: myapp-secrets
envFrom loads all key-value pairs from the ConfigMap and Secret as environment variables.
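On the application side, ConfigMap values always arrive as strings, so numeric settings need parsing. A sketch of a config loader with fallbacks (key names match the myapp-config example above; the defaults are illustrative):

```javascript
// Parse ConfigMap-provided environment variables with safe fallbacks.
// Everything in process.env is a string, so numbers must be converted.
function loadConfig(env = process.env) {
  return {
    nodeEnv: env.NODE_ENV || 'development',
    logLevel: env.LOG_LEVEL || 'info',
    cacheTtl: parseInt(env.CACHE_TTL || '3600', 10),
    maxUploadSize: parseInt(env.MAX_UPLOAD_SIZE || '10485760', 10),
  };
}

console.log(loadConfig({ NODE_ENV: 'production', CACHE_TTL: '600' }));
```

Taking the env object as a parameter keeps the loader easy to unit-test without touching the real process.env.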
Monitoring and Debugging
Pod Logs
# View logs for a specific pod
kubectl logs myapp-abc123
# Follow logs in real time
kubectl logs -f myapp-abc123
# Logs from all pods with a label
kubectl logs -l app=myapp --all-containers
# Previous container logs (if pod restarted)
kubectl logs myapp-abc123 --previous
Debugging
# Describe a pod (events, status, conditions)
kubectl describe pod myapp-abc123
# Execute a command inside a running pod
kubectl exec -it myapp-abc123 -- sh
# Port-forward to test a pod directly
kubectl port-forward pod/myapp-abc123 3000:3000
# Check resource usage
kubectl top pods
kubectl top nodes
Monitoring Stack
Install the DigitalOcean monitoring stack:
doctl kubernetes 1-click install my-cluster --1-clicks kube-prometheus-stack
This installs Prometheus (metrics collection) and Grafana (dashboards). Access Grafana:
kubectl port-forward svc/kube-prometheus-stack-grafana 3000:80 -n prometheus
Persistent Storage
For applications that need persistent data (file uploads, SQLite databases):
# k8s/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: do-block-storage
# In the deployment
spec:
  containers:
    - name: myapp
      volumeMounts:
        - name: data
          mountPath: /app/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myapp-storage
DigitalOcean automatically provisions a Block Storage volume and attaches it to the node running the pod.
Note: ReadWriteOnce means only one pod can mount the volume at a time. For shared storage across multiple pods, use DigitalOcean Spaces (S3-compatible object storage) instead.
Common Issues and Troubleshooting
Pods stuck in "Pending" state
Not enough resources on the cluster nodes:
Fix: Check pod events with kubectl describe pod POD_NAME. Look for "Insufficient cpu" or "Insufficient memory" messages. Either reduce resource requests in the deployment or add more nodes to the cluster.
Pods in "CrashLoopBackOff"
The application is crashing repeatedly:
Fix: Check logs with kubectl logs POD_NAME --previous. Common causes: missing environment variables, database connection failures, incorrect startup command. The --previous flag shows logs from the last crashed container.
Image pull errors
Kubernetes cannot pull the container image:
Fix: Verify the image name and tag are correct. Check that the container registry is connected to the cluster (doctl kubernetes cluster registry add my-cluster). Verify the image exists in the registry.
Service not reachable
The application is running but not accessible:
Fix: Verify the service selector matches the pod labels. Check that the target port matches the container port. Test with kubectl port-forward to rule out networking issues. For LoadBalancer services, wait a few minutes for the load balancer to provision.
DNS resolution failures inside pods
Pods cannot resolve hostnames:
Fix: Check CoreDNS is running with kubectl get pods -n kube-system. Verify the service name is correct. Services in the same namespace can be reached by name (myapp-service). Cross-namespace services require the full name (myapp-service.other-namespace.svc.cluster.local).
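When debugging these names it helps to spell out the fully qualified form. A small sketch of a URL builder for in-cluster calls (serviceUrl is a hypothetical helper; the defaults match the myapp-service definition earlier, which listens on port 80):

```javascript
// Build the fully qualified in-cluster DNS name for a Service:
// <service>.<namespace>.svc.cluster.local
function serviceUrl(service, namespace = 'default', port = 80, path = '/') {
  return `http://${service}.${namespace}.svc.cluster.local:${port}${path}`;
}

console.log(serviceUrl('myapp-service'));
// http://myapp-service.default.svc.cluster.local:80/
console.log(serviceUrl('api-service', 'staging', 80, '/api'));
// http://api-service.staging.svc.cluster.local:80/api
```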
Best Practices
- Set resource requests and limits on every container. Without limits, a single misbehaving pod can consume all node resources and affect other workloads.
- Use readiness and liveness probes. Readiness probes prevent traffic to pods that are not ready. Liveness probes restart pods that are stuck.
- Store secrets in Kubernetes Secrets, not ConfigMaps. Secrets are base64-encoded (not encrypted by default) but can be encrypted at rest with DigitalOcean's managed control plane.
- Use namespaces to organize workloads. Separate staging and production in different namespaces. Apply resource quotas per namespace.
- Enable node auto-scaling. Let the cluster grow and shrink with demand. Set minimum and maximum node counts to control costs.
- Tag your container images with versions. Never deploy the `latest` tag in production. Explicit version tags make rollbacks reliable and deployments reproducible.
- Use Ingress for multiple services. One load balancer with Ingress routing costs less than multiple LoadBalancer services and provides more flexible routing.
- Monitor resource usage. Use `kubectl top` and Prometheus/Grafana to track CPU and memory. Right-size your resource requests based on actual usage.