Deployment

Pulse deploys as a single Node.js process. No adapters, no serverless wrapping, no separate static file server required. Build once, run anywhere Node 22+ runs. All guarantees — security headers, brotli compression, immutable asset caching — are active in production automatically.

Build

Run the production build before deploying:

npm run build

This generates content-hashed bundles in public/dist/ and writes public/dist/manifest.json. The server reads the manifest at startup to resolve hydration script paths.

Without a manifest, the server falls back to serving source files directly — no compression, no content-hashed filenames, and no immutable cache headers. Always run npm run build before deploying to production.
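The manifest is what lets the server translate a stable source path into the hashed filename of the current build. Its exact schema is internal to Pulse; as a rough sketch of the idea (the keys and paths here are illustrative assumptions, not the real format):

```json
{
  "src/pages/home.js": "/dist/home.3f9a1c.js",
  "src/pages/about.js": "/dist/about.81d2e0.js"
}
```

Because the hashed filenames change whenever their content changes, the server can serve them with immutable cache headers safely.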

What to deploy

| Include | Reason |
| --- | --- |
| src/ | Page specs — imported by the server at runtime |
| public/ | Static assets and built bundles (public/dist/) |
| server.js | Entry point |
| pulse.config.js | Server config |
| package.json + node_modules/ | Runtime dependencies |

| Exclude | Reason |
| --- | --- |
| .claude/ | AI agent config — not needed at runtime |
| .pulse/ | Local report data — not needed at runtime |
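One way to apply the table above: package only the listed paths, then reinstall production dependencies on the server instead of shipping node_modules/. A sketch; adjust the filenames to your project:

```shell
# Bundle only what the server needs at runtime
tar czf deploy.tgz \
  src/ public/ server.js pulse.config.js \
  package.json package-lock.json

# On the server: extract, install production deps, start
# tar xzf deploy.tgz && npm ci --omit=dev && NODE_ENV=production node server.js
```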

Environment variables

| Variable | Default | Description |
| --- | --- | --- |
| NODE_ENV | development | Set to production to enable HSTS headers and production cache behaviour. |
| PORT | Value in pulse.config.js (default 3000) | Override the listening port. Most PaaS platforms set this automatically. |
For example:

NODE_ENV=production pulse start
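For reference, the config-file default mentioned in the table might look like this. The option name is an assumption for illustration, not Pulse's documented API; check the pulse.config.js your project was scaffolded with:

```javascript
// pulse.config.js (hypothetical shape: the option name is a placeholder)
export default {
  port: 3000, // a PORT environment variable takes precedence at runtime
}
```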

VPS with PM2

PM2 keeps the process alive, restarts it on crash, and manages logs.

# Install PM2 globally
npm install -g pm2

# Start the app
NODE_ENV=production pm2 start server.js --name myapp

# Persist across reboots
pm2 save
pm2 startup

# Zero-downtime reload after a deploy
pm2 reload myapp

For repeatable deployments, check an ecosystem.config.cjs into version control:

// ecosystem.config.cjs
module.exports = {
  apps: [{
    name:   'myapp',
    script: 'server.js',
    env_production: {
      NODE_ENV: 'production',
      PORT:     3000,
    },
  }],
}
Then start it with:

pm2 start ecosystem.config.cjs --env production

Docker

A two-stage build keeps the image small — build tools stay in the first stage.

# ---- build stage ----
FROM node:22-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npx pulse build

# ---- runtime stage ----
FROM node:22-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/src         ./src
COPY --from=build /app/public      ./public
COPY --from=build /app/server.js   ./server.js
COPY pulse.config.js ./
EXPOSE 3000
CMD ["node", "server.js"]
Build and run the image:

docker build -t myapp .
docker run -p 3000:3000 --env NODE_ENV=production myapp
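The COPY . . in the build stage pulls in the whole build context, so a .dockerignore keeps the context small and the layer cache stable. A reasonable starting point for the layout described in this guide:

```
# .dockerignore
node_modules
public/dist
.pulse
.claude
.git
```

node_modules and public/dist are excluded because the build stage recreates them with npm ci and npx pulse build.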

Fly.io

# fly.toml
app            = 'myapp'
primary_region = 'lhr'

[env]
  NODE_ENV = 'production'

[build]
  [build.args]
    NODE_VERSION = '22'

# Note: run the production build during the image build (for example in the
# Dockerfile above), not as a release_command. Release commands run in an
# ephemeral machine, so files built there are discarded.

[http_service]
  internal_port       = 3000
  force_https         = true
  auto_stop_machines  = 'stop'
  auto_start_machines = true

[[vm]]
  memory   = '256mb'
  cpu_kind = 'shared'
  cpus     = 1
Deploy with:

# First deploy
fly launch
fly launch

# Subsequent deploys
fly deploy

Railway

Railway auto-detects Node apps. Add a railway.json to set the build and start commands:

{
  "$schema": "https://railway.app/railway.schema.json",
  "build": {
    "builder": "NIXPACKS",
    "buildCommand": "npm run build"
  },
  "deploy": {
    "startCommand": "NODE_ENV=production node server.js",
    "healthcheckPath": "/",
    "restartPolicyType": "ON_FAILURE"
  }
}

Render

| Setting | Value |
| --- | --- |
| Environment | Node |
| Build command | npm install && npm run build |
| Start command | NODE_ENV=production node server.js |
| Node version | 22.x |

Set NODE_ENV=production in the Render environment variables dashboard.

Vercel, Cloudflare, and edge platforms

These platforms each have multiple products with very different runtimes — the compatibility story varies significantly between them.

Vercel

Vercel has two distinct runtimes:

| Product | Runtime | Pulse compatible? |
| --- | --- | --- |
| Functions (Node.js) | Full Node.js — same built-ins as a VPS | Partially — see below |
| Edge Functions | V8 isolates (no Node built-ins) | No |

Vercel Functions (Node.js) can run Pulse with some differences in behaviour:

| Feature | Behaviour on Vercel Functions |
| --- | --- |
| serverTtl cache | Works within a warm instance, but cold starts reset it. Not reliable for expensive queries. |
| Streaming SSR | Vercel Functions support streaming responses, but require explicit configuration via supportsResponseStreaming. |
| Static files | Vercel serves public/ automatically via its CDN — Pulse's static file serving is bypassed. |
| Security headers | Work as normal — Pulse adds them to every response. |

Vercel Functions are not a tested or officially supported deployment target for Pulse. The adapter pattern (exporting a request handler rather than starting a server) is not yet documented. Railway, Render, or Fly.io are simpler choices with no adaptation required.

Cloudflare

| Product | Runtime | Pulse compatible? |
| --- | --- | --- |
| Workers | V8 isolates — no node:http, node:fs, node:zlib | No |
| Pages Functions | Same V8 isolate runtime as Workers | No |
| CDN / proxy | Sits in front of your origin server | Yes — works great with Fly.io or a VPS behind it |

The recommended pattern for edge performance: deploy Pulse to Fly.io (which runs real VMs in many regions) and put Cloudflare as a CDN/proxy in front of it. Static assets and cached HTML are served from Cloudflare's edge; dynamic requests are proxied to the nearest Fly VM.

HTTPS and reverse proxy

Pulse detects TLS automatically. When a request arrives with an x-forwarded-proto: https header or over a direct TLS socket, Strict-Transport-Security: max-age=31536000; includeSubDomains is added to the response. All four platforms above forward this header — no Pulse config is needed.

If running behind nginx for TLS termination:

# nginx — TLS termination, proxy to Pulse
server {
  listen 443 ssl;
  server_name myapp.com;

  ssl_certificate     /etc/letsencrypt/live/myapp.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/myapp.com/privkey.pem;

  location / {
    proxy_pass         http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header   Host              $host;
    proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Proto $scheme;
  }
}

# Redirect HTTP to HTTPS
server {
  listen 80;
  server_name myapp.com;
  return 301 https://$host$request_uri;
}

Use Certbot to obtain and auto-renew a free Let's Encrypt certificate: certbot --nginx -d myapp.com.