
Temporal Bun SDK

@proompteng/temporal-bun-sdk runs Temporal workers and clients on Bun, with replay tooling, Docker helpers, and Temporal Cloud/TLS support.

Prerequisites

  • Bun 1.3.10 or newer (matches the package engine requirement).
  • Access to a Temporal Cloud namespace or self-hosted cluster.
  • (Optional) The temporal CLI for namespace administration and replaying live executions.
  • docker if you plan to build container images with the provided helpers.

Quickstart

Create a new worker project outside another Bun workspace:

bunx @proompteng/temporal-bun-sdk init my-worker
cd my-worker
bun install

Add .env:

printf "TEMPORAL_ADDRESS=127.0.0.1:7233\nTEMPORAL_NAMESPACE=default\nTEMPORAL_TASK_QUEUE=hello-bun\n" > .env

Start Temporal:

temporal server start-dev --headless

Start the worker:

bun run dev

Start a workflow in another shell:

temporal workflow start \
  --task-queue hello-bun \
  --type helloWorkflow \
  --input '"Codex"'

Add to an existing Bun project

Add the SDK to an existing Bun workspace:

bun add @proompteng/temporal-bun-sdk

The template includes example workflows, activities, and Docker packaging scripts that map one-to-one with the library's defaults.

Strict mode

The generated worker defaults to workflowGuards: 'warn' so local setup works with temporal server start-dev.

If you switch to strict mode, configure worker versioning and stable build IDs. See Worker build IDs and versioning below.

Configure your Temporal connection

Configuration flows through loadTemporalConfig(), which reads environment variables, normalizes paths, and enforces required values. Drop a .env file in your worker project and tailor the defaults as needed:

TEMPORAL_ADDRESS=127.0.0.1:7233
TEMPORAL_NAMESPACE=default
TEMPORAL_TASK_QUEUE=demo-worker
# Add these when connecting to Temporal Cloud or a TLS-enabled cluster:
# TEMPORAL_API_KEY=your-cloud-api-key
# TEMPORAL_TLS_CERT_PATH=./certs/client.crt
# TEMPORAL_TLS_KEY_PATH=./certs/client.key
# TEMPORAL_TLS_CA_PATH=./certs/ca.pem

Environment variables supported by the config loader:

| Variable | Default | Description |
| --- | --- | --- |
| TEMPORAL_ADDRESS | ${TEMPORAL_HOST}:${TEMPORAL_GRPC_PORT} | Direct address override (e.g. temporal.example.com:7233). |
| TEMPORAL_HOST | 127.0.0.1 | Hostname used when TEMPORAL_ADDRESS is unset. |
| TEMPORAL_GRPC_PORT | 7233 | Temporal gRPC port. |
| TEMPORAL_NAMESPACE | default | Namespace passed to the worker and client. |
| TEMPORAL_TASK_QUEUE | replay-fixtures | Worker task queue. |
| TEMPORAL_API_KEY | unset | Injected into connection metadata for Cloud/API auth. |
| TEMPORAL_CLOUD_ADDRESS | unset | Temporal Cloud Ops endpoint (defaults to saas-api.tmprl.cloud:443 when Cloud API is enabled). |
| TEMPORAL_CLOUD_API_KEY | unset | API key for Temporal Cloud Ops API (Bearer token). |
| TEMPORAL_CLOUD_API_VERSION | unset | Cloud API version header (defaults to 2025-05-31 when Cloud API is enabled). |
| TEMPORAL_TLS_CA_PATH | unset | Path to trusted CA bundle. |
| TEMPORAL_TLS_CERT_PATH / TEMPORAL_TLS_KEY_PATH | unset | mTLS client certificate and key (both required). |
| TEMPORAL_TLS_SERVER_NAME | unset | Overrides TLS server name verification. |
| TEMPORAL_ALLOW_INSECURE / ALLOW_INSECURE_TLS | false | Accepts 1/true/on to skip certificate verification. |
| TEMPORAL_WORKER_IDENTITY_PREFIX | temporal-bun-worker | Worker identity prefix (host and PID are appended). |
| TEMPORAL_WORKER_BUILD_ID | unset | Worker build ID; auto-derived when unset. |
| TEMPORAL_WORKFLOW_CONCURRENCY | 4 | Workflow poller concurrency. |
| TEMPORAL_ACTIVITY_CONCURRENCY | 4 | Activity poller concurrency. |
| TEMPORAL_STICKY_CACHE_SIZE | 128 | Sticky cache size for determinism snapshots. |
| TEMPORAL_STICKY_TTL_MS | 300000 | Sticky cache TTL in milliseconds. |
| TEMPORAL_STICKY_SCHEDULING_ENABLED | true when cache size > 0 | Enable sticky scheduling; set to 0/false to disable. |
| TEMPORAL_ACTIVITY_HEARTBEAT_INTERVAL_MS | 5000 | Activity heartbeat throttle interval. |
| TEMPORAL_ACTIVITY_HEARTBEAT_RPC_TIMEOUT_MS | 5000 | Activity heartbeat RPC timeout. |
| TEMPORAL_LOG_FORMAT | pretty | Select json or pretty logging output for worker/client runs. |
| TEMPORAL_LOG_LEVEL | info | Minimum log severity (debug, info, warn, error). |
| TEMPORAL_TRACING_INTERCEPTORS_ENABLED | true | Set to false to disable tracing/audit interceptors. |
| TEMPORAL_SHOW_STACK_SOURCES | false | Include stack trace source maps in errors. |
| TEMPORAL_METRICS_EXPORTER | in-memory | Metrics sink: in-memory, file, prometheus, or otlp. |
| TEMPORAL_METRICS_ENDPOINT | unset | Path/URL for file, Prometheus, or OTLP exporters. |
| TEMPORAL_CLIENT_RETRY_MAX_ATTEMPTS | 5 | WorkflowService RPC attempt budget. |
| TEMPORAL_CLIENT_RETRY_INITIAL_MS | 200 | Initial retry delay (milliseconds). |
| TEMPORAL_CLIENT_RETRY_MAX_MS | 5000 | Maximum retry delay (milliseconds). |
| TEMPORAL_CLIENT_RETRY_BACKOFF | 2 | Exponential backoff multiplier applied per attempt. |
| TEMPORAL_CLIENT_RETRY_JITTER_FACTOR | 0.2 | Decorrelated jitter factor between 0 and 1. |
| TEMPORAL_CLIENT_RETRY_STATUS_CODES | UNAVAILABLE,RESOURCE_EXHAUSTED,DEADLINE_EXCEEDED,INTERNAL | Comma-separated Connect codes that should be retried. |
| TEMPORAL_PAYLOAD_CODECS | unset | Comma-separated payload codecs applied in order (e.g. gzip,aes-gcm). |
| TEMPORAL_CODEC_AES_KEY | unset | Base64 or hex AES key (128/192/256-bit) required when aes-gcm is enabled. |
| TEMPORAL_CODEC_AES_KEY_ID | default | Optional key identifier recorded in payload metadata for rotation/diagnostics. |
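For example, the TEMPORAL_ADDRESS / TEMPORAL_HOST / TEMPORAL_GRPC_PORT fallback behaves roughly like this hypothetical helper (resolveAddress is not an SDK export, and loadTemporalConfig() performs additional validation beyond this sketch):

```typescript
// Hypothetical sketch of the address fallback described in the table above.
export const resolveAddress = (env: Record<string, string | undefined>): string =>
  env.TEMPORAL_ADDRESS ?? `${env.TEMPORAL_HOST ?? '127.0.0.1'}:${env.TEMPORAL_GRPC_PORT ?? '7233'}`
```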

loadTemporalConfig() returns typed values that the client and worker factories consume directly, so you never have to stitch addresses or TLS buffers together by hand.

For a focused Cloud setup guide, including API key auth, custom CA bundles, and mTLS, see Temporal Cloud and TLS.

Worker build IDs and versioning

Workers derive their build ID (in priority order) from:

  1. deployment.buildId passed to createWorker(...) / WorkerRuntime.create(...)
  2. TEMPORAL_WORKER_BUILD_ID
  3. a derived value based on the configured workflow sources (workflowsPath)
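That priority order amounts to a simple fallback chain, sketched below (resolveBuildId is illustrative, not an SDK export):

```typescript
// Illustrative only: mirrors the priority order above, highest first.
export const resolveBuildId = (sources: {
  deploymentBuildId?: string // 1. deployment.buildId passed in code
  envBuildId?: string        // 2. TEMPORAL_WORKER_BUILD_ID
  derivedBuildId: string     // 3. derived from the configured workflow sources
}): string => sources.deploymentBuildId ?? sources.envBuildId ?? sources.derivedBuildId
```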

When you enable worker versioning via deployment.versioningMode, the worker includes deployment metadata (deployment name + build ID) in poll/response requests so the server can route workflow tasks to the correct build.

The Bun SDK does not call the deprecated Build ID Compatibility APIs (Version Set-based “worker versioning v0.1”), since they may be disabled on some namespaces.

OpenTelemetry export

Set TEMPORAL_OTEL_ENABLED=true to start the OpenTelemetry SDK inside the worker process. Configure exporters with standard OTEL environment variables:

  • OTEL_EXPORTER_OTLP_TRACES_ENDPOINT and OTEL_EXPORTER_OTLP_METRICS_ENDPOINT (or OTEL_EXPORTER_OTLP_ENDPOINT to share a base URL).
  • OTEL_EXPORTER_OTLP_PROTOCOL (or per-signal OTEL_EXPORTER_OTLP_TRACES_PROTOCOL / OTEL_EXPORTER_OTLP_METRICS_PROTOCOL) to choose http/json (default) or http/protobuf. The SDK warns and falls back to HTTP if gRPC is requested.
  • OTEL_SERVICE_NAME, OTEL_SERVICE_NAMESPACE, and OTEL_SERVICE_INSTANCE_ID to label service identity.
  • OTEL_RESOURCE_ATTRIBUTES for additional resource tags.
  • OTEL_EXPORTER_OTLP_TIMEOUT (or per-signal OTEL_EXPORTER_OTLP_TRACES_TIMEOUT / OTEL_EXPORTER_OTLP_METRICS_TIMEOUT) to increase OTLP request timeouts.
  • OTEL_METRIC_EXPORT_INTERVAL and OTEL_METRIC_EXPORT_TIMEOUT to tune metric export cadence.
  • OTEL_EXPORTER_OTLP_COMPRESSION (or per-signal OTEL_EXPORTER_OTLP_TRACES_COMPRESSION / OTEL_EXPORTER_OTLP_METRICS_COMPRESSION) to enable gzip payload compression.

Auto-instrumentation stays disabled by default; enable it explicitly with TEMPORAL_OTEL_AUTO_INSTRUMENTATION=true.
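A minimal OTLP setup might look like the following; the endpoint and service name here are placeholder values for a local collector, not defaults shipped by the SDK:

```shell
# Placeholder values - point these at your own collector.
export TEMPORAL_OTEL_ENABLED=true
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_SERVICE_NAME=temporal-bun-worker
export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=dev
```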

WorkflowService client resilience

createTemporalClient() automatically wraps WorkflowService RPCs with our retry helper and telemetry interceptors:

  • Configurable retries - config.rpcRetryPolicy is populated from the TEMPORAL_CLIENT_RETRY_* env vars (or overrides passed to loadTemporalConfig). All client methods use the resulting jittered exponential backoff policy, and you can override per-call values via TemporalClientCallOptions.retryPolicy.

  • Optional call options - startWorkflow, signalWorkflow, queryWorkflow, signalWithStart, terminateWorkflow, and describeNamespace accept an optional trailing callOptions argument (headers, timeout, abort signal, retry policy). Use temporalCallOptions() to brand the object so payloads are not mistaken for options:

    import { temporalCallOptions } from '@proompteng/temporal-bun-sdk'
    
    await client.signalWorkflow(
      handle,
      'updateState',
      { signal: 'start' },
      temporalCallOptions({
        headers: { 'x-trace-id': traceId },
        timeoutMs: 5_000,
      }),
    )

  • Default interceptors - inbound/outbound hooks wrap every workflow RPC and operation: namespace/identity headers are injected, retries use jittered backoff, and latency/error metrics flow through the configured registry/exporter. Tracing spans are opt-in via TEMPORAL_TRACING_INTERCEPTORS_ENABLED (or tracingEnabled in code). Append custom middleware with clientInterceptors (client) or interceptors (transport) to add auth headers, audit logs, or bespoke telemetry.

  • Memo/search helpers - client.memo and client.searchAttributes expose encode/decode helpers that reuse the client's DataConverter, making it easy to prepare payloads for raw WorkflowService requests.

  • TLS validation - TLS buffers are checked up front (missing files, invalid PEMs, and mismatched cert/key pairs throw TemporalTlsConfigurationError) and transport failures surface as TemporalTlsHandshakeError with remediation hints.
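To make the retry knobs concrete, here is one way a jittered exponential backoff delay can be computed from the TEMPORAL_CLIENT_RETRY_* defaults. This is a sketch, not the SDK's internal helper, and the SDK's exact jitter algorithm may differ:

```typescript
// Defaults mirror the TEMPORAL_CLIENT_RETRY_* table values.
const defaults = {
  initialMs: 200,    // TEMPORAL_CLIENT_RETRY_INITIAL_MS
  maxMs: 5_000,      // TEMPORAL_CLIENT_RETRY_MAX_MS
  backoff: 2,        // TEMPORAL_CLIENT_RETRY_BACKOFF
  jitterFactor: 0.2, // TEMPORAL_CLIENT_RETRY_JITTER_FACTOR
}

// attempt is zero-based; random is injectable so the math stays testable.
export const retryDelayMs = (attempt: number, random: () => number = Math.random): number => {
  // Exponential growth capped at maxMs...
  const base = Math.min(defaults.initialMs * defaults.backoff ** attempt, defaults.maxMs)
  // ...with proportional jitter of +/- jitterFactor.
  const jitter = base * defaults.jitterFactor * (random() * 2 - 1)
  return Math.max(0, Math.round(base + jitter))
}
```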

RPC coverage (Workflow, Operator, Cloud)

The SDK exposes both high-level helpers and low-level RPC access:

  • client.rpc.workflow.call(...) for any WorkflowService RPC.
  • client.operator.* for OperatorService convenience methods, plus client.rpc.operator.call(...) for full coverage.
  • client.cloud.call(...) for Temporal Cloud Ops API (configure via TEMPORAL_CLOUD_*).

All RPC entrypoints accept TemporalClientCallOptions for headers, timeouts, retry overrides, and abort signals.

Payload codecs and failure conversion

The SDK's DataConverter supports an ordered codec chain so you can compress and encrypt payloads without giving up deterministic replay:

  • Enable codecs with TEMPORAL_PAYLOAD_CODECS (e.g. gzip,aes-gcm); AES-GCM requires TEMPORAL_CODEC_AES_KEY (128/192/256-bit, base64/hex) and optionally TEMPORAL_CODEC_AES_KEY_ID for rotation tracking.
  • Codecs wrap the entire payload proto, so replay remains compatible as long as the chain stays stable for a given workflow history.
  • Codec metrics are emitted per codec/direction (temporal_payload_codec_encode_total_*, *_decode_total_*, *_errors_total_*) and failures log the offending codec/direction.
  • The failure converter returns a structured TemporalFailureError that preserves details and cause payloads using the same codec chain, so workflow/activity/update/query errors decode cleanly on clients.
  • temporal-bun doctor builds the codec chain from config and fails fast on missing/invalid keys while printing the resolved codec list.
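The ordered-chain behavior can be illustrated with a standalone sketch. The Codec shape below is not the SDK's codec interface, but the semantics match the description above: encode applies codecs left to right, decode unwinds them in reverse.

```typescript
import { gzipSync, gunzipSync } from 'node:zlib'

// Illustrative codec-chain sketch (not the SDK's types).
export type Codec = {
  name: string
  encode(data: Uint8Array): Uint8Array
  decode(data: Uint8Array): Uint8Array
}

export const gzipCodec: Codec = {
  name: 'gzip',
  encode: (data) => new Uint8Array(gzipSync(data)),
  decode: (data) => new Uint8Array(gunzipSync(data)),
}

// Encode applies codecs in configured order...
export const encodeChain = (codecs: Codec[], payload: Uint8Array): Uint8Array =>
  codecs.reduce((bytes, codec) => codec.encode(bytes), payload)

// ...and decode must unwind them in reverse for the round trip to succeed.
export const decodeChain = (codecs: Codec[], payload: Uint8Array): Uint8Array =>
  [...codecs].reverse().reduce((bytes, codec) => codec.decode(bytes), payload)
```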

Observability

The Temporal Bun SDK ships with structured logging and metrics layers so you can operate Bun workers/clients like any other service in your stack. Configure the behavior with the same environment variables listed above:

  • TEMPORAL_LOG_FORMAT - controls the log formatter (pretty or json).
  • TEMPORAL_LOG_LEVEL - sets the minimum log severity that makes it into the sink.
  • TEMPORAL_METRICS_EXPORTER / TEMPORAL_METRICS_ENDPOINT - select a sink (in-memory, file, prometheus, or otlp) and its path/URL.
  • TEMPORAL_METRICS_FLUSH_INTERVAL_MS - override the worker metrics flush cadence (defaults to 10 seconds).

Want to verify your configuration without running a worker? temporal-bun includes a doctor command that loads the shared config, builds the interceptor chain, validates the retry presets, spins up observability services, emits a log, increments a counter, and flushes the selected exporter:

bunx @proompteng/temporal-bun-sdk doctor --log-format=json --metrics=file:./metrics.json

The command prints a success summary (including active interceptors and the resolved retry policy) once the JSON log is emitted and the metrics file is written, so you can script it into CI or pre-deployment checks.

Effect layers and runtime helpers

Temporal Bun exposes Effect Layers so workers, clients, and CLI tools can share managed dependencies without hand-wiring config or observability plumbing:

  • createTemporalClientLayer / TemporalClientLayer - managed Temporal client lifecycle (auto-shutdown on scope exit).
  • createWorkerRuntimeLayer / WorkerRuntimeLayer - run the worker runtime inside an Effect.scoped program.
  • createWorkerAppLayer / runWorkerApp - compose config + observability + WorkflowService + worker runtime in one call.
  • createTemporalCliLayer / runTemporalCliEffect - run Effect programs with the same config/observability/workflow service stack used by the CLI.

Example: run a one-off CLI task with the same layers used by temporal-bun:

import { Effect } from 'effect'
import { makeTemporalClientEffect, runTemporalCliEffect } from '@proompteng/temporal-bun-sdk'

await runTemporalCliEffect(
  makeTemporalClientEffect().pipe(
    Effect.tap(({ client }) => Effect.promise(() => client.describeNamespace())),
    Effect.tap(({ client }) => Effect.promise(() => client.shutdown())),
  ),
)

Prefer the plain async helpers? createWorker() and createTemporalClient() wrap the same configuration and observability defaults without requiring Effect plumbing.

Replay workflow histories

temporal-bun replay lets you ingest workflow histories, diff determinism state, and share diagnostics with incident responders without writing ad hoc scripts. It reuses loadTemporalConfig, the observability sinks, and the same ingestion pipeline that powers the worker sticky cache.

  • --history-file <path> - replay a JSON capture (temporal workflow show --history --output json) or a fixture envelope with history + info.
  • --execution <workflowId/runId> - fetch live history via the Temporal CLI or WorkflowService RPC (--source cli|service|auto).
  • --workflow-type, --namespace, --temporal-cli, --json - supply workflow metadata, namespace overrides, a custom CLI binary path, and machine-readable summaries respectively.
  • Exit codes: 0 success, 2 nondeterminism, 1 IO/configuration failures.

# Replay a saved history fixture
bunx temporal-bun replay \
  --history-file packages/temporal-bun-sdk/tests/replay/fixtures/timer-workflow.json \
  --workflow-type timerWorkflow \
  --json

# Diff a live execution using the Temporal CLI harness
TEMPORAL_ADDRESS=127.0.0.1:7233 TEMPORAL_NAMESPACE=temporal-bun-integration \
  bunx temporal-bun replay \
  --execution workflow-id/run-id \
  --workflow-type integrationWorkflow \
  --namespace temporal-bun-integration \
  --source cli

Set TEMPORAL_CLI_PATH or pass --temporal-cli when the CLI binary is not on PATH, and rely on --source service to route through WorkflowService when the CLI is unavailable (for example, in CI). The command logs history provenance, event counts, mismatch metadata, and writes a compact JSON summary when --json is supplied so you can feed the output into other tooling.

Author activities

Activities are plain Bun functions. Workflows must stay deterministic, so delegate external side effects (HTTP calls, database writes, sleeps) to this activity layer.

workers/activities.ts
export type Activities = {
  echo(input: { message: string }): Promise<string>
  sleep(milliseconds: number): Promise<void>
}

export const activities: Activities = {
  async echo({ message }) {
    return message
  },

  async sleep(milliseconds) {
    await Bun.sleep(milliseconds)
  },
}

Author workflows

Import workflow primitives from the SDK so Bun can bundle Temporal's workflow runtime correctly.

workers/workflows.ts
import { Effect } from 'effect'
import * as Schema from 'effect/Schema'
import { defineWorkflow } from '@proompteng/temporal-bun-sdk/workflow'

export const workflows = [
  defineWorkflow('helloWorkflow', Schema.Array(Schema.String), ({ input, activities, determinism }) =>
    Effect.gen(function* () {
      const [rawName] = input
      const name = typeof rawName === 'string' && rawName.length > 0 ? rawName : 'Temporal'

      yield* activities.schedule('sleep', [10])
      yield* activities.schedule('echo', [{ message: `Hello, ${name}!` }])

      return `Greeting queued at ${new Date(determinism.now()).toISOString()}`
    }),
  ),
]

export default workflows

Export your workflows from an index file so the worker can register them all at once:

workers/workflows/index.ts
export * from '../workflows.ts'

Run a worker

createWorker() wires up the Temporal connection, registers your workflows and activities, and hands back both the worker instance and the resolved config.

worker.ts
import { fileURLToPath } from 'node:url'
import { createWorker } from '@proompteng/temporal-bun-sdk/worker'
import { activities } from './workers/activities.ts'

const { worker } = await createWorker({
  activities,
  workflowsPath: fileURLToPath(new URL('./workers/workflows/index.ts', import.meta.url)),
})

const shutdown = async (signal: string) => {
  console.log(`Received ${signal}. Shutting down worker...`)
  await worker.shutdown()
  process.exit(0)
}

process.on('SIGINT', () => void shutdown('SIGINT'))
process.on('SIGTERM', () => void shutdown('SIGTERM'))

await worker.run()

For quick tests, run the bundled binary instead of compiling your own entry point:

bunx temporal-bun-worker

It uses the same configuration loader and ships with example workflows if you need a smoke test.

Start and manage workflows from Bun

createTemporalClient() produces a Bun-native Temporal client that already understands the config loader, workflow handles, and retry policies.

scripts/start-workflow.ts
import { createTemporalClient } from '@proompteng/temporal-bun-sdk'

const { client } = await createTemporalClient()

const start = await client.startWorkflow({
  workflowId: `hello-${Date.now()}`,
  workflowType: 'helloWorkflow',
  taskQueue: 'demo-worker',
  args: ['Proompteng'],
})

console.log('Workflow started:', start.runId)

await client.signalWorkflow(start.handle, 'complete', { ok: true })
await client.terminateWorkflow(start.handle, { reason: 'demo complete' })
await client.shutdown()

All workflow operations (startWorkflow, signalWorkflow, queryWorkflow, terminateWorkflow, cancelWorkflow, and signalWithStart) share the same handle structure, so you can persist it between processes without extra serialization code.
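A handle is plain serializable data, so persisting it can be as simple as JSON. The field names below are an assumption for illustration; use the SDK's exported handle type in real code:

```typescript
// Hypothetical handle shape - the SDK's actual handle may carry more fields.
export type PersistedHandle = { workflowId: string; runId: string; namespace: string }

export const saveHandle = (handle: PersistedHandle): string => JSON.stringify(handle)
export const loadHandle = (saved: string): PersistedHandle =>
  JSON.parse(saved) as PersistedHandle
```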

Workflow updates and queries

The SDK supports workflow updates and queries with Effect Schema validators. Define handlers alongside workflows, then invoke them via the client.workflow helpers:

workers/updates.ts
import { Effect } from 'effect'
import * as Schema from 'effect/Schema'
import { defineWorkflow, defineWorkflowUpdates } from '@proompteng/temporal-bun-sdk/workflow'

const updates = defineWorkflowUpdates([
  {
    name: 'setCounter',
    input: Schema.Number,
    handler: (_ctx, value: number) => Effect.sync(() => value),
  },
])

export const counterWorkflow = defineWorkflow(
  'counterWorkflow',
  Schema.Number,
  ({ input }) => Effect.sync(() => input),
  { updates },
)

scripts/update-counter.ts
import { createTemporalClient } from '@proompteng/temporal-bun-sdk'

const { client } = await createTemporalClient()
const { handle } = await client.startWorkflow({
  workflowId: `counter-${Date.now()}`,
  workflowType: 'counterWorkflow',
  taskQueue: 'demo-worker',
  args: [0],
})

const result = await client.workflow.update(handle, {
  updateName: 'setCounter',
  args: [42],
  waitForStage: 'completed',
})

if (result.outcome?.status === 'success') {
  console.log('Counter updated to', result.outcome.result)
}

await client.shutdown()

Use defineWorkflowQueries and client.queryWorkflow for query handlers; query execution runs in read-only mode and rejects non-deterministic operations.

CLI quick reference

The temporal-bun CLI is available via bunx @proompteng/temporal-bun-sdk <command> without installing the package, or via bunx temporal-bun <command> inside a project that already depends on it.

  • init [directory] [--force] - scaffold a Bun worker project with example workflows, activities, Dockerfile, and scripts.
  • doctor - validate the SDK config, emit a JSON log, and flush the selected metrics exporter.
  • docker-build [--tag <name>] [--context <path>] [--file <path>] - package the current directory into a worker image.
  • replay - diff workflow determinism from JSON fixtures or live histories.
  • help - print the command reference.

Legacy binary status

The pure TypeScript runtime is the default (and only) supported execution path. Historical assets remain in packages/temporal-bun-sdk/bruke/ for reference, but environment flags such as TEMPORAL_BUN_SDK_USE_ZIG are no longer wired into the worker or client. Future experiments should introduce new, explicit configuration rather than relying on retired flags.

Local development and production tips

  • Use Bun's --watch flag (bun run --watch worker.ts) to restart the worker on changes.
  • Keep activities free of Temporal SDK imports so they remain tree-shakeable and easy to unit test.
  • Expose Prometheus metrics via the worker runtime and forward them to your observability stack.
  • Prefer Temporal schedules to cron jobs for recurring workloads.
  • Store long-lived credentials in a secrets manager and inject them via the worker environment.

With @proompteng/temporal-bun-sdk, you can reuse existing Temporal workflows while adopting Bun's fast startup times and fully typed client/worker helpers.

Architecture overview

  • Workflow runtime - executes deterministic workflows inside Bun with Effect fibers. Command intents (schedule-activity, timers, child workflows, signals, continue-as-new) emit Temporal protobufs directly and are guarded by a determinism snapshot that captures command order, random values, and logical timestamps.
  • Worker runtime - wraps pollers, sticky cache routing, activity execution, and build ID registration. It consumes the same loadTemporalConfig environment contract as our Go worker and exposes concurrency knobs via TEMPORAL_WORKFLOW_CONCURRENCY, TEMPORAL_ACTIVITY_CONCURRENCY, and sticky cache variables.
  • Client - a Connect transport with branded call options, memo/search helpers, TLS diagnostics, and retry policies derived from environment variables. All WorkflowService RPCs run through logging/metrics interceptors so Bun services get observability parity with the worker runtime.
  • CLI and tooling - temporal-bun provides scaffolding, config validation, Docker packaging, and deterministic replay. The CLI shares the same config and observability layers, ensuring every command fails fast with actionable logs.

Tutorials and recipes

Follow the quickstart above to scaffold workflows/activities, then explore the example app in packages/temporal-bun-sdk-example for:

  • Activity heartbeats and cancellation propagation via the runtime lifecycle helpers.
  • Signals, queries, and updates using defineWorkflowSignals, defineWorkflowQueries, and defineWorkflowUpdates.
  • Deterministic helpers (determinism.now, determinism.random, determinism.getVersion) that make replay diagnostics trivial.
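The replay idea behind these helpers can be sketched in isolation. This is a conceptual illustration, not the SDK's determinism implementation: values are recorded on first execution and replayed verbatim afterwards, which is what keeps random-style calls stable across replays.

```typescript
// Records generated values on first execution and replays them afterwards.
export const makeRecordedRandom = (recorded: number[] = []) => {
  let index = 0
  return {
    next(generate: () => number = Math.random): number {
      if (index < recorded.length) return recorded[index++]! // replay path
      const value = generate() // live path: record for future replays
      recorded.push(value)
      index += 1
      return value
    },
    recorded,
  }
}
```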

We are actively porting these recipes into standalone guides (heartbeats, signals, updates, schedules). Each guide links back to runnable snippets so teams can copy/paste into new Bun workers.

CLI and tooling reference

| Command | Purpose | Notes |
| --- | --- | --- |
| temporal-bun init | Scaffold a worker + Docker assets | Honors --force to overwrite existing files. |
| temporal-bun doctor | Load config, emit log + metrics, verify TLS | Accepts --log-format, --log-level, --metrics, --metrics-exporter, --metrics-endpoint. |
| temporal-bun docker-build | Build worker image | Supports --tag, --context, and --file. |
| temporal-bun replay | Diff workflow determinism from JSON or live histories | Supports --history-file, --execution, --source, and --json; valid --source values are cli, service, and auto. |

Proto regeneration lives under packages/temporal-bun-sdk/scripts/update-temporal-protos.ts. Pass --version <tag> (or omit to use the latest release) and optionally --repo <owner/name> before publishing a new SDK version.