# Temporal Bun SDK

`@proompteng/temporal-bun-sdk` runs Temporal workers and clients on Bun, with replay tooling, Docker helpers, and Temporal Cloud/TLS support.
## Prerequisites

- Bun 1.3.10 or newer (matches the package engine requirement).
- Access to a Temporal Cloud namespace or self-hosted cluster.
- (Optional) The `temporal` CLI for namespace administration and replaying live executions.
- (Optional) `docker` if you plan to build container images with the provided helpers.
## Quickstart

Create a new worker project outside another Bun workspace:

```bash
bunx @proompteng/temporal-bun-sdk init my-worker
cd my-worker
bun install
```

Add `.env`:

```bash
printf "TEMPORAL_ADDRESS=127.0.0.1:7233\nTEMPORAL_NAMESPACE=default\nTEMPORAL_TASK_QUEUE=hello-bun\n" > .env
```

Start Temporal:

```bash
temporal server start-dev --headless
```

Start the worker:

```bash
bun run dev
```

Start a workflow in another shell:

```bash
temporal workflow start \
  --task-queue hello-bun \
  --type helloWorkflow \
  --input '"Codex"'
```

## Add to an existing Bun project
Add the SDK to an existing Bun workspace:

```bash
bun add @proompteng/temporal-bun-sdk
```

The `init` template includes example workflows, activities, and Docker packaging scripts that map one-to-one with the library's defaults.
## Strict mode

The generated worker defaults to `workflowGuards: 'warn'` so local setup works with `temporal server start-dev`. If you switch to strict mode, configure worker versioning and stable build IDs. See "Worker build IDs and versioning" below.
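As a configuration sketch only, the fragment below shows where those options would live in `createWorker(...)`. The `workflowGuards: 'warn'` default comes from this README; the `'strict'` value and the exact `deployment` field shapes (`name`, `buildId`, `versioningMode` values) are assumptions to verify against the package's exported types before use.

```typescript
import { createWorker } from '@proompteng/temporal-bun-sdk/worker'

// Hypothetical strict-mode worker options. The 'strict' guard value and the
// deployment field values are assumptions based on the defaults described above.
const { worker } = await createWorker({
  workflowsPath: new URL('./workers/workflows/index.ts', import.meta.url).pathname,
  workflowGuards: 'strict',
  deployment: {
    name: 'my-worker',
    buildId: process.env.TEMPORAL_WORKER_BUILD_ID ?? 'rev-abc123',
    versioningMode: 'versioned',
  },
})
```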
## Configure your Temporal connection

Configuration flows through `loadTemporalConfig()`, which reads environment variables, normalizes paths, and enforces required values. Drop a `.env` file in your worker project and tailor the defaults as needed:

```ini
TEMPORAL_ADDRESS=127.0.0.1:7233
TEMPORAL_NAMESPACE=default
TEMPORAL_TASK_QUEUE=demo-worker

# Add these when connecting to Temporal Cloud or a TLS-enabled cluster:
# TEMPORAL_API_KEY=your-cloud-api-key
# TEMPORAL_TLS_CERT_PATH=./certs/client.crt
# TEMPORAL_TLS_KEY_PATH=./certs/client.key
# TEMPORAL_TLS_CA_PATH=./certs/ca.pem
```

Environment variables supported by the config loader:
| Variable | Default | Description |
|---|---|---|
| `TEMPORAL_ADDRESS` | `${TEMPORAL_HOST}:${TEMPORAL_GRPC_PORT}` | Direct address override (e.g. `temporal.example.com:7233`). |
| `TEMPORAL_HOST` | `127.0.0.1` | Hostname used when `TEMPORAL_ADDRESS` is unset. |
| `TEMPORAL_GRPC_PORT` | `7233` | Temporal gRPC port. |
| `TEMPORAL_NAMESPACE` | `default` | Namespace passed to the worker and client. |
| `TEMPORAL_TASK_QUEUE` | `replay-fixtures` | Worker task queue. |
| `TEMPORAL_API_KEY` | unset | Injected into connection metadata for Cloud/API auth. |
| `TEMPORAL_CLOUD_ADDRESS` | unset | Temporal Cloud Ops endpoint (defaults to `saas-api.tmprl.cloud:443` when the Cloud API is enabled). |
| `TEMPORAL_CLOUD_API_KEY` | unset | API key for the Temporal Cloud Ops API (Bearer token). |
| `TEMPORAL_CLOUD_API_VERSION` | unset | Cloud API version header (defaults to `2025-05-31` when the Cloud API is enabled). |
| `TEMPORAL_TLS_CA_PATH` | unset | Path to a trusted CA bundle. |
| `TEMPORAL_TLS_CERT_PATH` / `TEMPORAL_TLS_KEY_PATH` | unset | mTLS client certificate and key (both required together). |
| `TEMPORAL_TLS_SERVER_NAME` | unset | Overrides TLS server name verification. |
| `TEMPORAL_ALLOW_INSECURE` / `ALLOW_INSECURE_TLS` | `false` | Accepts `1`/`true`/`on` to skip certificate verification. |
| `TEMPORAL_WORKER_IDENTITY_PREFIX` | `temporal-bun-worker` | Worker identity prefix (host and PID are appended). |
| `TEMPORAL_WORKER_BUILD_ID` | unset | Worker build ID; auto-derived when unset. |
| `TEMPORAL_WORKFLOW_CONCURRENCY` | `4` | Workflow poller concurrency. |
| `TEMPORAL_ACTIVITY_CONCURRENCY` | `4` | Activity poller concurrency. |
| `TEMPORAL_STICKY_CACHE_SIZE` | `128` | Sticky cache size for determinism snapshots. |
| `TEMPORAL_STICKY_TTL_MS` | `300000` | Sticky cache TTL in milliseconds. |
| `TEMPORAL_STICKY_SCHEDULING_ENABLED` | `true` when cache size > 0 | Enable sticky scheduling; set to `0`/`false` to disable. |
| `TEMPORAL_ACTIVITY_HEARTBEAT_INTERVAL_MS` | `5000` | Activity heartbeat throttle interval. |
| `TEMPORAL_ACTIVITY_HEARTBEAT_RPC_TIMEOUT_MS` | `5000` | Activity heartbeat RPC timeout. |
| `TEMPORAL_LOG_FORMAT` | `pretty` | Select `json` or `pretty` logging output for worker/client runs. |
| `TEMPORAL_LOG_LEVEL` | `info` | Minimum log severity (`debug`, `info`, `warn`, `error`). |
| `TEMPORAL_TRACING_INTERCEPTORS_ENABLED` | `true` | Set to `false` to disable tracing/audit interceptors. |
| `TEMPORAL_SHOW_STACK_SOURCES` | `false` | Include stack trace source maps in errors. |
| `TEMPORAL_METRICS_EXPORTER` | `in-memory` | Metrics sink: `in-memory`, `file`, `prometheus`, or `otlp`. |
| `TEMPORAL_METRICS_ENDPOINT` | unset | Path/URL for the file, Prometheus, or OTLP exporters. |
| `TEMPORAL_CLIENT_RETRY_MAX_ATTEMPTS` | `5` | WorkflowService RPC attempt budget. |
| `TEMPORAL_CLIENT_RETRY_INITIAL_MS` | `200` | Initial retry delay (milliseconds). |
| `TEMPORAL_CLIENT_RETRY_MAX_MS` | `5000` | Maximum retry delay (milliseconds). |
| `TEMPORAL_CLIENT_RETRY_BACKOFF` | `2` | Exponential backoff multiplier applied per attempt. |
| `TEMPORAL_CLIENT_RETRY_JITTER_FACTOR` | `0.2` | Decorrelated jitter factor between 0 and 1. |
| `TEMPORAL_CLIENT_RETRY_STATUS_CODES` | `UNAVAILABLE,RESOURCE_EXHAUSTED,DEADLINE_EXCEEDED,INTERNAL` | Comma-separated Connect codes that should be retried. |
| `TEMPORAL_PAYLOAD_CODECS` | unset | Comma-separated payload codecs applied in order (e.g. `gzip,aes-gcm`). |
| `TEMPORAL_CODEC_AES_KEY` | unset | Base64 or hex AES key (128/192/256-bit) required when `aes-gcm` is enabled. |
| `TEMPORAL_CODEC_AES_KEY_ID` | `default` | Optional key identifier recorded in payload metadata for rotation/diagnostics. |
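To make the retry knobs concrete, here is a self-contained sketch of a jittered exponential backoff computed from the documented `TEMPORAL_CLIENT_RETRY_*` defaults (initial 200 ms, max 5000 ms, multiplier 2, jitter factor 0.2). It illustrates the shape of the policy, not the SDK's exact internal algorithm.

```typescript
// Defaults mirroring the TEMPORAL_CLIENT_RETRY_* table entries above.
const policy = { initialMs: 200, maxMs: 5_000, backoff: 2, jitterFactor: 0.2 }

// Delay before retry attempt N (0-based), with +/- jitterFactor noise.
function retryDelayMs(attempt: number, random: () => number = Math.random): number {
  // Exponential growth capped at the configured maximum.
  const base = Math.min(policy.initialMs * policy.backoff ** attempt, policy.maxMs)
  // Spread the delay across [base * (1 - jitter), base * (1 + jitter)].
  const jitter = (random() * 2 - 1) * policy.jitterFactor * base
  return Math.max(0, Math.round(base + jitter))
}
```

With the defaults, successive delays trend 200 → 400 → 800 → 1600 ms and so on until the 5000 ms cap, each nudged by up to 20% of jitter.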
`loadTemporalConfig()` returns typed values that the client and worker factories consume directly, so you never have to stitch addresses or TLS buffers together by hand.

For a focused Cloud setup guide, including API key auth, custom CA bundles, and mTLS, see "Temporal Cloud and TLS".
## Worker build IDs and versioning

Workers derive their build ID (in priority order) from:

1. `deployment.buildId` passed to `createWorker(...)` / `WorkerRuntime.create(...)`
2. `TEMPORAL_WORKER_BUILD_ID`
3. A derived value based on the configured workflow sources (`workflowsPath`)

When you enable worker versioning via `deployment.versioningMode`, the worker includes deployment metadata (deployment name + build ID) in poll/response requests so the server can route workflow tasks to the correct build.

The Bun SDK does not call the deprecated Build ID Compatibility APIs (Version Set-based "worker versioning v0.1"), since they may be disabled on some namespaces.
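The priority order above amounts to a first-defined-wins lookup, sketched here with a stand-in for the source-derived fallback (the actual derivation from `workflowsPath` is not specified in this document):

```typescript
// Illustrative resolution of the documented build-ID priority order.
function resolveBuildId(opts: {
  deploymentBuildId?: string // deployment.buildId passed to createWorker(...)
  envBuildId?: string        // TEMPORAL_WORKER_BUILD_ID
  workflowsPath: string      // basis for the derived fallback
}): string {
  if (opts.deploymentBuildId) return opts.deploymentBuildId
  if (opts.envBuildId) return opts.envBuildId
  // Stand-in for a value derived from the workflow sources; the SDK's
  // real derivation is an implementation detail.
  return `derived-${opts.workflowsPath.length.toString(16)}`
}
```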
## OpenTelemetry export

Set `TEMPORAL_OTEL_ENABLED=true` to start the OpenTelemetry SDK inside the worker process. Configure exporters with standard OTEL environment variables:

- `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` and `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT` (or `OTEL_EXPORTER_OTLP_ENDPOINT` to share a base URL).
- `OTEL_EXPORTER_OTLP_PROTOCOL` (or per-signal `OTEL_EXPORTER_OTLP_TRACES_PROTOCOL` / `OTEL_EXPORTER_OTLP_METRICS_PROTOCOL`) to choose `http/json` (default) or `http/protobuf`. The SDK warns and falls back to HTTP if gRPC is requested.
- `OTEL_SERVICE_NAME`, `OTEL_SERVICE_NAMESPACE`, and `OTEL_SERVICE_INSTANCE_ID` to label service identity.
- `OTEL_RESOURCE_ATTRIBUTES` for additional resource tags.
- `OTEL_EXPORTER_OTLP_TIMEOUT` (or per-signal `OTEL_EXPORTER_OTLP_TRACES_TIMEOUT` / `OTEL_EXPORTER_OTLP_METRICS_TIMEOUT`) to increase OTLP request timeouts.
- `OTEL_METRIC_EXPORT_INTERVAL` and `OTEL_METRIC_EXPORT_TIMEOUT` to tune metric export cadence.
- `OTEL_EXPORTER_OTLP_COMPRESSION` (or per-signal `OTEL_EXPORTER_OTLP_TRACES_COMPRESSION` / `OTEL_EXPORTER_OTLP_METRICS_COMPRESSION`) to enable gzip payload compression.

Auto-instrumentation stays disabled by default; enable it explicitly with `TEMPORAL_OTEL_AUTO_INSTRUMENTATION=true`.
## WorkflowService client resilience

`createTemporalClient()` automatically wraps WorkflowService RPCs with our retry helper and telemetry interceptors:

- **Configurable retries** - `config.rpcRetryPolicy` is populated from the `TEMPORAL_CLIENT_RETRY_*` env vars (or overrides passed to `loadTemporalConfig`). All client methods use the resulting jittered exponential backoff policy, and you can override per-call values via `TemporalClientCallOptions.retryPolicy`.
- **Optional call options** - `startWorkflow`, `signalWorkflow`, `queryWorkflow`, `signalWithStart`, `terminateWorkflow`, and `describeNamespace` accept an optional trailing `callOptions` argument (headers, timeout, abort signal, retry policy). Use `temporalCallOptions()` to brand the object so payloads are not mistaken for options:

  ```ts
  import { temporalCallOptions } from '@proompteng/temporal-bun-sdk'

  await client.signalWorkflow(
    handle,
    'updateState',
    { signal: 'start' },
    temporalCallOptions({
      headers: { 'x-trace-id': traceId },
      timeoutMs: 5_000,
    }),
  )
  ```

- **Default interceptors** - inbound/outbound hooks wrap every workflow RPC and operation: namespace/identity headers are injected, retries use jittered backoff, and latency/error metrics flow through the configured registry/exporter. Tracing spans are opt-in via `TEMPORAL_TRACING_INTERCEPTORS_ENABLED` (or `tracingEnabled` in code). Append custom middleware with `clientInterceptors(client)` or `interceptors(transport)` to add auth headers, audit logs, or bespoke telemetry.
- **Memo/search helpers** - `client.memo` and `client.searchAttributes` expose `encode`/`decode` helpers that reuse the client's `DataConverter`, making it easy to prepare payloads for raw WorkflowService requests.
- **TLS validation** - TLS buffers are checked up front (missing files, invalid PEMs, and mismatched cert/key pairs throw `TemporalTlsConfigurationError`), and transport failures surface as `TemporalTlsHandshakeError` with remediation hints.
## RPC coverage (Workflow, Operator, Cloud)

The SDK exposes both high-level helpers and low-level RPC access:

- `client.rpc.workflow.call(...)` for any WorkflowService RPC.
- `client.operator.*` for OperatorService convenience methods, plus `client.rpc.operator.call(...)` for full coverage.
- `client.cloud.call(...)` for the Temporal Cloud Ops API (configure via `TEMPORAL_CLOUD_*`).

All RPC entrypoints accept `TemporalClientCallOptions` for headers, timeouts, retry overrides, and abort signals.
## Payload codecs and failure conversion

The SDK's `DataConverter` supports an ordered codec chain so you can compress and encrypt payloads without giving up deterministic replay:

- Enable codecs with `TEMPORAL_PAYLOAD_CODECS` (e.g. `gzip,aes-gcm`); AES-GCM requires `TEMPORAL_CODEC_AES_KEY` (128/192/256-bit, base64/hex) and optionally `TEMPORAL_CODEC_AES_KEY_ID` for rotation tracking.
- Codecs wrap the entire payload proto, so replay remains compatible as long as the chain stays stable for a given workflow history.
- Codec metrics are emitted per codec/direction (`temporal_payload_codec_encode_total_*`, `*_decode_total_*`, `*_errors_total_*`), and failures log the offending codec/direction.
- The failure converter returns a structured `TemporalFailureError` that preserves `details` and `cause` payloads using the same codec chain, so workflow/activity/update/query errors decode cleanly on clients.
- `temporal-bun doctor` builds the codec chain from config and fails fast on missing/invalid keys while printing the resolved codec list.
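For intuition, this self-contained sketch shows the kind of authenticated encryption an `aes-gcm` codec performs, using `node:crypto` (which Bun supports). The SDK's actual payload framing and metadata layout may differ; only the round-trip property is the point here.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto'

// Encrypt a payload with AES-256-GCM. Packs nonce + auth tag + ciphertext
// into one buffer so decoding is self-describing.
function encodePayload(key: Buffer, plaintext: Buffer): Buffer {
  const iv = randomBytes(12) // 96-bit nonce, the standard size for GCM
  const cipher = createCipheriv('aes-256-gcm', key, iv)
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()])
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext])
}

// Reverse the framing above; throws if the auth tag does not verify.
function decodePayload(key: Buffer, encoded: Buffer): Buffer {
  const iv = encoded.subarray(0, 12)
  const tag = encoded.subarray(12, 28) // GCM auth tag is 16 bytes
  const ciphertext = encoded.subarray(28)
  const decipher = createDecipheriv('aes-256-gcm', key, iv)
  decipher.setAuthTag(tag)
  return Buffer.concat([decipher.update(ciphertext), decipher.final()])
}
```

Because GCM authenticates as well as encrypts, a tampered payload fails decode loudly instead of yielding garbage, which is what lets the failure converter surface the offending codec/direction.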
## Observability

The Temporal Bun SDK ships with structured logging and metrics layers so you can operate Bun workers/clients like any other service in your stack. Configure the behavior with the same environment variables listed above:

- `TEMPORAL_LOG_FORMAT` - controls the log formatter (`pretty` or `json`).
- `TEMPORAL_LOG_LEVEL` - sets the minimum log severity that makes it into the sink.
- `TEMPORAL_METRICS_EXPORTER` / `TEMPORAL_METRICS_ENDPOINT` - select a sink (`in-memory`, `file`, `prometheus`, or `otlp`) and its path/URL.
- `TEMPORAL_METRICS_FLUSH_INTERVAL_MS` - overrides the worker metrics flush cadence (defaults to 10 seconds).

Want to verify your configuration without running a worker? `temporal-bun` includes a `doctor` command that loads the shared config, builds the interceptor chain, validates the retry presets, spins up observability services, emits a log, increments a counter, and flushes the selected exporter:

```bash
bunx @proompteng/temporal-bun-sdk doctor --log-format=json --metrics=file:./metrics.json
```

The command prints a success summary (including active interceptors and the resolved retry policy) once the JSON log is emitted and the metrics file is written, so you can script it into CI or pre-deployment checks.
## Effect layers and runtime helpers

Temporal Bun exposes Effect Layers so workers, clients, and CLI tools can share managed dependencies without hand-wiring config or observability plumbing:

- `createTemporalClientLayer` / `TemporalClientLayer` - managed Temporal client lifecycle (auto-shutdown on scope exit).
- `createWorkerRuntimeLayer` / `WorkerRuntimeLayer` - run the worker runtime inside an `Effect.scoped` program.
- `createWorkerAppLayer` / `runWorkerApp` - compose config + observability + WorkflowService + worker runtime in one call.
- `createTemporalCliLayer` / `runTemporalCliEffect` - run Effect programs with the same config/observability/workflow service stack used by the CLI.

Example: run a one-off CLI task with the same layers used by `temporal-bun`:

```ts
import { Effect } from 'effect'

import { makeTemporalClientEffect, runTemporalCliEffect } from '@proompteng/temporal-bun-sdk'

await runTemporalCliEffect(
  makeTemporalClientEffect().pipe(
    Effect.tap(({ client }) => Effect.promise(() => client.describeNamespace())),
    Effect.tap(({ client }) => Effect.promise(() => client.shutdown())),
  ),
)
```

Prefer the plain async helpers? `createWorker()` and `createTemporalClient()` wrap the same configuration and observability defaults without requiring Effect plumbing.
## Replay workflow histories

`temporal-bun replay` lets you ingest workflow histories, diff determinism state, and share diagnostics with incident responders without writing ad hoc scripts. It reuses `loadTemporalConfig`, the observability sinks, and the same ingestion pipeline that powers the worker sticky cache.

- `--history-file <path>` - replay a JSON capture (`temporal workflow show --history --output json`) or a fixture envelope with `history` + `info`.
- `--execution <workflowId/runId>` - fetch live history via the Temporal CLI or a WorkflowService RPC (`--source cli|service|auto`).
- `--workflow-type`, `--namespace`, `--temporal-cli`, `--json` - supply workflow metadata, namespace overrides, a custom CLI binary path, and machine-readable summaries, respectively.
- Exit codes: `0` success, `2` nondeterminism, `1` IO/configuration failures.

```bash
# Replay a saved history fixture
bunx temporal-bun replay \
  --history-file packages/temporal-bun-sdk/tests/replay/fixtures/timer-workflow.json \
  --workflow-type timerWorkflow \
  --json

# Diff a live execution using the Temporal CLI harness
TEMPORAL_ADDRESS=127.0.0.1:7233 TEMPORAL_NAMESPACE=temporal-bun-integration \
  bunx temporal-bun replay \
  --execution workflow-id/run-id \
  --workflow-type integrationWorkflow \
  --namespace temporal-bun-integration \
  --source cli
```

Set `TEMPORAL_CLI_PATH` or pass `--temporal-cli` when the CLI binary is not on `PATH`, and rely on `--source service` to route through WorkflowService when the CLI is unavailable (for example, in CI). The command logs history provenance, event counts, and mismatch metadata, and writes a compact JSON summary when `--json` is supplied so you can feed the output into other tooling.
## Author activities

Activities are plain Bun functions. Keep your workflows deterministic from Temporal's perspective and delegate external side effects to the activity layer.

```ts
export type Activities = {
  echo(input: { message: string }): Promise<string>
  sleep(milliseconds: number): Promise<void>
}

export const activities: Activities = {
  async echo({ message }) {
    return message
  },
  async sleep(milliseconds) {
    await Bun.sleep(milliseconds)
  },
}
```

## Author workflows
Import workflow primitives from the SDK so Bun can bundle Temporal's workflow runtime correctly.

```ts
import { Effect } from 'effect'
import * as Schema from 'effect/Schema'

import { defineWorkflow } from '@proompteng/temporal-bun-sdk/workflow'

export const workflows = [
  defineWorkflow('helloWorkflow', Schema.Array(Schema.String), ({ input, activities, determinism }) =>
    Effect.gen(function* () {
      const [rawName] = input
      const name = typeof rawName === 'string' && rawName.length > 0 ? rawName : 'Temporal'
      yield* activities.schedule('sleep', [10])
      yield* activities.schedule('echo', [{ message: `Hello, ${name}!` }])
      return `Greeting queued at ${new Date(determinism.now()).toISOString()}`
    }),
  ),
]

export default workflows
```

Export your workflows from an index file so the worker can register them all at once:

```ts
export * from './workflows.ts'
```

## Run a worker
`createWorker()` wires up the Temporal connection, registers your workflows and activities, and hands back both the worker instance and the resolved config.

```ts
import { fileURLToPath } from 'node:url'

import { createWorker } from '@proompteng/temporal-bun-sdk/worker'

import { activities } from './workers/activities.ts'

const { worker } = await createWorker({
  activities,
  workflowsPath: fileURLToPath(new URL('./workers/workflows/index.ts', import.meta.url)),
})

const shutdown = async (signal: string) => {
  console.log(`Received ${signal}. Shutting down worker...`)
  await worker.shutdown()
  process.exit(0)
}

process.on('SIGINT', () => void shutdown('SIGINT'))
process.on('SIGTERM', () => void shutdown('SIGTERM'))

await worker.run()
```

For quick tests, run the bundled binary instead of compiling your own entry point:

```bash
bunx temporal-bun-worker
```

It uses the same configuration loader and ships with example workflows if you need a smoke test.
## Start and manage workflows from Bun

`createTemporalClient()` produces a Bun-native Temporal client that already understands the config loader, workflow handles, and retry policies.

```ts
import { createTemporalClient } from '@proompteng/temporal-bun-sdk'

const { client } = await createTemporalClient()

const start = await client.startWorkflow({
  workflowId: `hello-${Date.now()}`,
  workflowType: 'helloWorkflow',
  taskQueue: 'demo-worker',
  args: ['Proompteng'],
})

console.log('Workflow started:', start.runId)

await client.signalWorkflow(start.handle, 'complete', { ok: true })
await client.terminateWorkflow(start.handle, { reason: 'demo complete' })
await client.shutdown()
```

All workflow operations (`startWorkflow`, `signalWorkflow`, `queryWorkflow`, `terminateWorkflow`, `cancelWorkflow`, and `signalWithStart`) share the same handle structure, so you can persist it between processes without extra serialization code.
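Because the handle is plain serializable data, persisting it between processes can be a straight JSON round trip. The `workflowId`/`runId` fields below are assumptions about the handle's shape for illustration; check the SDK's exported handle type for the authoritative fields.

```typescript
// Hypothetical minimal handle shape; the SDK's real handle may carry more fields.
type PersistedHandle = { workflowId: string; runId: string }

function saveHandle(handle: PersistedHandle): string {
  return JSON.stringify(handle)
}

function loadHandle(serialized: string): PersistedHandle {
  const parsed = JSON.parse(serialized) as PersistedHandle
  // Guard against stale or truncated records before reusing the handle.
  if (!parsed.workflowId || !parsed.runId) throw new Error('invalid handle payload')
  return parsed
}
```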
## Workflow updates and queries

The SDK supports workflow updates and queries with Effect Schema validators. Define handlers alongside workflows, then invoke them via the `client.workflow` helpers:

```ts
import { Effect } from 'effect'
import * as Schema from 'effect/Schema'

import { defineWorkflow, defineWorkflowUpdates } from '@proompteng/temporal-bun-sdk/workflow'

const updates = defineWorkflowUpdates([
  {
    name: 'setCounter',
    input: Schema.Number,
    handler: (_ctx, value: number) => Effect.sync(() => value),
  },
])

export const counterWorkflow = defineWorkflow(
  'counterWorkflow',
  Schema.Number,
  ({ input }) => Effect.sync(() => input),
  { updates },
)
```

```ts
import { createTemporalClient } from '@proompteng/temporal-bun-sdk'

const { client } = await createTemporalClient()

const { handle } = await client.startWorkflow({
  workflowId: `counter-${Date.now()}`,
  workflowType: 'counterWorkflow',
  taskQueue: 'demo-worker',
  args: [0],
})

const result = await client.workflow.update(handle, {
  updateName: 'setCounter',
  args: [42],
  waitForStage: 'completed',
})

if (result.outcome?.status === 'success') {
  console.log('Counter updated to', result.outcome.result)
}

await client.shutdown()
```

Use `defineWorkflowQueries` and `client.queryWorkflow` for query handlers; query execution runs in read-only mode and rejects non-deterministic operations.
## CLI quick reference

The `temporal-bun` CLI is available through `bunx @proompteng/temporal-bun-sdk <command>` before installation, or `bunx temporal-bun <command>` inside a project that already depends on the package.

- `init [directory] [--force]` - scaffold a Bun worker project with example workflows, activities, a Dockerfile, and scripts.
- `doctor` - validate the SDK config, emit a JSON log, and flush the selected metrics exporter.
- `docker-build [--tag <name>] [--context <path>] [--file <path>]` - package the current directory into a worker image.
- `replay` - diff workflow determinism from JSON fixtures or live histories.
- `help` - print the command reference.
## Legacy binary status

The pure TypeScript runtime is the default (and only) supported execution path. Historical assets remain in `packages/temporal-bun-sdk/bruke/` for reference, but environment flags such as `TEMPORAL_BUN_SDK_USE_ZIG` are no longer wired into the worker or client. Future experiments should introduce new, explicit configuration rather than relying on retired flags.
## Local development and production tips

- Use Bun's `--watch` flag (`bun run --watch worker.ts`) to restart the worker on changes.
- Keep activities free of Temporal SDK imports so they remain tree-shakeable and easy to unit test.
- Expose Prometheus metrics via the worker runtime and forward them to your observability stack.
- Prefer Temporal schedules to cron jobs for recurring workloads.
- Store long-lived credentials in a secrets manager and inject them via the worker environment.

With `@proompteng/temporal-bun-sdk`, you can reuse existing Temporal workflows while adopting Bun's fast startup times and fully typed client/worker helpers.
## Architecture overview

- **Workflow runtime** - executes deterministic workflows inside Bun with Effect fibers. Command intents (schedule-activity, timers, child workflows, signals, continue-as-new) emit Temporal protobufs directly and are guarded by a determinism snapshot that captures command order, random values, and logical timestamps.
- **Worker runtime** - wraps pollers, sticky cache routing, activity execution, and build ID registration. It consumes the same `loadTemporalConfig` environment contract as our Go worker and exposes concurrency knobs via `TEMPORAL_WORKFLOW_CONCURRENCY`, `TEMPORAL_ACTIVITY_CONCURRENCY`, and the sticky cache variables.
- **Client** - a Connect transport with branded call options, memo/search helpers, TLS diagnostics, and retry policies derived from environment variables. All WorkflowService RPCs run through logging/metrics interceptors so Bun services get observability parity with the worker runtime.
- **CLI and tooling** - `temporal-bun` provides scaffolding, config validation, Docker packaging, and deterministic replay. The CLI shares the same config and observability layers, ensuring every command fails fast with actionable logs.
## Tutorials and recipes

Follow the quickstart above to scaffold workflows/activities, then explore the example app in `packages/temporal-bun-sdk-example` for:

- Activity heartbeats and cancellation propagation via the runtime lifecycle helpers.
- Signals, queries, and updates using `defineWorkflowSignals`, `defineWorkflowQueries`, and `defineWorkflowUpdates`.
- Deterministic helpers (`determinism.now`, `determinism.random`, `determinism.getVersion`) that make replay diagnostics trivial.
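Deterministic helpers like these rely on record-then-replay semantics: the first execution records each nondeterministic value, and replay hands back the recorded values so history comparison stays stable. A minimal self-contained sketch (not the SDK's implementation) looks like this:

```typescript
// One recording per workflow execution: values captured in order on the first
// run, then replayed by cursor position during history replay.
type Recording = { values: number[]; cursor: number; replaying: boolean }

function makeDeterminism(recording: Recording) {
  const capture = (produce: () => number): number => {
    if (recording.replaying) return recording.values[recording.cursor++]
    const value = produce()
    recording.values.push(value)
    return value
  }
  return {
    now: () => capture(() => Date.now()),       // logical timestamp
    random: () => capture(() => Math.random()), // recorded random draw
  }
}
```

During replay the cursor walks the recorded values in order, which is why command ordering must stay stable for a given workflow history.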
We are actively porting these recipes into standalone guides (heartbeats, signals, updates, schedules). Each guide links back to runnable snippets so teams can copy/paste into new Bun workers.
## CLI and tooling reference

| Command | Purpose | Notes |
|---|---|---|
| `temporal-bun init` | Scaffold a worker + Docker assets | Honors `--force` to overwrite existing files. |
| `temporal-bun doctor` | Load config, emit log + metrics, verify TLS | Accepts `--log-format`, `--log-level`, `--metrics`, `--metrics-exporter`, `--metrics-endpoint`. |
| `temporal-bun docker-build` | Build worker image | Supports `--tag`, `--context`, and `--file`. |
| `temporal-bun replay` | Diff workflow determinism from JSON or live histories | Supports `--history-file`, `--execution`, `--source`, and `--json`; valid source values are `cli`, `service`, and `auto`. |

Proto regeneration lives under `packages/temporal-bun-sdk/scripts/update-temporal-protos.ts`. Pass `--version <tag>` (or omit it to use the latest release) and optionally `--repo <owner/name>` before publishing a new SDK version.