Management API — contract & automation

Machine-readable and narrative reference for workspace automation (same Authorization: Bearer keys as ingestion, where applicable). Field rules mirror lib/utils/pipe_validators.js (Zod) and routes/management_api_routes.js.

OpenAPI 3.0 (beta): https://www.batchpipe.com/openapi/management-v1.openapi.yaml — import into codegen, Postman, or contract tests; treat as beta and diff on upgrade.


Auth & integration assumptions

  • Workspace scope: the Bearer token is a workspace API key; every management URL operates on that workspace’s pipes, keys, and limits.
  • Ingest path unchanged: POST https://www.batchpipe.com/v1/ingest/:pipe_id (and related ingest docs) stay the data path; management keys can create sibling keys and must be stored like root credentials.
  • No separate credential type: there is no dedicated “UserRadar” credential — automation authenticates with the customer’s own workspace API key.

Validation errors (400)

Failed Zod validation returns JSON shaped as:

{
  "error": "validation_error",
  "message": "Request body failed validation",
  "errors": {
    "pipe_name": "Pipe name can only contain letters, numbers, hyphens, and underscores",
    "columns.0.column_name": "Column name is required",
    "destination_config.url": "URL is required for HTTP endpoint destinations"
  }
}

errors is a flat object: keys are Zod issue.path segments joined with . (array indices appear as digits, e.g. columns.2.source_field). Values are human-readable strings (not machine codes). Implementation: middleware/validation.js.
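A minimal sketch of that flattening rule, assuming Zod-style issues with a `path` array and a `message` (the helper name is illustrative, not the actual middleware code):

```python
# Join each issue's path segments with "."; array indices become digit segments.
def flatten_issues(issues):
    return {".".join(str(seg) for seg in issue["path"]): issue["message"]
            for issue in issues}

issues = [
    {"path": ["pipe_name"], "message": "Pipe name can only contain letters, numbers, hyphens, and underscores"},
    {"path": ["columns", 0, "column_name"], "message": "Column name is required"},
]
flat = flatten_issues(issues)
# flat["columns.0.column_name"] == "Column name is required"
```

Clients can therefore match on exact flat keys (including array indices) rather than parsing nested error trees.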

See also the HTTP API — validation section.

Secret redaction (GET pipe / destination payloads)

Before JSON is returned, sensitive fields in destination_config are replaced with the string [redacted] (never omitted — the key stays present so clients know the field exists):

destination_type             Redacted key(s)
destination_database         destination_config.password
destination_object_store     destination_config.secret_key
destination_http_endpoint    destination_config.auth_header
GET /v1/api-keys never returns full secrets — only api_key_prefix and metadata (lib/domain/api_key.js).
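A sketch of the redaction rule from the table above, assuming destination_config is a flat object (key names come from the table; the helper itself is illustrative):

```python
# Per-destination-type sensitive keys, as documented above.
SENSITIVE = {
    "destination_database": ["password"],
    "destination_object_store": ["secret_key"],
    "destination_http_endpoint": ["auth_header"],
}

def redact(destination_type, config):
    # Replace, never delete: the key stays present so clients know it exists.
    return {k: ("[redacted]" if k in SENSITIVE.get(destination_type, []) else v)
            for k, v in config.items()}

redact("destination_database", {"host": "db", "password": "hunter2"})
# → {"host": "db", "password": "[redacted]"}
```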

Postgres / MySQL destination_config

Delivery reads host, port (default 5432 / 3306), database, username, password, table, and optional dialect / sql_dialect (lib/delivery/database_destination.js). There is no per-destination JSON flag for TLS today: TLS is used when DESTINATION_DB_SSL=true or when host is not localhost / 127.0.0.1, with rejectUnauthorized: false for the client pool.
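The TLS decision above can be sketched as a predicate (env var name and host list from the text; the helper is an assumption, not the delivery code):

```python
import os

def use_tls(host: str) -> bool:
    # TLS when DESTINATION_DB_SSL=true, or when the host is non-local.
    if os.environ.get("DESTINATION_DB_SSL") == "true":
        return True
    return host not in ("localhost", "127.0.0.1")

# use_tls("db.internal") → True; use_tls("localhost") → False (env unset)
```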

Column mapping: each row maps a SQL column to a top-level JSON key on ingested records via source_field (lib/delivery/record_value.js — nested JSONPath not supported). column_type is one of the JSON types string|number|boolean|object|array|null before casting to SQL.
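A sketch of that per-row extraction: each column's source_field names a top-level key on the ingested record, with no nested JSONPath (column and field names here are illustrative):

```python
# Build one SQL row's values from a record using the columns mapping.
def row_values(columns, record):
    return {c["column_name"]: record.get(c["source_field"]) for c in columns}

columns = [
    {"column_name": "url", "source_field": "page_url", "column_type": "string"},
    {"column_name": "ts", "source_field": "record_ts", "column_type": "number"},
]
row_values(columns, {"page_url": "/home", "record_ts": 1700000000})
# → {"url": "/home", "ts": 1700000000}
```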

PATCH …/destinations/:id — empty string keeps secrets: if destination_config.password (database), secret_key (object store), or auth_header (HTTP) is sent as "", the server keeps the previously stored value after shallow-merge (routes/management_api_routes.js).
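The keep-on-empty-string rule can be sketched as a shallow merge (secret key names from the redaction table above; the helper is illustrative, not the route code):

```python
SECRET_KEYS = ("password", "secret_key", "auth_header")

def merge_config(stored, patch):
    merged = {**stored, **patch}          # shallow merge, patch wins
    for key in SECRET_KEYS:
        if patch.get(key) == "":
            merged[key] = stored.get(key)  # "" means "keep the existing secret"
    return merged

merge_config({"host": "db", "password": "old"}, {"host": "db2", "password": ""})
# → {"host": "db2", "password": "old"}
```

This lets clients round-trip a redacted GET payload back through PATCH without clobbering stored secrets.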

Replacing column mappings: sending columns on PATCH replaces all destination columns (delete + insert) for database destinations.

Server defaults (create pipe / destination)

  • POST /v1/pipes — pipe_status is always active. pipe_add_ingest_ts defaults true; pipe_add_ingest_ip defaults false; pipe_ingest_ts_field / pipe_ingest_ip_field default to ingest_ts / ingest_ip; pipe_api_key_required defaults true; pipe_allow_any_browser_origin defaults false (lib/domain/pipe.js).
  • Uniqueness: (workspace_id, pipe_name) is unique — duplicate returns 409 duplicate_pipe_name.
  • POST destination — destination_max_retry_seconds defaults to 86400; destination_status starts active.
  • GET /v1/pipes/:pipe_id — includes each database destination’s columns array (joined server-side) so automation can read mappings without a second round-trip.
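The documented create-pipe defaults, applied to a minimal request body (this is an illustration of the listed values, not the server's domain code):

```python
# Defaults from lib/domain/pipe.js as documented above.
PIPE_DEFAULTS = {
    "pipe_status": "active",
    "pipe_add_ingest_ts": True,
    "pipe_add_ingest_ip": False,
    "pipe_ingest_ts_field": "ingest_ts",
    "pipe_ingest_ip_field": "ingest_ip",
    "pipe_api_key_required": True,
    "pipe_allow_any_browser_origin": False,
}

def with_defaults(body):
    return {**PIPE_DEFAULTS, **body}   # explicit fields override defaults

with_defaults({"pipe_name": "web-analytics"})["pipe_add_ingest_ts"]  # → True
```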

Website analytics alignment

The Website stats & events guide documents recommended fact tables (pageview, page_event, funnels). BatchPipe does not ship a separate “analytics preset” API flag: you model the warehouse with DDL from that doc, then create a pipe + Postgres destination whose columns map ingested JSON keys to those table columns.

Typical choices: enable pipe_add_ingest_ts and map semantic_role: received_at (or a dedicated column) if you want server receive time separate from client record_ts; tune PUT /v1/pipes/:id/limits for UI-driven traffic. The OpenAPI example under POST /v1/pipes shows analytics-friendly field names.
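An illustrative columns payload for a pageview fact table along those lines (table and column names are assumptions drawn from the guide's recommendations, not a shipped preset):

```python
# Map ingested JSON keys to SQL columns; received_at captures server receive
# time via the pipe_add_ingest_ts field (ingest_ts), separate from record_ts.
pageview_columns = [
    {"column_name": "page_url", "source_field": "page_url", "column_type": "string"},
    {"column_name": "record_ts", "source_field": "record_ts", "column_type": "number"},
    {"column_name": "received_at", "source_field": "ingest_ts", "column_type": "string"},
]
```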

Minimal end-to-end (conceptual JSON)

Replace UUIDs and secrets; order matters only where a step needs a prior pipe_id / destination_id.

  1. POST /v1/pipes — create pipe (see OpenAPI example).
  2. POST /v1/pipes/{pipe_id}/destinations — attach Postgres destination + columns.
  3. PUT /v1/pipes/{pipe_id}/limits — set batching / rate caps.
  4. POST /v1/pipes/{pipe_id}/allowed-origins — if the pipe will accept browser ingest without a per-request key.
  5. POST /v1/ingest/:pipe_id — send JSON objects whose top-level keys match source_field mappings (see ingest reference).
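The steps above can be sketched as an ordered plan of (method, path) pairs, assuming the pipe_id comes back from step 1 (all IDs are placeholders):

```python
def provisioning_plan(pipe_id):
    return [
        ("POST", "/v1/pipes"),
        ("POST", f"/v1/pipes/{pipe_id}/destinations"),
        ("PUT",  f"/v1/pipes/{pipe_id}/limits"),
        ("POST", f"/v1/pipes/{pipe_id}/allowed-origins"),
        ("POST", f"/v1/ingest/{pipe_id}"),
    ]
```

Only steps 2–5 depend on a prior response; steps 2–4 can otherwise run in any order before ingest begins.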

Keys, limits, rate limits

  • Key scoping: prefer a dedicated automation key per environment, name it clearly (api_key_name), rotate by creating a new key and revoking the old one (DELETE /v1/api-keys/:id cannot revoke the key used for the current request).
  • Management rate limits: there is no workspace-specific Express rate limiter on /v1/pipes… today (unlike auth forms). Provisioning flows should still back off on 429 in case one is introduced at the edge.
  • OpenAPI URL: stable path on each deployment: https://www.batchpipe.com/openapi/management-v1.openapi.yaml
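A defensive backoff sketch for that possible future 429 (no such limiter exists on /v1/pipes… today; `send` is a placeholder for your HTTP client call):

```python
import time

def with_backoff(send, retries=5, base_delay=0.5):
    """Retry `send()` with exponential backoff while it returns HTTP 429."""
    for attempt in range(retries):
        status, body = send()
        if status != 429:
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    return status, body
```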