Core BatchPipe terms as used in the app and schema.
A company-level account and billing boundary. Pipes, API keys, billing, and events are isolated per workspace.
A human identity (email + password) that can belong to multiple workspaces.
A membership mapping between a user account and a workspace, with a role (owner/admin/member).
A named stream that receives records and delivers batches to configured destinations. Pipes also define enrichment and limits.
One JSON object ingested through the ingestion API (clients may send one record or an array per request).
A workspace credential used for ingestion (POST /v1/ingest/…) and, with the same Bearer token, for the JSON management API (/v1/pipes, /v1/api-keys, etc.). Keys are stored hashed; the secret is shown only once when created.
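As a rough illustration, a client call against the ingestion API might be assembled like the sketch below. The base URL, pipe name, and key value are hypothetical; only the Bearer-token header and the one-record-or-array body shape come from the description above.

```python
import json

# Hypothetical helper: build (url, headers, body) for POST /v1/ingest/<pipe>.
# The host and key format here are placeholders, not the documented values.
def build_ingest_request(base_url, pipe, api_key, records):
    url = f"{base_url}/v1/ingest/{pipe}"
    headers = {
        # The same Bearer token is used for the management API.
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    # Clients may send a single record or an array per request;
    # normalize to an array for simplicity.
    body = json.dumps(records if isinstance(records, list) else [records])
    return url, headers, body

url, headers, body = build_ingest_request(
    "https://api.example.com", "clicks", "bp_live_abc123", {"user_id": 7}
)
```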
A short, non-secret part of the key format so you can tell which key was used without exposing the secret.
Optional browser origins allowed for a pipe. CORS is enforced by browsers; it is not a strong identity check for non-browser clients. When any browser origin is allowed, allowlist checks are skipped, and successful ingest/preflight responses echo the request Origin in Access-Control-Allow-Origin. With an API key, a valid Bearer token is still required; in CORS-only mode there is no shared secret beyond the ingest URL, so apply limits and treat the URL as sensitive.
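A minimal server-side sketch of the echo behavior described above, assuming an allowlist of `None` means "any origin allowed"; the exact configuration shape is an assumption, not the real implementation.

```python
def cors_headers(request_origin, allowed_origins):
    """Return CORS response headers for an ingest/preflight response.

    Sketch only: `allowed_origins=None` stands in for "any origin allowed",
    in which case per-origin allowlist checks are skipped.
    """
    if request_origin is None:
        return {}  # not a browser CORS request
    if allowed_origins is None or request_origin in allowed_origins:
        # Echo the specific request Origin rather than "*".
        return {"Access-Control-Allow-Origin": request_origin}
    return {}
```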
Accepting records into BatchPipe (authenticate, validate, enrich, enforce limits, and buffer/queue). Ingest answers: “How much data did we accept into the pipeline?”
Workers take buffered records, build batches, and write/send them to destinations (DB/object storage/HTTP). Delivery answers: “How much did we successfully push out to destinations?”
A group of records delivered together as a unit, controlled by size/time thresholds.
A configured endpoint where pipe batches are delivered (database, object store, or HTTP endpoint).
JSON settings for the destination (connection details, URL, table name, etc.). Treat as sensitive; it should be encrypted at rest.
For database destinations, how ingested JSON fields map to destination table columns (name, source field, type, nullable, optional semantic role).
Operational state of a destination: active = deliveries permitted, blocked = do not attempt delivery until unblocked (e.g., repeated failures or operator action).
The JSON field name in the ingested record that should be written into a destination database column.
Most values come directly from your ingestion payload (e.g. user_id).
Some values can come from BatchPipe enrichment if enabled on the pipe (e.g. an ingestion timestamp field or client IP field).
For database destinations, destination_column_type stores the JSON value type of the field: string, number, boolean, object, array, or null. Dates and timestamps are usually ingested as string (e.g. ISO-8601). Delivery uses this to cast values into the actual SQL column type at the destination.
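One way such a cast step could look, using the six JSON type names above; the rules themselves (e.g. serializing object/array values to JSON text) are assumptions for illustration, not the delivery worker's actual behavior.

```python
import json

def cast_for_sql(value, column_type):
    """Cast an ingested JSON value into a SQL-parameter-friendly form.

    `column_type` is a destination_column_type name; the mapping below
    is a hypothetical sketch of casting, not the real implementation.
    """
    if value is None:
        return None
    if column_type == "string":
        return str(value)  # timestamps usually arrive as ISO-8601 strings
    if column_type == "number":
        return float(value)
    if column_type == "boolean":
        return bool(value)
    if column_type in ("object", "array"):
        return json.dumps(value)  # store structured values as JSON text
    raise ValueError(f"unknown destination_column_type: {column_type}")
```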
Optional “special meaning” on top of normal column mapping (primarily for database destinations):
DO NOTHING; MySQL uses a no-op assignment on one key column. MySQL uses row-alias upsert syntax (8.0.19+).
pl_ingest_max_records_per_second: ingest rate cap (per pipe)
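The MySQL no-op-assignment trick mentioned above can be sketched as a statement builder. Table, column, and alias names here are illustrative; the row-alias form requires MySQL 8.0.19+.

```python
def mysql_noop_upsert(table, columns, key_column):
    """Build an INSERT that ignores duplicate-key rows via a no-op assignment.

    Uses MySQL 8.0.19+ row-alias syntax (`AS new_row`); assigning the key
    column to itself makes the ON DUPLICATE KEY UPDATE clause a no-op,
    i.e. an insert-or-skip. Names are placeholders for illustration.
    """
    cols = ", ".join(columns)
    placeholders = ", ".join(["%s"] * len(columns))
    return (
        f"INSERT INTO {table} ({cols}) VALUES ({placeholders}) AS new_row "
        f"ON DUPLICATE KEY UPDATE {key_column} = new_row.{key_column}"
    )
```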
pl_ingest_max_records_per_day: ingest daily record cap (per pipe)
pl_ingest_max_records_per_request: max records in one ingest HTTP body
pl_delivery_max_records_per_flush: max records per delivery flush
pl_delivery_max_seconds_per_flush: max seconds before a time-based flush
pl_delivery_max_bytes_per_flush: max payload bytes per delivery flush
billing_plan_max_seconds_per_flush: plan ceiling for delivery flush timing (per billing_plan tier)
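The three per-pipe ingest caps above might be checked at request time roughly like this; the limit key names match the glossary, but the check order and rejection reasons are assumptions.

```python
# Hypothetical enforcement sketch for the per-pipe ingest limits.
def check_ingest_limits(limits, records_in_request,
                        records_today, records_this_second):
    """Return "ok" or a rejection reason for one ingest request."""
    if records_in_request > limits["pl_ingest_max_records_per_request"]:
        return "too_many_records_in_request"
    if records_today + records_in_request > limits["pl_ingest_max_records_per_day"]:
        return "daily_cap_exceeded"
    if records_this_second + records_in_request > limits["pl_ingest_max_records_per_second"]:
        return "rate_limited"
    return "ok"
```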
Append-only ingestion usage per time window (per workspace + pipe): records and bytes accepted.
Append-only delivery usage emitted by workers: batches attempted/succeeded/failed and bytes delivered.
Derived daily aggregates used for dashboards and invoicing (computed from the raw tables).
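A toy reduction from append-only ingestion usage rows to per-day aggregates; the row shape (workspace, pipe, timestamp, records, bytes) is an assumption, and production systems would do this in SQL rather than application code.

```python
from collections import defaultdict
from datetime import datetime

def daily_ingest_aggregates(rows):
    """Aggregate (workspace, pipe, timestamp, records, bytes) rows per day.

    Sketch only: keys the totals by (workspace, pipe, ISO date).
    """
    totals = defaultdict(lambda: {"records": 0, "bytes": 0})
    for workspace, pipe, ts, records, nbytes in rows:
        key = (workspace, pipe, ts.date().isoformat())
        totals[key]["records"] += records
        totals[key]["bytes"] += nbytes
    return dict(totals)
```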
A stored alert or signal per workspace (for example destination auth failures, slow delivery, schema mismatch, or backlog growth).