Documentation

Blocklog docs: product model, API contract, and operating guide.

This page is the high-level map of the platform. It explains what Blocklog does, how the API is structured, how proof generation fits into the product, which credentials to use, which endpoints matter in production, and where to go next for deeper details.

What Blocklog Is

A tamper-evident audit evidence layer for logs and verification workflows.

Blocklog is not just a log viewer and not just a verification widget. It is a product and API surface for sending canonical events into an integrity-preserving pipeline, generating portable proof artifacts, and giving operators or auditors a way to validate that records were not silently changed after ingestion.

Blocklog accepts canonical JSON events over authenticated HTTP endpoints.
Each event is normalized, timestamped, hashed, and linked into a tamper-evident chain.
Logs can later be sealed into batches, exported as proof bundles, and verified independently.
Operational surfaces expose health, metrics, integrity, usage, and debug signals for production review.

Getting started

Bootstrap a tenant, sign in, create credentials, and send the first log.

Authentication

Understand bearer tokens, integration API keys, and when each is appropriate.

Log ingestion

Send canonical events, choose idempotency keys, and understand ingestion behavior.

Batch logs

Ingest larger event sets safely and prepare them for later proof generation.

Verification

Verify proofs by proof ID, log ID, or batch ID across public and tenant scopes.

Operations

Operate the system with health, usage, integrity, webhook, and metrics endpoints.

Admin portal

Review company-level controls, API keys, kill switches, and operational posture.

Auditor portal

Understand the verification surfaces intended for reviewers and external auditors.

SDKs

Use the Node and Python reference SDKs for retries, batching, timestamps, and idempotency.

Mental Model

The normal request lifecycle.

  1. Authenticate as a user or an integration.
  2. Send a single log or a batch of logs with `event_type`, `source`, `data`, and an `idempotency_key` when retries are possible.
  3. Blocklog normalizes the payload, archives raw ingestion context, links the event into the tenant chain, and records verification metadata.
  4. Batches can later be sealed and anchored for stronger audit portability.
  5. Auditors or operators verify by proof ID, log ID, or batch ID without depending on screenshots or internal assurances.
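The chain-linking step in (3) can be pictured as a simple hash chain: each entry's hash covers its canonical payload plus the previous entry's hash, so any silent edit to an earlier event invalidates every later link. This is an illustrative model only, not Blocklog's internal record format.

```python
import hashlib
import json

def canonicalize(event: dict) -> bytes:
    # Deterministic JSON: sorted keys, no extra whitespace.
    return json.dumps(event, sort_keys=True, separators=(",", ":")).encode()

def link(prev_hash: str, event: dict) -> str:
    # Each entry's hash covers the previous hash plus the canonical payload.
    h = hashlib.sha256()
    h.update(prev_hash.encode())
    h.update(canonicalize(event))
    return h.hexdigest()

GENESIS = "0" * 64
e1 = {"event_type": "payment.created", "source": "payments-api", "data": {"amount": 2000}}
e2 = {"event_type": "payment.updated", "source": "payments-api", "data": {"status": "captured"}}

h1 = link(GENESIS, e1)
h2 = link(h1, e2)

# Tampering with e1 after the fact changes its hash, which no longer
# matches the prev_hash baked into e2's link.
tampered = link(GENESIS, {**e1, "data": {"amount": 1}})
assert tampered != h1
```

Because each hash depends on everything before it, a verifier only needs the chain itself to detect post-ingestion edits, with no trust in screenshots or internal assurances.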

Reference Flow

The API flow most teams start with.

POST /api/v1/auth/login
POST /api/v1/auth/api_keys
POST /api/v1/logs
POST /api/v1/logs/batch
POST /api/v1/batches/seal
POST /api/v1/batches/{batch_id}/anchor
GET  /api/v1/logs/{log_id}/verify
GET  /api/v1/public/verify/{proof_id}
GET  /api/v1/usage
GET  /api/v1/integrity/status

Authentication Modes

Bearer auth for product surfaces

Use bearer tokens for console-driven product workflows such as dashboard views, admin actions, and authenticated verification. This is the default mode for signed-in users.

API keys for external integrations

Use company API keys for server-to-server ingestion and stable integration credentials. This is the right choice for production services sending logs continuously.
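As the ingestion examples in this guide show, integration requests carry the key in an `X-API-Key` header. A minimal sketch of building such a request with the standard library; the key value and host below are placeholders, not real credentials or endpoints:

```python
import json
import urllib.request

API_KEY = "blk_live_example"          # placeholder, not a real key
BASE_URL = "https://api.example.com"  # placeholder host

def build_request(path: str, payload: dict) -> urllib.request.Request:
    # Server-to-server calls authenticate with the X-API-Key header.
    # The request is constructed but not sent here.
    return urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode(),
        headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

req = build_request(
    "/api/v1/logs",
    {"event_type": "payment.created", "source": "payments-api", "data": {}},
)
```

Keep integration keys in server-side secret storage; they are long-lived credentials, unlike user bearer tokens.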

Canonical Ingestion Contract

What a correct ingestion payload looks like.

For current ingestion endpoints, treat `data` as the event body, `event_type` as the semantic event label, `source` as the producing system, and `idempotency_key` as the replay guard. If the client omits a timestamp, the SDKs or backend can supply one.

Single log

POST /api/v1/logs
X-API-Key: <integration_key>

{
  "event_type": "payment.created",
  "source": "payments-api",
  "idempotency_key": "evt_payment_123_created",
  "timestamp": "2026-03-27T10:20:00Z",
  "data": {
    "user_id": "123",
    "amount": 2000,
    "currency": "USD"
  }
}

Batch ingestion

POST /api/v1/logs/batch
X-API-Key: <integration_key>

{
  "logs": [
    {
      "event_type": "payment.created",
      "source": "payments-api",
      "idempotency_key": "evt_payment_123_created",
      "data": {
        "user_id": "123",
        "amount": 2000
      }
    },
    {
      "event_type": "payment.updated",
      "source": "payments-api",
      "idempotency_key": "evt_payment_123_updated",
      "data": {
        "user_id": "123",
        "status": "captured"
      }
    }
  ]
}
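The idempotency keys in the payloads above are deterministic, derived from the event's identity rather than generated fresh per request. One hedged way to build them, plus a simulated retry showing why duplicates are absorbed; the in-memory "server" store is purely illustrative:

```python
store = {}  # stands in for the server-side idempotency store

def derive_key(event_type: str, entity_id: str) -> str:
    # "payment.created" + "123" -> "evt_payment_123_created",
    # matching the key shape used in the examples above.
    domain, action = event_type.split(".", 1)
    return f"evt_{domain}_{entity_id}_{action}"

def ingest(event: dict) -> str:
    # A retried request with the same idempotency_key is a no-op:
    # the original record wins and no duplicate is created.
    key = event["idempotency_key"]
    if key not in store:
        store[key] = event
    return key

event = {
    "event_type": "payment.created",
    "source": "payments-api",
    "idempotency_key": derive_key("payment.created", "123"),
    "data": {"user_id": "123", "amount": 2000},
}
ingest(event)
ingest(event)  # client retry after a timeout
assert len(store) == 1
```

Deriving the key from stable event identity means any retry, from any process, maps to the same record.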

Verification Surfaces

How reviewers validate evidence.

GET /api/v1/public/verify/{proof_id}
GET /api/v1/verify/log/{log_id}
GET /api/v1/logs/{log_id}/verify
GET /api/v1/verify/batch/{batch_id}
GET /api/v1/evidence/batch/{batch_id}
POST /api/v1/exports/{batch_id}

Use public verification for portable proof checks and tenant-scoped verification when operators need richer context tied to the company boundary.
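An exported proof bundle can be rechecked offline by recomputing every link. The bundle shape below (`prev_hash`, `payload`, `hash` per entry) is a hypothetical illustration of the idea, not the documented export schema:

```python
import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_hash + canonical).encode()).hexdigest()

def verify_bundle(entries: list) -> bool:
    # Recompute every link; any edited payload or reordered entry fails.
    prev = "0" * 64
    for entry in entries:
        if entry["prev_hash"] != prev:
            return False
        if entry_hash(prev, entry["payload"]) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

# Build a tiny valid bundle, then corrupt it.
payloads = [{"event_type": "payment.created"}, {"event_type": "payment.updated"}]
entries, prev = [], "0" * 64
for p in payloads:
    h = entry_hash(prev, p)
    entries.append({"prev_hash": prev, "payload": p, "hash": h})
    prev = h

assert verify_bundle(entries)
entries[0]["payload"]["event_type"] = "payment.deleted"
assert not verify_bundle(entries)
```

This is what makes the proofs portable: an auditor can run the check without access to Blocklog's infrastructure.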

Operational Endpoints

What operators should watch in production.

GET /api/v1/health
GET /api/v1/health/live
GET /api/v1/health/ready
GET /api/v1/metrics
GET /api/v1/usage
GET /api/v1/integrity/status
GET /api/v1/integrity/report
GET /api/v1/logs/debug/recent
GET /api/v1/webhooks/events

These endpoints support rollout verification, system health review, integrity checks, usage tracking, and ingestion troubleshooting.

Operator Checklist

What to verify before sending production traffic.

Create a company and founder account before onboarding real traffic.
Use bearer auth for interactive product usage and API keys only for long-running integrations.
Prefer explicit `idempotency_key` values whenever clients might retry requests.
Treat `data` as the canonical event body for ingestion requests.
Check `/usage`, `/integrity/status`, `/integrity/report`, and `/metrics` during rollout and incident review.
Export proof bundles early in pilot evaluations so stakeholders can review the full evidence path.
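The checklist above can be folded into a simple go/no-go gate over the operational endpoints. The response shapes used here (`status`, `chain_valid`, `events_ingested`) are assumptions for illustration, not the documented schemas; substitute the real fields from `/health`, `/integrity/status`, and `/usage`:

```python
def ready_for_traffic(health: dict, integrity: dict, usage: dict) -> list:
    # Returns a list of blocking findings; an empty list means go.
    # Field names below are illustrative, not the documented schema.
    findings = []
    if health.get("status") != "ok":
        findings.append("health check failing")
    if integrity.get("chain_valid") is not True:
        findings.append("integrity status reports an invalid chain")
    if usage.get("events_ingested", 0) == 0:
        findings.append("no test events ingested yet")
    return findings

assert ready_for_traffic(
    {"status": "ok"}, {"chain_valid": True}, {"events_ingested": 42}
) == []
```

Running a gate like this in CI before a rollout turns the checklist into an enforced control rather than a manual review.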
Start with Getting Started, then continue to Log Ingestion and Operations.