Quickstart
Ceradela Storage in 5 minutes. Zero AWS setup on your side — it's a pure API. You send rows, you read them back.
1. Get a tenant name + token
Email hello@ceradela.com with a tenant name you'd like (e.g. acme). You'll receive back a 64-hex service token and the API base URL. Treat the token like a database password.
2. Push rows
Send a batch of rows and we encode them into BEBO (our columnar format), zstd-compress them, and store them under your tenant prefix. One HTTP call per batch, up to 50,000 rows per request.
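Loads larger than the 50,000-row cap need client-side chunking. A minimal sketch in Python (stdlib only; the base URL, tenant, and token are placeholders from this guide, not a shipped client library):

```python
import json
import urllib.request

BATCH_LIMIT = 50_000  # per-request row cap on the ingest endpoint

def chunk(rows, size=BATCH_LIMIT):
    """Split rows into batches no larger than the ingest limit."""
    return [rows[i:i + size] for i in range(0, len(rows), size)]

def ingest(base_url, tenant, token, table, rows):
    """POST each batch to the ingest endpoint; returns per-batch responses."""
    results = []
    for batch in chunk(rows):
        req = urllib.request.Request(
            f"{base_url}/api/tenant/{tenant}/ingest",
            data=json.dumps({"table": table, "rows": batch}).encode(),
            headers={"X-Service-Token": token,
                     "Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            results.append(json.load(resp))
    return results
```

Each batch is one atomic POST, so a 120,000-row load becomes three files under your prefix.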
curl -X POST https://storage.../api/tenant/acme/ingest \
-H "X-Service-Token: $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"table": "orders",
"rows": [
{"id":1,"total_cents":12500,"created_at":"2026-04-01T00:00:00Z"},
{"id":2,"total_cents":8900, "created_at":"2026-04-01T00:05:12Z"}
]
}'
{
"ok": true,
"tenant": "acme",
"table": "orders",
"partition": "2026-04",
"rows_stored": 2,
"bytes_stored": 480,
"storage_key": "acme/tables/orders/2026-04_ingest_1776563607442.cbebomth"
}
3. List your archives
curl https://storage.../api/tenant/acme/archives \
-H "X-Service-Token: $TOKEN"
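The listing is paginated (page, limit, and a total count), so walking every archive is a loop until the pages cover total. A sketch assuming the response shape shown below:

```python
import json
import urllib.request

def has_more(total, page, limit):
    """True while pages seen so far haven't covered `total` rows."""
    return page * limit < total

def list_archives(base_url, tenant, token, limit=50):
    """Yield every archive row across all pages."""
    page = 1
    while True:
        req = urllib.request.Request(
            f"{base_url}/api/tenant/{tenant}/archives?page={page}&limit={limit}",
            headers={"X-Service-Token": token},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        yield from body["rows"]
        if not has_more(body["total"], page, limit):
            break
        page += 1
```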
{
"tenant": "acme",
"rows": [{
"storage_key": "acme/tables/orders/2026-04_ingest_1776563607442.cbebomth",
"table_name": "orders",
"partition": "2026-04",
"row_count": 2,
"byte_size": 480,
"columns": "created_at,id,total_cents",
"created_at": "2026-04-19T01:53:27Z"
}],
"total": 1,
"page": 1,
"limit": 50
}
4. Read rows back
curl "https://storage.../api/tenant/acme/cold/orders?partition=2026-04&page=1&limit=50" \
-H "X-Service-Token: $TOKEN"
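Rows come back as JSON objects keyed by column name, alongside a columns array. Converting a page to CSV client-side is a few lines of stdlib Python (a local convenience sketch, not part of the API):

```python
import csv
import io

def rows_to_csv(columns, rows):
    """Render one page of cold-read rows as CSV text, header first."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```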
{
"tenant": "acme",
"table": "orders",
"partition": "2026-04",
"columns": ["created_at","id","total_cents"],
"rows": [
{"created_at":"2026-04-01T00:00:00Z","id":1,"total_cents":12500},
{"created_at":"2026-04-01T00:05:12Z","id":2,"total_cents":8900}
],
"total": 2,
"page": 1,
"limit": 50,
"files": 1
}
5. Verify any archive locally (optional)
BEBO is an open format and the decoder is standalone. You can pull any archive down and inspect it yourself — this is the basis of our no-lock-in commitment.
bebo verify --deep orders-2026-04.cbebomth
{ "decode_ok": true, "rows_decoded": 2, "sha256": "bd77..." }
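Assuming the sha256 reported by bebo verify --deep is a digest of the raw archive bytes (the output above doesn't spell this out), you can cross-check it with any tool. A stdlib sketch:

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Stream a file and return its hex sha256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()
```

Compare the result against the sha256 field in the verify output.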
bebo export orders-2026-04.cbebomth --to=csv
Mental model
- You own the rows. We own the storage path.
- No AWS setup on your side. We host S3, IAM, the Lambda, everything.
- Per-tenant isolation. Your token only unlocks your prefix. Cross-tenant requests return 403 even with a valid token.
- Atomic ingest. Each POST writes one BEBO file. No partial writes, no lost rows.
- Monthly partitioning. Files are bucketed by the current UTC month automatically.
- Exit is trivial. bebo export --to=parquet or download raw files. No lock-in.
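The partition names above ("2026-04") follow a UTC year-month pattern, so you can predict which bucket an ingest lands in. A sketch (the naming is inferred from the examples in this guide, not a documented contract):

```python
from datetime import datetime, timezone

def current_partition(now=None):
    """UTC year-month bucket used in storage keys, e.g. '2026-04'."""
    now = now or datetime.now(timezone.utc)
    return now.strftime("%Y-%m")
```

Note that partitioning follows ingest time, not any timestamp inside your rows: late-arriving April data pushed in May lands in the 2026-05 partition.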
What's next
- API reference — every endpoint, every field, error shapes
- CLI reference — reading and exporting archives outside the API
- BEBO format spec — what's inside a .cbebomth file