# Destinations
Destinations are where webhooks get delivered after processing. Hookbase supports two categories of destinations: HTTP endpoints for real-time delivery and warehouse destinations for batch ingestion into object storage.
## Destination Types
| Type | Description | Use Case |
|---|---|---|
| HTTP | Delivers webhooks to an HTTP(S) endpoint | Real-time integrations, API triggers |
| Amazon S3 | Uploads batched events to an S3 bucket | AWS data lakes, Athena queries |
| Cloudflare R2 | Uploads batched events to an R2 bucket | Zero-egress archival, S3-compatible storage |
| Google Cloud Storage | Uploads batched events to a GCS bucket | BigQuery external tables, GCP pipelines |
| Azure Blob Storage | Uploads batched events to an Azure container | Azure Data Lake, Synapse Analytics |
## HTTP Destinations
HTTP destinations deliver each webhook event to an endpoint in real time. This is the default destination type.
### Creating an HTTP Destination

#### Required Fields

| Field | Description |
|---|---|
| `name` | Human-readable name |
| `url` | The HTTP(S) URL to deliver webhooks to |
#### Optional Fields

| Field | Description |
|---|---|
| `description` | Optional description |
| `headers` | Custom headers to include in requests |
| `authType` | Authentication type (`none`, `bearer`, `basic`, `api_key`) |
| `authConfig` | Authentication configuration |
| `timeoutMs` | Request timeout in milliseconds (default: `30000`) |
| `retryPolicy` | Configuration for retry behavior |
| `enabled` | Whether the destination is active (default: `true`) |
### Custom Headers

Add custom headers to every webhook delivery:

```json
{
  "headers": {
    "Authorization": "Bearer your-api-key",
    "X-Custom-Header": "custom-value",
    "X-Source": "hookbase"
  }
}
```

Common use cases:

- Authentication tokens
- API keys
- Tracking headers
- Environment identifiers
### Retry Policy

Configure how Hookbase handles failed deliveries:

```json
{
  "retryPolicy": {
    "maxRetries": 5,
    "initialDelay": 1000,
    "maxDelay": 60000,
    "backoffMultiplier": 2
  }
}
```

| Field | Description | Default |
|---|---|---|
| `maxRetries` | Maximum number of retry attempts | 5 |
| `initialDelay` | First retry delay in milliseconds | 1000 |
| `maxDelay` | Maximum delay between retries, in milliseconds | 60000 |
| `backoffMultiplier` | Multiplier for exponential backoff | 2 |
#### Retry Schedule Example

With default settings, delivery attempts occur at:

- Immediate delivery attempt
- 1 second later
- 2 seconds later
- 4 seconds later
- 8 seconds later
- 16 seconds later (fifth and final retry)
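The schedule above is consistent with capped exponential backoff, where retry *n* waits `min(initialDelay * backoffMultiplier^n, maxDelay)` milliseconds. A quick sketch under that assumption (an illustration, not Hookbase's exact implementation):

```python
# Sketch: compute a capped exponential backoff schedule.
# Assumes delay_n = min(initial_delay * multiplier**n, max_delay);
# illustrative only, not Hookbase's actual retry code.
def backoff_schedule(max_retries=5, initial_delay=1000, max_delay=60000, multiplier=2):
    """Return the delay in milliseconds before each retry attempt."""
    return [min(initial_delay * multiplier**n, max_delay) for n in range(max_retries)]

print(backoff_schedule())  # [1000, 2000, 4000, 8000, 16000]
```

With more retries allowed, the delay plateaus at `maxDelay` once `initialDelay * multiplier^n` exceeds it.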
#### Success Criteria

A delivery is considered successful when:

- The HTTP status code is 2xx (200-299)
- A response is received within the configured timeout (default: 30 seconds)
#### Failure Handling

After all retries are exhausted:

- The delivery is marked as `failed`
- The event is moved to the dead letter queue
- You can manually replay it from the dashboard
### Example: Creating an HTTP Destination

```bash
curl -X POST https://api.hookbase.app/api/destinations \
  -H "Authorization: Bearer whr_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Production API",
    "type": "http",
    "url": "https://api.yourapp.com/webhooks/handler",
    "headers": {
      "Authorization": "Bearer sk_live_xxx",
      "X-Webhook-Source": "hookbase"
    },
    "retryPolicy": {
      "maxRetries": 5,
      "initialDelay": 1000,
      "maxDelay": 300000
    }
  }'
```

## Warehouse Destinations
Warehouse destinations batch webhook events and upload them as structured files (JSONL or JSON) to object storage. This is useful for analytics, compliance archival, and batch processing pipelines.
### Plan Requirement

Warehouse destinations are available on the Pro and Business plans.
### How Batching Works
When events arrive, Hookbase queues them for batch delivery. The warehouse queue accumulates up to 100 events or waits 30 seconds (whichever comes first), then uploads a single file to your bucket.
Events for different destinations are grouped separately, so each destination receives its own files.
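The count-or-time flush policy described above can be sketched as follows. This is a minimal illustration of the batching behavior, not Hookbase's actual queue implementation:

```python
import time

# Minimal sketch of a count-or-time batch flush policy: flush at
# 100 events or after 30 seconds, whichever comes first.
# Illustrative only -- not Hookbase's actual warehouse queue.
class Batcher:
    def __init__(self, max_events=100, max_wait_secs=30):
        self.max_events = max_events
        self.max_wait_secs = max_wait_secs
        self.events = []
        self.opened_at = None

    def add(self, event):
        """Queue an event; return a full batch if the size threshold is hit."""
        if not self.events:
            self.opened_at = time.monotonic()
        self.events.append(event)
        if len(self.events) >= self.max_events:
            return self.flush()
        return None

    def flush_if_due(self):
        """Return the pending batch if it has waited long enough."""
        if self.events and time.monotonic() - self.opened_at >= self.max_wait_secs:
            return self.flush()
        return None

    def flush(self):
        batch, self.events = self.events, []
        return batch
```

In a real deployment each destination would get its own `Batcher` instance, matching the per-destination grouping described above.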
### File Path Pattern

Files are stored using a predictable path structure:

```
{prefix}/{partition}/{timestamp}-{destination_id}.{format}
```

The partition segment depends on your configuration:
| Partition Strategy | Example Path |
|---|---|
| Date (default) | webhooks/stripe/2026-02-21/1740150000-a1b2c3d4.jsonl |
| Hour | webhooks/stripe/2026-02-21/14/1740150000-a1b2c3d4.jsonl |
| Source | webhooks/stripe/2026-02-21/my-stripe-source/1740150000-a1b2c3d4.jsonl |
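A key builder following this pattern could look like the sketch below. The helper is hypothetical (not part of any Hookbase SDK) and assumes the timestamp is Unix seconds interpreted in UTC:

```python
from datetime import datetime, timezone

# Sketch: build an object key matching the documented path pattern
# {prefix}/{partition}/{timestamp}-{destination_id}.{format}.
# Hypothetical helper for illustration; assumes UTC unix-second timestamps.
def build_key(prefix, partition_by, ts, destination_id,
              source_slug=None, file_format="jsonl"):
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    date = dt.strftime("%Y-%m-%d")
    if partition_by == "hour":
        partition = f"{date}/{dt:%H}"
    elif partition_by == "source":
        partition = f"{date}/{source_slug}"
    else:  # "date" (default)
        partition = date
    return f"{prefix.rstrip('/')}/{partition}/{ts}-{destination_id}.{file_format}"
```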
### File Formats

#### JSONL (Recommended)

Each line is a self-contained JSON object. Works well with AWS Athena, BigQuery external tables, and streaming processors:

```jsonl
{"event_id":"evt_abc123","received_at":"2026-02-21T14:30:00Z","payload":{"type":"payment_intent.succeeded","data":{"amount":2500}}}
{"event_id":"evt_def456","received_at":"2026-02-21T14:30:01Z","payload":{"type":"customer.created","data":{"email":"user@example.com"}}}
```

#### JSON Array

A single JSON array containing all events in the batch:

```json
[
  {
    "event_id": "evt_abc123",
    "received_at": "2026-02-21T14:30:00Z",
    "payload": { "type": "payment_intent.succeeded" }
  }
]
```

### Amazon S3
```json
{
  "type": "s3",
  "config": {
    "bucket": "my-data-lake",
    "region": "us-east-1",
    "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
    "secretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    "prefix": "webhooks/stripe/",
    "fileFormat": "jsonl",
    "partitionBy": "date"
  }
}
```

| Field | Required | Description |
|---|---|---|
| `bucket` | Yes | S3 bucket name |
| `region` | Yes | AWS region (e.g., `us-east-1`) |
| `accessKeyId` | Yes | AWS access key ID |
| `secretAccessKey` | Yes | AWS secret access key (encrypted at rest) |
| `prefix` | No | Path prefix inside the bucket |
| `fileFormat` | No | `jsonl` (default) or `json` |
| `partitionBy` | No | `date` (default), `hour`, or `source` |
### Cloudflare R2

R2 destinations use a native Cloudflare binding, so no external credentials are needed.

```json
{
  "type": "r2",
  "config": {
    "bucket": "my-webhook-archive",
    "prefix": "webhooks/",
    "fileFormat": "jsonl",
    "partitionBy": "date"
  }
}
```

| Field | Required | Description |
|---|---|---|
| `bucket` | Yes | R2 bucket name |
| `prefix` | No | Path prefix inside the bucket |
| `fileFormat` | No | `jsonl` (default) or `json` |
| `partitionBy` | No | `date` (default), `hour`, or `source` |
### Google Cloud Storage

```json
{
  "type": "gcs",
  "config": {
    "bucket": "my-gcs-bucket",
    "projectId": "my-gcp-project",
    "serviceAccountKey": "{\"type\":\"service_account\",\"project_id\":\"...\"}",
    "prefix": "webhooks/",
    "fileFormat": "jsonl",
    "partitionBy": "date"
  }
}
```

| Field | Required | Description |
|---|---|---|
| `bucket` | Yes | GCS bucket name |
| `projectId` | Yes | Google Cloud project ID |
| `serviceAccountKey` | Yes | Service account key JSON (encrypted at rest) |
| `prefix` | No | Path prefix inside the bucket |
| `fileFormat` | No | `jsonl` (default) or `json` |
| `partitionBy` | No | `date` (default), `hour`, or `source` |
### Azure Blob Storage

```json
{
  "type": "azure_blob",
  "config": {
    "accountName": "myaccount",
    "accountKey": "base64-encoded-key",
    "containerName": "webhook-data",
    "prefix": "webhooks/",
    "fileFormat": "jsonl",
    "partitionBy": "date"
  }
}
```

| Field | Required | Description |
|---|---|---|
| `accountName` | Yes | Azure storage account name |
| `accountKey` | Yes | Storage account key (encrypted at rest) |
| `containerName` | Yes | Blob container name |
| `prefix` | No | Path prefix inside the container |
| `fileFormat` | No | `jsonl` (default) or `json` |
| `partitionBy` | No | `date` (default), `hour`, or `source` |
## Credential Security

Sensitive fields (`secretAccessKey`, `serviceAccountKey`, `accountKey`) are encrypted at rest using AES-256-GCM with organization-scoped keys. When you retrieve a destination via the API or dashboard, credentials are redacted and displayed as `••••` followed by the last 4 characters.

When updating a destination, sending back the redacted value preserves the existing credential. You only need to provide a new value when rotating keys.
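The redaction behavior can be expressed compactly. The helpers below are hypothetical, shown only to make the update semantics concrete:

```python
# Sketch: redact a credential to bullets plus its last 4 characters,
# mirroring how stored secrets are displayed. Hypothetical helpers,
# not Hookbase's actual code.
def redact(secret: str) -> str:
    return "\u2022\u2022\u2022\u2022" + secret[-4:]

def is_redacted(value: str) -> bool:
    # An update containing a redacted value should preserve the
    # existing credential; a fresh plaintext value rotates it.
    return value.startswith("\u2022\u2022\u2022\u2022")
```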
## Field Mapping

By default, warehouse files include `event_id`, `received_at`, and the raw payload. If you need a flat, structured schema, configure field mappings to extract specific fields from the payload:

```json
{
  "fieldMapping": [
    { "source": "$.payload.data.amount", "target": "amount", "type": "number" },
    { "source": "$.payload.data.currency", "target": "currency", "type": "string" },
    { "source": "$.payload.type", "target": "event_type", "type": "string" },
    { "source": "$.payload.created", "target": "created_at", "type": "timestamp" }
  ]
}
```

| Mapping Field | Description |
|---|---|
| `source` | JSONPath expression (e.g., `$.payload.amount`) |
| `target` | Output column name |
| `type` | `string`, `number`, `boolean`, `timestamp`, or `json` |
| `default` | Default value if the source path is missing (optional) |
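The mapping semantics can be sketched as below. This simplified illustration handles only plain `$.a.b.c` paths (real JSONPath is far richer) and a few type coercions; it is not Hookbase's actual mapper:

```python
from datetime import datetime, timezone

# Sketch: apply field mappings to one event. Supports only simple
# "$.a.b.c" dotted paths and basic type coercion. Illustrative only.
def resolve(path, event):
    node = event
    for part in path.lstrip("$.").split("."):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

def coerce(value, kind):
    if value is None:
        return None
    if kind == "number":
        return float(value) if "." in str(value) else int(value)
    if kind == "timestamp":  # assume unix seconds -> ISO 8601 (an assumption)
        return datetime.fromtimestamp(value, tz=timezone.utc).isoformat()
    if kind == "boolean":
        return bool(value)
    return str(value) if kind == "string" else value

def apply_mapping(event, mappings):
    # Metadata fields are always included alongside the mapped columns.
    out = {"_event_id": event["event_id"], "_received_at": event["received_at"]}
    for m in mappings:
        value = resolve(m["source"], event)
        if value is None:
            value = m.get("default")
        out[m["target"]] = coerce(value, m["type"])
    return out
```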
With field mapping enabled, each event uses the mapped schema. The metadata fields `_event_id` and `_received_at` are always included:

```jsonl
{"_event_id":"evt_abc123","_received_at":"2026-02-21T14:30:00Z","amount":2500,"currency":"usd","event_type":"payment_intent.succeeded","created_at":"2026-02-21T14:29:58.000Z"}
```

## Testing a Warehouse Destination

Use the Test Connection action (in the dashboard or via the API) to verify credentials and permissions. Hookbase uploads a small test file to your bucket and returns the file key, size, and event count.

```bash
curl -X POST https://api.hookbase.app/api/destinations/{destinationId}/test \
  -H "Authorization: Bearer whr_your_api_key"
```

```json
{
  "success": true,
  "result": {
    "key": "hookbase/2026-02-21/test-1740150000000.jsonl",
    "size": 142,
    "count": 1
  }
}
```

## Managing Destinations
### List Destinations

```bash
curl https://api.hookbase.app/api/destinations \
  -H "Authorization: Bearer whr_your_api_key"
```

### Get Destination Details

```bash
curl https://api.hookbase.app/api/destinations/{destinationId} \
  -H "Authorization: Bearer whr_your_api_key"
```

### Update Destination

```bash
curl -X PATCH https://api.hookbase.app/api/destinations/{destinationId} \
  -H "Authorization: Bearer whr_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://api.yourapp.com/webhooks/v2/handler"
  }'
```

### Delete Destination

```bash
curl -X DELETE https://api.hookbase.app/api/destinations/{destinationId} \
  -H "Authorization: Bearer whr_your_api_key"
```

## Destination Health
Monitor destination health in the dashboard:

- **Success Rate**: Percentage of successful deliveries
- **Average Latency**: Mean response time
- **Last Delivery**: Timestamp of the most recent delivery
- **Status**: Active, degraded, or failing
## Best Practices

- **Use HTTPS**: Always use HTTPS URLs for HTTP destinations
- **Set appropriate timeouts**: Ensure your endpoint responds within 30 seconds
- **Return 2xx quickly**: Acknowledge receipt immediately and process asynchronously if needed
- **Handle duplicates**: Webhooks may be delivered more than once; implement idempotency
- **Monitor health**: Set up alerts for destinations with high failure rates
- **Use staging destinations**: Test changes with a staging destination before production
- **Choose the right partition strategy**: Use `date` for low-volume sources, `hour` for high-volume sources, and `source` when you need to query by origin
- **Use JSONL for analytics**: JSONL is better suited for data lake queries than JSON arrays
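The idempotency practice above boils down to tracking which event IDs you have already processed. A minimal sketch (in production you would persist IDs durably, e.g. with a database unique constraint, rather than an in-memory set):

```python
# Minimal sketch of idempotent webhook handling: track processed event
# IDs so redelivered events are acknowledged without being reprocessed.
# Illustrative only; the in-memory set stands in for durable storage.
processed: set[str] = set()

def handle_webhook(event: dict) -> str:
    event_id = event["event_id"]
    if event_id in processed:
        return "duplicate-ignored"  # still return 2xx so retries stop
    processed.add(event_id)
    # ... actual business logic goes here ...
    return "processed"
```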
## Troubleshooting

### Destination shows a high failure rate

- Check that the destination URL is accessible
- Verify that authentication headers are correct
- Check your application logs for errors
- Use the Event Debugger to inspect the request and response

### Webhooks not being delivered

- Verify the destination is enabled
- Check that routes are configured correctly
- Ensure filters aren't blocking events
- Check the dead letter queue for failed deliveries

### Warehouse test connection fails

- Verify the bucket/container name is correct
- Check that the credentials have write permissions
- For S3, ensure the region matches the bucket's actual region
- For GCS, verify the service account key JSON is valid
- For Azure, confirm the account key is base64-encoded