# Production Logging
Pixflare can automatically push Worker logs to R2 for long-term storage. This gives you historical logs for compliance, troubleshooting production issues, and analyzing patterns over time. Cloudflare keeps real-time logs for only 24 hours, so Logpush is essential for preserving your log history.
Logpush captures errors, warnings, and slow requests (the events you actually care about) while skipping routine debug logs that would only waste storage and money.
## Setup
Enable Logpush by setting three environment variables and creating R2 API tokens:
```bash
# Enable Logpush
ENABLE_LOGPUSH='true'

# Create R2 API tokens (instructions below)
R2_ACCESS_KEY_ID='your-access-key-id'
R2_SECRET_ACCESS_KEY='your-secret-access-key'

# Optional: Customize storage location
LOGPUSH_R2_BUCKET='my-logs-bucket'    # Defaults to: {main-bucket}-logs
LOGPUSH_PATH_PREFIX='logs/workers'    # Defaults to: logs/workers
```

### Creating R2 API Tokens
- Go to your Cloudflare dashboard: R2 > Manage R2 API Tokens
- Click Create API Token
- Set permissions to Admin Read & Write
- Copy the Access Key ID and Secret Access Key
- Add them to your `.env` file
## Deploy
After adding the environment variables:
```bash
# Regenerate wrangler config with Logpush enabled
make generate-wrangler

# Deploy the updated worker
make deploy-workers

# Run the setup script to create the Logpush job
make non-terraform-setup
```

The setup script will:
- Create a separate R2 bucket for logs (keeps them isolated from your images)
- Configure a Logpush job to send Worker logs to R2
- Verify everything is working correctly
Within 5-10 minutes of generating traffic, logs will start appearing in your R2 bucket.
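To confirm the logs bucket exists before any traffic arrives, you can list your R2 buckets. The grep pattern below assumes the default `{main-bucket}-logs` naming; adjust it if you set `LOGPUSH_R2_BUCKET` to something else:

```bash
# List all R2 buckets and look for the dedicated logs bucket
npx wrangler r2 bucket list | grep -i logs
```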
## How It Works
Here's what happens when your Worker receives requests:
```mermaid
graph LR
    A[User Request] --> B[Worker]
    B --> C{Log Level}
    C -->|debug| D[Observability Tab Only]
    C -->|info/warn/error| E[Cloudflare Logs]
    E -->|10% sample| F[Logpush Service]
    F -->|Every 5 min| G[R2 Bucket]
    style D fill:#e3f2fd
    style E fill:#fff3e0
    style G fill:#e8f5e9
```

## What Gets Logged
Logpush captures important events and skips the noise:
**Captured (sent to R2):**
- Errors and exceptions with stack traces
- Slow requests (>5 seconds)
- HTTP errors (4xx, 5xx status codes)
- Request metadata (timestamp, method, path, status)
**Filtered out (not sent to R2):**
- Debug logs ("Incoming request", "User ensured")
- Successful routine operations
- Internal implementation details
The `head_sampling_rate = 0.1` setting means only 10% of logs are saved, which keeps costs minimal while giving you plenty of data to work with. Even at scale, you'll stay well under $5/month.
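If you're curious what this filtering looks like in practice, the sketch below shows a Logpush filter expression that keeps only entries whose `Outcome` is not `ok`. It's illustrative only; the job created by the setup script may use a different filter, which you can inspect through the jobs API call shown in the Troubleshooting section.

```bash
# Illustrative Logpush filter: keep exceptions and other non-"ok" outcomes.
# The actual filter used by the setup script may differ.
FILTER='{"where":{"and":[{"key":"Outcome","operator":"!eq","value":"ok"}]}}'
echo "$FILTER" | jq .   # validate and pretty-print before using it in a job's "filter" field
```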
## Storage Layout
Logs are organized by date in your R2 bucket:
```
r2://your-bucket-logs/
  logs/
    workers/
      20260108/
        20260108T000852Z_20260108T000940Z_6393db3e.log.gz
        20260108T001352Z_20260108T001440Z_7b24cd4f.log.gz
        ...
```

Each file contains 5 minutes of compressed logs in newline-delimited JSON format.
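To see how much log storage you're accumulating, any S3-compatible client can talk to R2. For example, with the AWS CLI and a profile configured with the same R2 access key and secret (the bucket name and account ID below are placeholders):

```bash
# Sum up object count and total size under the logs prefix
aws s3 ls s3://your-bucket-logs/logs/workers/ \
  --recursive --summarize --human-readable \
  --endpoint-url "https://<ACCOUNT_ID>.r2.cloudflarestorage.com" \
  --profile r2
```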
## Using the Logs
### Quick Health Check
See recent log files to verify Logpush is working:
```bash
# List today's logs
npx wrangler r2 object list your-bucket-logs \
  --prefix logs/workers/$(date +%Y%m%d) | head -20
```

### Download and View Logs
Grab a specific log file and inspect it:
```bash
# Download a log file
npx wrangler r2 object get \
  your-bucket-logs/logs/workers/20260108/FILE.log.gz \
  --file /tmp/logs.gz

# Decompress and view
gunzip /tmp/logs.gz
cat /tmp/logs | jq '.' | less
```

### Find Errors
Search for errors from a specific time period:
```bash
# Get errors with timestamps
cat /tmp/logs | jq -r '
  select(.Outcome == "exception") |
  "\(.EventTimestampMs | tonumber / 1000 | strftime("%Y-%m-%d %H:%M:%S")) - \(.Exceptions[0].message)"
'
```

Example output:

```
2026-01-08 12:34:56 - Failed to upload image: Invalid file format
2026-01-08 13:22:10 - Database query timeout
```

### Analyze Slow Requests
Find performance bottlenecks:
```bash
# Extract slow request warnings
cat /tmp/logs | jq -r '
  .Logs[]? |
  select(.message == "Slow request detected") |
  "\(.timestamp) - \(.path) - \(.duration)ms"
'
```

### Monitor Error Rates
Count errors by type over time:
```bash
# Download all logs from yesterday
# (GNU date syntax; on macOS/BSD use: DATE=$(date -v-1d +%Y%m%d))
DATE=$(date -d "yesterday" +%Y%m%d)
for file in $(npx wrangler r2 object list your-bucket-logs --prefix logs/workers/$DATE | awk '{print $1}'); do
  npx wrangler r2 object get your-bucket-logs/$file --file - | gunzip
done > /tmp/yesterday.log

# Count errors by exception type
cat /tmp/yesterday.log | jq -r '.Exceptions[]?.name' | sort | uniq -c | sort -rn
```

### Check Request Patterns
See what endpoints are being hit:
```bash
# Extract request paths and count them
cat /tmp/logs | jq -r '.Logs[]? | select(.path) | .path' | sort | uniq -c | sort -rn
```

## Costs
With 10% sampling (the default), Logpush costs are minimal:
- Logpush: $0.05 per million log entries x 10% = negligible
- R2 Storage: ~$0.015/GB/month for stored logs
- R2 Operations: Free for Class B reads
Most deployments will spend $1-3/month even with heavy traffic. You're already paying $5/month for Workers Paid (required for Logpush), so this is a tiny addition.
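As a rough back-of-the-envelope check (the traffic volume and entry size below are illustrative assumptions, not measurements):

```bash
# Assume 100M requests/month and ~1 KB per pushed log entry (illustrative figures)
REQUESTS=100000000
SAMPLED=$((REQUESTS / 10))   # head_sampling_rate = 0.1 keeps ~10% of entries
echo "entries pushed/month: $SAMPLED"
echo "logpush cost: \$$(echo "scale=4; $SAMPLED / 1000000 * 0.05" | bc)"       # $0.05 per million entries
echo "storage cost: \$$(echo "scale=4; $SAMPLED * 1024 / 10^9 * 0.015" | bc)"  # ~$0.015/GB-month
```

This ignores R2 operation charges and gzip compression, so treat it as an order-of-magnitude estimate rather than an exact bill.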
If you need to reduce costs further, you can:
- Lower the sampling rate in `wrangler.toml`: `head_sampling_rate = 0.05` (5%)
- Set up log rotation to delete old logs after 90 days
- Use a smaller R2 bucket in a different region
## Troubleshooting
**Logs not appearing in R2?**
Check the Logpush job status:
```bash
curl -s -X GET \
  "https://api.cloudflare.com/client/v4/accounts/${CLOUDFLARE_ACCOUNT_ID}/logpush/jobs" \
  -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" | \
  jq '.result[] | {name, enabled, last_complete, last_error}'
```

Look for:

- `enabled: true` (job is active)
- `last_complete: <timestamp>` (job has run successfully)
- `last_error: null` (no errors)
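If `last_error` points at the destination (for example a credentials problem), one option is to rotate the R2 API token and then re-enable the job through the Logpush API. The job ID below is hypothetical; pull the real one from the list call above with `jq '.result[].id'`:

```bash
# Re-enable a Logpush job after fixing its destination credentials (hypothetical job ID)
JOB_ID=123456
curl -s -X PUT \
  "https://api.cloudflare.com/client/v4/accounts/${CLOUDFLARE_ACCOUNT_ID}/logpush/jobs/${JOB_ID}" \
  -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"enabled": true}' | jq '.success'
```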
**Still not working?**
- Verify you're on Workers Paid plan ($5/month minimum)
- Check R2 API token has Admin Read & Write permissions
- Ensure `logpush = true` is in your `wrangler.production.toml`
- Generate some traffic and wait 10 minutes for the first batch (see the sketch below)
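One way to exercise the whole pipeline end to end (the URL is a placeholder for your deployed Worker; adjust the endpoint and request count to taste):

```bash
# Generate a burst of requests against the deployed Worker (placeholder URL),
# wait for the next Logpush batch, then check the bucket for today's logs
for i in $(seq 1 50); do
  curl -s -o /dev/null "https://your-worker.example.com/health"
done
sleep 600   # Logpush batches roughly every 5 minutes; give it some headroom
npx wrangler r2 object list your-bucket-logs --prefix logs/workers/$(date +%Y%m%d)
```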
**Need more help?**
Check the Cloudflare Logpush documentation or open an issue on GitHub.
## Multiple Destinations
You can send logs to multiple destinations simultaneously: R2 for long-term storage plus a monitoring service for real-time alerts.
Cloudflare Logpush supports:
- DataDog - Real-time monitoring and alerting
- Splunk - Enterprise log aggregation
- New Relic - Application performance monitoring
- SentinelOne - Security monitoring
- IBM Cloud Logs - IBM cloud integration
- S3-compatible providers - Any S3 API (Backblaze B2, MinIO, etc.)
- HTTP endpoint - Custom log processing pipeline
To add another destination, create a second Logpush job:
```bash
# Example: Send logs to DataDog
curl -X POST \
  "https://api.cloudflare.com/client/v4/accounts/${CLOUDFLARE_ACCOUNT_ID}/logpush/jobs" \
  -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "pixflare-datadog-logs",
    "dataset": "workers_trace_events",
    "destination_conf": "datadog://<DATADOG_ENDPOINT>?header_DD-API-KEY=<API_KEY>",
    "logpull_options": "fields=EventTimestampMs,EventType,Outcome,ScriptName,Logs,Exceptions&timestamps=rfc3339",
    "enabled": true,
    "filter": "{\"where\":{\"and\":[{\"key\":\"ScriptName\",\"operator\":\"eq\",\"value\":\"pixflare-production-api\"}]}}"
  }'
```

Each destination has its own configuration format. Check the Cloudflare Logpush destinations docs for specific setup instructions.