# Checking Logs
## API Worker Logs
This is usually the first place to look. Stream real-time logs from the main Hono API on Cloudflare Workers:
```bash
npx wrangler tail --config ./packages/api/wrangler.production.toml
```

- `--status error` - Only show errors
- `--status ok` - Only successful requests
- `--method POST GET` - Filter by HTTP methods
- `--format json` - JSON output for parsing
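With `--format json`, tail output can be piped straight into `jq`. A self-contained sketch — the event shape below is an assumption based on typical tail output, so inspect your own stream before relying on these field names:

```bash
# Hypothetical tail event (field names are assumptions; check your actual output)
sample='{"outcome":"exception","event":{"request":{"method":"POST","url":"https://example.com/api/upload"}},"exceptions":[{"name":"Error","message":"Upload failed"}]}'

# Pull method, URL, and error message out of failing requests
echo "$sample" | jq -r '
  select(.outcome == "exception")
  | [.event.request.method, .event.request.url, .exceptions[0].message]
  | @tsv'
```

In practice you would replace `echo "$sample"` with the `npx wrangler tail ... --format json` command above.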
## Gateway Worker Logs
The gateway worker is tailed the same way as the API worker; it handles path-based routing:
```bash
npx wrangler tail --config ./packages/gateway/wrangler.toml
```

## Frontend Logs
The frontend is built with SvelteKit and includes server-side rendering (SSR), whose logs can be viewed with:
```bash
wrangler pages deployment tail --project-name pixflare-frontend --environment production
```

- `--environment preview` - Preview deployments instead of production
- `--method POST GET` - Filter by HTTP methods
- `--status error` - Only show errors
- `--format json` - JSON output
## Docs Site Logs
Logs from your VitePress documentation site deployment.
```bash
wrangler pages deployment tail --project-name pixflare-docs --environment production
```

## Queue Consumer Logs
Logs from background queue processors (image variants, backups, custom domains).
Via Cloudflare Dashboard:
- Go to Workers & Pages
- Select your API worker
- Click Logs tab
- Filter by "Queue Consumer" in the logs stream
Via CLI (same as API worker):
```bash
npx wrangler tail --config ./packages/api/wrangler.production.toml
```

## Cron Trigger Logs
Scheduled job execution logs (cleanup, analytics aggregation, backup sync).
Via Dashboard:
- Go to Workers & Pages
- Select your API worker
- Click Logs tab
- Look for entries with `"trigger": "scheduled"`
Via CLI (included in worker tail):
```bash
npx wrangler tail --config ./packages/api/wrangler.production.toml
```

## D1 Database Logs
Query execution logs and performance metrics.
- Go to Cloudflare Dashboard → Storage & Databases → D1
- Select your database (`pixflare-production-db`)
- Click Metrics tab for query performance
- Individual queries are logged in the worker logs above
## R2 Bucket Logs
Access logs for image storage operations (uploads, downloads, deletions).
Via Logpush (requires setup):
- Go to Analytics & Logs → Logpush
- Click Create job
- Select R2 as dataset
- Choose destination (S3, HTTP, etc.)
Real-time access via worker logs: all R2 operations are logged in the API worker logs above.
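If your worker logs R2 operations with a recognizable message prefix (an assumption — adjust the pattern to whatever your handlers actually log), tail output can be reduced to a quick operation count:

```bash
# Hypothetical log lines, as emitted by console.log() in the worker
printf '%s\n' \
  'R2 upload: images/cat.png (142 KB)' \
  'R2 delete: images/old.png' \
  'R2 upload: images/dog.png (98 KB)' |
  grep -c '^R2 upload:'
```

Here the `printf` stands in for a real `wrangler tail` stream.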
## Analytics Engine Logs
Real-time analytics events (page views, image requests, bandwidth usage).
- Go to Workers & Pages → Your API worker
- Click Analytics Engine tab
- Query data with SQL:

```sql
SELECT * FROM analytics_events
ORDER BY timestamp DESC
LIMIT 100
```

## Cloudflare Access Logs
Authentication attempts, login events, and access policy decisions.
- Go to Zero Trust → Logs → Access
- View authentication events, user logins, policy blocks
- Filter by user, application, or action
## Build Logs
Deployment build logs for Workers and Pages.
Workers (API/Gateway):
```bash
wrangler deployments list --config ./packages/api/wrangler.production.toml
```

Pages (Frontend/Docs):
- Go to Workers & Pages
- Select your project
- Click Deployments tab
- Click any deployment to view build logs
## Client-Side Logs
Frontend JavaScript errors and console logs from user browsers.
- Open browser devtools (F12)
- Go to Console tab (or Ctrl + Shift + J)
- Refresh the page to see logs
## Historical Logs (Logpush)
Pixelflare can automatically push logs to your R2 bucket for long-term retention and compliance.
### Automatic Setup
Logpush is configured automatically during deployment if you have:
- Workers Paid plan ($5/month)
- Enable Logpush in your `.env`:
  ```bash
  ENABLE_LOGPUSH=true
  ```
- R2 permissions on your existing `CLOUDFLARE_API_TOKEN`
That's it! The deployment script automatically:
- Derives S3-compatible credentials from your existing API token
- Creates a separate logs bucket (`{your-bucket}-logs`) to keep logs isolated from app data
- Configures Logpush to push Worker logs to R2
Note: Your `CLOUDFLARE_API_TOKEN` must have "Admin Read & Write" permissions for R2 buckets (check under R2 → Manage R2 API Tokens).
### What Gets Logged
- All Worker requests and responses
- `console.log()`, `console.error()`, etc. output
- Uncaught exceptions with stack traces
- Request metadata (timestamp, duration, status)
### Log Organization
Logs are stored in your R2 bucket at:
```
r2://your-bucket/logs/workers/YYYY/MM/DD/filename.log
```

Each file contains newline-delimited JSON:
{"EventTimestampMs":1704629400000,"EventType":"fetch","Outcome":"ok","ScriptName":"pixflare-production-api","Logs":[{"level":"info","message":"Request completed"}]}
{"EventTimestampMs":1704629401000,"EventType":"fetch","Outcome":"exception","ScriptName":"pixflare-production-api","Exceptions":[{"name":"Error","message":"Upload failed"}]}Querying Logs
```bash
# List recent log files
npx wrangler r2 object list your-bucket --prefix logs/workers

# Download a specific day
npx wrangler r2 object get your-bucket/logs/workers/2026/01/07/file.log

# Find all errors from January 7th
npx wrangler r2 object get your-bucket/logs/workers/2026/01/07/file.log | \
  jq -r 'select(.Outcome == "exception") | [.EventTimestampMs, .Exceptions[0].message] | @csv'

# Count completed requests
npx wrangler r2 object get your-bucket/logs/workers/2026/01/07/file.log | \
  jq -r '.Logs[]?.message | select(contains("Request completed"))' | wc -l
```

### Manual Setup
If automatic setup fails, create the Logpush job manually:
- Go to Analytics & Logs → Logpush
- Click Create job
- Select Workers Trace Events dataset
- Choose R2 as destination
- Configure filters and enable
### Cost
- Logpush: $0.05 per million log entries (sampled at 10% by default)
- R2 Storage: ~$0.015/GB/month
- Typical usage: $1-5/month for most deployments
## Log Filtering & Formatting
All `wrangler tail` commands support these options:
```bash
# JSON format (for parsing)
wrangler tail --format json

# Filter by HTTP status
wrangler tail --status error   # Only errors
wrangler tail --status ok      # Only successful requests

# Filter by HTTP method
wrangler tail --method POST GET

# Filter by header
wrangler tail --header "User-Agent: *mobile*"

# Multiple filters
wrangler tail --status error --method POST --format json
```

## Troubleshooting
**No logs appearing?**
- Check you're using the correct environment (`production` vs `dev`)
- Verify worker/project names match your deployment
- Ensure you have sufficient permissions in your API token
**Too many logs?**
- Use `--status error` to only see errors
- Add `--method` filters for specific HTTP methods
- Use `--sampling-rate 0.1` for a 10% sample
**Need historical logs?**
- Set up Logpush for long-term retention
- Cloudflare only keeps real-time logs for ~24 hours