
Architecture

Pixelflare is a self-hosted image CDN built entirely on Cloudflare's serverless platform. The system runs at the edge with global distribution, automatic scaling, and zero server management. Every component - from the frontend to the API to storage - leverages Cloudflare Workers, Pages, R2, D1, and other edge services.

System Overview

mermaid
graph TB
    User[User/Browser]
    GW[Gateway Worker]
    Frontend[SvelteKit Frontend<br/>Cloudflare Pages]
    API[Hono API Worker]
    R2[R2 Object Storage]
    D1[(D1 SQLite Database)]
    KV[KV Namespace]
    Queues[Message Queues]
    AI[Workers AI]
    Analytics[Analytics Engine]

    User --> GW
    GW --> Frontend
    GW --> API
    API --> R2
    API --> D1
    API --> KV
    API --> Queues
    API --> AI
    API --> Analytics
    Queues --> API
    Frontend --> API

The architecture follows a serverless, edge-first design where all compute runs on Cloudflare's global network. This provides sub-100ms latency worldwide, automatic DDoS protection, and pay-per-use pricing that scales from zero to millions of requests.

Monorepo Structure

Pixelflare is organized as a pnpm workspace monorepo. Each package is independently deployable but shares common code through internal dependencies.

Core Packages:

Shared Packages:

The monorepo approach ensures type safety across the entire stack. When the API changes a response type, TypeScript immediately catches breaking changes in the frontend.

Frontend Architecture

The web interface is built with SvelteKit using the Cloudflare Pages adapter for edge deployment. The frontend handles user authentication, image uploads, gallery management, and settings configuration.

Route Structure

All authenticated routes live under /src/routes/app/:

  • /app - Dashboard with recent uploads and quick stats
  • /app/images - Image gallery with infinite scroll
  • /app/albums - Album management and organization
  • /app/search - Full-text image search
  • /app/stats - Analytics dashboard with charts
  • /app/settings/* - User preferences, upload defaults, custom domains, S3 backup
  • /app/bin - Recycle bin for soft-deleted images

Each route uses SvelteKit's file-based routing with +page.svelte components and +page.server.ts server-side logic.

State Management

The frontend uses Svelte 5's runes and stores for reactive state:
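
As an illustration, a runes-based store for upload progress in a shared .svelte.ts module might look like the sketch below (the store name and fields are assumptions, not the actual implementation):

typescript
// upload-store.svelte.ts - illustrative only; the real store shape may differ.
export interface UploadEntry {
  id: string;
  filename: string;
  progress: number; // 0-100
  status: 'pending' | 'uploading' | 'done' | 'error';
}

// $state creates deeply reactive state; components importing this object
// re-render automatically when entries or their fields change.
export const uploadState = $state({ entries: [] as UploadEntry[] });

export function activeUploads(): number {
  return uploadState.entries.filter((e) => e.status === 'uploading').length;
}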

Upload System

File uploads are handled by Uppy, which provides chunked uploads with progress tracking. The flow (sketched in code after the list):

  1. User selects files via Uppy drag-and-drop or file picker
  2. Frontend calls POST /v1/images to create metadata records and get signed upload URLs
  3. Uppy uploads files directly to the API's upload endpoint with the signed token
  4. Progress updates stream to the upload store for real-time UI feedback
  5. After completion, images are queued for variant generation
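
Sketched as a plain fetch client, the two-step flow above might look like this (request and response field names are assumptions inferred from the steps; the real UI drives this through Uppy):

typescript
// Illustrative client-side sketch of the two-step upload flow.
async function uploadImage(apiBase: string, file: File): Promise<void> {
  // Step 2: create the metadata record and obtain a signed upload URL
  const createRes = await fetch(`${apiBase}/v1/images`, {
    method: 'POST',
    credentials: 'include',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filename: file.name, content_type: file.type })
  });
  if (!createRes.ok) throw new Error(`Create failed: ${createRes.status}`);
  const { uploadUrl, token } = await createRes.json();

  // Step 3: send the binary to the signed upload endpoint
  const putRes = await fetch(`${uploadUrl}?token=${token}`, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file
  });
  if (!putRes.ok) throw new Error(`Upload failed: ${putRes.status}`);
}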

The upload component supports:

  • Drag-and-drop and paste
  • Multiple file selection
  • Progress bars per file
  • EXIF metadata stripping option
  • Album and tag assignment during upload
  • Retry on failure

UI Framework

  • Tailwind CSS with DaisyUI components for consistent styling
  • Lucide Svelte icon library
  • ECharts for analytics visualizations
  • svelte-virtual for virtualized scrolling in large image grids
  • exifreader for EXIF metadata viewing

API Architecture

The API is built with Hono, a lightweight web framework optimized for edge runtimes. It provides a RESTful API for image management, authentication, and analytics.

Entry Point and Middleware Chain

The src/index.ts file initializes the Hono application with OpenAPI documentation:

mermaid
graph LR
    Request[Incoming Request]
    Logging[Logging Middleware]
    Security[Security Headers]
    Auth[Authentication]
    Audit[Audit Logger]
    Router[Route Handler]
    Error[Error Handler]
    Response[Response]

    Request --> Logging
    Logging --> Security
    Security --> Auth
    Auth --> Audit
    Audit --> Router
    Router --> Error
    Error --> Response

Middleware layers:

  1. Logging (src/middleware/logging.ts) - Structured logging with request IDs
  2. Security (src/middleware/security.ts) - CSP, X-Frame-Options, HSTS, and other security headers
  3. Auth (src/middleware/auth.ts) - JWT verification and API key validation
  4. Audit Logger (src/middleware/audit-logger.ts) - Tracks all mutations (POST/PUT/PATCH/DELETE)
  5. Error Handler - Centralized error formatting with proper status codes
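
A condensed sketch of how this chain could be wired in src/index.ts (the middleware export names are assumptions; the real modules live under src/middleware/):

typescript
import { OpenAPIHono } from '@hono/zod-openapi';
// Hypothetical export names; the actual middleware modules may expose different ones.
import { loggingMiddleware } from './middleware/logging';
import { securityHeaders } from './middleware/security';
import { auth } from './middleware/auth';
import { auditLogger } from './middleware/audit-logger';

const app = new OpenAPIHono();

// Middleware runs in registration order for every matching request.
app.use('*', loggingMiddleware);
app.use('*', securityHeaders);
app.use('/v1/*', auth);        // private API routes require authentication
app.use('/v1/*', auditLogger); // records mutations once the user is known

// Centralized error formatting with consistent status codes.
app.onError((err, c) => c.json({ error: err.message }, 500));

export default app;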

Route Organization

The API exposes two main route groups:

Private API Routes (/v1/*):

Public CDN Routes (/{owner}/{album}/{filename}):

  • cdn.ts - Image serving with on-the-fly transformations

Authentication System

The API supports three authentication modes for different environments:

1. Cloudflare Access (Production)

Cloudflare Access provides OAuth authentication through GitHub. When a user logs in:

  1. Frontend redirects to Cloudflare Access login page
  2. User authenticates with GitHub OAuth
  3. Cloudflare Access issues a JWT token in a cookie (CF_Authorization)
  4. API verifies the JWT using Cloudflare's public keys (JWKS)

The auth middleware (src/middleware/auth.ts) validates JWTs:

  • Fetches Cloudflare's signing certificates (cached for 1 hour)
  • Verifies JWT signature using RS256 algorithm
  • Extracts user identity from sub claim
  • Hashes the sub to create a stable owner identifier

2. API Keys

Users can generate API keys for programmatic access. API keys support:

  • Scoped permissions (read, write, admin)
  • IP allowlisting (CIDR notation)
  • Expiration dates
  • Revocation

API keys are hashed using HMAC-SHA256 before storage. The auth middleware checks the Authorization: Bearer <key> header and validates the key against the database.

3. Mock Auth (Development)

For local testing, mock auth mode allows hardcoded user identities via the X-Mock-Auth header. This bypasses JWT verification entirely.

OpenAPI Documentation

The API generates OpenAPI v3.1 documentation automatically using Hono's OpenAPI plugin. Interactive API docs are served at /docs using Scalar.

Route handlers are annotated with OpenAPI schemas for request/response validation:

typescript
app.openapi(
  createRoute({
    method: 'post',
    path: '/images',
    request: {
      body: {
        content: { 'application/json': { schema: CreateImageSchema } }
      }
    },
    responses: {
      201: {
        description: 'Image created',
        content: { 'application/json': { schema: ImageResourceSchema } }
      }
    }
  }),
  async (c) => {
    // The body is already validated against CreateImageSchema at this point
    const body = c.req.valid('json');
    // ...create the image record, then return the ImageResourceSchema shape
    return c.json(body, 201);
  }
)

Database Schema

The database uses Cloudflare D1 (SQLite at the edge) with Drizzle ORM for type-safe queries. The schema is defined in packages/database/schema.ts.

Core Tables

users - User accounts and plan information

sql
CREATE TABLE users (
  owner TEXT PRIMARY KEY,        -- Hashed Cloudflare sub
  cf_sub TEXT UNIQUE,            -- Original Cloudflare UUID
  plan_tier TEXT DEFAULT 'free', -- free/starter/pro
  display_name TEXT,
  email TEXT,
  created_at TEXT,
  updated_at TEXT
)

images - Image metadata and status

sql
CREATE TABLE images (
  id TEXT PRIMARY KEY,
  owner TEXT REFERENCES users(owner),
  filename TEXT,
  album_slug TEXT,
  content_type TEXT,
  size_bytes INTEGER,
  width INTEGER,
  height INTEGER,
  is_private INTEGER DEFAULT 0,  -- Boolean as 0/1
  strip_metadata INTEGER DEFAULT 0,
  nsfw INTEGER DEFAULT 0,
  visibility TEXT DEFAULT 'public', -- public/private/unlisted
  expiration_date TEXT,
  uploaded INTEGER DEFAULT 0,
  variants_ready TEXT,           -- JSON array
  deleted_at TEXT,               -- Soft delete timestamp
  backup_status TEXT,            -- pending/synced/failed
  created_at TEXT,
  updated_at TEXT,
  UNIQUE(owner, filename, album_slug)
)

albums - Image collections

sql
CREATE TABLE albums (
  owner TEXT,
  album_slug TEXT,
  title TEXT,
  description TEXT,
  created_at TEXT,
  updated_at TEXT,
  PRIMARY KEY (owner, album_slug)
)

tags - User-defined labels

sql
CREATE TABLE tags (
  owner TEXT,
  tag_slug TEXT,
  color TEXT,
  description TEXT,
  created_at TEXT,
  PRIMARY KEY (owner, tag_slug)
)

image_tags - Many-to-many relationship

sql
CREATE TABLE image_tags (
  image_id TEXT REFERENCES images(id),
  tag_slug TEXT,
  owner TEXT,
  created_at TEXT,
  PRIMARY KEY (image_id, tag_slug)
)

imageAi - AI classification results

sql
CREATE TABLE imageAi (
  image_id TEXT PRIMARY KEY REFERENCES images(id),
  labels_json TEXT,              -- Classification labels
  label_scores_json TEXT,        -- Confidence scores
  caption TEXT,                  -- AI-generated description
  colors_json TEXT,              -- Dominant colors
  nsfw_score REAL,               -- NSFW probability
  nsfw_flagged INTEGER,
  model_version TEXT,
  processing_time_ms INTEGER,
  created_at TEXT
)

apiKeys - API authentication

sql
CREATE TABLE apiKeys (
  key_id TEXT PRIMARY KEY,
  owner TEXT REFERENCES users(owner),
  key_hash TEXT UNIQUE,          -- HMAC-SHA256 hash
  name TEXT,
  scopes TEXT,                   -- JSON array
  allowed_ips TEXT,              -- CIDR ranges
  expires_at TEXT,
  revoked_at TEXT,
  last_used_at TEXT,
  created_at TEXT
)

auditLogs - Activity tracking

sql
CREATE TABLE auditLogs (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  owner TEXT REFERENCES users(owner),
  action TEXT,                   -- create/update/delete/restore
  resource_type TEXT,            -- image/album/tag/key
  resource_id TEXT,
  status TEXT,                   -- success/failed
  metadata TEXT,                 -- JSON event details
  created_at TEXT
)

customDomains - Custom domain mapping

sql
CREATE TABLE customDomains (
  id TEXT PRIMARY KEY,
  owner TEXT REFERENCES users(owner),
  hostname TEXT UNIQUE,
  status TEXT,                   -- pending/verified/failed
  cloudflare_id TEXT,
  cname_target TEXT,
  txt_name TEXT,
  txt_value TEXT,
  created_at TEXT,
  verified_at TEXT
)

Indexing Strategy

The schema includes composite indexes for common query patterns:

sql
CREATE INDEX idx_images_owner_created ON images(owner, created_at);
CREATE INDEX idx_images_owner_album ON images(owner, album_slug);
CREATE INDEX idx_image_tags_owner ON image_tags(owner, tag_slug);
CREATE INDEX idx_audit_logs_owner ON auditLogs(owner, created_at);

These indexes optimize the most frequent queries:

  • Fetching a user's images sorted by date
  • Filtering images by album
  • Searching images by tag
  • Querying audit logs by user

Images support FTS5 full-text search on filenames and AI labels:

sql
CREATE VIRTUAL TABLE images_fts USING fts5(
  image_id,
  filename,
  ai_labels,
  content=images
);

This enables fast searches like "find images with 'sunset' or 'beach' in filename or AI labels".
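
Via D1, such a query can be issued against the virtual table like this (a sketch; the DB binding name is an assumption):

typescript
// Illustrative FTS5 query through the D1 binding.
const { results } = await env.DB.prepare(
  'SELECT image_id FROM images_fts WHERE images_fts MATCH ?'
).bind('sunset OR beach').all();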

Migrations

Database schema changes are managed through versioned SQL migration files in packages/database/migrations/. Wrangler applies migrations sequentially during deployment.

Image Processing Pipeline

The image processing pipeline handles uploads, storage, variant generation, and serving. The entire flow is asynchronous and queue-based for performance.

Upload Flow

mermaid
sequenceDiagram
    participant User
    participant Frontend
    participant API
    participant D1
    participant R2
    participant Queue

    User->>Frontend: Select image files
    Frontend->>API: POST /v1/images (metadata)
    API->>D1: Insert image record (uploaded=0)
    D1-->>API: Image ID
    API->>API: Generate signed upload token
    API-->>Frontend: {imageId, uploadUrl, token}
    Frontend->>API: PUT /v1/upload/:id?token=xxx (binary)
    API->>API: Validate token & optional EXIF strip
    API->>R2: Store at {owner}/{album}/{filename}
    API->>D1: Update uploaded=1
    API->>Queue: Enqueue variant generation
    API-->>Frontend: 201 Created
    Frontend-->>User: Upload complete

Key files:

Variant Generation

After upload, images are queued for variant generation. Variants are pre-sized versions optimized for different use cases (thumbnails, previews, etc.).

Variant presets (packages/config/src/image-variants.ts):

  • w128, w256, w512, w1024, w1536, w2048 - Width-constrained variants
  • thumb - Square thumbnail (200x200)
  • og-image - Open Graph preview (1200x630)

The queue consumer (src/queue/variant-consumer.ts) processes variant requests:

  1. Dequeue message from VARIANT_QUEUE
  2. Fetch original image from R2
  3. Apply transformation using Cloudflare Image Resizing
  4. Store transformed image at {owner}/{album}/{filename}/{variant}.{format}
  5. Update variants_ready array in database
  6. If all variants complete, trigger AI classification (if enabled)
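
A simplified sketch of what the queue consumer might look like (binding names, the cf.image resizing call, and the preset handling are assumptions about how the real consumer is wired):

typescript
// Illustrative consumer sketch; not the actual src/queue/variant-consumer.ts.
interface VariantMessage {
  image_id: string;
  owner: string;
  album_slug: string;
  filename: string;
  variant: string; // e.g. "w512", "thumb"
}

interface Env {
  BUCKET: R2Bucket;
  DB: D1Database;
}

export default {
  async queue(batch: MessageBatch<VariantMessage>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      const { image_id, owner, album_slug, filename, variant } = msg.body;
      const originalKey = `${owner}/${album_slug}/${filename}`;

      // Steps 1-2: confirm the original exists in R2 before resizing
      const original = await env.BUCKET.get(originalKey);
      if (!original) {
        msg.retry(); // not uploaded yet; retried with backoff
        continue;
      }

      // Step 3: transform via Image Resizing (placeholder origin URL, fixed preset)
      const resized = await fetch(`https://cdn.example.com/${originalKey}`, {
        cf: { image: { width: 512, format: 'webp' } }
      });

      // Step 4: store the variant alongside the original
      await env.BUCKET.put(`${originalKey}/${variant}.webp`, resized.body);

      // Step 5 (simplified): the real consumer appends to the existing JSON array
      await env.DB.prepare('UPDATE images SET variants_ready = ? WHERE id = ?')
        .bind(JSON.stringify([variant]), image_id)
        .run();

      msg.ack();
    }
  }
};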

Image Serving (CDN)

The CDN router (src/routes/cdn.ts) handles public image requests:

Path patterns:

  • GET /{owner}/{album}/{filename} - Original or default variant
  • GET /{owner}/{album}/{filename}/{variant} - Specific variant
  • GET /i/{imageId} - Short URL redirect (301 to full path)

Serving logic:

mermaid
graph TD
    Request[CDN Request]
    Auth{Private image?}
    CheckVariant{Variant requested?}
    VariantExists{Variant exists?}
    ServeCached[Serve from R2]
    Generate[Generate on-the-fly]
    RateLimit[Rate limit check]
    Analytics[Log to Analytics Engine]
    Response[Return image]

    Request --> RateLimit
    RateLimit --> Auth
    Auth -->|Public| CheckVariant
    Auth -->|Private| Unauthorized
    CheckVariant -->|Yes| VariantExists
    CheckVariant -->|No| ServeCached
    VariantExists -->|Yes| ServeCached
    VariantExists -->|No| Generate
    Generate --> ServeCached
    ServeCached --> Analytics
    Analytics --> Response

Performance optimizations:

  • Browser caching (1 year cache for immutable images)
  • Cloudflare CDN caching
  • KV-based rate limiting to prevent abuse
  • Analytics tracking for bandwidth monitoring
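
In a Hono handler, the immutable-image case might set headers like this (a sketch; only the one-year max-age is stated above, the rest is an assumption):

typescript
// Sketch: cache headers on a served variant (c = Hono context).
c.header('Cache-Control', 'public, max-age=31536000, immutable'); // 1 year
c.header('ETag', `"${image.id}"`); // hypothetical validator value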

R2 Storage Structure

Images are stored in R2 with the following path structure:

{owner}/{album}/{filename}           <- Original image
{owner}/{album}/{filename}/w512.webp <- Variant (512px width, WebP)
{owner}/{album}/{filename}/thumb.jpg <- Thumbnail variant

This structure allows:

  • Owner-based isolation (each user's images in separate "directory")
  • Album organization
  • Variant storage alongside originals
  • Efficient prefix-based listing

Key file: src/lib/storage.ts

Queue System

Pixelflare uses Cloudflare Queues for asynchronous background processing. Queues enable the API to respond quickly while expensive operations run in the background.

Queue Types

1. Image Processing Queue (VARIANT_QUEUE)

Handles variant generation for uploaded images.

Message schema:

typescript
{
  image_id: string;
  owner: string;
  album_slug: string;
  filename: string;
  variant: string; // w512, thumb, etc.
}

Consumer: src/queue/variant-consumer.ts

Configuration:

  • Max batch size: 5
  • Timeout: 30 seconds
  • Retries: 3 (with exponential backoff)

2. Backup Queue (BACKUP_QUEUE)

Syncs images to S3-compatible storage for disaster recovery.

Message schema:

typescript
{
  image_id: string;
  owner: string;
  action: 'upload' | 'delete';
}

Consumer: src/queue/backup-consumer.ts

3. Custom Domain Queue (CUSTOM_DOMAIN_QUEUE)

Verifies DNS records for custom domain setup.

Message schema:

typescript
{
  domain_id: string;
  hostname: string;
  action: 'verify' | 'cleanup';
}

Consumer: src/queue/custom-domain-consumer.ts

Scheduled Jobs (Cron)

Cloudflare Workers Cron Triggers run periodic maintenance tasks:

1. Image Cleanup (Daily at 2 AM UTC)

  • Soft-delete expired images
  • Permanently delete soft-deleted images older than retention period
  • Remove orphaned variants from R2

Handler: src/scheduled/cleanup.ts

2. Analytics Aggregation (Hourly)

  • Roll up raw Analytics Engine data points
  • Generate hourly/daily/monthly summaries
  • Calculate bandwidth and storage usage

3. Backup Sync (Configurable, default: daily)

  • Queue pending images for S3 backup
  • Verify backup status
  • Update sync timestamps
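
These jobs can be dispatched from a single scheduled handler keyed on the cron expression (a sketch; the expressions and helper function names are assumptions):

typescript
// Illustrative dispatcher for the cron jobs above.
interface Env {} // worker bindings (D1, R2, queues, ...)

declare function runImageCleanup(env: Env): Promise<void>;   // hypothetical helpers
declare function aggregateAnalytics(env: Env): Promise<void>;
declare function queueBackups(env: Env): Promise<void>;

export default {
  async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext) {
    switch (controller.cron) {
      case '0 2 * * *': // daily at 2 AM UTC
        ctx.waitUntil(runImageCleanup(env));
        break;
      case '0 * * * *': // hourly
        ctx.waitUntil(aggregateAnalytics(env));
        break;
      default:          // configurable backup schedule
        ctx.waitUntil(queueBackups(env));
    }
  }
};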

Authentication & Security

Security is a critical focus given the public-facing nature of a CDN. The system implements multiple layers of defense.

Authentication Architecture

mermaid
sequenceDiagram
    participant Browser
    participant Gateway
    participant API
    participant CFAccess as Cloudflare Access
    participant GitHub
    participant D1

    Browser->>Gateway: GET /app (protected route)
    Gateway->>CFAccess: Check CF_Authorization cookie
    CFAccess-->>Gateway: JWT valid
    Gateway->>API: Forward request with JWT
    API->>API: Verify JWT signature (RS256)
    API->>API: Extract sub claim
    API->>API: Hash sub → owner
    API->>D1: SELECT * FROM users WHERE owner=...
    D1-->>API: User record
    API-->>Gateway: Authenticated response
    Gateway-->>Browser: Render page

JWT Verification Process:

The auth middleware (src/middleware/auth.ts) implements robust JWT validation:

  1. Extract token from CF_Authorization cookie
  2. Decode header to get kid (key ID)
  3. Fetch public keys from Cloudflare JWKS endpoint (cached for 1 hour in KV)
  4. Verify signature using RS256 algorithm and matching public key
  5. Validate claims:
    • aud matches configured audience
    • exp (expiration) is in the future
    • nbf (not before) is in the past
    • iss (issuer) is Cloudflare Access
  6. Extract identity from sub claim
  7. Hash sub to create stable owner identifier (prevents exposing Cloudflare UUIDs)
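
A condensed sketch of steps 1-5 using the jose library (the real middleware also caches the JWKS in KV rather than refetching it; the exact implementation may differ):

typescript
import { getCookie } from 'hono/cookie';
import { createRemoteJWKSet, jwtVerify } from 'jose';

// Sketch inside the auth middleware (c = Hono context, env = worker bindings).
const JWKS = createRemoteJWKSet(
  new URL(`https://${env.CF_ACCESS_TEAM_DOMAIN}/cdn-cgi/access/certs`)
);

const token = getCookie(c, 'CF_Authorization');
if (!token) return c.json({ error: 'unauthorized' }, 401);

const { payload } = await jwtVerify(token, JWKS, {
  audience: env.CF_ACCESS_AUD,                    // aud claim
  issuer: `https://${env.CF_ACCESS_TEAM_DOMAIN}`, // iss claim
  algorithms: ['RS256']
});
// jwtVerify also rejects expired (exp) and not-yet-valid (nbf) tokens
const sub = payload.sub; // hashed into the owner identifier in the next step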

Why hash the sub?

The sub claim contains the user's Cloudflare UUID. To avoid leaking this internal identifier, we hash it with HMAC-SHA256 to create a public-safe owner identifier. This owner is used throughout the system for data isolation.
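
A sketch of that derivation with the Web Crypto API (the secret source and hex encoding are assumptions):

typescript
// Derive a stable, public-safe owner ID from the Cloudflare `sub` claim.
async function deriveOwner(sub: string, secret: string): Promise<string> {
  const enc = new TextEncoder();
  const key = await crypto.subtle.importKey(
    'raw', enc.encode(secret),
    { name: 'HMAC', hash: 'SHA-256' },
    false, ['sign']
  );
  const mac = await crypto.subtle.sign('HMAC', key, enc.encode(sub));
  // hex-encode the MAC to get a stable identifier
  return [...new Uint8Array(mac)].map((b) => b.toString(16).padStart(2, '0')).join('');
}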

API Key System

API keys provide programmatic access without requiring a browser session. Users can create multiple keys with different scopes and restrictions.

Key features:

  • Scoped permissions - Keys can have read, write, or admin scopes
  • IP allowlisting - Restrict keys to specific CIDR ranges
  • Expiration - Optional expiration dates
  • Revocation - Keys can be revoked instantly
  • Last used tracking - Monitor key usage

Security implementation:

  • Keys are generated using crypto.randomUUID()
  • Only the HMAC-SHA256 hash is stored in the database
  • Raw keys are shown once during creation
  • Validation uses constant-time comparison to prevent timing attacks

Key file: src/lib/api-keys.ts

Security Headers

The security middleware (src/middleware/security.ts) applies defense-in-depth headers:

http
Content-Security-Policy: default-src 'self'; ...
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-XSS-Protection: 1; mode=block
Referrer-Policy: strict-origin-when-cross-origin
Permissions-Policy: geolocation=(), camera=(), microphone=()

CORS configuration:

  • API allows cross-origin requests from the frontend domain
  • CDN routes allow embedding from any origin (for public images)
  • Preflight caching reduces OPTIONS request overhead

Rate Limiting

Rate limiting prevents abuse and ensures fair resource usage. The implementation uses a token bucket algorithm backed by Cloudflare KV.

Algorithm:

mermaid
graph TD
    Request[Incoming Request]
    GetBucket[Get token bucket from KV]
    HasTokens{Tokens available?}
    ConsumeToken[Consume 1 token]
    Refill[Refill tokens based on time]
    Allow[Allow request]
    Deny[429 Too Many Requests]

    Request --> GetBucket
    GetBucket --> Refill
    Refill --> HasTokens
    HasTokens -->|Yes| ConsumeToken
    HasTokens -->|No| Deny
    ConsumeToken --> Allow

Limits by operation:

  • Uploads: 100 per hour per user
  • API reads: 1000 per hour per user
  • CDN variant generation: 500 per hour per IP
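
A minimal sketch of the KV-backed bucket, assuming per-user keys and hourly refill (key layout and refill maths are illustrative, not the exact implementation in src/lib/rate-limit.ts):

typescript
// Illustrative token bucket backed by Workers KV.
interface Bucket { tokens: number; updatedAt: number }

async function allowRequest(
  kv: KVNamespace, key: string, capacity: number, refillPerHour: number
): Promise<boolean> {
  const now = Date.now();
  const bucket = (await kv.get<Bucket>(key, 'json')) ?? { tokens: capacity, updatedAt: now };

  // Refill tokens proportionally to elapsed time, capped at capacity
  const elapsedHours = (now - bucket.updatedAt) / 3_600_000;
  bucket.tokens = Math.min(capacity, bucket.tokens + elapsedHours * refillPerHour);
  bucket.updatedAt = now;

  if (bucket.tokens < 1) {
    await kv.put(key, JSON.stringify(bucket), { expirationTtl: 3600 });
    return false; // caller responds with 429
  }

  bucket.tokens -= 1;
  await kv.put(key, JSON.stringify(bucket), { expirationTtl: 3600 });
  return true;
}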

Key file: src/lib/rate-limit.ts

Encryption

Sensitive data is encrypted at rest using the Web Crypto API:

S3 Backup Credentials:

  1. User submits S3 access key and secret
  2. API encrypts credentials using AES-GCM with a random IV
  3. Encrypted blob and IV stored in KV
  4. Reference hash stored in D1
  5. When needed, decrypt from KV using the same key

Key derivation: The encryption key is derived from the ENCRYPTION_SECRET environment variable using PBKDF2.
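
A sketch of the derive-and-encrypt step with the Web Crypto API (iteration count, salt handling, and IV size here are assumptions):

typescript
// Illustrative AES-GCM encryption with a PBKDF2-derived key.
async function encryptSecret(plaintext: string, passphrase: string, salt: Uint8Array) {
  const enc = new TextEncoder();
  const keyMaterial = await crypto.subtle.importKey(
    'raw', enc.encode(passphrase), 'PBKDF2', false, ['deriveKey']
  );
  const key = await crypto.subtle.deriveKey(
    { name: 'PBKDF2', salt, iterations: 100_000, hash: 'SHA-256' },
    keyMaterial,
    { name: 'AES-GCM', length: 256 },
    false,
    ['encrypt', 'decrypt']
  );
  const iv = crypto.getRandomValues(new Uint8Array(12)); // random 96-bit IV
  const ciphertext = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, enc.encode(plaintext));
  return { iv, ciphertext }; // both stored in KV; decryption reverses these steps
}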

Key file: src/lib/crypto.ts

AI Classification System

Pixelflare integrates Cloudflare Workers AI for automatic image analysis. The AI system runs asynchronously after uploads to classify images, generate captions, and detect inappropriate content.

AI Models

The system uses multiple AI models via Workers AI binding:

1. Image Classification (ResNet-50)

  • Identifies objects, scenes, and concepts in images
  • Returns top-K labels with confidence scores
  • Example: ["sunset", "beach", "ocean", "sky"]

2. Image Captioning

  • Generates natural language descriptions
  • Example: "A person standing on a beach at sunset"

3. Color Extraction

  • Identifies dominant colors in RGB format
  • Used for theming and search filtering

4. NSFW Detection

  • Scores images for inappropriate content
  • Configurable threshold for auto-flagging
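
A sketch of a single classification call through the AI binding (ResNet-50 is available on Workers AI as @cf/microsoft/resnet-50; the surrounding flow and binding names are assumptions):

typescript
// Illustrative classification call; not the actual src/services/ai-classification.ts.
async function classify(env: { AI: Ai; BUCKET: R2Bucket }, key: string) {
  const obj = await env.BUCKET.get(key);
  if (!obj) return null;
  const bytes = new Uint8Array(await obj.arrayBuffer());

  // Image classification: returns labels with confidence scores
  const labels = await env.AI.run('@cf/microsoft/resnet-50', {
    image: [...bytes]
  });
  return labels; // e.g. [{ label: 'seashore', score: 0.93 }, ...]
}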

Processing Flow

mermaid
sequenceDiagram
    participant Queue as Variant Queue
    participant Consumer
    participant AI as Workers AI
    participant D1
    participant Image as R2 Image

    Queue->>Consumer: Variant generation complete
    Consumer->>Image: Fetch image data
    Consumer->>AI: POST /classification
    AI-->>Consumer: {labels, scores}
    Consumer->>AI: POST /caption
    AI-->>Consumer: {caption}
    Consumer->>AI: POST /nsfw-detection
    AI-->>Consumer: {nsfw_score}
    Consumer->>D1: INSERT INTO imageAi (...)
    Consumer->>D1: UPDATE images SET nsfw=...

Implementation: src/services/ai-classification.ts

Search Integration

AI-generated labels and captions are indexed in the FTS5 full-text search table. This enables semantic search:

  • User searches for "sunset"
  • FTS matches images with "sunset" in filename OR AI labels
  • Results include images never explicitly tagged but identified by AI

Performance

AI classification runs asynchronously to avoid blocking uploads. The system tracks:

  • Model version (for reproducibility)
  • Processing time (for monitoring)
  • Completion status (for retry logic)

If AI processing fails, the image is still accessible - AI is an enhancement, not a requirement.

Gateway Worker

The Gateway Worker is the single entry point for all requests. It routes traffic to the appropriate Cloudflare service based on the request path.

Routing Logic

mermaid
graph TD
    Request[Incoming Request]
    Health{/_gateway/health?}
    AuthCheck{/_auth-check?}
    API{/api/*?}
    Docs{/docs/*?}
    ShortURL{/i/:id?}
    CDN{/:user/:album/:file?}
    Default[Default]

    Health1[Return 200 OK]
    AuthCheck1[Route to API Worker]
    API1[Strip /api prefix, route to API]
    Docs1[Route to Docs Pages]
    ShortURL1[Rewrite to API Worker]
    CDN1[Route to API CDN handler]
    Default1[Route to Frontend Pages]

    Request --> Health
    Health -->|Yes| Health1
    Health -->|No| AuthCheck
    AuthCheck -->|Yes| AuthCheck1
    AuthCheck -->|No| API
    API -->|Yes| API1
    API -->|No| Docs
    Docs -->|Yes| Docs1
    Docs -->|No| ShortURL
    ShortURL -->|Yes| ShortURL1
    ShortURL -->|No| CDN
    CDN -->|Yes| CDN1
    CDN -->|No| Default
    Default --> Default1

Path Resolution Rules

1. Reserved Paths

These paths bypass CDN routing to prevent conflicts:

typescript
const RESERVED_PATHS = [
  'api', 'app', 'auth', 'docs', 'admin',
  '_auth-check', '_gateway', 'i', 'health'
];

If a username matches a reserved path, it cannot be used for CDN routing.

2. CDN Path Validation

CDN paths must follow the pattern /{user}/{album}/{filename} where:

  • user and album cannot contain dots (to prevent confusion with file extensions)
  • filename must have an extension
  • All segments must be URL-safe
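
A sketch of these checks (the regexes are assumptions that encode the stated rules, not the gateway's exact validation):

typescript
// Illustrative validation of /{user}/{album}/{filename}; RESERVED_PATHS is defined above.
const SEGMENT = /^[A-Za-z0-9_-]+$/;                  // URL-safe, no dots
const FILENAME = /^[A-Za-z0-9._-]+\.[A-Za-z0-9]+$/;  // must carry an extension

function isCdnPath(pathname: string): boolean {
  const [user, album, filename] = pathname.split('/').filter(Boolean);
  if (!user || !album || !filename) return false;
  return (
    !RESERVED_PATHS.includes(user) &&
    SEGMENT.test(user) &&
    SEGMENT.test(album) &&
    FILENAME.test(filename)
  );
}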

3. Service Bindings

The gateway uses Cloudflare Service Bindings for zero-latency routing:

typescript
// API Worker binding
const apiResponse = await env.API_WORKER.fetch(request);

// Frontend Pages binding
const frontendResponse = await env.FRONTEND_PAGES.fetch(request);

Service bindings bypass the public network entirely - the request is handed directly to the target Worker on the same Cloudflare machine, with no extra network hop.

Implementation: packages/gateway/src/index.ts

Advanced Features

S3 Backup & Disaster Recovery

Users can configure automatic backup to any S3-compatible storage provider (AWS S3, MinIO, DigitalOcean Spaces, Backblaze B2, etc.).

Setup flow:

  1. User enters S3 endpoint, bucket, access key, and secret in settings
  2. API tests connection by attempting to list bucket
  3. If successful, credentials are encrypted and stored in KV
  4. Backup queue consumer syncs images to S3 on a schedule

Sync process:

mermaid
graph TD
    Cron[Scheduled Backup Job]
    Pending[Query pending images]
    Queue[Enqueue backup jobs]
    Consumer[Backup Consumer]
    Fetch[Fetch image from R2]
    Upload[Upload to S3]
    Update[Update backup_status]

    Cron --> Pending
    Pending --> Queue
    Queue --> Consumer
    Consumer --> Fetch
    Fetch --> Upload
    Upload --> Update

Features:

  • Incremental sync (only new/changed images)
  • Deletion sync (remove deleted images from S3)
  • Error handling with retry logic
  • Status tracking per image
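
A sketch of the upload step using a SigV4 signing client such as aws4fetch (the library choice and credential field names are assumptions; the real logic lives in the backup consumer):

typescript
import { AwsClient } from 'aws4fetch';

// Illustrative S3 sync step; not the actual src/queue/backup-consumer.ts.
async function backupObject(
  creds: { accessKeyId: string; secretAccessKey: string; endpoint: string; bucket: string },
  key: string,
  body: ArrayBuffer,
  contentType: string
): Promise<boolean> {
  const s3 = new AwsClient({
    accessKeyId: creds.accessKeyId,
    secretAccessKey: creds.secretAccessKey,
    service: 's3'
  });
  // PUT the object to the S3-compatible endpoint with a SigV4-signed request
  const res = await s3.fetch(`${creds.endpoint}/${creds.bucket}/${key}`, {
    method: 'PUT',
    headers: { 'Content-Type': contentType },
    body
  });
  return res.ok; // caller updates backup_status accordingly
}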

Key files:

Custom Domains

Users can serve images from their own domain (e.g., images.example.com) instead of the default Pixelflare domain.

Verification flow:

mermaid
sequenceDiagram
    participant User
    participant API
    participant Queue
    participant CloudflareAPI
    participant DNS

    User->>API: POST /custom-domains {hostname}
    API->>CloudflareAPI: Create zone record
    CloudflareAPI-->>API: {cname_target, txt_name, txt_value}
    API->>Queue: Enqueue verification job
    API-->>User: {verificationRecords}
    User->>DNS: Add CNAME and TXT records
    Queue->>CloudflareAPI: Check DNS records
    CloudflareAPI-->>Queue: Records verified
    Queue->>API: Update status to "verified"
    API-->>User: Domain ready

Verification checks:

  1. CNAME points to Pixelflare's CDN (cdn.pixelflare.cc)
  2. TXT record contains verification token
  3. DNS propagation complete (checked via Cloudflare API)

Once verified, images are accessible at https://images.example.com/{album}/{filename}.

Implementation: src/services/custom-domains/

Analytics Pipeline

The analytics system tracks image views, bandwidth usage, and geographic distribution using Cloudflare Analytics Engine.

Data collection:

Every CDN request writes a data point:

typescript
{
  blobs: [owner, imageId, variant],
  doubles: [bytesTransferred, statusCode],
  indexes: [country, requestId]
}
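
Through the Analytics Engine binding this becomes a writeDataPoint call. Note that Analytics Engine accepts at most one index per data point, so in practice only one of the values above can be indexed (a sketch; the ANALYTICS binding name is an assumption):

typescript
// Illustrative write; remaining dimensions stay in blobs/doubles.
env.ANALYTICS.writeDataPoint({
  blobs: [owner, imageId, variant],
  doubles: [bytesTransferred, statusCode],
  indexes: [country] // at most one index value per data point
});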

Aggregation:

A scheduled job runs hourly to aggregate raw data points into summaries:

sql
SELECT
  owner,
  imageId,
  COUNT(*) as views,
  SUM(bytesTransferred) as bandwidth,
  COUNT(DISTINCT country) as countries
FROM analytics_datapoints
WHERE timestamp > ?
GROUP BY owner, imageId

Visualization:

The frontend uses ECharts to render:

  • Time-series view charts
  • Bandwidth usage over time
  • Geographic heatmaps
  • Top images by views

Key files:

Soft Deletion & Audit Logging

Soft Deletion:

Deleted images aren't immediately removed. Instead:

  1. deleted_at timestamp is set
  2. Image is excluded from normal queries
  3. Image remains accessible in the recycle bin
  4. After retention period (default: 30 days), permanent deletion occurs

Users can restore soft-deleted images with a single API call. This protects against accidental deletion.
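
Sketched with Drizzle, soft delete and restore are single UPDATE statements (the table object and column names are assumptions based on the schema above):

typescript
import { and, eq } from 'drizzle-orm';
// Inside a request handler, with `db` (Drizzle D1 client) and the `images` table in scope.

// Soft delete: set deleted_at so normal queries (which filter on it) skip the row
await db.update(images)
  .set({ deletedAt: new Date().toISOString() })
  .where(and(eq(images.id, imageId), eq(images.owner, owner)));

// Restore from the recycle bin: clear the timestamp
await db.update(images)
  .set({ deletedAt: null })
  .where(and(eq(images.id, imageId), eq(images.owner, owner)));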

Audit Logging:

All mutations are logged to the auditLogs table:

typescript
{
  owner: 'user123',
  action: 'delete',
  resource_type: 'image',
  resource_id: 'img_abc123',
  status: 'success',
  metadata: {
    filename: 'photo.jpg',
    album: 'vacation',
    ip_address: '1.2.3.4'
  }
}

Audit logs support:

  • Filtering by date range, action, resource type
  • CSV export for compliance
  • Permanent retention (never auto-deleted)

Implementation: src/middleware/audit-logger.ts

Deployment Architecture

Pixelflare deploys entirely to Cloudflare's edge network. No traditional servers, load balancers, or databases are required.

Cloudflare Resources

Workers:

  • API Worker - Handles REST API requests
  • Gateway Worker - Routes requests to appropriate services
  • Queue Consumers - Process background jobs

Pages:

  • Frontend - SvelteKit application
  • Docs - VitePress documentation

Storage:

  • R2 Bucket - Object storage for images and variants
  • D1 Database - SQLite at the edge for metadata
  • KV Namespace - Key-value store for caching and rate limiting

Compute:

  • Queues - Message queues for async processing
  • Cron Triggers - Scheduled job execution
  • Workers AI - AI model inference

Networking:

  • Custom Domains - User-configured domains
  • Analytics Engine - Request and bandwidth tracking

Terraform Provisioning

Infrastructure is defined as code using Terraform. The terraform/ directory contains:

  • main.tf - Resource definitions
  • variables.tf - Configuration variables
  • outputs.tf - Generated values (Zone ID, Account ID, etc.)

Deployment:

bash
# Initialize Terraform
terraform init

# Preview changes
terraform plan

# Apply infrastructure
terraform apply

# Generate Wrangler config from outputs
terraform output -json > wrangler.production.json

Terraform provisions:

  • D1 database with schema
  • R2 bucket with CORS rules
  • KV namespace
  • Queues and cron triggers
  • DNS records
  • Access policies

Wrangler Configuration

Each package has two Wrangler configs:

1. Development (wrangler.dev.toml)

  • Uses local D1 database
  • Local KV simulator
  • No authentication (mock mode)
  • Hot reload enabled

2. Production (wrangler.production.toml)

  • Generated from Terraform outputs
  • References live Cloudflare resources
  • Full authentication enabled
  • Custom routes and domains

Deployment commands:

bash
# Deploy API Worker
pnpm --filter @pixflare/api deploy

# Deploy Frontend to Pages
pnpm --filter @pixflare/frontend deploy

# Deploy Gateway Worker
pnpm --filter @pixflare/gateway deploy

Environment Variables

Required environment variables:

bash
# Authentication
CF_ACCESS_AUD=your-audience-id
CF_ACCESS_TEAM_DOMAIN=your-team.cloudflareaccess.com

# Cloudflare
CLOUDFLARE_ZONE_ID=zone-id-for-custom-domains
CLOUDFLARE_API_TOKEN=token-with-dns-permissions

# Security
ENCRYPTION_SECRET=random-secret-for-aes-gcm

# Features
ENABLE_AI_CLASSIFICATION=true
ENABLE_AUDIT_LOGGING=true

Build Pipeline

The monorepo uses Turborepo for efficient builds:

bash
# Build all packages in dependency order
pnpm build

# Run type checking across all packages
pnpm type-check

# Run tests
pnpm test

# Lint all code
pnpm lint

Build outputs:

  • @pixflare/api → Worker JavaScript bundle
  • @pixflare/frontend → Static assets + SvelteKit server
  • @pixflare/gateway → Worker JavaScript bundle
  • @pixflare/docs → Static HTML/CSS/JS

Continuous Deployment

GitHub Actions automates deployment:

  1. On push to main:

    • Run tests and type checks
    • Build all packages
    • Apply Terraform changes
    • Deploy API Worker
    • Deploy Frontend to Pages
    • Deploy Gateway Worker
  2. On pull request:

    • Run tests
    • Deploy preview environment (Pages preview)

Workflow file: .github/workflows/deploy.yml


Summary

Pixelflare demonstrates a modern, edge-first architecture for building production web applications. The system leverages:

  • Serverless compute for automatic scaling and zero operational overhead
  • Edge-native storage (D1, R2, KV) for global low-latency access
  • Queue-based processing for asynchronous operations
  • AI integration for intelligent image analysis
  • Type-safe development with end-to-end TypeScript
  • Infrastructure as code with Terraform

The most technically impressive aspects:

  1. Complete edge deployment - Every component runs on Cloudflare's edge network
  2. Queue-based image processing - Async variant generation with retry logic
  3. Multi-mode authentication - Flexible auth supporting JWT, API keys, and mock mode
  4. AI-powered classification - Automatic tagging and NSFW detection
  5. Encrypted backup system - S3-compatible backup with client-side encryption
  6. Smart gateway routing - Zero-latency service binding-based routing
  7. Token bucket rate limiting - KV-backed rate limiting without external dependencies

The architecture prioritizes developer experience (type safety, fast builds), operational simplicity (serverless, no servers to manage), and end-user performance (edge execution, global CDN).
