Designing a Contact Email Pipeline

Reliability in Form Submissions

10 min read · Full-Stack

Email systems fail silently without proper monitoring. A form submission disappears into the void. The user never knows. You never know. This is unacceptable in production.

Architecture

The flow is simple but each step matters. Let me break down why:

Validation → Rate Limit → Queue → Send → Log & Retry

Input Validation

Validate at the boundary. Never trust client data. Reject invalid requests before they consume resources:

// api/send/route.ts
import { NextRequest, NextResponse } from "next/server";
import { z } from "zod";

const contactSchema = z.object({
  email: z.string().email(),
  subject: z.string().min(1).max(256),
  message: z.string().min(10).max(5000),
  name: z.string().min(1).max(100),
});

export async function POST(request: NextRequest) {
  try {
    const data = await request.json();
    const validated = contactSchema.parse(data);
    return handleValidatedContact(validated);
  } catch (error) {
    if (error instanceof z.ZodError) {
      return NextResponse.json(
        { error: "Validation failed", details: error.errors },
        { status: 400 }
      );
    }
    throw error;
  }
}

Rate Limiting

Prevent abuse. A single IP should not be able to send 1000 emails in one second. Use Redis for distributed rate limiting:

// lib/rateLimit.ts
import { Redis } from "@upstash/redis";

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});

export async function rateLimit(key: string, limit: number, window: number) {
  const current = await redis.incr(key);
  
  if (current === 1) {
    await redis.expire(key, window);
  }
  
  return current <= limit;
}

// In your handler
const clientIp =
  request.headers.get("x-forwarded-for")?.split(",")[0].trim() ?? "unknown";
const allowed = await rateLimit(`contact:${clientIp}`, 5, 3600);

if (!allowed) {
  return NextResponse.json(
    { error: "Too many requests. Try again in 1 hour." },
    { status: 429 }
  );
}
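The INCR-then-EXPIRE pattern is easy to test without a Redis instance. A minimal in-memory equivalent (a hypothetical helper for local testing only; production should use the Redis version above, which works across multiple server instances):

```typescript
// In-memory stand-in for the Redis counter: one bucket per key per window.
type Bucket = { count: number; resetAt: number };
const buckets = new Map<string, Bucket>();

export function rateLimitLocal(
  key: string,
  limit: number,
  windowSeconds: number,
  now: number = Date.now()
): boolean {
  const bucket = buckets.get(key);
  // First request in the window: start a new bucket (mirrors INCR + EXPIRE).
  if (!bucket || now >= bucket.resetAt) {
    buckets.set(key, { count: 1, resetAt: now + windowSeconds * 1000 });
    return true;
  }
  bucket.count += 1;
  return bucket.count <= limit;
}
```

Note the fixed-window semantics in both versions: a client can burst at a window boundary, which is usually acceptable for a contact form.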

Queuing Pattern

Never send email synchronously. Queue it. Let a background worker handle retries and failures:

// api/send/route.ts
export async function POST(request: NextRequest) {
  // ... validation and rate limiting ...

  // Queue the email, don't send synchronously
  await emailQueue.enqueue({
    to: validated.email,
    subject: validated.subject,
    body: validated.message,
    metadata: { ip: clientIp, timestamp: Date.now() },
  });

  // Return immediately
  return NextResponse.json(
    { message: "Message received. We'll be in touch soon." },
    { status: 202 } // 202 Accepted
  );
}
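The `emailQueue` above is assumed infrastructure. A minimal in-memory sketch of the contract the handler and worker rely on (`enqueue` plus batched retrieval) might look like this; a production version would back it with Redis, SQS, or a database table so jobs survive restarts:

```typescript
interface EmailJob {
  to: string;
  subject: string;
  body: string;
  metadata: { ip: string; timestamp: number };
}

// Minimal in-memory queue sketching the contract used by the route handler
// and the background worker. Not durable: jobs are lost on restart.
class InMemoryEmailQueue {
  private jobs: EmailJob[] = [];

  async enqueue(job: EmailJob): Promise<void> {
    this.jobs.push(job);
  }

  // Hand the worker up to `size` jobs at a time.
  async getBatch(size: number): Promise<EmailJob[]> {
    return this.jobs.splice(0, size);
  }

  get length(): number {
    return this.jobs.length;
  }
}
```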

Monitoring and Alerts

Track failures, retries, and delivery status. Alert when something breaks:

// lib/emailQueue.ts
async function processQueue() {
  const batch = await queue.getBatch(10);

  for (const job of batch) {
    try {
      await sendEmail(job);
      await job.complete();
      metrics.increment("email.sent");
    } catch (error) {
      job.incrementRetries();
      
      if (job.retries >= 3) {
        await job.deadLetter();
        await alerting.send(`Email failed: ${job.to}`);
        metrics.increment("email.failed");
      } else {
        await job.retry();
        metrics.increment("email.retry");
      }
    }
  }
}

Key Takeaways

  • Queueing separates concerns and enables retries
  • Rate limiting prevents abuse and protects infrastructure
  • Status codes matter: 202 Accepted is correct here, not 200
  • Monitoring failures prevents silent data loss
  • Dead letter queues catch problems for later investigation

Future Improvements

  • Add tooling to inspect and replay dead-lettered jobs
  • Implement exponential backoff for retries
  • Send user-facing confirmation emails
  • Collect metrics for SLA monitoring
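Exponential backoff, mentioned above, is a small function. A sketch (the base delay and cap are arbitrary choices, not values from this pipeline):

```typescript
// Delay doubles each attempt: 1s, 2s, 4s, ... capped at maxDelayMs.
function backoffDelayMs(
  attempt: number, // 0-based retry count
  baseDelayMs: number = 1_000,
  maxDelayMs: number = 60_000
): number {
  return Math.min(baseDelayMs * 2 ** attempt, maxDelayMs);
}
```

In practice you would also add random jitter to the delay so that a batch of failed jobs does not retry in lockstep against a struggling provider.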