AI-Ready websites have public API endpoints. That is the whole point -- AI agents should be able to use services without human intervention. But public endpoints cut both ways: anyone can call them. Not just helpful AI agents, but also bots, scrapers, and attackers.
If you build an AI-Ready website and do not think about security from day one, you are building an open gate.
The Problem: Public Means Attackable
A classic website has a contact form. Spam protection? reCAPTCHA, honeypot fields, maybe a rate limit. That works because humans fill out forms -- and most bots cannot solve CAPTCHAs.
An AI-Ready website has API endpoints like /api/v1/quote or /api/v1/consultation. These endpoints are designed for machine use. A CAPTCHA would defeat the entire purpose -- AI agents are supposed to access them programmatically.
This creates a dilemma: The door must be open for legitimate agents, but closed for abuse. The solution is not a single mechanism but multiple layers.
Layer 1: Input Validation with Zod
The first line of defense is the simplest: check whether incoming data is valid before it triggers anything.
Zod is a TypeScript validation library that checks data against a defined schema. No magic, no AI -- pure structural validation.
Why Not Just if Statements?
You could. But with complex data structures, this quickly becomes unmanageable and error-prone. Zod enforces a clear structure:
```typescript
import { z } from "zod";

const quoteSchema = z.object({
  projectType: z.enum(["website", "webshop", "webapp", "redesign"]),
  pages: z.string().optional(),
  features: z.string().optional(),
});

// In the API route:
const parsed = quoteSchema.safeParse(body);
if (!parsed.success) {
  return ApiError.validation(
    "Invalid input data",
    parsed.error.issues
  );
}
```
What Zod Specifically Prevents
- Missing required fields: A request without `projectType` is immediately rejected
- Wrong types: A `pages` value of `true` instead of a string is caught
- Unexpected values: A `projectType` of `"hacking"` is blocked by the enum
- Injection attempts: SQL or NoSQL injection through manipulated fields is made harder by strict typing
This sounds basic -- and it is. But the majority of security vulnerabilities arise not from sophisticated attacks, but from missing input validation.
Practice: Validating a Consultation Request
```typescript
const consultationSchema = z.object({
  name: z.string().min(2).max(100),
  email: z.string().email(),
  phone: z.string().optional(),
  topic: z.string().min(5).max(500),
});
```
This schema check ensures:
- Name has at least 2 and at most 100 characters (no empty strings, no novel-length entries)
- Email is a valid email format (no freetext injection)
- Topic has 5-500 characters (no empty submit, no 10MB payload)
Layer 2: Rate Limiting
Validation protects against bad data. Rate limiting protects against too many requests.
How Rate Limiting Works
The principle is simple: count how many requests an IP address makes within a time window. Once the count exceeds the limit, further requests receive an HTTP 429 Too Many Requests response.
```
IP 192.168.1.1:
  Window:  60 seconds
  Limit:   60 requests
  Current: 47
  → Allowed

IP 10.0.0.5:
  Window:  60 seconds
  Limit:   60 requests
  Current: 61
  → Blocked (Retry-After: 23s)
```
Different Limits for Different Endpoints
Not every endpoint needs the same limit:
| Endpoint Type | Limit | Reason |
|---|---|---|
| GET endpoints (Portfolio, Services) | 60/minute per IP | Reading is cheap, higher tolerance |
| POST endpoints (Quote, Consultation) | 10/minute per IP | Writing is expensive, tighter control |
| External-call endpoints (AI-Ready Check) | 10/minute per IP | Make HTTP calls externally, high abuse risk |
Why the difference? A GET endpoint reads data from the database -- that is fast and cheap. A POST endpoint writes data, potentially sends emails, or triggers other actions. And an endpoint that calls external websites (like our AI-Ready Check) could be abused for DDoS attacks against third parties.
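The counting logic described above can be sketched as a fixed-window limiter in a few lines. This is a simplified in-memory version for illustration; the function name `checkRateLimit` is an assumption, and a production setup would typically back the counters with a shared store such as Redis:

```typescript
// Fixed-window rate limiter sketch: one counter per IP per time window.
type RateWindow = { count: number; resetAt: number };

const windows = new Map<string, RateWindow>();

function checkRateLimit(
  ip: string,
  limit: number,
  windowMs: number,
  now: number = Date.now()
): { allowed: boolean; retryAfterMs: number } {
  const w = windows.get(ip);
  if (!w || now >= w.resetAt) {
    // No window yet, or the old one expired: start fresh.
    windows.set(ip, { count: 1, resetAt: now + windowMs });
    return { allowed: true, retryAfterMs: 0 };
  }
  if (w.count < limit) {
    w.count += 1;
    return { allowed: true, retryAfterMs: 0 };
  }
  // Limit exceeded: report how long until the window resets.
  return { allowed: false, retryAfterMs: w.resetAt - now };
}
```

Note the known weakness of fixed windows: a client can burst up to twice the limit across a window boundary. Sliding-window or token-bucket algorithms smooth this out at the cost of slightly more bookkeeping.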
The Response on Rate Limiting
When a limit is reached, the client receives a clear response:
```json
{
  "error": "Rate limit exceeded. Max 60 requests per minute.",
  "retryAfterMs": 23000
}
```
Plus HTTP headers that tell the client (or agent) where things stand:
```http
HTTP/1.1 429 Too Many Requests
Retry-After: 23
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1739654400
```
A well-built AI agent reads these headers and waits automatically. An attacker at least gets a throttle.
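On the client side, that behavior might be sketched as a small helper that derives the wait time from a response (a hypothetical function, not part of the article's codebase):

```typescript
// Sketch: how long should an agent pause after a response?
// Retry-After is specified in seconds (it may also be an HTTP-date,
// which this simplified version treats like a missing header).
function waitTimeMs(status: number, headers: Record<string, string>): number {
  if (status !== 429) return 0; // only back off on rate-limit responses
  const retryAfter = Number(headers["retry-after"]);
  // Fall back to a conservative full window when the header is absent or unparseable.
  return Number.isNaN(retryAfter) ? 60_000 : retryAfter * 1000;
}
```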
Layer 3: Error Sanitization
When something goes wrong -- and something always goes wrong eventually -- the error message must not reveal internal details.
What an Attacker Learns from Error Messages
An unsanitized error could look like this:
```json
{
  "error": "PrismaClientKnownRequestError: Invalid `prisma.client.create()` invocation in /home/user/app/lib/db.ts:47:3",
  "stack": "at Object.create (/home/user/node_modules/@prisma/client/runtime/library.js:123:45)"
}
```
What does this reveal? The ORM (Prisma), the exact file path and line in the code, and the server's directory layout. An attacker now knows where to look.
How We Return Errors
Our API always returns the same format:
```json
{
  "success": false,
  "error": {
    "code": "INTERNAL_ERROR",
    "message": "Internal server error"
  }
}
```
In development mode, developers see the full error message. In production, every caller sees only "Internal server error". No stack trace, no file path, no ORM name.
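That dev/prod split can be sketched as follows. The `sanitizeError` helper and its exact shape are assumptions for illustration; the article's actual `ApiError` implementation may differ:

```typescript
// Sketch: strip internal details from errors before they leave the server.
type ApiErrorBody = {
  success: false;
  error: { code: string; message: string };
};

function sanitizeError(err: Error, isProduction: boolean): ApiErrorBody {
  return {
    success: false,
    error: {
      code: "INTERNAL_ERROR",
      // In production, never echo err.message: it may contain
      // file paths, ORM names, or other internals.
      message: isProduction ? "Internal server error" : err.message,
    },
  };
}
```

The important detail is that the full error is still logged server-side; only the response to the caller is generic.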
The Error Codes
We use six defined error codes:
| Code | HTTP Status | Meaning |
|---|---|---|
| UNAUTHORIZED | 401 | API key missing or invalid |
| FORBIDDEN | 403 | Key lacks required permission |
| RATE_LIMITED | 429 | Too many requests |
| VALIDATION_ERROR | 400 | Invalid input data |
| NOT_FOUND | 404 | Resource not found |
| INTERNAL_ERROR | 500 | Server error (no details) |
An AI agent can read these codes and react accordingly: wait on 429, correct the request on 400, authenticate on 401.
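That dispatch logic can be sketched as a simple mapping (the action names are illustrative, not a standard):

```typescript
// Map the documented error codes to a plausible agent reaction.
type AgentAction = "retry_later" | "fix_request" | "authenticate" | "give_up";

function nextAction(code: string): AgentAction {
  switch (code) {
    case "RATE_LIMITED":
      return "retry_later"; // wait for Retry-After, then resend
    case "VALIDATION_ERROR":
      return "fix_request"; // re-read the schema, correct the payload
    case "UNAUTHORIZED":
      return "authenticate"; // obtain or refresh an API key
    default:
      return "give_up"; // FORBIDDEN, NOT_FOUND, INTERNAL_ERROR: retrying the same request won't help
  }
}
```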
Layer 4: CORS Configuration
CORS (Cross-Origin Resource Sharing) controls who can access the API from where.
For AI-Ready discovery endpoints (like agents.json), CORS must be open -- AI agents come from everywhere. But there are nuances:
Discovery Endpoints (GET only):

```http
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET
```

→ Anyone can read, but only read.
API Endpoints:

```http
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Content-Type, Authorization
```

→ Full access, but only with allowed headers.
The OPTIONS method is for preflight requests -- browsers first ask "May I?" before sending the actual request. We cache this preflight response for 24 hours (Access-Control-Max-Age: 86400) to avoid unnecessary requests.
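Put together, a preflight handler for the API endpoints might return the following headers. The values are taken from the headers shown above; the `preflightHeaders` function itself is a sketch, not the article's actual implementation:

```typescript
// Sketch: headers returned on an OPTIONS preflight request.
function preflightHeaders(): Record<string, string> {
  return {
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
    // Let browsers cache this preflight answer for 24 hours,
    // so repeated requests skip the extra OPTIONS round trip.
    "Access-Control-Max-Age": "86400",
  };
}
```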
Layer 5: What Is Not Exposed
Just as important as securing public endpoints is the question: What stays private?
For us, that includes:
- Internal MCP tools (242 tools for website development -- not public)
- Database IDs and structures (no Prisma IDs in API responses)
- Authentication tokens (never in logs or responses)
- Server configuration (no version numbers, no paths)
- Personal user data (GDPR anyway, but also technically isolated in the API)
This sounds obvious, but it is not. Many APIs return sequential database IDs (`"id": 47`), from which the total number of records can be inferred. Or they log stack traces that are publicly readable through error-tracking tools.
Auth Concepts: Today and Tomorrow
What We Have Today
For the public business endpoints (Portfolio, Services, Quote), there is deliberately no authentication. The reason: these endpoints deliver information that is also publicly visible on the website. A price calculator is not a secret.
For advanced endpoints (Animations API, Analytics), we use API keys with scoped permissions:
```
Bearer sk_live_abc123...
Scopes: animations:read, animations:generate
Rate Limit: 100/min (configurable per key)
```
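A scope check along those lines could be sketched as follows. The data shapes and the `authorize` helper are assumptions; only the status codes and error codes come from the table above:

```typescript
// Sketch: authorize a request against an API key's scoped permissions.
type ApiKey = { token: string; scopes: string[] };

function authorize(
  key: ApiKey | undefined,
  requiredScope: string
): { status: number; code?: string } {
  if (!key) {
    return { status: 401, code: "UNAUTHORIZED" }; // no key at all
  }
  if (!key.scopes.includes(requiredScope)) {
    return { status: 403, code: "FORBIDDEN" }; // key valid, permission missing
  }
  return { status: 200 };
}
```

The distinction matters for agents: a 401 means "get a key", a 403 means "this key will never work here", so retrying with the same credentials is pointless.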
What the Future Holds
As A2A and similar protocols gain broader adoption, the question of agent authentication becomes central. How does an AI agent prove it is acting on behalf of a specific user?
Possible approaches:
- OAuth 2.0 Token Relay: The agent receives a token from the user and forwards it
- Agent Identity Certificates: Digital certificates for verified agents
- DID (Decentralized Identifiers): Self-sovereign identities for agents
As of today, none of these approaches are standardized for agent communication. But the foundations (OAuth, certificates) exist -- what is missing is standardization in the A2A context.
The Golden Rule: Security Is Not a Feature, It Is a Prerequisite
The most common mistake with AI-Ready websites: build the endpoints first, secure them "later." In our experience, "later" never comes -- or comes only after the first incident.
Our Checklist for Every New Endpoint
- [ ] Zod schema defined?
- [ ] Rate limit configured (GET: 60/min, POST: 10/min)?
- [ ] Error responses sanitized (no stack traces)?
- [ ] CORS headers set?
- [ ] No database IDs in the response?
- [ ] No personal data without auth?
- [ ] Logging without sensitive data?
This checklist is not overhead. It is the difference between an endpoint that is useful and one that becomes a problem.
Summary
| Layer | Protects Against | Mechanism |
|---|---|---|
| Input Validation | Wrong/manipulated data | Zod schemas with strict types |
| Rate Limiting | Overload and brute force | IP-based counters with time windows |
| Error Sanitization | Information leakage | Generic error messages in production |
| CORS | Unauthorized cross-origin access | Header-based access control |
| Auth (API Keys) | Unauthorized access to premium endpoints | Bearer tokens with scoped permissions |
| Non-Exposure | Accidental data disclosure | Conscious decisions about what stays private |
Security for AI-Ready websites is not a specialty topic. It follows the same fundamental principles as any API -- validation, rate limiting, clean error handling. The difference is that the endpoints are designed for machine access, which makes the attack surface larger from day one.
Those who think about this from the start have no problem. Those who retrofit it have work to do.
