
Top 7 Security Flaws Cursor AI Writes in Next.js 15

Vibe-coding is fast but often sacrifices security. The seven vulnerabilities we find most often in Next.js apps generated by Cursor AI, and how to fix them.

March 18, 2026 · VibeShield Team · 9 min read

Vibe-coding has revolutionized how fast we can ship web applications. Tools like Cursor, v0, and Lovable allow startup founders to build entire SaaS products in weekends. But speed comes with a cost: security blind spots.

Why Cursor Specifically?

Cursor is the dominant AI coding assistant for professional developers as of 2026. Unlike chat-based tools, Cursor has deep IDE integration — it sees your full codebase, suggests multi-file changes, and can complete entire features in one shot. That makes it uniquely powerful and uniquely risky.

Cursor's code generation is optimized for the most common patterns in its training data. When you ask Cursor to "add a delete endpoint," it generates code that works. It will scaffold the route, wire up the database call, and return the right HTTP status codes. What it won't do — unless you explicitly ask — is add authentication checks, ownership verification, rate limiting, or input validation.

This isn't a bug in Cursor. It's a consequence of how LLMs learn: from code that was written to work, not code that was written to be secure. The responsibility for security review falls on the developer.

Here are the seven security flaws we see most often in Cursor-generated Next.js 15 applications.


Flaw 1: Missing Authorization in Server Actions

Server Actions in Next.js 14/15 are incredibly convenient. They feel like frontend code, and Cursor generates them fluently:

// Generated by Cursor — looks fine, works fine, is dangerous
"use server"
 
export async function deletePost(postId: string) {
  await db.post.delete({ where: { id: postId } });
  revalidatePath("/dashboard");
}

The problem is that Server Actions are exposed as HTTP POST endpoints. An attacker can open the Network tab, find the action endpoint (identified by the Next-Action header), and send arbitrary requests to it with any postId. There's no session check, no ownership check — anyone can delete any post.

The Complete Fix

"use server"
import { auth } from "@/auth";
 
export async function deletePost(postId: string) {
  // Step 1: Verify the user is authenticated
  const session = await auth();
  if (!session?.user?.id) {
    throw new Error("Unauthorized");
  }
 
  // Step 2: Verify the user owns this post (IDOR prevention)
  const post = await db.post.findUnique({
    where: { id: postId },
    select: { authorId: true }
  });
 
  if (!post || post.authorId !== session.user.id) {
    throw new Error("Forbidden");
  }
 
  await db.post.delete({ where: { id: postId } });
  revalidatePath("/dashboard");
}

Blanket Protection with Middleware

For pages, you can use middleware to require authentication. But note that middleware.ts does not protect Server Actions or API routes — it only protects page routes:

// middleware.ts — protects pages but NOT API routes or Server Actions
import { auth } from "@/auth";
 
export default auth((req) => {
  if (!req.auth && req.nextUrl.pathname.startsWith("/dashboard")) {
    return Response.redirect(new URL("/login", req.url));
  }
});
 
export const config = {
  matcher: ["/dashboard/:path*", "/settings/:path*"],
};

Every Server Action and API route must have its own auth() call. Middleware cannot substitute for per-route authorization.
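One way to make that per-action check hard to forget is a small wrapper. This is an illustrative sketch, not a Next.js API: the `Session` shape and the injected `getSession` function are assumptions you would adapt to your auth library.

```typescript
// Illustrative wrapper (not a Next.js API): forces an auth check before
// any action body runs. Session shape and getSession are assumptions.
type Session = { userId: string } | null;

function withAuth<A extends unknown[], R>(
  getSession: () => Promise<Session>,
  action: (userId: string, ...args: A) => Promise<R>
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    const session = await getSession();
    if (!session) throw new Error("Unauthorized"); // fail closed
    return action(session.userId, ...args);
  };
}

// Usage: the action body never runs without a session.
const deletePost = withAuth(
  async () => ({ userId: "user_1" }), // stub session provider for the demo
  async (userId, postId: string) => `${userId} deleted ${postId}`
);
```

The win is that every action wrapped this way fails closed by default; you can no longer ship an action that forgot the session check.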


Flaw 2: Exposed Database Secrets in Client Components

Cursor knows about NEXT_PUBLIC_ environment variables and will use them to make prototypes work quickly:

// Generated by Cursor to "connect React component to Firebase"
import { initializeApp } from "firebase/app";
 
const app = initializeApp({
  apiKey: process.env.NEXT_PUBLIC_FIREBASE_API_KEY,
  projectId: process.env.NEXT_PUBLIC_FIREBASE_PROJECT_ID,
  // ...
});

Firebase web API keys are actually designed to be public (they're restricted by domain in the Firebase console). But Cursor applies the same pattern to keys that are emphatically not public:

// Cursor-generated — catastrophically wrong
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_SERVICE_ROLE_KEY!  // ❌ bypasses all RLS
);

The service_role key bypasses all Row-Level Security policies. Exposing it in the client bundle gives any user unrestricted database access.

The Fix

# Safe to expose (public by design):
NEXT_PUBLIC_SUPABASE_URL=https://xxx.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGci...  # restricted by RLS
 
# NEVER expose these:
SUPABASE_SERVICE_ROLE_KEY=eyJhbGci...  # no NEXT_PUBLIC_ prefix
OPENAI_API_KEY=sk-proj-...
STRIPE_SECRET_KEY=sk_live_...

For a thorough treatment of all the patterns where keys end up in bundles, see How Exposed API Keys End Up in JS Bundles.
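You can also catch these leaks before they ship with a small pre-build check that flags suspicious `NEXT_PUBLIC_` names. This is a sketch: the `SECRET_HINTS` and `SAFE_EXCEPTIONS` lists are assumptions you should extend for your own stack.

```typescript
// Pre-build sanity check (a sketch): flag NEXT_PUBLIC_ variables whose
// names suggest they are secrets. Both lists are assumptions to extend.
const SECRET_HINTS = ["SECRET", "SERVICE_ROLE", "PRIVATE", "API_KEY"];
const SAFE_EXCEPTIONS = ["NEXT_PUBLIC_FIREBASE_API_KEY"]; // public by design

function flagUnsafePublicVars(envKeys: string[]): string[] {
  return envKeys.filter(
    (key) =>
      key.startsWith("NEXT_PUBLIC_") &&
      !SAFE_EXCEPTIONS.includes(key) &&
      SECRET_HINTS.some((hint) => key.includes(hint))
  );
}

// Example: run against process.env keys in a prebuild npm script
console.log(flagUnsafePublicVars(Object.keys(process.env)));
```

Wire it into your `prebuild` script so a flagged name fails the build instead of reaching a bundle.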


Flaw 3: Server-Side Request Forgery (SSRF)

When you ask Cursor to "build a feature that fetches a URL summary" or "create an image proxy endpoint," it almost always generates an SSRF-vulnerable implementation:

// Cursor's typical output for "fetch URL preview"
export async function GET(req: Request) {
  const url = new URL(req.url).searchParams.get("url");
  const res = await fetch(url!);  // ❌ no validation
  return Response.json(await res.json());
}

This gives any attacker access to every service reachable from your server's network — including AWS metadata endpoints, internal databases, and admin panels.

The Complete Fix with DNS Pre-Validation

A naive hostname blocklist is insufficient: an attacker can register a DNS record that resolves to an internal IP. The correct fix resolves DNS first and checks the resulting addresses:

import dns from "dns/promises";
import { URL } from "url";
 
const PRIVATE_IP_PATTERNS = [
  /^10\.\d+\.\d+\.\d+$/,
  /^172\.(1[6-9]|2\d|3[01])\.\d+\.\d+$/,
  /^192\.168\.\d+\.\d+$/,
  /^127\.\d+\.\d+\.\d+$/,
  /^169\.254\.\d+\.\d+$/,
  /^::1$/,
  /^fc[0-9a-f]{2}:/i,
  /^fe80:/i,
];
 
async function isSafeUrl(rawUrl: string): Promise<{ safe: boolean; reason?: string }> {
  let parsed: URL;
  try {
    parsed = new URL(rawUrl);
  } catch {
    return { safe: false, reason: "Invalid URL" };
  }
 
  if (!["http:", "https:"].includes(parsed.protocol)) {
    return { safe: false, reason: "Protocol not allowed" };
  }
 
  let addresses: string[] = [];
  try {
    addresses = await dns.resolve4(parsed.hostname);
  } catch { /* no A records */ }
  try {
    addresses = addresses.concat(await dns.resolve6(parsed.hostname));
  } catch { /* no AAAA records */ }
  if (addresses.length === 0) {
    return { safe: false, reason: "DNS resolution failed" };
  }
 
  for (const ip of addresses) {
    if (PRIVATE_IP_PATTERNS.some(pattern => pattern.test(ip))) {
      return { safe: false, reason: "Private IP range blocked" };
    }
  }
 
  return { safe: true };
}
 
export async function GET(req: Request) {
  const url = new URL(req.url).searchParams.get("url");
  if (!url) return new Response("Missing url", { status: 400 });
 
  const check = await isSafeUrl(url);
  if (!check.safe) {
    return new Response(check.reason ?? "Forbidden", { status: 403 });
  }
 
  const response = await fetch(url, { signal: AbortSignal.timeout(5000) });
  return Response.json({ text: await response.text() });
}

For a comprehensive SSRF guide including blind SSRF, real-world incidents, and framework-specific fixes, see SSRF from ChatGPT and Claude.


Flaw 4: No Rate Limiting on Sensitive Endpoints

Cursor-generated login and registration routes almost never include rate limiting. This leaves apps vulnerable to credential stuffing (trying breached username/password pairs at scale) and brute force attacks.

// Cursor-generated login — no rate limiting
export async function POST(req: Request) {
  const { email, password } = await req.json();
  const user = await signIn("credentials", { email, password });
  return Response.json(user);
}

The Full Upstash Redis Rate Limit Solution

import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";
import { signIn } from "@/auth"; // adjust to your auth setup
 
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(5, "15 m"),  // 5 attempts per 15 minutes
  analytics: true,
  prefix: "login_rate_limit",
});
 
export async function POST(req: Request) {
  // Rate limit by IP address
  const ip = req.headers.get("x-forwarded-for")?.split(",")[0].trim()
    ?? req.headers.get("x-real-ip")
    ?? "unknown";
 
  const { success, limit, remaining, reset } = await ratelimit.limit(ip);
 
  if (!success) {
    return new Response(
      JSON.stringify({ error: "Too many login attempts. Try again later." }),
      {
        status: 429,
        headers: {
          "Content-Type": "application/json",
          "X-RateLimit-Limit": limit.toString(),
          "X-RateLimit-Remaining": "0",
          "X-RateLimit-Reset": reset.toString(),
          "Retry-After": Math.ceil((reset - Date.now()) / 1000).toString(),
        }
      }
    );
  }
 
  // Proceed with authentication
  const { email, password } = await req.json();
  try {
    const result = await signIn("credentials", { email, password, redirect: false });
    return Response.json({ success: true });
  } catch (error) {
    return new Response(
      JSON.stringify({ error: "Invalid credentials" }),
      { status: 401, headers: { "Content-Type": "application/json" } }
    );
  }
}

Apply rate limiting to: login, registration, password reset, email verification resend, and any AI-powered endpoints that have per-request costs.
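If you want the same behavior in local development without Redis, a minimal in-memory fixed-window limiter is enough. This is a dev-only sketch: process memory is not shared across serverless instances, so it must not be your production limiter.

```typescript
// Dev-only sketch: fixed-window limiter held in process memory.
// Not suitable for serverless production, where instances do not share state.
class MemoryRateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now }); // start a new window
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}

// 5 attempts per 15 minutes, mirroring the Upstash config above
const limiter = new MemoryRateLimiter(5, 15 * 60 * 1000);
```

Swap it for the Upstash limiter behind a common interface so dev and prod share the same call sites.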


Flaw 5: Weak Supabase RLS Policies

When Cursor scaffolds a Supabase backend, it often generates permissive policies to make the app work quickly:

-- Cursor-generated "working" policy
CREATE POLICY "Enable read access for all users"
ON public.posts
FOR SELECT USING (true);  -- ❌ anyone, even unauthenticated clients, reads all rows

This is one of the most dangerous patterns because it looks reasonable — it's syntactically correct, the app functions properly during development, and the problem only becomes apparent in production when users start seeing each other's data.

The Fix

-- Each user only sees their own posts
CREATE POLICY "Users read own posts"
ON public.posts FOR SELECT
TO authenticated
USING (auth.uid() = author_id);
 
-- Only post owner can update
CREATE POLICY "Users update own posts"
ON public.posts FOR UPDATE
TO authenticated
USING (auth.uid() = author_id)
WITH CHECK (auth.uid() = author_id);

For the complete RLS guide including multi-tenant patterns and auditing queries, see How to Secure Supabase RLS.


Flaw 6: Missing CSRF Protection in API Routes

Next.js App Router API routes don't include CSRF protection by default. Cursor-generated routes often accept form submissions or JSON from any origin:

// Vulnerable to CSRF — no origin check
export async function POST(req: Request) {
  const { action } = await req.json();
  await performSensitiveAction(action);
  return Response.json({ success: true });
}

An attacker can host a malicious website that submits requests to your API routes using the victim's browser cookies.

The Fix

export async function POST(req: Request) {
  // Check Origin header against allowed origins
  const origin = req.headers.get("origin");
  const allowedOrigins = [
    process.env.NEXT_PUBLIC_APP_URL,
    "https://yourdomain.com",
  ].filter(Boolean);
 
  if (!origin || !allowedOrigins.includes(origin)) {
    return new Response("Forbidden", { status: 403 });
  }
 
  // For session-based requests, also verify the session
  const session = await auth();
  if (!session) return new Response("Unauthorized", { status: 401 });
 
  // proceed
}

For Server Actions, Next.js 14+ includes built-in CSRF protection via the Origin header check — but only for form submissions, not JSON requests.
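If you need coverage for JSON endpoints as well, a session-bound token is a common complement to the Origin check. The sketch below uses `node:crypto`; secret management and how you derive `sessionId` are assumptions left to your app.

```typescript
import { createHmac, randomBytes, timingSafeEqual } from "node:crypto";

// Double-submit-style CSRF token bound to the session (illustrative sketch).
// Issue the token to the client; require it back on state-changing requests.
function issueCsrfToken(secret: string, sessionId: string): string {
  const nonce = randomBytes(16).toString("hex");
  const mac = createHmac("sha256", secret)
    .update(`${sessionId}.${nonce}`)
    .digest("hex");
  return `${nonce}.${mac}`;
}

function verifyCsrfToken(secret: string, sessionId: string, token: string): boolean {
  const [nonce, mac] = token.split(".");
  if (!nonce || !mac) return false;
  const expected = createHmac("sha256", secret)
    .update(`${sessionId}.${nonce}`)
    .digest("hex");
  const a = Buffer.from(mac);
  const b = Buffer.from(expected);
  // Constant-time comparison to avoid timing side channels
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Because the MAC is bound to the session ID, a token stolen from one session is useless in another.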


Flaw 7: Unsafe Redirects (Open Redirect)

Cursor often generates redirect-after-login flows that use a callbackUrl query parameter without validation:

// Cursor-generated auth callback — open redirect
export async function GET(req: Request) {
  const callbackUrl = new URL(req.url).searchParams.get("callbackUrl");
  // ... authenticate user ...
  return Response.redirect(callbackUrl ?? "/dashboard");  // ❌ no validation
}

An attacker can craft a link: https://yourapp.com/api/auth/callback?callbackUrl=https://evil.com. After login, the user is redirected to the attacker's site — which can be a phishing page that looks identical to yours.

The Fix

function isSafeRedirect(url: string, baseUrl: string): boolean {
  try {
    const parsed = new URL(url, baseUrl);
    const base = new URL(baseUrl);
    // Only allow same-origin redirects
    return parsed.origin === base.origin;
  } catch {
    return false;
  }
}
 
export async function GET(req: Request) {
  const baseUrl = process.env.NEXT_PUBLIC_APP_URL!;
  const callbackUrl = new URL(req.url).searchParams.get("callbackUrl") ?? "/dashboard";
 
  const redirectTo = isSafeRedirect(callbackUrl, baseUrl)
    ? callbackUrl
    : "/dashboard";
 
  return Response.redirect(new URL(redirectTo, baseUrl));
}
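The protocol-relative form (`//evil.com/...`) deserves a dedicated test: browsers resolve it against the current scheme, so naive `startsWith("/")` checks wave it through. A few quick checks, with the function reproduced from the fix above so the snippet runs standalone:

```typescript
// isSafeRedirect reproduced from the fix above for a standalone check
function isSafeRedirect(url: string, baseUrl: string): boolean {
  try {
    const parsed = new URL(url, baseUrl);
    return parsed.origin === new URL(baseUrl).origin;
  } catch {
    return false;
  }
}

const base = "https://yourapp.com";
console.log(isSafeRedirect("/dashboard", base));         // relative path: allowed
console.log(isSafeRedirect("https://evil.com/a", base)); // cross-origin: blocked
console.log(isSafeRedirect("//evil.com/a", base));       // protocol-relative: blocked
```

Comparing resolved origins, rather than inspecting the string, is what makes the protocol-relative case fall out correctly.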

How to Audit a Cursor-Generated Codebase

Run these searches to quickly identify the highest-priority issues:

# 1. Server Actions without auth() calls
grep -rl '"use server"' ./app --include="*.ts" --include="*.tsx" | \
  xargs grep -L "auth()\|getSession\|currentUser"
 
# 2. fetch() calls with user-controlled URLs
grep -rn "fetch(.*params\.\|fetch(.*body\.\|fetch(.*req\." ./app/api
 
# 3. Environment variables that should NOT be NEXT_PUBLIC_
grep -rn "NEXT_PUBLIC_.*SUPABASE_SERVICE\|NEXT_PUBLIC_.*SECRET\|NEXT_PUBLIC_.*PRIVATE" .env*
 
# 4. API routes missing auth
find ./app/api -name "route.ts" | xargs grep -L "auth()\|getServerSession"
 
# 5. Supabase policies with USING (true)
# Run in Supabase SQL editor:
# SELECT tablename, policyname FROM pg_policies WHERE qual = 'true';

FAQ

Does Cursor have a security mode or setting I should enable?

Cursor doesn't have a dedicated security mode, but you can add security requirements to your project's .cursorrules file or system prompt. Something like: "Every API route and Server Action must call auth() and verify ownership before database operations. Never use NEXT_PUBLIC_ prefix for API keys." This significantly improves output quality, though manual review is still necessary.

Are these flaws specific to Cursor, or do other AI tools have the same problems?

The same patterns appear in code generated by GitHub Copilot, v0, Lovable, Bolt, and direct ChatGPT/Claude usage. The underlying cause is the same: LLMs trained on code that prioritizes functionality. Cursor gets the spotlight because it's the most widely used for production app development, so we see more examples of its output.

How often should I run security scans on a vibe-coded app?

Run an automated scan before every significant deployment — at minimum when you add new API routes or integrate new services. For apps in active development, integrate VibeShield into your CI/CD pipeline so every deployment gets checked automatically.

If I use TypeScript strictly, does that help?

TypeScript prevents a narrow class of injection vulnerabilities (type errors catch obvious string-where-number issues) and eliminates many null dereference bugs. But it doesn't prevent missing auth checks, SSRF, or IDOR — those are logical vulnerabilities, not type errors. TypeScript is a necessary but insufficient security tool.


AI is incredible for velocity, but it shouldn't replace your security posture. Always conduct a thorough review or use dynamic tools like VibeShield to catch these specific vibe-coded vulnerabilities before an attacker does.

Scan your Next.js app →
