I Audited Vibe-Coded Applications: Here Are the Security Nightmares I Found

In February 2025, Andrej Karpathy casually coined a term that would define one of the biggest debates in modern software development: "vibe coding." The idea is simple — you describe what you want, an AI generates the code, and you accept it without fully understanding or reviewing what it produced. You go with the vibes.

The concept resonated so deeply that it now has its own Wikipedia page. Ironically, Karpathy himself has since suggested retiring the term in favor of "agentic engineering" — a more precise description of AI-assisted development done responsibly. But the genie is out of the bottle. Developers everywhere are shipping AI-generated code at unprecedented speed.

The question nobody was asking loudly enough was: how secure is that code? I decided to find out. I audited multiple vibe-coded applications — projects built primarily by accepting AI-generated code with minimal human review. What I found was deeply concerning.

The Numbers Are Alarming

Before diving into my own findings, let us look at what the research says. The data paints a troubling picture of AI-generated code security.

A comprehensive study by Black Duck found that AI co-authored code contains 75% more misconfigurations than code written entirely by humans. These are not minor style issues — they are configuration errors that directly impact security posture.

Research published on Towards Data Science revealed that AI-generated code has a 2.74x higher rate of security vulnerabilities compared to human-written code. That means for every vulnerability a human developer introduces, an AI coding assistant introduces nearly three.

Perhaps most striking: 24.7% of AI-generated code contains at least one security flaw. That is roughly one in four code suggestions shipping with a vulnerability baked in. When you consider how many AI suggestions developers accept per day, the scale of the problem becomes staggering.

These are not hypothetical risks. They are measurable, reproducible patterns showing up across languages, frameworks, and AI models.

The Top 5 Security Nightmares

During my audit, I encountered the same categories of vulnerabilities over and over again. Here are the five most common and most dangerous patterns in vibe-coded applications.

1. SQL Injection — The Classic That AI Refuses to Learn

SQL injection has been a known vulnerability for over two decades. It is in every security textbook, every OWASP list, every beginner tutorial. And yet, AI consistently generates code that is vulnerable to it.

Here is what AI-generated code frequently looks like:

insecure-query.js
// INSECURE: AI-generated code with SQL injection vulnerability
app.get('/api/users', async (req, res) => {
  const { search } = req.query;

  // AI builds the query using string concatenation
  const query = `SELECT * FROM users WHERE name LIKE '%${search}%'`;
  const results = await db.query(query);

  res.json(results);
});

// An attacker sends: ?search=' OR '1'='1' --
// The query becomes: SELECT * FROM users WHERE name LIKE '%' OR '1'='1' --%'
// Result: The entire users table is dumped

The AI generates string concatenation because that is what a huge percentage of its training data contains. Stack Overflow answers from 2010, tutorial blogs, quick-start guides — they all use string concatenation because it is simpler to explain.
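You can reproduce the corruption with nothing more than string formatting. A minimal Python illustration of the query the server ends up executing:

```python
# The attacker's input is spliced directly into the SQL string,
# exactly as string concatenation would do on the server
payload = "' OR '1'='1' --"
query = f"SELECT * FROM users WHERE name LIKE '%{payload}%'"
print(query)
# SELECT * FROM users WHERE name LIKE '%' OR '1'='1' --%'
```

The quote in the payload closes the string literal early, the `OR '1'='1'` makes the predicate always true, and the `--` comments out the trailing `%'`.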

The secure version uses parameterized queries:

secure-query.js
// SECURE: Parameterized query prevents SQL injection
app.get('/api/users', async (req, res) => {
  const { search } = req.query;

  // Parameters are escaped automatically by the database driver
  const query = 'SELECT * FROM users WHERE name LIKE $1';
  const results = await db.query(query, [`%${search}%`]);

  res.json(results);
});

For ORMs, the same problem appears in raw query methods:

orm-injection.py
# INSECURE: AI-generated raw SQL in an ORM context
def search_products(search_term):
    # AI interpolates user input with an f-string: a classic injection vector
    return Product.objects.raw(
        f"SELECT * FROM products WHERE name ILIKE '%{search_term}%'"
    )

# SECURE (Option 1): use the ORM's query methods
def search_products(search_term):
    return Product.objects.filter(name__icontains=search_term)

# SECURE (Option 2): if raw SQL is unavoidable, pass parameters
def search_products_raw(search_term):
    return Product.objects.raw(
        "SELECT * FROM products WHERE name ILIKE %s",
        [f"%{search_term}%"]
    )
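Parameterization is not framework-specific. A self-contained sketch using Python's stdlib sqlite3 (with an illustrative table and data) shows the payload from earlier being treated as inert text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

payload = "' OR '1'='1"  # the classic injection attempt

# With a bound parameter, the payload is matched as a literal substring,
# never interpreted as SQL
rows = conn.execute(
    "SELECT * FROM users WHERE name LIKE ?", (f"%{payload}%",)
).fetchall()
print(rows)  # [] -- no name contains the payload string
```

The driver never splices the payload into the SQL text, so there is nothing for the quote characters to break out of.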

2. Hardcoded Secrets — AI Puts Your Keys in Plain Sight

This is the most embarrassingly common flaw in AI-generated code. When you ask an AI to integrate with an API, it almost always puts the credentials directly in the source code.

hardcoded-secrets.js
// INSECURE: AI-generated code with hardcoded secrets
const stripe = require('stripe')('sk_live_4eC39HqLyjWDarjtT1zdp7dc');

const AWS = require('aws-sdk');
AWS.config.update({
  accessKeyId: 'AKIAIOSFODNN7EXAMPLE',
  secretAccessKey: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
  region: 'us-east-1'
});

const db = mysql.createConnection({
  host: 'production-db.company.com',
  user: 'admin',
  password: 'SuperSecret123!',
  database: 'users'
});

I found live API keys in three out of five vibe-coded applications I audited. One application had a Stripe secret key committed to a public GitHub repository. Another had AWS root credentials — not IAM, root — embedded in a frontend JavaScript file that was served to every visitor.

Secrets should always come from environment variables or a secrets manager:

secure-secrets.js
// SECURE: Load secrets from environment variables
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);

const AWS = require('aws-sdk');
// AWS SDK automatically reads from environment or IAM roles
// No credentials in code at all

const db = mysql.createConnection({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
});

// Validate that required env vars exist at startup
const required = ['STRIPE_SECRET_KEY', 'DB_HOST', 'DB_USER', 'DB_PASSWORD'];
for (const envVar of required) {
  if (!process.env[envVar]) {
    console.error(`Missing required environment variable: ${envVar}`);
    process.exit(1);
  }
}

Your .gitignore should also prevent accidental commits:

.gitignore
# .gitignore — always include these
.env
.env.local
.env.production
*.pem
*.key
credentials.json
service-account.json
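If you load a local .env file in development, a dedicated library (dotenv for Node, python-dotenv for Python) is the right tool; still, the parsing involved is simple enough to sketch in a few stdlib lines, which helps demystify what those libraries do:

```python
import os

def load_dotenv_text(text: str) -> None:
    """Minimal .env parser: KEY=VALUE lines; blanks and '#' comments ignored.

    A sketch only. Real projects should use python-dotenv or a secrets manager.
    """
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault: real environment variables take precedence over the file
        os.environ.setdefault(key.strip(), value.strip())

load_dotenv_text("EXAMPLE_DB_HOST=localhost\n# a comment\nEXAMPLE_DB_USER=app")
print(os.environ["EXAMPLE_DB_HOST"])  # localhost
```

The `setdefault` call is the important design choice: values injected by the deployment environment always win over whatever is in the file.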

3. Missing Input Validation — AI Trusts Everything

AI-generated code almost never validates input. It assumes every request is well-formed, every user is honest, and every payload conforms to expectations. This leads to a cascade of vulnerabilities.

insecure-transfer.js
// INSECURE: AI-generated API endpoint with zero validation
app.post('/api/transfer', async (req, res) => {
  const { fromAccount, toAccount, amount } = req.body;

  // No validation at all — AI trusts the client completely
  await db.query(
    'UPDATE accounts SET balance = balance - $1 WHERE id = $2',
    [amount, fromAccount]
  );
  await db.query(
    'UPDATE accounts SET balance = balance + $1 WHERE id = $2',
    [amount, toAccount]
  );

  res.json({ success: true });
});

// Problems:
// - amount could be negative (reverse the transfer)
// - amount could be zero or a string
// - fromAccount could belong to someone else
// - No check if fromAccount has sufficient balance
// - No transaction — partial failure leaves inconsistent state

A properly validated version looks very different:

secure-transfer.js
// SECURE: Comprehensive input validation and business logic checks
import { z } from 'zod';

const TransferSchema = z.object({
  fromAccount: z.string().uuid(),
  toAccount: z.string().uuid(),
  amount: z.number().positive().max(1_000_000).multipleOf(0.01),
});

app.post('/api/transfer', authenticate, async (req, res) => {
  // 1. Validate input shape and types
  const parsed = TransferSchema.safeParse(req.body);
  if (!parsed.success) {
    return res.status(400).json({
      error: 'Invalid input',
      details: parsed.error.issues
    });
  }

  const { fromAccount, toAccount, amount } = parsed.data;

  // 2. Verify ownership — user can only transfer from their own account
  const account = await db.query(
    'SELECT * FROM accounts WHERE id = $1 AND owner_id = $2',
    [fromAccount, req.user.id]
  );
  if (!account.rows.length) {
    return res.status(403).json({ error: 'Account not found or unauthorized' });
  }

  // 3. Prevent self-transfer
  if (fromAccount === toAccount) {
    return res.status(400).json({ error: 'Cannot transfer to the same account' });
  }

  // 4. Use a database transaction for atomicity
  const client = await db.connect();
  try {
    await client.query('BEGIN');

    // 5. Check sufficient balance with row lock
    const balanceCheck = await client.query(
      'SELECT balance FROM accounts WHERE id = $1 FOR UPDATE',
      [fromAccount]
    );
    if (balanceCheck.rows[0].balance < amount) {
      await client.query('ROLLBACK');
      return res.status(400).json({ error: 'Insufficient balance' });
    }

    await client.query(
      'UPDATE accounts SET balance = balance - $1 WHERE id = $2',
      [amount, fromAccount]
    );
    await client.query(
      'UPDATE accounts SET balance = balance + $1 WHERE id = $2',
      [amount, toAccount]
    );

    await client.query('COMMIT');
    res.json({ success: true });
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
});

The difference is stark. The AI-generated version is about ten lines of logic. The secure version is more than fifty. That gap represents every attack vector the AI did not think about.
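The validation layer itself is small. For comparison, the same shape-and-range checks can be sketched in plain Python with the stdlib; the field names here mirror the hypothetical transfer endpoint above:

```python
from uuid import UUID

def validate_transfer(payload: dict) -> list[str]:
    """Return all validation errors for a transfer request (empty list = valid)."""
    errors = []
    for field in ("fromAccount", "toAccount"):
        try:
            UUID(str(payload.get(field)))
        except ValueError:
            errors.append(f"{field} must be a valid UUID")
    amount = payload.get("amount")
    # bool is a subclass of int in Python, so reject it explicitly
    if not isinstance(amount, (int, float)) or isinstance(amount, bool):
        errors.append("amount must be a number")
    elif not (0 < amount <= 1_000_000):
        errors.append("amount must be positive and at most 1,000,000")
    elif round(amount, 2) != amount:
        errors.append("amount must have at most two decimal places")
    if payload.get("fromAccount") == payload.get("toAccount"):
        errors.append("cannot transfer to the same account")
    return errors

print(validate_transfer({"fromAccount": "x", "toAccount": "x", "amount": -5}))
```

Collecting all errors, rather than failing on the first, gives clients actionable feedback without leaking internals.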

4. Insecure Dependencies — AI Imports Yesterday's Vulnerabilities

When AI generates code, it recommends packages based on training data that may be months or years out of date. This means it frequently suggests packages with known CVEs, deprecated APIs, or abandoned projects.

insecure-package.json
// INSECURE: AI-suggested dependencies with known vulnerabilities
{
  "dependencies": {
    "lodash": "4.17.15",       // CVE-2020-28500: ReDoS vulnerability
    "minimist": "1.2.5",       // CVE-2021-44906: Prototype pollution
    "node-fetch": "2.6.1",     // CVE-2022-0235: Credential exposure
    "jsonwebtoken": "8.5.1",   // CVE-2022-23529: Insecure key handling
    "express": "4.17.1",       // Multiple unpatched vulnerabilities
    "tar": "4.4.13"            // CVE-2021-32803: Path traversal
  }
}

Every one of these versions has published CVEs. The AI suggests them because they were the latest stable versions when the training data was collected. The developer who accepts the suggestion without checking inherits every vulnerability.

Always audit dependencies and pin to secure versions:

dependency-audit.sh
# Run security audits regularly
npm audit
npm audit fix

# Use automated tools in CI/CD
npx audit-ci --high

# Check for outdated packages
npm outdated

# Use lockfile maintenance
npm ci  # Install from lockfile exactly

# Pin exact versions to prevent supply-chain attacks
npm config set save-exact true

For Python projects, the same problem applies:

requirements.txt
# INSECURE: AI-generated requirements with outdated packages
flask==1.1.2           # CVE-2023-30861: Session cookie vulnerability
requests==2.25.1       # Older version, missing security patches
pyyaml==5.3.1          # CVE-2020-14343: Arbitrary code execution
pillow==8.0.1          # Multiple CVEs for image parsing
django==3.1.0          # End of life, unpatched vulnerabilities

# SECURE: Use pip-audit and pin updated versions
# pip install pip-audit
# pip-audit
# pip install --upgrade flask requests pyyaml pillow django
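A lightweight complement to pip-audit is checking that everything is actually pinned. A stdlib sketch that flags any requirement line without an exact `==` pin (a heuristic, not a full requirements parser):

```python
import re

def find_unpinned(requirements_text: str) -> list[str]:
    """Return requirement lines not pinned to an exact version with '=='."""
    unpinned = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if not line:
            continue
        if not re.search(r"==[\w.]+", line):
            unpinned.append(line)
    return unpinned

reqs = """\
flask==2.3.3
requests>=2.25   # a range, not a pin
pyyaml
"""
print(find_unpinned(reqs))  # ['requests>=2.25', 'pyyaml']
```

Range specifiers and bare names resolve to whatever is newest at install time, which makes builds non-reproducible and widens the supply-chain attack surface.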

5. Broken Authentication — AI Builds Doors That Look Locked

Authentication and authorization are among the hardest things to get right in software. AI-generated auth code often looks correct on the surface but contains critical flaws that an attacker can easily exploit.

insecure-auth.js
// INSECURE: AI-generated authentication with multiple flaws
app.post('/api/login', async (req, res) => {
  const { email, password } = req.body;

  const user = await db.query(
    'SELECT * FROM users WHERE email = $1',
    [email]
  );

  if (!user.rows.length) {
    // FLAW 1: Different error messages reveal whether email exists
    return res.status(401).json({ error: 'User not found' });
  }

  // FLAW 2: Comparing passwords in plain text — no hashing
  if (user.rows[0].password !== password) {
    return res.status(401).json({ error: 'Wrong password' });
  }

  // FLAW 3: JWT secret is weak and hardcoded
  const token = jwt.sign(
    { userId: user.rows[0].id, role: user.rows[0].role },
    'secret123',
    { expiresIn: '30d' }  // FLAW 4: Token lives for 30 days
  );

  // FLAW 5: No rate limiting — brute force is trivial
  // FLAW 6: Token sent without httpOnly cookie — XSS can steal it
  res.json({ token });
});

I counted six distinct security flaws in this single endpoint. It compiles. It passes basic tests. A user can log in and get a token. But it is catastrophically insecure.

Here is what secure authentication actually looks like:

secure-auth.js
// SECURE: Production-grade authentication
import bcrypt from 'bcrypt';
import jwt from 'jsonwebtoken';
import rateLimit from 'express-rate-limit';

// Rate limit login attempts
const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,  // 15 minutes
  max: 5,                     // 5 attempts per window
  message: { error: 'Too many login attempts. Try again later.' },
  standardHeaders: true,
  legacyHeaders: false,
});

app.post('/api/login', loginLimiter, async (req, res) => {
  const { email, password } = req.body;

  // Input validation
  if (!email || !password || typeof email !== 'string') {
    return res.status(400).json({ error: 'Email and password required' });
  }

  const user = await db.query(
    'SELECT * FROM users WHERE email = $1',
    [email.toLowerCase().trim()]
  );

  // Consistent error message — do not reveal if email exists
  const genericError = { error: 'Invalid email or password' };

  if (!user.rows.length) {
    // Still hash to prevent timing attacks
    await bcrypt.hash(password, 10);
    return res.status(401).json(genericError);
  }

  // Compare hashed passwords
  const valid = await bcrypt.compare(password, user.rows[0].password_hash);
  if (!valid) {
    return res.status(401).json(genericError);
  }

  // Strong secret from environment, short expiry
  const token = jwt.sign(
    { userId: user.rows[0].id },  // Minimal claims — no role in JWT
    process.env.JWT_SECRET,        // 256-bit secret from env
    { expiresIn: '1h' }           // Short-lived token
  );

  // Set as httpOnly cookie — not accessible to JavaScript
  res.cookie('token', token, {
    httpOnly: true,
    secure: true,               // HTTPS only
    sameSite: 'strict',         // CSRF protection
    maxAge: 60 * 60 * 1000,     // 1 hour
  });

  res.json({ success: true });
});

The secure version addresses every flaw: rate limiting prevents brute force, bcrypt handles password hashing, consistent error messages prevent enumeration, JWT secrets come from environment variables, tokens have short expiry, and cookies are httpOnly and secure.
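The hashing principle is library-agnostic. Python's stdlib can demonstrate the same properties bcrypt provides (slow, salted, one-way, constant-time comparison); this sketch uses scrypt as an illustration, not as a recommendation over bcrypt or argon2:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using the memory-hard scrypt KDF."""
    salt = os.urandom(16)  # unique per user, stored alongside the digest
    digest = hashlib.scrypt(
        password.encode(), salt=salt, n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024
    )
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(
        password.encode(), salt=salt, n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024
    )
    # Constant-time comparison prevents timing attacks
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))     # True
print(verify_password("wrong-pass", salt, digest))  # False
```

The per-user salt defeats rainbow tables, and the deliberately expensive KDF makes offline brute force costly even if the database leaks.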

Why AI Gets Security Wrong

Understanding why these patterns emerge helps explain why vibe coding is inherently risky without human oversight.

The training data problem. AI models are trained on massive code corpora that include Stack Overflow answers, tutorials, open-source projects, and blog posts. A significant portion of this code was written for demonstrations, not for production. Tutorial code optimizes for clarity and brevity, not security. The AI learns that string concatenation is the "normal" way to build SQL queries because that is what most examples show.

Optimizing for "works" not "secure." When you prompt an AI to build a login system, it generates code that accomplishes the functional requirement: a user can log in. The AI does not think about threat models, attack vectors, or defense in depth. It satisfies the stated goal, not the unstated security requirements.

No threat modeling context. A human security engineer thinks about the deployment environment, the sensitivity of the data, the threat actors, and the compliance requirements. An AI has none of this context. It generates the same code whether you are building a personal blog or a banking application.

The completion bias. AI models are trained to complete patterns. When they see a database query being constructed, they reach for the most common pattern in their training data — which is often the insecure one. Parameterized queries are less common in training data because they require more boilerplate and are harder to explain in a tutorial context.

The Vibe Coding Workflow Problem

The fundamental issue with vibe coding is not that AI generates insecure code — it is that the developer does not review what they ship.

When you accept AI-generated code without understanding it, you are making a dangerous assumption: that the AI considered all the same things you would have. It did not. It cannot. It does not know your threat model, your compliance requirements, your deployment environment, or your data sensitivity classification.

The false confidence of "it compiles and passes tests" is particularly dangerous. Functional correctness and security correctness are entirely different dimensions. Code can pass every unit test with flying colors while being trivially exploitable.

Consider this test suite for the insecure login endpoint from earlier:

incomplete-tests.js
// These tests ALL PASS on the insecure implementation
describe('Login API', () => {
  it('should return a token for valid credentials', async () => {
    const res = await request(app)
      .post('/api/login')
      .send({ email: 'test@example.com', password: 'password123' });
    expect(res.status).toBe(200);
    expect(res.body.token).toBeDefined();
  });

  it('should reject invalid credentials', async () => {
    const res = await request(app)
      .post('/api/login')
      .send({ email: 'test@example.com', password: 'wrong' });
    expect(res.status).toBe(401);
  });

  it('should reject non-existent user', async () => {
    const res = await request(app)
      .post('/api/login')
      .send({ email: 'nobody@example.com', password: 'password123' });
    expect(res.status).toBe(401);
  });
});

// Tests that are MISSING:
// - Brute force protection (rate limiting)
// - Password hashing verification
// - Timing attack resistance
// - Token expiry validation
// - Cookie security flags
// - User enumeration via different error messages

Every test passes. The developer feels confident. The code ships to production. And it is vulnerable to at least six different attack vectors that no test checked for.
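Security-minded tests do not need heavy tooling. The enumeration flaw, for instance, is caught by asserting that both failure paths return identical bodies. A sketch against a hypothetical login function (Python's builtin `hash` stands in for bcrypt here, purely for illustration):

```python
def login(email: str, password: str, users: dict) -> dict:
    """Hypothetical login handler with a single generic failure response."""
    generic = {"status": 401, "error": "Invalid email or password"}
    user = users.get(email)
    if user is None or user["password_hash"] != hash(password):  # toy hash only
        return generic
    return {"status": 200}

users = {"test@example.com": {"password_hash": hash("password123")}}

# Unknown email and wrong password must be indistinguishable to the client
resp_unknown = login("nobody@example.com", "password123", users)
resp_wrong_pw = login("test@example.com", "wrong-password", users)
assert resp_unknown == resp_wrong_pw
print("no user enumeration via error messages")
```

The same pattern extends to the other gaps: a rate-limiting test fires N+1 requests and expects a 429, a cookie test inspects the Set-Cookie flags, and so on.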

A Secure AI-Assisted Development Workflow

Vibe coding does not have to be insecure. But it does require intentional practices to catch the security flaws that AI consistently misses.

Security-First Prompting

The way you prompt an AI dramatically affects the security of its output. Instead of asking for functional code, explicitly request secure code.

prompting-example.txt
// BAD PROMPT:
"Build a login endpoint for my Express app"

// GOOD PROMPT:
"Build a secure login endpoint for my Express app. Requirements:
- Use bcrypt for password hashing with a cost factor of at least 12
- Implement rate limiting (max 5 attempts per 15 minutes per IP)
- Use consistent error messages to prevent user enumeration
- Set JWT tokens in httpOnly, secure, sameSite cookies
- Token expiry should not exceed 1 hour
- All inputs must be validated and sanitized
- Include protection against timing attacks
- Follow OWASP authentication best practices"

When you specify security requirements in the prompt, AI models produce dramatically better output. The problem is that most developers do not know what to ask for — which brings us back to the core issue: you need to understand security to prompt for it.

Mandatory Code Review Checklist

Every piece of AI-generated code should pass through a security-focused review before it is accepted. Here is a minimum checklist:

  1. Are database queries parameterized? No string concatenation or template literals in SQL.
  2. Are secrets loaded from environment variables? No hardcoded keys, passwords, or tokens.
  3. Is all user input validated? Check types, ranges, lengths, and formats.
  4. Are dependencies up to date? Run npm audit or pip-audit before accepting new packages.
  5. Is authentication implemented with established libraries? No custom crypto.
  6. Are error messages generic? No stack traces or internal details in responses.
  7. Is HTTPS enforced? Are cookies set with secure and httpOnly flags?
  8. Is there rate limiting on sensitive endpoints?
  9. Are CORS headers configured correctly? Not using wildcard origins.
  10. Is there logging for security events? Failed logins, permission denials, unusual patterns.
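Several checklist items can be smoke-tested mechanically before a real SAST tool runs. Here is a toy Python scanner for item 1 (string-built SQL in JavaScript source), offered as an illustration of the idea rather than a substitute for Semgrep or CodeQL:

```python
import re

# Heuristics for SQL built via template literals or string concatenation
PATTERNS = [
    re.compile(r"\.query\(\s*`[^`]*\$\{"),              # db.query(`... ${x} ...`)
    re.compile(r"\.query\(\s*['\"][^'\"]*['\"]\s*\+"),  # db.query('...' + x)
]

def flag_sql_concat(source: str) -> list[int]:
    """Return 1-based line numbers that look like unparameterized SQL."""
    return [
        lineno
        for lineno, line in enumerate(source.splitlines(), start=1)
        if any(p.search(line) for p in PATTERNS)
    ]

code = (
    "const safe = db.query('SELECT * FROM users WHERE id = $1', [id]);\n"
    "const bad = db.query(`SELECT * FROM users WHERE name = ${name}`);\n"
)
print(flag_sql_concat(code))  # [2]
```

A dozen lines of regex will miss plenty (and can false-positive), but as a pre-commit tripwire it costs nothing and catches the most common AI-generated pattern.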

Automated Security Scanning

Automated tools catch what human review might miss. Integrate both static and dynamic analysis into your workflow.

SAST (Static Application Security Testing) — Analyzes source code without executing it. Catches SQL injection, XSS, hardcoded secrets, and insecure patterns. Run on every pull request.

DAST (Dynamic Application Security Testing) — Tests the running application by sending malicious inputs. Catches runtime vulnerabilities that static analysis misses. Run against staging environments.

SCA (Software Composition Analysis) — Scans dependencies for known CVEs. Run on every build to catch vulnerable packages before they reach production.

Here is a CI/CD pipeline that integrates security scanning:

security-pipeline.yml
# .github/workflows/security.yml
name: Security Scan

on: [push, pull_request]

jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Dependency vulnerability scan
      - name: Run npm audit
        run: npm audit --audit-level=high

      # Static analysis with Semgrep
      - name: Semgrep SAST
        uses: semgrep/semgrep-action@v1
        with:
          config: >-
            p/security-audit
            p/owasp-top-ten
            p/nodejs

      # Secret detection
      - name: Detect hardcoded secrets
        uses: trufflesecurity/trufflehog@main
        with:
          path: ./
          extra_args: --only-verified

      # CodeQL analysis (init must run before analyze)
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v3
        with:
          languages: javascript

      - name: CodeQL Analysis
        uses: github/codeql-action/analyze@v3

Dependency Auditing

Every AI-suggested package should be vetted before installation. Check the package age, maintenance status, download count, known vulnerabilities, and whether it is still actively maintained.

dependency-check.sh
# Before installing an AI-suggested package, check it
# 1. Check for known vulnerabilities
npm audit

# 2. Check for newer versions of your dependencies
npx npm-check-updates

# 3. View package details
npm info <package-name>

# 4. Check if a package is maintained
npm view <package-name> time --json | jq '.modified'

# 5. Use Socket.dev to detect supply chain risks
npx socket optimize

# 6. Lock dependencies after auditing
npm shrinkwrap

The Human-in-the-Loop Requirement

The most important defense against vibe coding vulnerabilities is a simple rule: a human who understands security must review every piece of AI-generated code that touches authentication, authorization, data access, or external integrations.

This does not mean you cannot use AI to write code faster. It means you treat AI-generated code the same way you would treat code from a junior developer — assume it works, but verify it is secure.

Tools for Catching AI Security Flaws

Here are the tools I recommend for teams that use AI-assisted development.

Snyk — Comprehensive security platform that scans dependencies, container images, IaC templates, and source code. Integrates directly into IDEs and CI/CD pipelines. Particularly good at identifying known CVEs in npm and pip packages.

Semgrep — Fast, open-source static analysis tool that supports custom rules. You can write rules specific to your codebase patterns, making it ideal for catching AI-generated anti-patterns. The OWASP ruleset catches most common web vulnerabilities.

CodeQL — GitHub's semantic code analysis engine. Goes beyond pattern matching to understand data flow, making it excellent at catching injection vulnerabilities where tainted input flows through multiple functions before reaching a sink.

OWASP ZAP — Open-source dynamic application security testing proxy. It crawls your running application and tests for vulnerabilities by sending malicious payloads. Essential for testing APIs and web applications before deployment.

npm audit / pip-audit — Built-in package managers include security auditing. Run them in CI/CD pipelines and fail builds on high-severity vulnerabilities. Free, fast, and should be non-negotiable in any project.

A practical Semgrep rule for catching common AI-generated flaws:

semgrep-rules.yml
# .semgrep/ai-security-rules.yml
rules:
  - id: no-string-concat-in-queries
    patterns:
      - pattern: |
          $DB.query(`...${...}...`)
    message: "Potential SQL injection: use parameterized queries instead"
    languages: [javascript, typescript]
    severity: ERROR

  - id: no-hardcoded-secrets
    pattern-either:
      - pattern: |
          $KEY = "sk_live_..."
      - pattern: |
          $KEY = "AKIA..."
    message: "Hardcoded secret detected: use environment variables"
    languages: [javascript, typescript, python]
    severity: ERROR

  - id: jwt-weak-secret
    patterns:
      - pattern: |
          jwt.sign($PAYLOAD, "...", ...)
    message: "JWT signed with hardcoded secret: use process.env"
    languages: [javascript, typescript]
    severity: ERROR

The Path Forward: Discipline Over Vibes

Vibe coding does not have to be a security disaster. AI is an extraordinarily powerful tool for writing code faster. But speed without review is recklessness, not productivity.

The developers who will thrive in the AI era are those who:

  • Use AI to generate code faster but review every line for security implications
  • Understand common vulnerability patterns well enough to spot them in generated code
  • Integrate automated security scanning into every stage of their pipeline
  • Treat AI output as a first draft, not a finished product
  • Invest in security knowledge so they can prompt AI for secure implementations
  • Maintain a healthy skepticism about code they did not write — regardless of whether a human or AI wrote it

Andrej Karpathy may want to retire the term "vibe coding," but the practice is here to stay. The question is whether we do it responsibly — with security guardrails, automated scanning, and human oversight — or whether we let the vibes carry us straight into the next generation of security breaches.

The most dangerous code is code that works perfectly — until an attacker finds it. AI-generated code is particularly dangerous because it looks confident, compiles cleanly, and passes basic tests while hiding vulnerabilities that only a security-conscious human would catch.

The vibes might feel great. But your users deserve better than vibes. They deserve code that was actually reviewed.
