The Most Common Stack
TypeScript + React is the largest single language/framework combination in modern frontend, and the largest single AI code generation target by ticket volume. AI agents generate React components effectively, but the failure modes are specific and worth knowing before turning the pipeline loose.
This post covers the patterns that work, the ones that fail, and the validation rules that keep AI-generated UI from quietly drifting your design system into chaos.
What Works
- Adding a new React component that follows existing component conventions. The AI inspects neighboring components and matches.
- Adding a new prop with TypeScript typing and prop drilling resolution. Mechanical.
- Refactoring class components to function components with hooks. AI handles this cleanly when the conversion has clear rules.
- Adding form validation using the existing form library (react-hook-form, Formik). Templated.
- Adding a new TanStack Query / SWR hook for a new endpoint. Templated, deterministic.
- Adding accessibility attributes (`aria-*`) to components missing them. AI agents are good at this, and it is universally under-done.
- Adding tests with React Testing Library / Vitest. AI agents produce idiomatic tests.
What Fails Without Strong Guardrails
- Design system drift. AI agents reach for inline styles or arbitrary Tailwind classes when the project has a design token system. Per-repo config that lists the design tokens (or points the AI to the Storybook) is the fix.
- Inappropriate state management. AI sometimes adds `useState` chains where the project uses Zustand / Redux / Jotai. Fix: a per-repo state library declaration.
- Effect dependency arrays. AI agents sometimes write incorrect dependency arrays in `useEffect`. ESLint with `react-hooks/exhaustive-deps` catches this in the validation pipeline.
- Hydration errors in Next.js / Remix. AI uses browser-only APIs in components that render server-side. Server-side rendering tests catch this.
- Type narrowing assumptions. AI sometimes asserts non-null with `!` where it should narrow. Fix: a per-repo lint rule against the `!` operator.
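The narrowing failure above is easiest to see in a minimal sketch (the lookup table and function names are illustrative, not from any specific codebase):

```typescript
// A hypothetical lookup that may legitimately return undefined.
const userEmails: Record<string, string> = { alice: "alice@example.com" };

// Bad: a non-null assertion silences the compiler but can throw at runtime.
// const email = userEmails["bob"]!.toUpperCase(); // TypeError when "bob" is absent

// Good: narrow explicitly and handle the missing case.
function getEmailDomain(userId: string): string | undefined {
  // Under noUncheckedIndexedAccess this is typed string | undefined.
  const email = userEmails[userId];
  if (email === undefined) return undefined;
  // email is narrowed to string here, so the method call is safe.
  return email.split("@")[1];
}
```

The lint rule (`@typescript-eslint/no-non-null-assertion` is the usual choice) turns the bad pattern into a hard failure in the validation pass rather than a reviewer nitpick.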
TypeScript-Specific Patterns
Strict TypeScript catches a remarkable amount of AI drift. Repos with these settings see meaningfully higher first-time acceptance:
- `strict: true`
- `noUncheckedIndexedAccess: true`
- `exactOptionalPropertyTypes: true`
- `noImplicitOverride: true`
The AI generates code that compiles under strict TypeScript more reliably than code that compiles under loose TypeScript, because the constraints force precision.
If your codebase is on strict: false: enable strict mode (or progressively add stricter flags) before relying heavily on AI generation. The investment pays for itself within a quarter.
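As a starting point, a minimal `tsconfig.json` fragment with the flags above looks like this (other options such as `target` and `moduleResolution` are project-specific and omitted here):

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "exactOptionalPropertyTypes": true,
    "noImplicitOverride": true
  }
}
```

For a progressive migration, enabling `noImplicitAny` and `strictNullChecks` first tends to surface the bulk of the errors; the remaining flags are comparatively cheap to turn on afterward.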
Component Library Awareness
Most production React codebases have an internal component library: Button, Input, Modal, Toast. AI agents will write raw `<button>` and `<input>` elements when the codebase has `<Button>` and `<Input>`. The fix is per-repo config that declares the component library entry points.
For codebases with both an internal library and a third-party library (shadcn/ui, Radix, Mantine), the AI needs to know the precedence: "prefer the internal library if it exists, fall back to shadcn/ui." Without this, you get a UI that looks like five different design systems wrestling.
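The exact EnsureFix config schema isn't shown in this post, so the following is an illustrative sketch of what a component-library declaration with precedence could look like:

```yaml
# Hypothetical per-repo config; key names are illustrative.
ui:
  component_library:
    entry_points:
      - "src/components/index.ts"
    precedence:
      - internal      # prefer components exported from the entry points
      - shadcn/ui     # fall back to the third-party library
  disallow:
    - raw-html-elements   # e.g. <button> where <Button> exists
```

The point is less the syntax than the contract: the AI gets one authoritative answer to "which Button do I import?" instead of guessing per ticket.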
Next.js / App Router Specifics
Next.js 14+ App Router has architectural rules that AI agents need to know:
- Server components by default; client components opt in with `'use client'`.
- Server components cannot use hooks or browser APIs.
- Data fetching in server components is async, not via hooks.
- Layouts are nested; route groups don't affect URL.
AI agents trained on pages-router code sometimes regress App Router code to pages-router patterns. Per-repo config: declare the router style.
For newer Next.js APIs that are rare in model training data (server actions, partial prerendering, parallel routes), AI agents may need explicit examples in the prompt context.
Tailwind, CSS Modules, and Style Systems
Three common style systems, three different AI behaviors:
- Tailwind: AI generates Tailwind classes well. Watch for arbitrary values (`text-[#1a2b3c]`) when design tokens exist (`text-primary`).
- CSS Modules: AI matches existing class naming conventions if shown examples.
- CSS-in-JS (styled-components, Emotion): AI generates these but sometimes mixes patterns from multiple libraries.
For consistency: per-repo config that declares the chosen style system, and a lint rule that bans mixing.
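One way to encode both rules in ESLint, assuming `eslint-plugin-tailwindcss` is installed for a Tailwind-only codebase (the plugin and rule names are that plugin's, not something EnsureFix ships):

```js
// .eslintrc.cjs -- sketch for a Tailwind-only style system
module.exports = {
  plugins: ["tailwindcss"],
  rules: {
    // Flag arbitrary values like text-[#1a2b3c] so design tokens get used instead.
    "tailwindcss/no-arbitrary-value": "error",
    // Ban CSS-in-JS imports so styles stay in one system.
    "no-restricted-imports": [
      "error",
      { paths: [{ name: "styled-components" }, { name: "@emotion/react" }] },
    ],
  },
};
```

`no-restricted-imports` is core ESLint, so the mixing ban works even in repos without the Tailwind plugin.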
Accessibility
Frontend AI agents are surprisingly good at accessibility when prompted to consider it. aria-label, role, keyboard navigation, focus management — the AI knows the patterns. The trick is making accessibility a default consideration in every UI ticket, not a separate concern.
EnsureFix's per-repo config can include "always check accessibility for UI changes," and the ReviewerAgent will flag missing accessibility in the validation pass.
Testing Frontend Code
- Vitest + React Testing Library is the modern default. AI generates idiomatic tests.
- MSW (Mock Service Worker) for API mocking. AI handles the handler patterns well.
- Storybook + interaction tests. AI can add interaction tests for new components when the Storybook setup is established.
- Playwright / Cypress for E2E. AI handles E2E test additions but tends to write brittle selectors. Per-repo rule: prefer `getByRole` over `getByTestId` over class selectors.
What AI agents do badly: visual regression tests. The model can't see pixel diffs. Route visual regression test creation to humans, but let the AI handle the underlying component code.
Bundle Size Awareness
A failure mode unique to frontend: AI agents add dependencies casually. A new ticket adds lodash for one function the codebase already has. A second ticket adds moment for one date format. Bundle size grows.
Mitigations:
- Per-repo dependency allowlist. AI cannot add dependencies outside the list without escalation.
- Bundle size budget in CI. Fail the build if bundle exceeds threshold.
- Per-repo "prefer existing utils" rule. AI checks for existing helpers before importing new dependencies.
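For the CI budget, one common approach is the `size-limit` tool configured in `package.json`; the bundle path and threshold below are placeholders to adapt per project:

```json
{
  "scripts": {
    "size": "size-limit"
  },
  "size-limit": [
    {
      "path": "dist/assets/*.js",
      "limit": "250 kB"
    }
  ]
}
```

Running `npm run size` in CI then fails the build when the bundle exceeds the limit, which converts silent bundle creep into a visible, attributable failure on the ticket that caused it.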
These guardrails make a real difference. Without them, AI-driven bundle creep is a six-month problem you don't notice until your LCP regresses.
Server-Side TypeScript (Node.js)
The same TypeScript discipline applies to Node.js backends. Express, Fastify, Hono, Nest.js — AI agents handle each well when shown the framework's conventions.
The big Node.js-specific failure mode: callback-style code mixed with async/await. AI sometimes regresses async code to callbacks when working in older modules. Per-repo lint rule that bans callback-style HTTP / file APIs in new code.
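The regression in question, sketched with Node's file APIs (the config-reading function is hypothetical):

```typescript
import { readFile, writeFile } from "node:fs/promises";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Callback style, the pattern to ban in new code:
//   fs.readFile(path, "utf8", (err, data) => { ... });
// Errors must be checked manually in every callback, and nesting compounds.

// Async/await style: errors flow through try/catch, control flow stays flat.
async function readConfig(path: string): Promise<string> {
  try {
    return await readFile(path, "utf8");
  } catch {
    return "{}"; // hypothetical default when the file is missing
  }
}

async function demo(): Promise<string> {
  const path = join(tmpdir(), "ensurefix-demo.json");
  await writeFile(path, '{"ok":true}', "utf8");
  return readConfig(path);
}
```

The corresponding lint guardrail is a `no-restricted-syntax` or `no-restricted-imports` rule against the callback-form `fs` and `http` APIs, pointing contributors (human and AI) at `node:fs/promises` instead.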
Cost Economics
TypeScript/React tickets sit in the middle of the cost range. Larger context (component + props + tests + types) than Go, smaller than Java. Acceptance rates are high in well-typed codebases.
For ROI context, see the [50-engineer team analysis](/blog/ai-code-generation-roi-50-engineer-team).
Summary
TypeScript + React is a strong AI generation target with the right guardrails: strict TypeScript, design system / component library config, ESLint rules for hooks and types, bundle size budgets, and accessibility as a default check. Without those, AI generation works but produces inconsistent UI that drifts your design system. With them, you get production-quality components at the rate you can review them.
For the cross-cutting safety pattern that catches frontend-specific failure modes, see [enterprise safety layers](/blog/enterprise-safety-ai-generated-code).
Ready to automate your tickets?
See EnsureFix process a real ticket from your backlog in a live demo.
Request a Demo