I needed a price displayed on a checkout button. The price lives in Stripe. The button lives in React. Between those two points sit four handoffs that, in a traditional team, would take a week.

I shipped it in one commit. Eight files. Four layers. Zero handoffs.

The Problem: Handoffs Are Where Features Break

A full-stack feature touches infrastructure, backend, API contract, and frontend. Even when the same developer handles most of it, each layer is a separate stage with its own mental model:

  1. Write the SAM template — think in CloudFormation resources, IAM policies, API Gateway routes
  2. Implement the Lambda — switch to Java, Stripe SDK, caching strategy, error handling
  3. Define the API contract — switch to TypeScript, match the response shape, wire up the fetch call
  4. Build the UI — switch to React, hooks, formatting, graceful degradation

Each stage requires unloading one mental model and loading another. You finish the Lambda, and by the time you’re writing the TypeScript type, you have to go back and check: did I call it amount or price? Is it cents or dollars? Did I include the currency field?

The code is simple. The context switching is what kills you. A pricing endpoint is maybe 60 lines of backend logic and 20 lines of frontend code. But moving between four different mental models — infrastructure, backend, API, frontend — turns a small feature into a full day of work with at least one contract mismatch along the way.

The Solution: One Context Across All Layers

When I built this feature, AI held the full context — infrastructure patterns, backend conventions, API shape, and frontend integration — in a single session. No handoffs. No documentation step. No waiting for someone else to finish their part.

Here’s what one commit looked like:

Infrastructure: Added a GetPriceFunction to the SAM template, following the exact same pattern as the existing Lambdas — same policies, same runtime, same CORS configuration. No guessing, no copy-paste drift.

Backend: A Java Lambda that calls Stripe’s Price API, caches the result in a static volatile field (since Lambda instances persist between invocations), and returns amount and currency. The AI added a PriceRetriever functional interface for testability — the test injects a mock instead of calling Stripe. It also wrote tests covering the happy path, caching behavior, Stripe failure handling, and CORS headers.
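The caching pattern described above can be sketched out. This is a transposition of the Java Lambda's approach into TypeScript for illustration: a module-level variable plays the role of the static volatile field, since both survive between invocations of a warm Lambda instance. PriceRetriever is the name from the commit; the function names and shapes here are assumptions, not the actual implementation.

```typescript
// Functional interface for testability: production code injects a real
// Stripe call, tests inject a stub. (PriceRetriever mirrors the Java
// interface named in the commit; this TypeScript shape is illustrative.)
type PriceRetriever = () => Promise<{ amount: number; currency: string }>;

// Module-level state persists across invocations on a warm Lambda instance,
// analogous to the static volatile field in the Java version.
let cachedPrice: { amount: number; currency: string } | null = null;

async function getPriceCached(retrieve: PriceRetriever) {
  if (cachedPrice === null) {
    // Only the first invocation on this instance hits Stripe.
    cachedPrice = await retrieve();
  }
  return cachedPrice;
}
```

The injected retriever is what makes the tests cheap: the happy-path and caching tests hand in a stub and assert it was called exactly once, with no Stripe account in sight.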

API contract: A PriceInfo TypeScript type (amount: number, currency: string) and a getPrice() fetch function added to the frontend’s API layer. The type matches the Lambda’s response shape exactly — because both were created in the same context, there’s no interpretation gap.
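In code, the API-layer addition is small. This sketch uses the PriceInfo shape described above; the base URL and the /price path are placeholders, since the post doesn't show the actual endpoint.

```typescript
// The shared contract: this type is what both the Lambda response and the
// frontend agree on.
export type PriceInfo = {
  amount: number;   // smallest currency unit, e.g. cents
  currency: string; // ISO 4217 code, e.g. "usd"
};

// Placeholder base URL; the real one would come from the app's config.
const API_BASE = "https://api.example.com";

export async function getPrice(): Promise<PriceInfo> {
  const res = await fetch(`${API_BASE}/price`);
  if (!res.ok) {
    throw new Error(`getPrice failed: ${res.status}`);
  }
  return (await res.json()) as PriceInfo;
}
```

Because the type and the fetch wrapper live in one place, the frontend never reconstructs the response shape from memory or from docs.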

Frontend: A React Query hook fetches the price on the checkout step, formats it with Intl.NumberFormat, and displays it in the pay button. When the price hasn’t loaded yet, the button shows a generic label. No loading spinner, no error state that blocks the user — graceful degradation.
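The display logic is the part worth seeing, since it encodes both the cents-to-dollars conversion and the graceful fallback. This sketch extracts it as a pure function; the "Pay" fallback label is an assumption, and the React Query wiring around it is omitted.

```typescript
type PriceInfo = { amount: number; currency: string };

// Pure label function: easy to test outside React.
function payButtonLabel(price: PriceInfo | undefined): string {
  if (!price) {
    // Graceful degradation: generic label, no spinner, no blocking error
    // state. ("Pay" is an illustrative label, not the commit's actual copy.)
    return "Pay";
  }
  // Stripe amounts arrive in the smallest currency unit, so divide by 100
  // before formatting. Intl.NumberFormat handles the currency symbol.
  const formatted = new Intl.NumberFormat("en-US", {
    style: "currency",
    currency: price.currency,
  }).format(price.amount / 100);
  return `Pay ${formatted}`;
}
```

Keeping the conversion next to the formatter is the kind of detail that goes wrong when backend and frontend are written in separate sessions; here it's one decision made once.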

Eight files changed, 409 lines added. One commit. One review. One deploy.

Why This Matters With AI

A human developer context-switches between layers sequentially. You can only hold one mental model at a time — you’re either thinking in CloudFormation or in React, not both. AI doesn’t have that limitation. It holds the SAM template, the Java implementation, the TypeScript type, and the React component in the same context window. There’s no unloading and reloading. There’s no going back to check what you named the field.

This changes the economics in three ways:

First, context switches disappear. Instead of four stages with three switches between them, the feature is one continuous flow. The AI moves from infrastructure to backend to API to frontend without losing any context from the previous layer.

Second, contract mismatches disappear. The most common bugs in full-stack features aren’t logic errors — they’re mismatches between layers. The Lambda returns amount in cents, the frontend formats it as dollars. The response includes currency but the type definition forgot it. When one context produces all layers in the same pass, the contract stays consistent by construction, and this whole class of bug largely vanishes.

Third, the feedback loop tightens. When you realize the frontend needs an extra field, you don’t file a ticket and wait for the next backend cycle. You add it to the Lambda response, the TypeScript type, and the React component in the same session. The cost of changing your mind drops to near zero.

What This Means for How We Work

The unit of work is shifting. It used to be “a task that fits one developer’s specialty” — a backend ticket, a frontend ticket, an infrastructure ticket. Now it’s “a feature that crosses all layers.”

This has real implications. A feature that used to be estimated as “3 story points backend, 2 story points frontend, 1 story point infrastructure” becomes a single deliverable. The coordination cost between those three items — the handoffs, the waiting, the integration testing — was always the hidden expense. Remove the handoffs and the total drops by more than those three estimates added together, because the coordination overhead disappears along with them.

I’m not arguing that specialization is obsolete. Deep expertise in infrastructure, backend, or frontend still matters for architectural decisions and complex debugging. But the day-to-day cost of crossing layer boundaries is what AI eliminates. You still need to know what good infrastructure looks like — you just don’t need to context-switch away from it to build the backend that depends on it.

The Takeaway

The most expensive part of full-stack delivery was never writing code — it was the context switches between the stages of building it. Each switch loses context, introduces delay, and creates room for contract mismatches. AI collapses those switches by holding full context across infrastructure, backend, API, and frontend in a single session. The result: features delivered as coherent units, with fewer bugs, in less time.