The most effective way I’ve found to get accurate code from AI has nothing to do with how you phrase your prompts. It’s a practice from 2003: Ubiquitous Language, from Eric Evans’ Domain-Driven Design.

Before my AI agent writes a single line of code, it gets a glossary — a structured domain vocabulary that defines every term, every boundary, and every relationship in the system.

The Problem: AI Guesses When It Doesn’t Know Your Domain

When you ask AI to build a feature without domain context, it fills the gaps with the most common patterns from its training data. A human developer would ask clarifying questions. AI just writes code — confident, clean, and potentially wrong in ways that won’t surface until months later.

Here’s where it gets expensive. Say your domain experts talk about “governance checks” — the validation that happens before an AI agent calls a tool. The AI, lacking that vocabulary, implements it as PermissionChecker with methods like hasPermission() and grantAccess(). The code works. Tests pass. But now your domain experts say “the governance check should also verify the tool’s connection health” and the developer stares at a PermissionChecker class wondering where connection health fits into a permission model. The code and the domain have diverged, and every future conversation between developers and domain experts requires mental translation.
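A hypothetical sketch of that mismatch, with the class and method names invented to mirror the scenario above. The generated class models permissions, so the new domain requirement has nowhere natural to live:

```python
class PermissionChecker:
    """What the AI produced: a generic access-control pattern,
    not the domain's 'governance check' concept."""

    def __init__(self) -> None:
        self._grants: dict[str, set[str]] = {}  # agent -> allowed tools

    def grant_access(self, agent: str, tool: str) -> None:
        self._grants.setdefault(agent, set()).add(tool)

    def has_permission(self, agent: str, tool: str) -> bool:
        return tool in self._grants.get(agent, set())

    # New requirement: "the governance check should also verify the
    # tool's connection health." Connection health is not a permission,
    # so the requirement lands awkwardly in a class whose name promises
    # a different model. The vocabulary gap becomes a design problem.
```

The code is correct for what it models; the problem is that what it models is not the domain.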

Multiply this across an entire system and you get a codebase where the code says one thing and the business says another. New developers read the code and learn the wrong language. Bug reports reference domain terms that don’t appear in the codebase. The cost compounds over time.

The Solution: Ubiquitous Language as AI Context

In Domain-Driven Design, Ubiquitous Language is the shared vocabulary that the entire team — developers, domain experts, product owners — uses to talk about the system. When everyone says “place an order” instead of “create an order,” the code reflects the business reality.

I’ve operationalized this for AI-assisted development. Before any coding begins, a dedicated agent — the Ubiquitous Language agent — produces three artifacts:

  1. A glossary: Every domain term, precisely defined. For example: “Order: an aggregate representing a customer’s commitment to purchase, transitioning through states: Draft → Placed → Paid → Fulfilled → Cancelled.” Precise enough that the AI knows the exact lifecycle.

  2. Bounded context definitions: Clear boundaries around where terms apply. “Order” means something different in the fulfillment context than in the billing context. The AI needs to know which context it’s working in.

  3. A context map: How bounded contexts relate to each other. Which ones share data? Which ones communicate through events? Where are the translation layers?
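To make the artifacts concrete, here is a minimal sketch of a bounded context definition and a context map entry. The context names, event names, and relationship patterns are invented for illustration, not taken from a real project:

```markdown
<!-- Illustrative sketch; context and event names are invented -->

## Bounded Context: Billing

**Order**: the invoiceable record of a purchase, carrying payment
terms and tax lines. (In the Fulfillment context, "Order" instead
means the pick-pack-ship work item.)

## Context Map (excerpt)

- Fulfillment -> Billing: Billing consumes `OrderFulfilled` events
  through a translation layer.
- Billing <-> Identity: shared kernel around `CustomerId`.
```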

These artifacts live in the project’s docs/ubiquitous-language/ folder. Every agent in the pipeline reads them. When the coder agent generates code, it uses domain terms from the glossary. When it names events, they match the language the business uses. When it defines boundaries, they respect the context map.

Why This Matters With AI

The typical approach is to pack context into the prompt itself: “You are a senior developer working on a DDD project. Use event sourcing. Name events in past tense…” This is fragile. It relies on the instruction being perfectly crafted, and it falls apart as the domain grows in complexity.

A glossary and context map are structural context. They don’t depend on prompt wording. They’re versioned, reviewable, and evolve with the project. When a new bounded context is added, the glossary grows. When a term’s meaning shifts, the definition is updated. The AI always has the current truth.

The glossary also includes an “Aliases (AVOID)” column for each term. When the AI sees that a Run should never be called a “Session” or “Conversation,” it stops inventing those names. When it sees that Tool Governance should never be called “permissions” or “access control,” the code stays consistent with the domain model.

What This Looks Like in Practice

On my AI agent orchestration platform, the ubiquitous language directory contains 38 files — 22 bounded context definitions and 16 glossaries, each dated and versioned. The glossary covers 15 bounded contexts including Agent Design, Runtime Orchestration, Observability, Tool Registry, Evaluation & Governance, and Identity & Delegation.

Here are real entries from the glossary:

  • Agent Definition: A versioned, design-time domain object describing an agent’s role, goals, capabilities, tools, constraints, and evaluation criteria. Aliases to AVOID: agent blueprint, agent config, agent spec, agent template.
  • Run: A single input-process-response cycle within a Case. Created when an Internal Message arrives for an agent. Aliases to AVOID: Session, Execution, Thread, Conversation.
  • Governance Check: The validation performed before each tool call during a Run. Four phases: Allowlist Check, Registry Status Check, Connection Health Check, Policy Budget Check. Distinct from Permission Check (Identity & Delegation authorization).

When the coder agent implements governance, it uses GovernanceCheck and produces events called ToolInvocationApproved — matching the glossary exactly. The code reads like the domain because the AI was given the domain’s language.
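A minimal sketch of what glossary-driven output can look like, assuming the four phases from the Governance Check entry above; the constructor arguments and phase internals are illustrative, not the platform's actual implementation:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolInvocationApproved:
    """Domain event named exactly as the glossary specifies."""
    run_id: str
    tool_name: str


class GovernanceCheck:
    """Validation before each tool call during a Run, running the
    glossary's four phases in order. Phase logic here is a sketch."""

    def __init__(self, allowlist, registry, connections, budget):
        self._phases = [
            ("Allowlist Check", lambda t: t in allowlist),
            ("Registry Status Check", lambda t: registry.get(t) == "active"),
            ("Connection Health Check", lambda t: connections.get(t, False)),
            ("Policy Budget Check", lambda t: budget.get(t, 0) > 0),
        ]

    def run(self, run_id: str, tool: str):
        for _phase_name, check in self._phases:
            if not check(tool):
                return None  # denied at this phase
        return ToolInvocationApproved(run_id=run_id, tool_name=tool)
```

Because the class, the phases, and the event all carry glossary names, a domain expert can read the code and recognize their own language.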

The glossary also explicitly defines value objects, their immutability rules, and enum values. CompletionReason is defined as a value object with five specific values: user_ended, inactivity_timeout, step_limit_reached, tool_call_limit_reached, duration_limit_reached. Every enum value is specified upfront, so the AI generates code that matches the domain model exactly.
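The five CompletionReason values translate directly into code. A Python Enum serves here as a stand-in for the platform's actual type; enum members are immutable, matching the value-object rules:

```python
from enum import Enum


class CompletionReason(str, Enum):
    """Value object from the glossary. Every value is specified
    upfront, so generated code can't invent variants like
    'timeout' or 'finished'."""
    USER_ENDED = "user_ended"
    INACTIVITY_TIMEOUT = "inactivity_timeout"
    STEP_LIMIT_REACHED = "step_limit_reached"
    TOOL_CALL_LIMIT_REACHED = "tool_call_limit_reached"
    DURATION_LIMIT_REACHED = "duration_limit_reached"
```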

How to Build This In

Start with a structured interview. Before writing the glossary, extract domain knowledge through deliberate questions: What are the core entities? What business events trigger state changes? Where do terms mean different things to different stakeholders? What invariants must always hold?

Format the glossary as a table, not prose. Each entry needs four columns: Term, Definition, Aliases (AVOID), and Related Terms. The aliases column is critical — it tells the AI what NOT to call things, which is as important as what to call them.
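Using the Run and Governance Check entries shown earlier, such a table might look like this (the Related Terms values are illustrative):

```markdown
| Term             | Definition                                            | Aliases (AVOID)                          | Related Terms          |
|------------------|-------------------------------------------------------|------------------------------------------|------------------------|
| Run              | A single input-process-response cycle within a Case.  | Session, Execution, Thread, Conversation | Case, Internal Message |
| Governance Check | Validation performed before each tool call in a Run.  | permissions, access control              | Run, Permission Check  |
```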

One glossary per bounded context. Don’t dump everything into one file. A glossary for Runtime Orchestration shouldn’t include Tool Registry terms. Keep contexts separate so the AI loads only the vocabulary relevant to the current work.

Date and version your glossary files. Domain understanding evolves. A glossary written on day one will be updated twenty times. Dating the files (2026-02-27-runtime-orchestration-glossary.md) creates a history of how the domain model matured.

Store glossaries in the repo, not in prompts. The glossary is a project artifact, not a chat message. Store it in docs/ubiquitous-language/. Every AI agent, every developer, every code review can reference the same source of truth.

Include the glossary in the AI’s context for every coding task. The coder agent should read the relevant glossary before generating code. The quality-check agent should verify that code uses glossary terms, not aliases. The interviewer agent should update the glossary when new terms emerge.
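The alias check can be mechanical. A minimal sketch of what a quality-check agent (or a plain CI script) might run, assuming the alias table has been parsed from the glossary's "Aliases (AVOID)" column; the table contents here are examples:

```python
import re

# Hypothetical alias table: forbidden name -> glossary term.
AVOIDED_ALIASES = {
    "Session": "Run",
    "Conversation": "Run",
    "PermissionChecker": "GovernanceCheck",
}


def check_glossary_compliance(source: str) -> list[str]:
    """Flag identifiers built on a forbidden alias instead of the
    domain term, e.g. 'SessionManager' when the glossary says 'Run'."""
    violations = []
    for alias, term in AVOIDED_ALIASES.items():
        for match in re.finditer(rf"\b{alias}\w*", source):
            violations.append(f"'{match.group()}' -> use '{term}' (glossary term)")
    return violations
```

Run against generated code, this turns the glossary from guidance into an enforceable check.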

The Takeaway

AI produces domain-accurate code when you give it domain-accurate vocabulary. Invest in a glossary, define your bounded contexts, and map the relationships between them. Clear vocabulary beats clever prompts every time. Domain-Driven Design gave us these tools two decades ago — we just found a new reason to use them.