Architecture Is a Gradient, Not a Binary

James Phoenix

Every layer of structure you add buys you something and costs you something. The skill is knowing the exchange rate.

Author: James Phoenix | Date: April 2026


Why Architecture Advice Fails

Most architecture advice is binary. “Use clean architecture.” “Just keep it simple.” “You need DDD.” “YAGNI.”

None of that helps you decide what to do on a Tuesday afternoon when you have a createUser() function that’s getting messy. The real question is not “what is the correct architecture?” It is “what do I get for this extra ceremony, and what does it cost me?”

Architecture is not a destination. It is a gradient. You can stop at any point along it and be correct, as long as you understand what you are trading. This article walks through a single function, createUser(), evolving it through five stages. At each stage you will see what changed, what you gained, and what it cost you. The goal is intuition, not dogma.


Stage 1: Everything Inline

The starting point. One function. No abstractions. No layers.

import { db } from "./db";
import { sendEmail } from "./email";

export async function createUser(name: string, email: string) {
  if (!name || name.length < 2) throw new Error("Invalid name");
  if (!email.includes("@")) throw new Error("Invalid email");

  const existing = await db.query("SELECT id FROM users WHERE email = $1", [email]);
  if (existing.rows.length > 0) throw new Error("Email taken");

  const result = await db.query(
    "INSERT INTO users (name, email) VALUES ($1, $2) RETURNING id",
    [name, email]
  );

  await sendEmail(email, "Welcome!", `Hi ${name}, welcome aboard.`);

  return { id: result.rows[0].id, name, email };
}

This is fine. Seriously. If your app has five endpoints and one developer, this is the correct level of architecture. You can read the entire business logic in 15 lines. There is no indirection. A junior engineer can understand it in 30 seconds.

What you have: speed, clarity, zero indirection.

What you do not have: testability (you need a real database and email server to test this), reusability (the validation logic is trapped here), and flexibility (changing the email provider means editing business logic).

Layer introduced: None. Raw function.
Benefits: Maximum readability. Zero learning curve. Ship today.
Costs: Cannot test without infrastructure. Cannot reuse validation. Side effects tangled with logic.
Use when: Solo dev, simple CRUD, prototype, <=10 endpoints.
Avoid when: Multiple callers need the same validation, or you need to test business rules in isolation.

Stage 2: Extract Concerns

The function is getting copy-pasted. A second endpoint also needs user validation. The email provider is changing next quarter. Time to pull things apart.

// validation.ts
export function validateUserInput(name: string, email: string) {
  const errors: string[] = [];
  if (!name || name.length < 2) errors.push("Name must be at least 2 characters");
  if (!email.includes("@")) errors.push("Invalid email format");
  return errors;
}

// createUser.ts
import { db } from "./db";
import { sendEmail } from "./email";
import { validateUserInput } from "./validation";

export async function createUser(name: string, email: string) {
  const errors = validateUserInput(name, email);
  if (errors.length > 0) throw new Error(errors.join(", "));

  const existing = await db.query("SELECT id FROM users WHERE email = $1", [email]);
  if (existing.rows.length > 0) throw new Error("Email taken");

  const result = await db.query(
    "INSERT INTO users (name, email) VALUES ($1, $2) RETURNING id",
    [name, email]
  );

  await sendEmail(email, "Welcome!", `Hi ${name}, welcome aboard.`);

  return { id: result.rows[0].id, name, email };
}

Not dramatic. You extracted a pure function for validation. Now you can test validation rules without a database. Other endpoints can reuse validateUserInput. The email call is still inline, but that is a problem for later.

This is the most common “good enough” point. Many production systems live here for years with no issues. You separated the pure logic (validation) from the impure logic (IO). That single move gives you most of the testability benefit at almost zero cost.

What changed: validation is reusable and independently testable.

What stayed the same: persistence and side effects are still inline.

Layer introduced: Pure functions extracted from the impure host.
Benefits: Validation is testable and reusable. Easy to understand.
Costs: Minimal. One extra import. One extra file.
Use when: You have shared validation logic or want to test business rules without IO.
Avoid when: This is almost always worth doing. Hard to overshoot here.

Stage 3: Application Service

The app is growing. You have three entry points that create users: an API endpoint, a CLI import script, and an admin panel. Each one is duplicating the “check for existing, insert, send welcome email” flow. Time for a service layer.

// userService.ts
import { UserRepository } from "./userRepository";
import { EmailService } from "./emailService";
import { validateUserInput } from "./validation";

export class UserService {
  constructor(
    private repo: UserRepository,
    private email: EmailService
  ) {}

  async createUser(name: string, email: string) {
    const errors = validateUserInput(name, email);
    if (errors.length > 0) throw new Error(errors.join(", "));

    const existing = await this.repo.findByEmail(email);
    if (existing) throw new Error("Email taken");

    const user = await this.repo.create(name, email);

    await this.email.sendWelcome(user);

    return user;
  }
}

// userRepository.ts
import { Pool } from "pg";
export class UserRepository {
  constructor(private db: Pool) {}

  async findByEmail(email: string) {
    const result = await this.db.query("SELECT * FROM users WHERE email = $1", [email]);
    return result.rows[0] ?? null;
  }

  async create(name: string, email: string) {
    const result = await this.db.query(
      "INSERT INTO users (name, email) VALUES ($1, $2) RETURNING *",
      [name, email]
    );
    return result.rows[0];
  }
}

Now you have a real service layer. UserService coordinates the workflow. UserRepository handles persistence. EmailService handles notifications. The service receives its dependencies through the constructor, so you can inject fakes in tests.

This is the point where most backend applications should settle. You can test the service layer with a mock repository and a mock email service. You can swap Postgres for MySQL by writing a new repository. You can change email providers without touching business logic. The cost is real but modest: more files, constructor wiring, and a layer of indirection between your endpoint and the database.

What changed: the orchestration logic is decoupled from infrastructure.

What you pay: constructor injection boilerplate, more files to navigate, the coordination logic is now one layer removed from the IO it orchestrates.
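To make the testing claim concrete, here is a sketch of wiring fakes through the constructor. The interfaces and fake classes below are illustrative (the article's real classes are concrete), and the service is trimmed to the duplicate check, insert, and email steps.

```typescript
interface User { id: string; name: string; email: string }

// Structural interfaces so fakes can stand in for the real infrastructure
interface Repo {
  findByEmail(email: string): Promise<User | null>;
  create(name: string, email: string): Promise<User>;
}
interface Email {
  sendWelcome(user: User): Promise<void>;
}

// Trimmed version of the UserService above: same flow, fewer details
class UserService {
  constructor(private repo: Repo, private email: Email) {}

  async createUser(name: string, email: string): Promise<User> {
    const existing = await this.repo.findByEmail(email);
    if (existing) throw new Error("Email taken");
    const user = await this.repo.create(name, email);
    await this.email.sendWelcome(user);
    return user;
  }
}

// In-memory fakes: no Postgres, no SMTP
class FakeRepo implements Repo {
  private users: User[] = [];
  async findByEmail(email: string) {
    return this.users.find((u) => u.email === email) ?? null;
  }
  async create(name: string, email: string) {
    const user = { id: `u-${this.users.length + 1}`, name, email };
    this.users.push(user);
    return user;
  }
}
class FakeEmail implements Email {
  sent: User[] = [];
  async sendWelcome(user: User) { this.sent.push(user); }
}
```

A test then becomes three lines: construct the fakes, call createUser, and assert on both the returned user and FakeEmail.sent. That second assertion, verifying a side effect happened without any real infrastructure, is exactly what Stage 1 could not give you.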

Layer introduced: Application service + repository. Constructor injection for dependencies.
Benefits: Testable in isolation. Multiple entry points share one flow. Swap infrastructure without touching logic.
Costs: More files. DI wiring. Indirection between endpoint and database.
Use when: Multiple callers, >1 developer, infrastructure likely to change, you need fast unit tests.
Avoid when: Single entry point, solo dev, prototype that might get deleted.

Stage 4: Ports, Adapters, and Effect

The system is getting serious. You have multiple services that interact. Error handling is scattered. You are passing 6 dependencies through constructor injection and the wiring code is painful. Failures are thrown as generic Error objects and you have no idea what can actually go wrong without reading the implementation. Time for typed effects.

This is where Effect earns its keep. Instead of hoping that every caller catches the right errors and that every dependency is wired correctly, you encode the contract into the type system.

import { Effect, Context, Layer, Data } from "effect";

// --- Errors: explicit, typed, not thrown ---

export class ValidationError extends Data.TaggedError("ValidationError")<{
  readonly errors: readonly string[];
}> {}

export class DuplicateEmailError extends Data.TaggedError("DuplicateEmailError")<{
  readonly email: string;
}> {}

// --- Ports: what capabilities does this workflow need? ---

export class UserRepo extends Context.Tag("UserRepo")<
  UserRepo,
  {
    readonly findByEmail: (email: string) => Effect.Effect<User | null>;
    readonly create: (name: string, email: string) => Effect.Effect<User>;
  }
>() {}

export class Notifications extends Context.Tag("Notifications")<
  Notifications,
  {
    readonly sendWelcome: (user: User) => Effect.Effect<void>;
  }
>() {}

// --- Domain logic ---

function validateUserInput(name: string, email: string) {
  const errors: string[] = [];
  if (!name || name.length < 2) errors.push("Name must be at least 2 characters");
  if (!email.includes("@")) errors.push("Invalid email format");
  return errors;
}

// --- Application workflow ---

export const createUser = (
  name: string,
  email: string
): Effect.Effect<User, ValidationError | DuplicateEmailError, UserRepo | Notifications> =>
  Effect.gen(function* () {
    const errors = validateUserInput(name, email);
    if (errors.length > 0) yield* new ValidationError({ errors });

    const repo = yield* UserRepo;
    const existing = yield* repo.findByEmail(email);
    if (existing) yield* new DuplicateEmailError({ email });

    const user = yield* repo.create(name, email);

    const notifications = yield* Notifications;
    yield* notifications.sendWelcome(user);

    return user;
  });

Read the return type: Effect<User, ValidationError | DuplicateEmailError, UserRepo | Notifications>.

That is the entire contract. This function produces a User, can fail with ValidationError or DuplicateEmailError, and requires UserRepo and Notifications to run. The compiler enforces all three. You cannot call it without providing the required services. You cannot ignore the possible errors without the type checker complaining. No more hidden throw statements that blow up in production because nobody knew that code path could fail.
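The "errors in the type" half of this contract can be sketched without the library, as a plain discriminated union. This is a rough, library-free approximation of what Effect's error channel gives you (it drops the dependency tracking entirely), using the same tags as above:

```typescript
// Library-free sketch of a typed error channel: a discriminated union
type CreateUserError =
  | { readonly _tag: "ValidationError"; readonly errors: readonly string[] }
  | { readonly _tag: "DuplicateEmailError"; readonly email: string };

// The caller must handle every variant. If a new error is added to the
// union later, the `never` check below becomes a compile error until
// this switch handles it -- no silent, unhandled failure paths.
function describeFailure(error: CreateUserError): string {
  switch (error._tag) {
    case "ValidationError":
      return `Invalid input: ${error.errors.join(", ")}`;
    case "DuplicateEmailError":
      return `Email already registered: ${error.email}`;
    default: {
      const exhaustive: never = error;
      return exhaustive;
    }
  }
}
```

Effect gives you the same exhaustiveness through combinators like catchTag, plus the requirements channel on top; the union above only shows why typed failures beat a generic thrown Error.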

The adapters live separately. They implement the ports:

// adapters/postgresUserRepo.ts
// (assumes a module-level pg Pool instance named `pool`)
export const PostgresUserRepoLive = Layer.succeed(UserRepo, {
  findByEmail: (email) =>
    Effect.tryPromise(() =>
      pool.query("SELECT * FROM users WHERE email = $1", [email])
    ).pipe(Effect.map((r) => r.rows[0] ?? null)),

  create: (name, email) =>
    Effect.tryPromise(() =>
      pool.query("INSERT INTO users (name, email) VALUES ($1, $2) RETURNING *", [name, email])
    ).pipe(Effect.map((r) => r.rows[0])),
});

// adapters/resendNotifications.ts
// (assumes a module-level Resend client instance named `resend`)
export const ResendNotificationsLive = Layer.succeed(Notifications, {
  sendWelcome: (user) =>
    Effect.tryPromise(() =>
      resend.emails.send({
        to: user.email,
        subject: "Welcome!",
        text: `Hi ${user.name}, welcome aboard.`,
      })
    ).pipe(Effect.asVoid),
});

Testing is trivial. You provide in-memory implementations via Layer:

const TestUserRepo = Layer.succeed(UserRepo, {
  findByEmail: (_email) => Effect.succeed(null),
  create: (name, email) => Effect.succeed({ id: "test-1", name, email }),
});

const TestNotifications = Layer.succeed(Notifications, {
  sendWelcome: (_user) => Effect.void,
});

const TestLayer = Layer.merge(TestUserRepo, TestNotifications);

// Runs the workflow with fake services, no database, no email provider
const result = await Effect.runPromise(
  createUser("Alice", "[email protected]").pipe(Effect.provide(TestLayer))
);

This is a significant jump in ceremony. You have tagged errors, service tags, layers, generators. A developer who has never seen Effect will need a few days to get comfortable. That is the cost. The benefit is that your system’s failure modes, dependencies, and contracts are all compiler-verified. As the system grows, that verification pays for itself in prevented production incidents and reduced coordination overhead across a team.

Layer introduced: Typed effect system. Ports as service tags. Adapters as layers. Errors in the type channel.
Benefits: Compiler-verified contracts. Typed errors. Trivial test wiring. Swappable adapters. Explicit dependency graph.
Costs: Learning curve (Effect is not trivial). More boilerplate per service. Generator syntax is unfamiliar. Hiring pool shrinks.
Use when: Multiple services interacting, complex failure modes, team >2 people, infrastructure that changes, you want fearless refactoring.
Avoid when: Simple CRUD apps. Solo dev who does not plan to grow the team. Prototypes. Anything where "move fast" matters more than "move safely."

Stage 5: Richer Domain Concepts

The business rules are getting complex. “Creating a user” now means: check if they are in a sanctions list, apply a referral bonus if they came through a partner, emit a UserCreated event so the billing service can provision a trial, and enforce an organization seat limit. Stuffing all of this into createUser makes it a 100-line function again, defeating the point of the service layer.

This is where domain modelling starts to pay. You push business rules into the domain objects themselves and use domain events to decouple side effects from the core workflow.

// domain/user.ts
export class EmailAddress {
  readonly _tag = "EmailAddress";
  private constructor(readonly value: string) {}

  static fromString(raw: string): Effect.Effect<EmailAddress, ValidationError> {
    return raw.includes("@")
      ? Effect.succeed(new EmailAddress(raw.toLowerCase().trim()))
      : Effect.fail(new ValidationError({ errors: ["Invalid email format"] }));
  }
}

export interface UserCreatedEvent {
  readonly _tag: "UserCreated";
  readonly userId: string;
  readonly email: string;
  readonly referralCode: string | null;
  readonly occurredAt: Date;
}

// domain/registrationPolicy.ts
export class SanctionsListError extends Data.TaggedError("SanctionsListError")<{
  readonly email: string;
}> {}

export class SeatLimitError extends Data.TaggedError("SeatLimitError")<{
  readonly orgId: string;
  readonly limit: number;
}> {}

export class ComplianceCheck extends Context.Tag("ComplianceCheck")<
  ComplianceCheck,
  {
    readonly screenEmail: (email: string) => Effect.Effect<void, SanctionsListError>;
  }
>() {}

export class OrgPolicy extends Context.Tag("OrgPolicy")<
  OrgPolicy,
  {
    readonly checkSeatLimit: (orgId: string) => Effect.Effect<void, SeatLimitError>;
  }
>() {}

The application workflow now composes domain policies and emits events:

export const createUser = (
  input: CreateUserInput
): Effect.Effect<
  User,
  ValidationError | DuplicateEmailError | SanctionsListError | SeatLimitError,
  UserRepo | Notifications | ComplianceCheck | OrgPolicy | EventBus
> =>
  Effect.gen(function* () {
    const email = yield* EmailAddress.fromString(input.email);
    const errors = validateName(input.name);
    if (errors.length > 0) yield* new ValidationError({ errors });

    const compliance = yield* ComplianceCheck;
    yield* compliance.screenEmail(email.value);

    if (input.orgId) {
      const orgPolicy = yield* OrgPolicy;
      yield* orgPolicy.checkSeatLimit(input.orgId);
    }

    const repo = yield* UserRepo;
    const existing = yield* repo.findByEmail(email.value);
    if (existing) yield* new DuplicateEmailError({ email: email.value });

    const user = yield* repo.create(input.name, email.value);

    const events = yield* EventBus;
    yield* events.publish({
      _tag: "UserCreated",
      userId: user.id,
      email: email.value,
      referralCode: input.referralCode ?? null,
      occurredAt: new Date(),
    });

    return user;
  });

The welcome email is no longer in the workflow. It reacts to the UserCreated event. The billing trial provisioning also reacts to that event. The referral bonus calculation reacts to it too. None of these side effects know about each other. The workflow does not know about them either.
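The decoupling can be seen with a toy in-memory bus. Everything here is a hypothetical sketch: a real bus would be an Effect service with persistence and retry, and the subscriber names are mine. The point is only that reactions register themselves and the publisher knows nothing about them.

```typescript
interface UserCreatedEvent {
  readonly _tag: "UserCreated";
  readonly userId: string;
  readonly email: string;
}

type Handler = (event: UserCreatedEvent) => void;

// Toy synchronous bus; illustrative only
class InMemoryEventBus {
  private handlers: Handler[] = [];
  subscribe(handler: Handler) { this.handlers.push(handler); }
  publish(event: UserCreatedEvent) {
    for (const handler of this.handlers) handler(event);
  }
}

const bus = new InMemoryEventBus();
const log: string[] = [];

// Three independent reactions. None knows the others exist,
// and the publishing workflow knows none of them.
bus.subscribe((e) => log.push(`welcome email to ${e.email}`));
bus.subscribe((e) => log.push(`trial provisioned for ${e.userId}`));
bus.subscribe((e) => log.push(`referral check for ${e.userId}`));

bus.publish({ _tag: "UserCreated", userId: "u-1", email: "alice@example.com" });
```

Adding a fourth reaction, say an audit-trail writer, is one more subscribe call in its own module. The createUser workflow does not change.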

This is the most expensive level of architecture. You have value objects, domain events, policy services, an event bus, and a growing R type parameter. The type signature alone tells you that this function touches four different services and can fail in four different ways. That is an enormous amount of information, but it is also an enormous amount of ceremony to read and maintain.

Layer introduced: Value objects. Domain policies as services. Domain events for side-effect decoupling.
Benefits: Business rules are testable in isolation. Side effects are decoupled via events. The type signature is a living specification. Adding a new reaction to user creation requires zero changes to the workflow.
Costs: High cognitive overhead. Many files. Event-driven debugging is harder than linear flow. Onboarding takes longer. You need an event bus abstraction.
Use when: Complex business rules that change independently. Multiple downstream consumers of domain events. Regulatory/compliance requirements that need audit trails. Team of 5+.
Avoid when: Your "domain" is CRUD. You have one downstream consumer. Your business rules fit in one function without squinting.

The Gradient at a Glance

Stage 1 (Inline) — Structure: none. Testability: requires infra. Cognitive load: lowest. Best for: solo dev, prototype, <=10 endpoints.
Stage 2 (Extract) — Structure: pure functions out. Testability: validation testable. Cognitive load: low. Best for: shared validation, still simple.
Stage 3 (Service) — Structure: DI, repo, service. Testability: full isolation. Cognitive load: medium. Best for: most production backends.
Stage 4 (Effect) — Structure: typed ports/adapters. Testability: compiler-verified. Cognitive load: high. Best for: multi-service, team >2, complex errors.
Stage 5 (Domain) — Structure: events, policies, value objects. Testability: surgical precision. Cognitive load: highest. Best for: complex domains, regulatory needs, team 5+.

A Decision Framework

Instead of asking “what is the right architecture?”, ask these questions:

How many people work on this code? A solo dev can stay at Stages 1-2. A team of 3+ benefits from Stage 3. A team of 5+ with complex domain rules may need Stages 4-5.


How many callers invoke this logic? One endpoint calling createUser? Stage 1 is fine. Three entry points? You need at least Stage 3.

How complex are the failure modes? If “it worked or it threw an error” is sufficient, stay low. If you need to distinguish between validation failures, duplicate conflicts, compliance rejections, and capacity limits, Stage 4’s typed errors are worth the cost.

How often does the infrastructure change? If you are locked into Postgres and Resend forever, inline calls are fine. If you swap providers quarterly, the port/adapter boundary pays for itself.

What is the cost of a bug in this code path? User registration in a fintech app with sanctions screening is not the same as user registration in a side project. Higher stakes justify more structure.

Can you defer the decision? Start at the lowest stage that works. Promote to the next stage when you feel specific, concrete pain. Not “this might be a problem someday” but “this is hurting us this week.” Going from Stage 2 to Stage 3 is a routine refactor. Going from Stage 1 to Stage 5 in one jump is a rewrite.


The Core Lesson

Architecture is not a badge of seniority. It is a cost you pay to solve specific problems. Every layer you add is a trade: you gain structure, testability, and flexibility; you pay with indirection, files, and cognitive load.

The best engineers do not rush to the top of the gradient. They also do not stay at the bottom out of false humility. They feel where the pain is, add exactly the structure that addresses it, and stop. They know that the right level of architecture today is the one that solves this week’s actual problems without mortgaging next month’s velocity.

Start naive. Feel the friction. Add a layer. Feel whether the friction decreased or just moved. Repeat. That is how you build architectural judgment. Not by reading a book about hexagonal architecture. By watching createUser() evolve and understanding exactly what each evolution bought you.

