Design Principles

Every working system learns to protect itself from chaos. A function hides its variables, a module conceals its data, a service wraps its state behind an interface. Each act of encapsulation draws a thin skin between what must stay stable and what can change. In the language of physics, that skin is the surface. In design, it is the boundary.

The boundary defines the cost of communication. Across it flow data, dependencies, and assumptions — the equivalents of energy crossing an interface. The larger the boundary, the more energy the system must spend to remain coherent.

Good design, like nature, tends toward the smallest surface that can still contain the required volume of meaning.

Cohesion and Surface Tension #

In software architecture, cohesion measures how tightly related the elements inside a module are, and coupling measures how strongly it depends on others[1]. Together they act like the twin forces of attraction and tension in a droplet. A component with high cohesion and low coupling forms a stable “sphere”: internally strong, externally minimal.

When cohesion weakens, the internal forces disperse. The surface stretches, creating more contact with the outside world. Communication costs rise; the shape deforms. Eventually the design breaks into smaller droplets — microservices, submodules, fragments of intent. Stability is restored only when each regains its own local balance.

Layers as Nested Spheres #

A well-structured system rarely has one sphere; it has many. Each layer — method, class, module, subsystem — acts as a container for the layer beneath. The pattern resembles nested spheres, each enclosing another, like planets and their moons or cells within organisms. Dependencies should flow inward, toward more stable cores, while interfaces point outward to less certain environments[2].

When outer layers depend too deeply on inner detail, pressure builds unevenly. When inner layers leak their secrets outward, curvature fails — the surface no longer holds. Design maturity appears when these boundaries curve naturally around their contents, neither too tight nor too exposed.

Layering Enforcement (Inward Dependencies) #

To keep layers spherical and stable, set explicit import rules for the system:

  • Direction rule: outer may depend inward; inner must never depend outward.
  • No lateral coupling: peers at the same layer do not import each other directly.
  • Contract visibility: each layer exposes only a minimal public surface; internals remain hidden.
  • Dependency inversion: inner layers declare abstractions; outer layers provide implementations and wire them at the edge.
  • Boundary crossing: communicate across bounded contexts via explicit contracts (messages, APIs), not internal types.
  • Acyclic graph: the dependency graph respects a strict ordering of layers; cycles are not permitted.

Governance practices to make the rule real:

  • Maintain an explicit map of allowed dependencies between layers/components and review changes to it.
  • Add lightweight automated checks that fail builds on rule violations (imports must flow from outer to inner only)
  • Track “layer violations per change” and “cut size” (number of edges crossing a boundary) as health signals over time.
  • Use adapters to cross external boundaries and anti‑corruption layers to shield the core from foreign models.

Visual (allowed vs disallowed edges):

Layers (outer → inner)

[ L3: Delivery/UI ]
         |
         v   ✓ allowed (outer depends inward)
[ L2: Application ]
         |
         v
[ L1: Domain/Core ]
         |
         v
[ L0: Foundations ]

Disallowed examples

  Upward imports (inner → outer):
    L1 ────X──▶ L3

  Lateral (peer ↔ peer within same layer):
    L2.A ──X──▶ L2.B   (use an explicit contract/facade instead)

  Cycles across layers:
    L2 ─▶ L1 ─▶ L2   X  (must be acyclic)

Contract visibility

  Outer layers see only the public surface of the inner layer:
    L3 ─▶ L2.public → OK
    L3 ─▶ L2.internal → X (hidden)
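The direction, lateral, and acyclicity rules above can be enforced mechanically in a build check. A minimal sketch in Python, assuming imports have already been extracted as (importer, imported) module pairs; the layer names and `LAYERS` ordering are illustrative:

```python
# Illustrative layer ordering: higher number = more outer.
LAYERS = {"delivery": 3, "application": 2, "domain": 1, "foundations": 0}

def check_imports(module_layer, imports):
    """Return rule violations for a list of (importer, imported) pairs.

    Allowed: the importer's layer is strictly outer (higher number) than
    the imported module's layer. Lateral (same-layer) and upward (inner
    to outer) imports are flagged.
    """
    violations = []
    for importer, imported in imports:
        a = LAYERS[module_layer[importer]]
        b = LAYERS[module_layer[imported]]
        if a == b:
            violations.append((importer, imported, "lateral coupling"))
        elif a < b:
            violations.append((importer, imported, "inner depends outward"))
    return violations

module_layer = {
    "web.handlers": "delivery",
    "app.commands": "application",
    "core.account": "domain",
    "core.money": "domain",
}
imports = [
    ("web.handlers", "app.commands"),  # outer -> inner: allowed
    ("core.account", "web.handlers"),  # inner -> outer: violation
    ("core.account", "core.money"),    # same layer: violation (peers)
]
print(check_imports(module_layer, imports))
```

Because the allowed direction is a strict ordering of layer numbers, any graph that passes this check is automatically acyclic across layers.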

Domain-Driven Aggregates as Local Equilibria #

In domain-driven design[3], an aggregate groups entities that change together. It defines a boundary inside which consistency is guaranteed, and beyond which communication must pass through a single controlled point — the aggregate root. This boundary behaves like the membrane of a cell: it keeps internal change coherent and lets external messages in only through a defined channel.

+---------------------------+
|      Aggregate Root       |
|  +---------------------+  |
|  |  Entities & Value   |  |
|  |     Objects         |  |
|  +---------------------+  |
+---------------------------+

Each aggregate forms a miniature sphere of stability within the wider domain. Together they create a constellation of local equilibria — a galaxy of information spheres held in mutual orbit.

Aggregate Invariants (Inside One Transaction) #

An aggregate is responsible for preserving its own invariants. Typical examples:

  • Balance or quantity never goes negative
  • Cross-entity totals remain consistent within the boundary
  • Temporal limits (daily caps, windows) are enforced atomically
  • Version increases monotonically with each state change

Rules for enforcement:

  • All invariant-changing operations go through the aggregate root
  • Persist the aggregate and its emitted events in the same transaction
  • Reject operations that would break invariants; do not let callers “fix later”

Minimal shape of an aggregate method (pseudo):

AggregateRoot.apply(command):
  validate(command)
  ensure(invariants will hold after change)
  mutate(state)
  record(domainEvent)

Crossing Boundaries Reliably #

  • Outbox pattern: write domain events to an outbox table/stream within the same transaction as aggregate persistence. A separate relay publishes them to the outside world. This avoids “state updated, event lost” races.
  • Idempotency keys on commands: include a client‑supplied key so that retried requests are processed exactly once. Store processed keys per aggregate to make handling idempotent.
  • Optimistic concurrency: include version on writes; reject stale updates to avoid lost updates across concurrent writers.
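The optimistic-concurrency check is a compare-and-swap on the stored version. A minimal in-memory sketch, assuming a `StaleVersionError` that callers map to a retry or rejection (all names here are illustrative):

```python
class StaleVersionError(Exception):
    """Raised when expected_version no longer matches the stored version."""

class InMemoryStore:
    def __init__(self):
        self._rows = {}  # aggregate id -> (state, version)

    def load(self, agg_id):
        return self._rows.get(agg_id, (None, 0))

    def save(self, agg_id, state, expected_version):
        _, current = self._rows.get(agg_id, (None, 0))
        if current != expected_version:  # another writer got here first
            raise StaleVersionError(f"expected {expected_version}, found {current}")
        self._rows[agg_id] = (state, current + 1)  # bump version on every write

store = InMemoryStore()
store.save("acct-1", {"balance": 100}, expected_version=0)

# A concurrent writer that also loaded version 0 now loses the race:
try:
    store.save("acct-1", {"balance": 50}, expected_version=0)
except StaleVersionError:
    print("stale write rejected")
```

In a real database the same effect is achieved with a conditional update (`WHERE version = :expected`) inside the transaction that also writes the outbox.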

Visual: aggregate boundary with outbox and relay

Client Command
     |
     v
[ Command Handler ]
     |
     v   load/save (same DB transaction)
[   Repository   ] <───────────────┐
     |                              │
     v                              │
[    Aggregate    ]                 │
  - validate                        │
  - enforce invariants              │
  - mutate                          │
  - record domain event             │
     |                              │
     v                              │
[     Outbox      ] ────────────────┘   // written in same txn
     |
     v   async publish (at-least-once)
[   Outbox Relay  ] ───────────▶ [ External Consumers ]

Example (Command Handler + Repository, Tests) #

Pseudo design (language‑agnostic):

interfaces
  Repository {
    load(aggregateId): Aggregate
    save(aggregate, outboxEvents, expectedVersion): void  // same transaction
  }

  CommandHandler {
    handle(command, idempotencyKey): Result
  }

aggregate Account {
  state: { balance: Money, version: Int, processedKeys: Set<Key> }

  deposit(amount, key): Event
    require amount > 0
    require key not in processedKeys
    newBalance = balance + amount
    ensure newBalance >= 0
    balance = newBalance
    processedKeys.add(key)
    return Deposited(amount)

  withdraw(amount, key): Event
    require amount > 0
    require key not in processedKeys
    ensure balance - amount >= 0   // invariant: never negative
    balance = balance - amount
    processedKeys.add(key)
    return Withdrawn(amount)
}

CommandHandler.handle(cmd, key):
  acc = repo.load(cmd.accountId)
  evt = (cmd.kind == Deposit)  ? acc.deposit(cmd.amount, key)
      : (cmd.kind == Withdraw) ? acc.withdraw(cmd.amount, key)
      : fail "unknown"
  repo.save(acc, [evt], expectedVersion = acc.version)
  return Ok

Tests asserting invariants (sketch):

  • Reject overdraw: withdraw beyond balance fails; state unchanged; no outbox event
  • Idempotent deposit: repeating the same idempotency key does not change balance twice
  • Atomicity: when persistence fails, neither state nor outbox is partially written
  • Concurrency: two concurrent withdraws where only one fits — exactly one succeeds via version check
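The pseudo design above can be made concrete. A runnable Python sketch of the `Account` aggregate, covering the overdraw and idempotency cases from the test list (the error type and event tuples are illustrative):

```python
class InvariantViolation(Exception):
    pass

class Account:
    """Aggregate root: all invariant-changing operations pass through here."""

    def __init__(self):
        self.balance = 0
        self.version = 0
        self.processed_keys = set()
        self.pending_events = []  # domain events, persisted with state

    def _record(self, event):
        self.version += 1         # version increases monotonically
        self.pending_events.append(event)

    def deposit(self, amount, key):
        if key in self.processed_keys:   # idempotency: replayed command
            return
        if amount <= 0:
            raise InvariantViolation("amount must be positive")
        self.balance += amount
        self.processed_keys.add(key)
        self._record(("Deposited", amount))

    def withdraw(self, amount, key):
        if key in self.processed_keys:
            return
        if amount <= 0:
            raise InvariantViolation("amount must be positive")
        if self.balance - amount < 0:    # invariant: balance never negative
            raise InvariantViolation("insufficient balance")
        self.balance -= amount
        self.processed_keys.add(key)
        self._record(("Withdrawn", amount))

acc = Account()
acc.deposit(100, key="k1")
acc.deposit(100, key="k1")   # replay: no double credit
try:
    acc.withdraw(500, key="k2")
except InvariantViolation:
    pass                     # rejected; state unchanged
print(acc.balance, acc.version)  # 100 1
```

The atomicity and concurrency cases belong to the repository, which would persist `pending_events` to the outbox in the same transaction as the state and pass `acc.version` as the expected version.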

Microservices as Distributed Spheres #

When a system grows too large, a single boundary cannot hold it. Communication overload, scaling limits, and organizational friction stretch the surface until it ruptures. The result is a cluster of smaller spheres — microservices[4]. Each regains local symmetry, but at the cost of increased distance between them.

Just as planets maintain order through gravity, distributed systems rely on well-designed interfaces and contracts. The gravitational pull here is protocol: clear, versioned, minimal. When it weakens, services drift; when it becomes rigid, they collide.

Balance lives between freedom and cohesion, between autonomy and connection.

Boundary Quality (Measuring and Managing Interfaces) #

Measure chattiness and latency across boundaries:

  • Calls per workflow: minimize round‑trips across a boundary; aim for coarse‑grained interactions
  • Payload size: size per call and per workflow; avoid “fat” response shapes that couple clients to internals
  • Tail latency: track p50/p95/p99 and error rate for boundary calls[8]
  • Interaction budget: set a target maximum for cross‑service calls and end‑to‑end latency per key workflow
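These signals are cheap to compute from recorded boundary calls. A minimal sketch using the nearest-rank method, assuming per-call latency samples in milliseconds have already been collected (the sample values are illustrative):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample >= p% of all samples."""
    ranked = sorted(samples)
    k = math.ceil(p / 100 * len(ranked)) - 1
    return ranked[max(0, k)]

latencies_ms = [12, 15, 14, 13, 250, 16, 14, 13, 15, 900]  # illustrative
for p in (50, 95, 99):
    print(f"p{p} = {percentile(latencies_ms, p)} ms")
```

Note how a single slow call dominates p95 and p99 while leaving p50 untouched; this is why tail percentiles, not averages, belong in an interaction budget.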

Manage contracts deliberately:

  • Define contracts in shared, machine‑readable form (e.g., OpenAPI / Proto)[5][6]
  • Prefer additive, backward‑compatible changes (tolerant readers); avoid breaking renames/removals
  • Use consumer‑driven contracts to validate real usage and detect breaking changes early[7]
  • Keep interfaces minimal; expose intent (capabilities), not internal data models
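The tolerant-reader advice can be illustrated directly: read only the fields you rely on and ignore the rest, so additive provider changes do not break you. A minimal Python sketch (the payload shape and field names are illustrative):

```python
import json

def read_order(payload: str) -> dict:
    """Tolerant reader: extract only the contracted fields, ignore unknowns."""
    doc = json.loads(payload)
    # Only these fields form the contract we depend on; anything else,
    # including fields the provider adds later, is deliberately ignored.
    return {"id": doc["id"], "total": doc["total"]}

v1 = '{"id": "o-1", "total": 42}'
v2 = '{"id": "o-1", "total": 42, "loyalty_points": 7, "channel": "web"}'
assert read_order(v1) == read_order(v2)  # additive change: reader unaffected
```

A reader that deserialized the full payload into a rigid schema would treat the v2 response as a breaking change; the tolerant reader makes it a non-event.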

Versioning and deprecation:

  • Publish a clear versioning policy tied to compatibility guarantees (e.g., additive = non‑breaking)
  • Support a defined deprecation window; communicate timelines and migration guides
  • Run old and new contracts side‑by‑side during migrations; remove only after consumers exit
  • Track adoption with telemetry; do not cut earlier than the announced window

System as Graph (Formal Tie‑In) #

Model the system as a graph to make “surface vs. volume” measurable:

  • Nodes (V): components, modules, services, or bounded contexts
  • Edges (E): dependencies (build‑time imports), runtime calls, or dataflows

For a candidate boundary S ⊆ V (a module, subsystem, or context):

  • Cut set

    δ(S) = { (u,v) ∈ E | u ∈ S, v ∉ S } — edges crossing the boundary [9]

  • Cut size

    |δ(S)|

  • Volume proxy

    |V_in| = |S| (number of nodes inside S), or |E_in| = |{ (u,v) ∈ E | u,v ∈ S }| (internal edges)

  • Surface coefficient

    k(S) = |δ(S)| / |V_in| (or |δ(S)| / |E_in|) — lower is “more spherical”

Related graph measures:

  • Conductance φ(S) = |δ(S)| / min(vol(S), vol(¬S)) using degree‑weighted volume — low φ suggests a well‑separated cluster[10]
  • Modularity maximization partitions detect communities with dense internal edges and sparse cuts — strong candidates for bounded contexts[11]

Heuristics for boundaries

  • Choose boundaries where k(S) and φ(S) are low; re‑evaluate if k(S) trends upward (leaks)
  • Prefer partitions that increase modularity without fragmenting workflows
  • Track per‑release: cut size, surface coefficient, and cross‑boundary latency from key traces

Sketch

          δ(S)
      ┌───────────┐      ◀── edges crossing
      │  S (in)   │───▶
      │  ●─●─●    │      inside S: |V_in|=6, |E_in| dense
      │  │ │╲│    │      cut size |δ(S)| small → low k(S)
      │  ●─● ●    │
      └───────────┘
             │
             ▼ (outside)
             ●──●─●
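The quantities above can be computed directly from a dependency edge list. A minimal sketch, counting crossings in either direction and treating the graph as undirected for the degree-weighted volumes; the edges and boundary S are illustrative:

```python
def boundary_metrics(edges, S):
    """Cut size, surface coefficient k(S), and conductance phi(S).

    edges: iterable of (u, v) dependency pairs; S: nodes inside the boundary.
    """
    S = set(S)
    nodes = {n for e in edges for n in e}
    cut = [(u, v) for u, v in edges if (u in S) != (v in S)]  # crossing edges
    e_in = sum(1 for u, v in edges if u in S and v in S)      # internal edges
    deg = {n: 0 for n in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    vol_s = sum(deg[n] for n in S)              # degree-weighted volume of S
    vol_rest = sum(deg[n] for n in nodes - S)
    k = len(cut) / max(1, len(S))               # |δ(S)| / |V_in|
    phi = len(cut) / max(1, min(vol_s, vol_rest))
    return {"cut": len(cut), "internal_edges": e_in, "k": k, "phi": phi}

edges = [("a", "b"), ("b", "c"), ("a", "c"),  # dense inside S
         ("c", "x"),                           # one crossing edge
         ("x", "y"), ("y", "z")]
print(boundary_metrics(edges, S={"a", "b", "c"}))
```

Here S has three internal edges and a single crossing edge, so both k(S) and φ(S) are low: a "spherical" boundary. Tracking these values per release makes boundary erosion visible as a trend rather than a surprise.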

Overloaded Spheres #

Every sphere has a limit. When too much information, responsibility, or dependency accumulates inside a boundary, surface tension fails. The component collapses under its own weight or leaks complexity outward. In code, this appears as God classes, monolithic services, or fat controllers. In organizations, it appears as teams or processes that have absorbed more than they can contain. The cure is not destruction but division: breaking one unstable sphere into many that can hold their own equilibrium.

The Quiet Rule of Design #

Across all levels — from a function to an enterprise — the same principle repeats. Balance internal strength against external simplicity. Design each boundary so it can hold its meaning without excessive exposure. Let information flow only through deliberate, minimal interfaces. When these conditions align, complexity folds inward instead of spilling outward, and the system begins to curve naturally toward order.

A sphere forms not by force but by resolution. When tension and attraction find harmony, structure follows. That is the quiet geometry beneath every enduring design.

References

  1. Cohesion (computer science)
  2. Layered architecture (Multitier)
  3. Domain-driven design
  4. Microservices
  5. OpenAPI Specification
  6. Protocol Buffers
  7. Consumer-Driven Contracts (Pact)
  8. SLIs, SLOs, and SLAs (Google SRE)
  9. Graph cut
  10. Conductance (graph theory)
  11. Modularity (networks)