
Breaking Down Large Tasks

AI produces better results with focused, bounded tasks. When a spec tries to cover too much, the plan becomes unwieldy, the implementation drifts, and verification gets harder. Splitting large features into smaller specs improves every stage of the ACT workflow.

Watch for these indicators:

  • More than 5 phases in the plan. Longer plans tend to accumulate errors that compound across later phases.
  • Multiple unrelated concerns. If the spec covers both UI changes and backend logic for different features, it should be split.
  • Requirements that could ship independently. If users would get value from half the requirements without the other half, those are separate specs.
  • The spec takes more than one screen to review. Long specs are hard to hold in context --- for both humans and AI.
  • Vague sections. If parts of the spec feel under-specified while others are detailed, the vague parts probably deserve their own spec with their own clarifying questions.

Each distinct user-facing flow becomes its own spec:

  • “User signs up” --- one spec
  • “User resets password” --- another spec
  • “User updates profile” --- another spec

Even though these all relate to “authentication,” they have separate UI, different edge cases, and can ship independently.

When a feature touches multiple data models, split along domain boundaries:

  • “Create the order data model and repository” --- one spec
  • “Build the order list UI” --- depends on the first, but has its own requirements
  • “Add order filtering and search” --- builds on the list, separate concerns

For screen-heavy features, split by screen or major UI section:

  • “Settings: theme and appearance” --- one spec
  • “Settings: notification preferences” --- another spec
  • “Settings: account management” --- another spec

Splitting by technical layer (data, domain, presentation) can work but often creates specs that are hard to verify in isolation. Prefer splitting by user flow or domain instead.

Each spec should be implementable on its own, but the specs need to fit together:

  • Define shared interfaces first. If two specs depend on the same data model, write a short spec for the data layer first. Implement it, then write the specs that depend on it.
  • Reference existing code, not future code. A spec should only reference files and patterns that already exist in the codebase, not things another spec will create.
  • Use boundaries to prevent overlap. Each spec’s “out of scope” section should explicitly mention related features being handled by other specs.
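As a sketch of that last point, an "out of scope" section for a hypothetical "Build the order list UI" spec might look like this (the feature names are invented for illustration):

```markdown
## Out of scope

- Order filtering and search (handled by a separate spec)
- Changes to the order data model (already implemented in the data-layer spec)
- Checkout and payment flows
```

Listing the neighboring specs by name makes the boundary explicit, so neither implementation drifts into the other's territory.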

When specs have dependencies, implement them in order:

1. Data model and repository (no dependencies)
2. Core business logic (depends on 1)
3. UI screens (depends on 1 and 2)
4. Polish and edge cases (depends on 3)

Run the full workflow (spec, plan, work) for each before starting the next. This way, each spec is planned against real, committed code rather than assumptions about what another spec will produce.

Not every feature needs splitting:

  • Simple features with one user flow, one screen, and fewer than 10 requirements --- keep them as a single spec.
  • Bug fixes that touch a specific area --- one spec is fine.
  • Refactors with a clear, mechanical scope --- one spec works well.

The goal is not to have the most specs. It is to have specs that are clear enough for AI to implement correctly.

Example: splitting a “shopping cart” feature


Instead of one spec for “build the shopping cart,” split into:

  1. Cart data model --- cart item model, local storage, add/remove/update operations
  2. Cart screen UI --- display items, quantities, subtotals, empty state
  3. Checkout flow --- address entry, payment method, order confirmation
  4. Cart badge and mini-cart --- header badge with count, slide-out preview
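To make the first split concrete, the cart data model spec might yield an increment along these lines. This is a minimal sketch, not the definitive implementation: all names are invented for illustration, and the spec's local-storage layer is stubbed with an in-memory dict.

```python
from dataclasses import dataclass


@dataclass
class CartItem:
    """One line in the cart; prices in cents to avoid float rounding."""
    product_id: str
    quantity: int
    unit_price_cents: int


class CartRepository:
    """In-memory stand-in for the spec's local-storage layer."""

    def __init__(self) -> None:
        self._items: dict[str, CartItem] = {}

    def add(self, item: CartItem) -> None:
        # Adding the same product again merges quantities.
        existing = self._items.get(item.product_id)
        if existing:
            existing.quantity += item.quantity
        else:
            self._items[item.product_id] = item

    def update_quantity(self, product_id: str, quantity: int) -> None:
        # A quantity of zero or less removes the line entirely.
        if quantity <= 0:
            self.remove(product_id)
        else:
            self._items[product_id].quantity = quantity

    def remove(self, product_id: str) -> None:
        self._items.pop(product_id, None)

    def subtotal_cents(self) -> int:
        return sum(i.quantity * i.unit_price_cents for i in self._items.values())
```

An increment this size is easy to review and test on its own, and the cart screen spec (split 2) can then reference `CartItem` and `CartRepository` as real, committed code rather than assumptions.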

Each spec has its own user flows, edge cases, and validation criteria. Each produces a reviewable, testable increment.