Stop splitting ad systems into front end and back end work: how AI changes complex business software

Complex business software still needs human ownership, but the operating model should move from front end/back end handoffs to vertical slices, context packets, rule tables, acceptance tests and AI-assisted verification.

Publisher: WayDigital
Published: 2026-05-06 03:22 UTC
Language: en
Region: global
Category: Essays

My read of the situation is simple: the system is probably complex, but the current way of working is probably wrong.

An ad management system is not a toy. It may include accounts, roles, campaign setup, budgets, targeting, creatives, review, bidding, spend, attribution, reports, billing, risk controls and audit logs. Every module has exceptions. When business people ask for a feature, they are often not asking for a button. They are asking for a chain of rules.

Still, complexity does not mean one request should take one or two months. It also does not mean engineers must fully internalize every business detail before they can let AI write code. By May 2026, tools such as Codex and Claude Code have changed the center of software work. The human job should not be hand-writing most code. It should not be holding the entire business logic in someone's head either. The job is to define boundaries, organize context, create acceptance checks, and let AI implement and verify repeatedly.

If a team still receives a complex business request, splits it into "front end person" and "back end person", then spends weeks syncing context, designing APIs, filling gaps and waiting on each other, the structure itself is slowing the work down.

The short answer

  • Engineers still need business understanding, but not in the old way. They need to understand the goal, the rules that must not be broken, the common paths, the edge cases that matter and the acceptance criteria. Details should become searchable, testable and reusable context, not private memory.
  • The team should not spend weeks "fully understanding" a requirement before building. AI should help turn a large request into flows, state machines, rule tables, API drafts, test cases and risk lists. Humans review those artifacts. After that, build in vertical slices.
  • Front end versus back end should not be the default split. For an ad management system, the better split is by business loop. One person owns a complete slice from UI to API to data to tests. The second person reviews, checks rules, explores risks or owns another independent slice.
  • If small requests often become one-month or two-month projects, the bottleneck is probably not AI code generation. It is requirement intake, context management, acceptance criteria and task slicing.

AI did not remove business understanding. It changed its shape

There are two bad extremes here.

One extreme is the engineer saying: the business is too complicated, I must understand everything before I start. This made some sense in the old workflow because humans wrote most of the code and missing one rule could cause expensive rework.

The other extreme is the manager saying: AI is strong now, so engineers do not need to understand the business. Just ask AI to build it. That is not how these tools work. AI does not know your company's campaign rules, special billing cases or risk boundaries unless you provide them.

The better answer is in the middle. Humans do not need to memorize every detail. They do need to turn detail into artifacts that AI can use, the team can inspect and the system can test.

This matches what the industry has learned from agentic coding. Anthropic's Claude Code best-practices guidance says Claude Code can read files, run commands and make code changes, but that its output depends heavily on context and verification criteria. The guidance explicitly recommends giving tests, screenshots or expected outputs so Claude can check its own work, and it recommends exploring first, planning second and coding after that.

OpenAI's Codex documentation frames Codex around real codebases, controlled changes, reviews, testing, automation, knowledge work and production systems. OpenAI's Agents documentation also emphasizes evals, traces, guardrails and human review. In other words, the workflow is not "AI types code for me". The workflow is "AI works inside a track that can be checked".

Why the front end/back end split is costly here

The traditional split still works in some teams. But in business software like ad management, the screen is only the entry point for the rule, and the API is only the carrier of the rule. The hard part is the business loop.

Take a feature like "copy an ad campaign". It sounds like a button. In practice it may involve (a code sketch follows the list):

  • which fields can be copied and which must be reset;
  • whether creative review status carries over;
  • whether budget, bidding, targeting and audience packages can be reused;
  • whether the copied campaign enters draft status;
  • which roles have permission to copy;
  • what error message appears when copy fails;
  • whether the action enters the audit log;
  • whether reporting and attribution must exclude pre-copy data.
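
To make the chain concrete, here is a minimal sketch of the copy rules written down as data instead of prose. This is TypeScript, and the field names, the policies and the `copyCampaign` helper are illustrative assumptions, not a real API:

```typescript
// Sketch: the copy rules as an explicit, reviewable table. The real list
// of fields and policies comes from the business rule table, not from
// engineering guesses.

type CopyPolicy = "copy" | "reset" | "forbid";

const campaignCopyPolicy: Record<string, CopyPolicy> = {
  name: "copy",
  targeting: "copy",
  creatives: "copy",
  reviewStatus: "reset", // assumed: review status does not carry over
  status: "reset", // assumed: the copy always starts as a draft
  spendToDate: "reset", // historical spend stays with the original
  billingEntity: "forbid", // never inherited across accounts (agency case)
};

function copyCampaign(
  source: Record<string, unknown>,
  defaults: Record<string, unknown>,
): Record<string, unknown> {
  const copy: Record<string, unknown> = {};
  for (const [field, policy] of Object.entries(campaignCopyPolicy)) {
    if (policy === "copy") copy[field] = source[field];
    if (policy === "reset") copy[field] = defaults[field];
    // "forbid" fields are simply never written to the copy
  }
  return copy;
}
```

Every row in that table is a question the business can answer and a line a reviewer can challenge. That is what the rule chain looks like once it stops being "a button".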

If the team splits this into front end and back end, the front end engineer asks about API fields and states, while the back end engineer asks about screen behavior and business exceptions. Both sides wait for the other half of the truth. AI can generate UI and API code, but it cannot fix a broken information structure by itself.

The better split is vertical. One person owns the whole "copy campaign" loop: page, API, data, permissions, logs and tests. The second person is not idle. They review rules, run counterexamples, inspect tests, check side effects or work on another slice.

This does not require one person to be a traditional full-stack master. AI-native full stack means something different: one person can own a business outcome and use AI plus tools to connect UI, server, data and tests.

Large requirements should not enter development as large requirements

Business teams will always ask for large things. That is normal. They want broader, fuller solutions. The problem is not that the request is large. The problem is accepting it in that shape.

The old workflow is familiar: meeting, notes, flowchart, document, API split, schedule, development. Once the ad logic gets complicated, this workflow grows heavier. The document gets longer, fewer people fully understand it, and the answer becomes another meeting.

The AI-era workflow should be different.

Step 1: ask AI to turn the big request into structured artifacts

The business input can be large. Engineering should not jump straight into implementation. First, use AI to produce:

  • one sentence goal: what business result changes;
  • user roles: who can and cannot use it;
  • main path: the normal path from entry to completion;
  • state machine: object states and allowed transitions;
  • rule table: conditions, actions and exceptions;
  • counterexample list: what must not happen;
  • data impact: fields, tables, events and logs affected;
  • acceptance cases: business, UI, API, permissions, errors and regression tests;
  • slice list: what can ship independently and what must ship together.

Humans should not write all of this from a blank page. AI is good at turning long requirements into rule tables and test cases. The human job is to check: what is missing, what assumption is wrong, what part should not be built yet?
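
As one concrete example, the "state machine" artifact does not have to be a diagram. Using the campaign lifecycle described later in this piece, it can be a small piece of reviewable data. A minimal sketch in TypeScript, where the exact transition set is an assumption for the business to confirm or correct:

```typescript
// States and allowed transitions written as data, so the team can review
// them and tests can assert against them. The transition set below is an
// assumption to be corrected by the business, which is exactly the point.

type CampaignState = "draft" | "review" | "live" | "paused" | "archived";

const allowedTransitions: Record<CampaignState, CampaignState[]> = {
  draft: ["review", "archived"],
  review: ["draft", "live"], // rejected back to draft, or approved to live
  live: ["paused", "archived"],
  paused: ["live", "archived"],
  archived: [], // terminal: nothing comes back from the archive
};

function canTransition(from: CampaignState, to: CampaignState): boolean {
  return allowedTransitions[from].includes(to);
}
```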

Step 2: business must provide examples and counterexamples, not only wishes

Abstract language kills complex business software. "Budget rules depend on customer type" is not enough. Better input looks like this:

  • Type A customer, daily budget below 1000: allow save but show a warning;
  • Type B customer, qualification not approved: do not allow campaign launch;
  • Agency account copying a campaign: do not copy the original customer's billing entity;
  • Historical performance data must not be reattributed because a campaign was copied.

Examples and counterexamples are more useful than long flowcharts. They can become tests. They can be fed directly to Codex or Claude Code.
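
A minimal sketch of that step, assuming a Vitest- or Jest-style test runner and a hypothetical `validateCampaignAction` function standing in for whatever the codebase actually exposes:

```typescript
import { describe, expect, it } from "vitest";

// Hypothetical module under test; the real codebase would export its own
// validation entry point.
import { validateCampaignAction } from "./campaignRules";

// Each row is one business example or counterexample, carried into the
// test suite almost verbatim from the requirement intake.
const cases = [
  {
    name: "Type A customer, daily budget below 1000: save with warning",
    input: { customerType: "A", dailyBudget: 900, action: "save" },
    expected: { allowed: true, warning: true },
  },
  {
    name: "Type B customer, qualification not approved: launch forbidden",
    input: { customerType: "B", qualificationApproved: false, action: "launch" },
    expected: { allowed: false },
  },
];

describe("campaign rules from business examples", () => {
  for (const c of cases) {
    it(c.name, () => {
      expect(validateCampaignAction(c.input)).toMatchObject(c.expected);
    });
  }
});
```

When the business adds a new counterexample, it becomes a new row, not a new meeting.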

Step 3: every slice should show a working result in three to five days

Not every system can ship in a day. But if a slice cannot produce a working result in three to five days, it is probably still too large.

An ad system feature can be sliced like this:

  • first build the read-only detail view and get fields, states and permissions right;
  • then build draft creation without live delivery;
  • then add save validation and error messages;
  • then add audit logs;
  • then add the review flow;
  • finally connect launch controls and reporting effects.

Each slice has acceptance cases. After AI finishes a slice, it runs tests, summarizes changes and lists risks. Humans review the result. They should not wait a month to see the first integrated version.

How much business detail should engineers understand?

This deserves a precise answer.

Engineers do not need to become ad operations experts. They do not need to memorize every customer, every channel and every historical exception. But they must understand six things.

  • The business goal. Is the feature meant to improve campaign setup speed, reduce mistakes, support a new customer type or satisfy compliance? The goal changes tradeoffs.
  • The boundaries that must not break. Money, permissions, launch status, attribution, audit logs and compliance review cannot be handled casually.
  • Core states and lifecycle. A campaign moves from draft to review to live to paused to archived. The allowed actions in each state must be clear.
  • Common paths and common exceptions. They do not need every rare exception upfront, but they need the frequent ones.
  • Acceptance criteria. What does correct mean? How should the page look, the API respond, the error appear, the log record and the test pass?
  • Impact range. Will the change affect reports, billing, permissions, old data or existing customers?

Everything beyond that should go into rule libraries, case libraries, tests and context packs. Humans can look it up. AI can read it. The system can test it.

If an engineer says, "I must understand every detail before I start," the team probably has not turned business knowledge into assets. It is still operating like a workshop.

How the two engineers should work together

The team does not necessarily need to shrink from two people to one. But the split should change.

I would move from "front end owner plus back end owner" to "slice owner plus quality/context owner", and rotate the roles.

  • Slice owner. Owns one complete business slice. Uses AI to read the codebase, read documents, generate a plan, change UI, change API, add tests and run verification. This person owns the business result.
  • Quality and context owner. Checks whether the slice is too large, whether the rule table is complete, whether tests include counterexamples, whether AI misunderstood context and whether the change affects other modules. This person can also ask AI to independently generate a second risk list or test list.

On the next slice, swap roles. Over time both people develop full-loop capability. The team avoids front end waiting for back end and back end waiting for front end. Each feature still gets a second pair of eyes, and business knowledge does not sit with one person.

If a feature is truly large, parallelize by business slice, not by UI versus server. One person can own "draft creation loop" while the other owns "review log loop". Do not make one person only write screens and the other only write database changes.

A two-week operating model for this ad management system

This can be tested without a reorganization.

1. Create a requirement intake template

Every request should include the minimum input below:

  • business goal;
  • user roles;
  • three to five normal examples;
  • three to five counterexamples;
  • any money, permission, launch status, reporting or review impact;
  • release priority: what must be in version one and what can wait.

Business people do not need to write it perfectly. AI can help clean it up. But the content cannot be absent.

2. Engineering starts with an AI-generated requirement packet

The packet should include flow, state machine, rule table, API draft, data impact, acceptance cases and risk list. The owner reviews and corrects it instead of starting from a blank document.

3. Define acceptance before schedule

Every slice needs runnable acceptance criteria. For example:

  • given an unapproved account, campaign creation returns which error;
  • given an agency account, campaign copy clears which fields;
  • given a paused campaign, which buttons are disabled;
  • given a budget change, which fields are recorded in the audit log.

Once these cases exist, Codex or Claude Code has a track. Without them, AI guesses.
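
To show what a "track" looks like, here is one of those cases as a runnable test. The helpers `makeAccount`, `makeCampaign` and `copyCampaign` are hypothetical stand-ins for the project's real fixtures and API:

```typescript
import { describe, expect, it } from "vitest";

// Hypothetical fixtures and API; the real project supplies its own.
import { copyCampaign, makeAccount, makeCampaign } from "./testSupport";

describe("acceptance: copying a campaign from an agency account", () => {
  it("clears the original customer's billing entity", () => {
    const agency = makeAccount({ kind: "agency" });
    const original = makeCampaign({ billingEntity: "customer-123" });

    const copied = copyCampaign(original, { actor: agency });

    // The boundary that must not break: billing never carries over.
    expect(copied.billingEntity).toBeUndefined();
  });
});
```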

4. Build in vertical slices

A slice includes UI, API, data, permissions, logs and tests. Keep it small, but keep it complete. Avoid the old rhythm where front end finishes first, then waits for back end, then both wait for integration.

5. Every AI-assisted change must include three outputs

  • change summary: which files changed and why;
  • verification result: which tests ran, which did not, and why;
  • risk list: which business rules still need confirmation.

This is not paperwork. Anthropic and OpenAI both emphasize verification, traces, review and guardrails in agent workflows. In complex business systems, without those checks, stronger AI can simply produce more complete mistakes.

6. Capture business rules every week

Move confirmed rules, counterexamples, tests, field definitions and permission boundaries into project context files such as AGENTS.md, CLAUDE.md, business rule docs and test datasets. Future AI runs must read them.

This looks boring. It is also where speed compounds. The team stops re-understanding the business every time and starts reusing confirmed context.
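
A sketch of what "captured" can mean in file form, assuming TypeScript; the file name and fields are hypothetical:

```typescript
// confirmedRules.ts (hypothetical): rules the business has signed off on,
// kept next to the code so tests and future AI runs read the same source.

export interface ConfirmedRule {
  id: string;
  rule: string; // the rule in plain business language
  confirmedBy: string; // who on the business side confirmed it
  confirmedOn: string; // ISO date, so stale rules are easy to spot
  testFile?: string; // where the rule is enforced, if automated
}

export const confirmedRules: ConfirmedRule[] = [
  {
    id: "BILL-7",
    rule: "Copying a campaign from an agency account never copies the billing entity.",
    confirmedBy: "ads-ops",
    confirmedOn: "2026-04-28",
    testFile: "copyCampaign.acceptance.test.ts",
  },
];
```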

Large requirements are not only an engineering problem

If business people always submit huge requests, engineers should not carry all the blame.

Business teams naturally want full solutions. But in an AI-era workflow, they also have to change what they provide. They cannot only say "we need a complete ad management capability" and pass all ambiguity to engineering. They must provide priorities, examples, counterexamples and non-negotiable boundaries. Otherwise AI is still guessing through vague text.

Management should set one rule: large requests may enter the pipeline, but they cannot enter development in large-request form. Every request must be decomposed with AI into slices that can be accepted, rolled back and shipped.

Should one person own it?

One person can own a slice. One person should not be the permanent single point of failure for the whole system.

For an ad management system, the safer structure is a two-person AI-native team:

  • one person is the owner for the current slice and delivers it end to end;
  • one person is the reviewer for rules, tests, risks and maintainability;
  • they swap on the next slice;
  • confirmed rules go into shared context and tests.

This keeps the collaboration, but removes the front end/back end split.

Five metrics to see whether the team is improving

  • Can a request be decomposed into slices within one day?
  • Can the first working slice appear within three to five days?
  • Does every slice have acceptance cases instead of only verbal confirmation?
  • Does every AI-generated change include tests, a change summary and a risk list?
  • Are business rules captured weekly instead of being explained again each time?

If the answer is no, do not stop at "the business is complicated". Change the operating model first.

Complex business software still needs human ownership in the AI era. The owner should not spend most of the time hand-writing code or repeatedly absorbing requirements in meetings. The owner should turn messy business logic into context, rules, tests and shippable slices. AI should generate, reorganize, compare and verify. Humans keep the boundaries, make the judgment calls and own the result.

That is a more reasonable way to build an ad management system in 2026.
