ARCAS Systems
Chapter 2

The 90/180/365 AI Roadmap

Why a roadmap belongs in the appendix

The playbook spine does not move. People before systems before AI. That order is the same in 2026 as it will be in 2030. What does move is what "AI in the operating model" looks like in any given year. This roadmap is dated, deliberately. Use it for sequence. Verify the costs before quoting any client.

A founder you might recognise

A founder runs a 28-person events business in Dubai. AED 7M (USD 1.9M) last year. The team uses ChatGPT for proposals when one of the senior coordinators remembers to. The founder has a Claude subscription nobody else logs into. There is one n8n workflow that nobody can quite remember setting up. The team is "doing AI" in the same way they were doing it 12 months ago, which is to say not really at all.

The roadmap below is the path that founder walked, starting with the data hygiene work and ending one year later with a measurement loop that runs without her. The pattern is what serious adoption looks like when a founder commits to sequence over splash.

Days 1 to 90: foundations and one workflow

Outcome at the end of the phase. The team has one source of truth for client records, the document store has been cleaned up, and one workflow has been automated end to end with measurement in place.

Three to five specific moves:

  • Pick the CRM (or formalise the one already in use) and migrate every client and prospect record into it. Decide field standards, train the team, run the discipline check from Data Discipline Before AI.
  • Clean the document store. One folder structure across every client. Archive everything older than 18 months that nobody references.
  • Pick one repetitive workflow that fails the cost test in AI vs Automation (Tier 1 use case, low risk, structured input). Document the manual process step by step.
  • Build the automation. Most likely n8n plus the Claude API for a sales follow-up draft, an inbox triage, a meeting summary loop, or a client report generator. A minimal sketch of the underlying call sits after this list.
  • Run the workflow for 30 days. Measure one number: hours saved, response time, or error rate. Compare it to the manual baseline.
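To ground the build step, here is a minimal sketch of the logic behind a sales follow-up draft, assuming the Anthropic Python SDK. The CRM field names and the model id are placeholders, and in a real build this logic would sit inside an n8n node rather than a standalone script.

```python
# Minimal sketch: draft a sales follow-up from a CRM record.
# Assumes the Anthropic Python SDK (pip install anthropic) with
# ANTHROPIC_API_KEY set in the environment. Field names and the
# model id are placeholders, not a recommendation.
import anthropic

client = anthropic.Anthropic()

def draft_follow_up(record: dict) -> str:
    """Return a follow-up email draft for one CRM record."""
    prompt = (
        f"Client: {record['client_name']}\n"
        f"Last meeting notes: {record['last_meeting_notes']}\n"
        f"Open items: {record['open_items']}\n\n"
        "Draft a short, professional follow-up email covering the open items."
    )
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder: use your current model
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text
```

Keeping the workflow this narrow is what makes the 30-day measurement in the last step meaningful: one structured input, one draft out, one number against the manual baseline.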

Budget for this phase. AED 30,000 to AED 60,000 (USD 8,170 to USD 16,335). CRM rollout (AED 15,000 to AED 25,000, USD 4,085 to USD 6,810), one Claude.ai workspace (AED 110, USD 30 per seat per month for the team that needs it), one n8n setup (AED 75 to AED 200, USD 20 to USD 54 per month plus AED 5,000 to AED 15,000, USD 1,360 to USD 4,085 in build time), and the founder's or operations lead's time, which is the largest hidden cost.

The role most likely to do the work. Founder plus operations lead, with one external partner for the CRM rollout and the n8n build if there is no one internal who can do it. Skip hiring a full-time AI lead in this phase. The work is process work, not model work.

The signal you can move on. The one workflow runs by itself, the team trusts it, and you have a real number that proves it returned more than it cost. A measured number rather than "feels faster." If you do not have a number, you are not ready for phase two.

Days 91 to 180: a second workflow, internal RAG, measurement loops

Outcome at the end of the phase. A second workflow is automated, internal RAG is live over the corpus that matters most for the business, and the measurement loop runs without the founder asking for it.

Three to five specific moves:

  • Pick the second workflow. Slightly higher risk than the first but still in the Tier 1 to Tier 2 band. Common second workflows are lead enrichment plus qualification scoring, a client report writer, or a quote generator that produces a one-page document in your brand format.
  • Stand up internal RAG. Pick the corpus first. Most service businesses see the biggest return from RAG over client records, past proposals, or a defined SOP library. Pick one. Resist the temptation to roll out RAG over "everything." A sketch of the query shape sits after this list. Reference: RAG: The AI That Reads Your Own Files.
  • Build the measurement loop. A weekly summary that pulls the numbers from the automated workflows and shows them in one dashboard or one Monday morning email. The loop must run without the founder pushing it. A sketch of the loop sits after this list.
  • Run a quality drift check. Pull ten outputs from the first workflow at random and read them line by line. Mark anything that has slipped. This becomes a monthly habit.
  • Train one team member as the AI steward. Their job is to watch the outputs, flag drift, and own the quality bar across the running workflows.
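For the RAG step, a minimal sketch of the query shape, assuming the chromadb and anthropic packages. The collection name, sample documents, and model id are illustrative, not a stack recommendation.

```python
# Minimal RAG sketch over a single corpus (past proposals).
# Assumes chromadb and anthropic are installed; sample content is invented.
import anthropic
import chromadb

chroma = chromadb.Client()
proposals = chroma.create_collection("past_proposals")

# Index the corpus once. Chroma embeds the documents with its default model.
proposals.add(
    ids=["prop-001", "prop-002"],
    documents=[
        "Gala dinner proposal: 400 guests, AV package, outdoor staging...",
        "Product launch proposal: 150 guests, press wall, catering tier B...",
    ],
)

def answer(question: str) -> str:
    """Answer a question grounded only in the closest matching proposals."""
    hits = proposals.query(query_texts=[question], n_results=3)
    context = "\n---\n".join(hits["documents"][0])
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=400,
        messages=[{
            "role": "user",
            "content": f"Using only these excerpts:\n{context}\n\nQuestion: {question}",
        }],
    )
    return response.content[0].text
```

The discipline is in the "only these excerpts" instruction and the single, named corpus: the model answers from your documents or not at all.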
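And a sketch of the Monday measurement loop, assuming each running workflow appends one row per run to a shared CSV log. The column names are invented for illustration; the summary string can feed an email step or a dashboard.

```python
# Minimal sketch of the weekly measurement loop. Assumes a shared CSV log
# with invented columns: workflow, run_date (ISO), hours_saved, error (0/1).
import csv
from collections import defaultdict
from datetime import date, timedelta

def weekly_summary(log_path: str = "workflow_log.csv") -> str:
    """Summarise the last seven days of workflow runs as plain text."""
    week_ago = date.today() - timedelta(days=7)
    totals = defaultdict(lambda: {"runs": 0, "hours_saved": 0.0, "errors": 0})
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if date.fromisoformat(row["run_date"]) < week_ago:
                continue
            t = totals[row["workflow"]]
            t["runs"] += 1
            t["hours_saved"] += float(row["hours_saved"])
            t["errors"] += int(row["error"])
    lines = [f"Week of {week_ago.isoformat()}"]
    for name, t in sorted(totals.items()):
        lines.append(
            f"{name}: {t['runs']} runs, {t['hours_saved']:.1f} hours saved, "
            f"{t['errors']} errors"
        )
    return "\n".join(lines)
```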

Budget for this phase. AED 25,000 to AED 50,000 (USD 6,810 to USD 13,615) incremental. Includes the second workflow build (AED 5,000 to AED 15,000, USD 1,360 to USD 4,085), the RAG setup (AED 15,000 to AED 40,000, USD 4,085 to USD 10,890 depending on whether you go off-the-shelf or custom), and ongoing operating cost rising to roughly AED 2,000 to AED 5,000 (USD 545 to USD 1,360) per month across all the running pieces.

The role most likely to do the work. Operations lead now leads. Founder reviews. External partner is needed only for the RAG build if it is custom. The AI steward role can be a 20 percent extension of an existing senior team member's job, treated as a role addition rather than a new hire.

The signal you can move on. Two workflows running, RAG answering at least 80 percent of the high-volume document questions correctly when sampled, and the measurement loop showing the trend without anyone manually compiling it. If quality is drifting on any of the three, fix it before adding more.

Days 181 to 365: agentic patterns, multi-step workflows, stewardship at scale

Outcome at the end of the phase. Multi-step workflows run with approval gates at the points where a human decision matters. The team does its work inside an operating model where AI is the boring background assistant, and the quality bar is held by a named person across every running piece.

Three to five specific moves:

  • Build a multi-step workflow with an approval gate. The classic pattern: AI drafts, AI summarises, the founder or account lead approves, AI executes. Common examples are proposal generation plus a pricing recommendation with founder sign-off before the proposal goes to the client, or supplier comparison with the operations lead approving before the quote request goes out. A sketch of the gate sits after this list.
  • Replace one specialist hire with a workflow plus a part-time human reviewer. Most realistic candidates are a junior research role, a junior reporting role, or a coordinator role that mostly produces standardised outputs.
  • Roll out the AI steward role formally. Job description, weekly time block, monthly drift check, quarterly review with the founder. The steward owns the quality bar across every workflow.
  • Build the cancellation test. Once a quarter, pause one workflow for a week and see what happens. If the team does not notice, it was not worth running. If they panic, the workflow has earned its place.
  • Refresh the corpus and the prompts. Models, tools, and the team's own processes change. Quarterly maintenance keeps the system honest.
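To make the approval gate concrete, a minimal sketch of the draft, approve, execute pattern. The gate here is a console prompt purely for illustration; a real build would usually put a Slack or email approval step inside the n8n workflow. The draft and send functions are stubs standing in for the Claude call and your delivery integration.

```python
# Minimal sketch of the draft -> approve -> execute pattern.
# All function bodies are stubs; only the shape of the gate matters.
def draft_proposal(brief: dict) -> str:
    # Stub: in practice, a Claude API call like the phase-one sketch.
    return f"Proposal draft for {brief['client_name']} ({brief['event_type']})"

def send_to_client(email: str, draft: str) -> None:
    # Stub: in practice, your email or document-delivery integration.
    print(f"Sent to {email}:\n{draft}")

def run_proposal_workflow(brief: dict) -> None:
    draft = draft_proposal(brief)                        # AI drafts
    print(draft)
    decision = input("Approve and send? [y/N] ").strip().lower()
    if decision == "y":                                  # the human decision gate
        send_to_client(brief["client_email"], draft)     # AI executes
    else:
        print("Parked for review. Nothing leaves without sign-off.")

run_proposal_workflow({
    "client_name": "Example Client",
    "event_type": "gala dinner",
    "client_email": "ops@example.com",
})
```

The gate belongs exactly where the human decision matters, which in this example is the moment before anything reaches the client.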

Budget for this phase. AED 30,000 to AED 70,000 (USD 8,170 to USD 19,055) across the year. New workflow build (AED 10,000 to AED 25,000, USD 2,720 to USD 6,810), incremental tooling (AED 1,000 to AED 3,000, USD 270 to USD 820 per month), and the AI steward role at 20 percent of a senior team member's time (priced at AED 3,000 to AED 6,000, USD 820 to USD 1,635 per month internally), offset by the saved cost of the role you did not have to hire (typically AED 8,000 to AED 12,000, USD 2,180 to USD 3,270 per month for a junior, sometimes more).

The role most likely to do the work. Operations lead and the AI steward together. The founder shifts to reviewing the system rather than building it. External partners are only needed for specific custom builds.

The signal you have arrived. AI is invisible to the team because it just works. The founder spends less than two hours a month on AI maintenance. There is at least one number on the wall that proves the system returned more than it cost across the year. The cancellation test shows that at least two of the running workflows are genuinely load-bearing for the business.

What this roadmap is and what it is not

The dates are guidance only. Most service businesses run at half this pace and still get there. The order matters more than the speed. Starting phase two before phase one is real is what produces the Marina Heights moment from Data Discipline Before AI, where AI confidently gives wrong answers because the data was never structured.

After day 365 the loop continues. Quality drift is permanent. New tools change the stack. The roadmap restarts every year, with the foundations holding and the top layer evolving.