Economic Gravity
Every PFA engagement begins here. Not with a technology inventory. Not with a vendor evaluation. Not with a process map. With the question that most automation projects never formally ask: what actually moves this business, and by how much? The answer to that question determines every decision that follows.
Axiant maps four categories of economic driver: revenue acceleration (where does revenue grow or leak), margin recovery (where is operational inefficiency eroding margin), cycle time compression (where does time suppress growth or compound advantage), and utilization improvement (where are high-value people consuming their capacity on low-value work). Every automation candidate that emerges from the engagement must connect explicitly to one of these four categories.
The Driver Map is not a slide. It is a working artifact that gets updated throughout the engagement as the picture becomes clearer. It functions as the anchor for every downstream decision: which processes to examine in Stage 2, which candidates to qualify in Stage 3, and whether an automation succeeded in Stage 6. Without a Driver Map, there is no basis for any of those decisions.
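As a working artifact, a Driver Map can be represented as a small data structure. The sketch below is illustrative only; the field names, the example metric, and the validation logic are assumptions, not part of the PFA specification. Only the four driver categories come from the text:

```python
from dataclasses import dataclass, field

# The four driver categories PFA maps in Stage 1.
DRIVERS = ("revenue acceleration", "margin recovery",
           "cycle time compression", "utilization improvement")

@dataclass
class DriverEntry:
    driver: str      # must be one of DRIVERS
    metric: str      # the measurable quantity, e.g. "days sales outstanding" (hypothetical)
    baseline: float  # where the business stands today
    notes: list = field(default_factory=list)  # updated as the picture becomes clearer

@dataclass
class DriverMap:
    entries: list[DriverEntry] = field(default_factory=list)

    def add(self, entry: DriverEntry) -> None:
        # Every entry must connect explicitly to one of the four categories.
        if entry.driver not in DRIVERS:
            raise ValueError(f"not a PFA driver category: {entry.driver}")
        self.entries.append(entry)
```

The constraint in `add` enforces the rule stated above: no automation candidate enters the engagement without an explicit tie to a driver category.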
"Automation conversations do not begin with tools. They begin with economic gravity: the measurable forces that determine whether the business grows, stagnates, or contracts."
Organizations that skip this stage do not know what they are trying to improve. They select processes because they are visible, bounded, and technically approachable, not because they are connected to a driver. The automation may run perfectly. There is simply no way to know whether it mattered.
Operational Truth
Most organizations have two versions of every process. The documented version describes what should happen according to a policy written at some point in the past and updated infrequently since. The real version includes the workarounds developed by the team to handle what the policy did not anticipate, the informal approvals that bypass the official workflow, the tribal knowledge held by the three people who have been there long enough to know how it actually works.
This second version is the Shadow Process. It is what Axiant maps in Stage 2. Not the documentation. The reality. We examine what actually happens at each step: where friction accumulates, where human workarounds have become standard practice, where ambiguity creates inconsistent execution, and where handoffs break or introduce delay. The goal is clarity, not criticism. Every organization has Shadow Processes. The ones that automate successfully are the ones that surface them before building anything on top of them.
The Operational Truth mapping process involves structured interviews with the people who actually execute the process, observation of real execution where possible, and reconciliation of the gap between the documented and the actual. The output is not a cleaned-up version of the existing documentation. It is a stabilized process baseline that reflects how the work actually runs.
"You cannot automate the documented process and expect the real one to follow."
Stage 2 frequently uncovers the R element of DRIFT: Rules Undocumented. Rule clarity has a 0.87 correlation with automation success. It is the single strongest predictor in the PFA framework. Surfacing undocumented rules before automation design begins is not additional work. It is the work that prevents the most common and most expensive failure mode in automation.
The Automation Decision
This is where Process First Automation fundamentally departs from every vendor-led engagement and most consulting-led ones. The question in Stage 3 is not: how do we automate this process? The question is: should we automate this process, and to what degree? The answer is not assumed. It is evaluated.
Every process that emerges from Stage 2 is scored using the Process Readiness Score across five dimensions. Rule Clarity: are the rules documented and understood, or do they live in institutional memory? Driver Connection: does this process tie directly to a measurable business outcome? Process Stability: how consistent is execution day to day? Data Integrity: is the data feeding this process reliable? Human Dependency: is this process deterministic and rule-based, or does it require judgment and interpretation? Each dimension is rated on a 1 to 5 scale. The composite score determines which of the Four Paths applies.
The Four Paths are the classification outcomes. Automate: the process is stable, rules are clear, and the driver connection is strong. Deploy technology. Redesign: the process is broken or unstable. Fix it first, then re-evaluate for automation. Instrument: the process does not need automation, but it needs visibility. Add a data layer so leadership can see what is happening. Preserve: this process should remain human by design. Judgment, empathy, exception handling, and relationship management are human functions and should stay that way.
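The scoring and classification logic described above can be sketched in code. The five dimensions and the 1 to 5 scale come from the text; the numeric cutoffs and the precedence order (judgment-heavy processes checked first, then stability) are hypothetical illustrations, since PFA does not publish its actual thresholds:

```python
from dataclasses import dataclass

@dataclass
class ReadinessScore:
    rule_clarity: int        # 1-5: documented and understood vs. institutional memory
    driver_connection: int   # 1-5: tie to a measurable business outcome
    process_stability: int   # 1-5: day-to-day consistency of execution
    data_integrity: int      # 1-5: reliability of the data feeding the process
    human_dependency: int    # 1-5: 5 = fully deterministic, 1 = judgment-heavy

    def composite(self) -> float:
        return (self.rule_clarity + self.driver_connection +
                self.process_stability + self.data_integrity +
                self.human_dependency) / 5

def classify(score: ReadinessScore) -> str:
    # Hypothetical cutoffs for illustration only.
    if score.human_dependency <= 2:
        return "Preserve"    # judgment-heavy work stays human by design
    if score.process_stability <= 2:
        return "Redesign"    # fix the process first, then re-evaluate
    if score.composite() >= 4.0 and score.driver_connection >= 4:
        return "Automate"    # stable, clear rules, strong driver link
    return "Instrument"      # add visibility before committing to a build
```

Note the ordering: a judgment-heavy process is routed to Preserve regardless of its other scores, which mirrors the point that Preserve is a deliberate outcome, not a consolation prize for low composites.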
"When Axiant tells a client that a process should remain human, that is a trust-building moment no automation vendor will replicate. It signals that the engagement is governed by discipline, not billable hours."
The Preserve path deserves specific attention because it is the one that most surprises clients encountering PFA for the first time. The reflex assumption in any automation engagement is that more automation is better. PFA treats automation as a strategic choice, not a default. Some processes are better served by human judgment than by rule-based execution. Identifying those processes and leaving them human is not a failure. It is the methodology working correctly.
Human Amplification
Automation that eliminates human judgment does not create leverage. It creates brittleness. Systems that execute without human oversight at critical decision points are systems that fail at scale and fail without warning. Stage 4 is where the human architecture of every automation is designed deliberately, before the build begins.
The work of Stage 4 is explicit boundary design. What do humans decide? What do systems execute? What triggers escalation to a human reviewer? What constitutes an exception that requires human judgment rather than rule-based handling? These questions are not answered during build or discovered during a production incident. They are answered here, in writing, as a designed artifact.
The goal is not to minimize human involvement for its own sake. The goal is to deploy human judgment where it creates the most value: in revenue-generating work, in relationship management, in decisions that require context and interpretation that deterministic systems cannot replicate. Automation absorbs repetition. Humans retain agency. The result is not a reduced workforce. It is a workforce with greater capacity for the work that matters.
Exception handling is often where automation initiatives fail in production. An automation is built to handle the standard case. The standard case accounts for 80% of volume. The remaining 20% involves variations, edge cases, and situations the original designer did not anticipate. Without a designed exception path, those variations either fail silently, queue indefinitely, or get handled inconsistently by whoever notices first. Stage 4 designs the exception path as deliberately as the main path.
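A designed exception path reduces to an explicit routing decision, made in writing before the build. The sketch below is a minimal illustration; the case fields (`type`, `requires_judgment`) and the three-way routing are assumptions showing the shape of the boundary, not Axiant's implementation:

```python
from enum import Enum, auto

class Route(Enum):
    AUTO = auto()      # rule-based handling by the system
    ESCALATE = auto()  # routed to a human reviewer for judgment
    HALT = auto()      # stop and surface immediately; never fail silently

def route(case: dict, known_variants: set[str]) -> Route:
    """Decide the path for one case against explicit, written boundaries."""
    if case.get("type") in known_variants:
        return Route.AUTO          # the standard case: the bulk of volume
    if case.get("requires_judgment", False):
        return Route.ESCALATE      # a human decides; the system records
    return Route.HALT              # unanticipated variation: visible, not silent
```

The key design property is the default: anything the designer did not anticipate halts visibly rather than queuing indefinitely or being handled inconsistently by whoever notices first.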
Visible Systems
The Black Box problem is one of the most consistently damaging patterns in small and mid-market automation. An organization deploys an automation, the automation runs, and over the following weeks or months it drifts from its original intent: processing transactions with errors, applying logic that became outdated, handling exceptions incorrectly, or simply failing silently. Nobody notices because nobody built any mechanism for noticing.
By the time the problem surfaces, usually through a downstream complaint or an audit, the damage is already compounded. Weeks of bad data. Months of incorrect outputs. A remediation effort that costs more than the original automation saved.
Visible Systems is the stage that makes this pattern impossible by design. Every automation deployed under PFA has three things from its first day in production: measurable success criteria tied directly to the drivers established in Stage 1, real-time performance monitoring that detects failures before they propagate, and defined ownership so that every system has a named person responsible for its health.
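The three requirements, success criteria tied to a driver, live monitoring, and named ownership, can be captured in a single record per automation. This is a sketch under assumed names; the fields and the pass/fail check are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AutomationRecord:
    name: str
    driver: str                          # e.g. "margin recovery", from the Stage 1 Driver Map
    owner: str                           # named person responsible for system health
    success_metric: Callable[[], float]  # live measurement, not faith
    threshold: float                     # below this, the system is drifting from intent

    def pulse(self) -> bool:
        """True if the automation currently meets its success criterion."""
        return self.success_metric() >= self.threshold
```

An automation with no entry in this register, no owner, or a metric nobody reads is, by definition, a Black Box.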
Security alignment and data governance are addressed here as well, not retrofitted after a compliance question surfaces. The architectural decisions made in Stage 5 ensure that the automation is integrated into the organization's operational fabric rather than running as an isolated system that nobody fully understands.
"Faith is replaced with instrumentation. Every system has a pulse."
Proof and Iteration
Stage 6 closes the loop by returning to the starting point: the drivers established in Stage 1. Did revenue accelerate? Did margin recover? Did cycle times compress? Did your team reclaim high-value time? The answers to these questions, measured against the success criteria and Impact Windows established in Stage 3, determine what happens next.
Automations that demonstrate driver impact within their Impact Window are expanded. The scope widens. More process candidates are evaluated. The engagement deepens. Automations that reach their Kill Threshold without demonstrating impact are retired. Not deferred. Not given more time in the hope that something changes. Retired, with the resources redirected to initiatives with stronger foundations.
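The expand-or-retire logic follows directly from the Impact Window and Kill Threshold. A minimal sketch, assuming the success criterion reduces to a single measured delta against a Stage 3 target (the function name and signature are hypothetical):

```python
from datetime import date

def stage6_decision(impact_delta: float, target: float,
                    window_end: date, today: date) -> str:
    """Expand, keep measuring, or retire, per Stage 6 of the PFA Loop.

    impact_delta: measured change in the Stage 1 driver
    target: the success criterion set in Stage 3
    window_end: the last day of the Impact Window
    """
    if impact_delta >= target:
        return "expand"   # driver impact proven within the Impact Window
    if today <= window_end:
        return "measure"  # still inside the window; keep instrumenting
    return "retire"       # Kill Threshold reached: redirect resources
```

There is deliberately no "defer" branch: an automation that exhausts its window without demonstrating impact is retired, not given more time in the hope that something changes.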
Kill Thresholds are not punitive. They are honest. They reflect the reality that even well-qualified automations sometimes encounter production conditions that differ from the assumptions made during Stages 2 and 3. The discipline to retire what is not working, rather than defend it, is what separates organizations that compound automation value from those that accumulate automation debt.
Stage 6 also generates the data that makes the next cycle sharper. Every measurement, every refinement decision, every retirement creates institutional knowledge about what your organization's processes are actually capable of. The second cycle through the PFA Loop is always more targeted than the first because the Driver Map is more detailed, the process baselines are more accurate, and the qualification criteria are calibrated to real production data.