COMPANY: Meridian Advisory Group
Professional services · 310 employees
DRIFT PATTERN: Rules Undocumented · Invisible Execution
PROCESS EVALUATED: A ticket routing automation that nobody could explain, built on 140 keyword rules that an engineer wrote before leaving the company.
Meridian's IT team was proud that their service desk tickets routed automatically. What they could not explain was why approximately 30% of them required manual re-routing every week. The routing automation had been built by a senior engineer 18 months before the Diagnostic, and it had worked reasonably well when she was there. When she left, the logic left with her. Current staff knew the automation existed. They knew it routed tickets. They did not know the rules. When the team was asked to pull documentation, they found a configuration export they could not interpret and a spreadsheet that appeared to be a first draft of the logic but bore no resemblance to what was in production. The DRIFT assessment identified the core problem immediately: 140 keyword triggers governing a business-critical workflow, with no owner, no documentation, and no way to update it safely. The automation was not broken. The process underneath it was invisible.
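To make the before-and-after concrete, here is a minimal sketch contrasting a flat keyword-trigger table with the kind of structured category map the redesign moved toward. It is illustrative only: the keywords, queue names, and category labels are hypothetical, and Meridian's actual rule set is covered in the walkthrough, not here.

# Illustrative sketch only. The keywords, queues, and categories below are
# hypothetical; they are not Meridian's production rules.

# Drifted state: a flat list of keyword triggers, each mapping straight to a
# queue, with no grouping, no owner, and ~140 entries in production.
KEYWORD_TRIGGERS = {
    "vpn": "network-ops",
    "password": "identity",
    "reset": "identity",
    "invoice": "finance-systems",
}

def route_by_keyword(subject: str) -> str:
    """First keyword that matches wins; everything else lands in a catch-all queue."""
    text = subject.lower()
    for keyword, queue in KEYWORD_TRIGGERS.items():
        if keyword in text:
            return queue
    return "triage"

# Redesigned state: a small set of structured categories, each owning its
# keywords and destination queue, so rules are reviewed per category rather
# than per keyword.
CATEGORIES = [
    {"name": "Access & Identity", "queue": "identity",
     "keywords": ["password", "reset", "mfa", "locked out"]},
    {"name": "Network", "queue": "network-ops",
     "keywords": ["vpn", "wifi", "dns"]},
    {"name": "Finance Systems", "queue": "finance-systems",
     "keywords": ["invoice", "expense"]},
]

def route_by_category(subject: str) -> str:
    """Route against categories; unmatched tickets still fall through to triage."""
    text = subject.lower()
    for category in CATEGORIES:
        if any(keyword in text for keyword in category["keywords"]):
            return category["queue"]
    return "triage"

print(route_by_keyword("VPN keeps dropping"))       # network-ops
print(route_by_category("Locked out after reset"))  # identity

The difference is not the matching logic; it is that a category can be owned, documented, and updated without touching the other rules.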
PATH TAKEN: Redesign
KEY OUTCOME: Under 6% re-routing rate after the routing logic was rationalized from 140 keyword triggers to 23 structured categories, down from 30%.
Read the walkthrough →

COMPANY: Clearwater Health Technologies
Healthcare technology · 520 employees
DRIFT PATTERN: Invisible Execution · Rules Undocumented
PROCESS EVALUATED: New hires were receiving access to systems they should not have had. The provisioning automation was working exactly as designed: the design was 16 months out of date.
Clearwater's access provisioning automation had been built carefully. Role templates defined what each job type should access. HR events triggered the workflow. The system ran cleanly and reliably. What nobody had reviewed in 16 months was the role template library. In that time, the company had added three applications to its environment, restructured two roles that affected approximately 60 employees, and deprecated one system that was still listed in two templates. New hires were being provisioned against templates that no longer reflected their actual roles. The security team's quarterly access review caught the first anomaly: a new financial analyst had been provisioned with access to a legacy client portal that the role should never have touched. When the team traced it, they found the template had not been reviewed since it was created. The provisioning automation had been running cleanly on wrong inputs for over a year.
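A minimal sketch of the kind of template audit the rebuilt process depends on follows: compare each role template against the current application catalog and the review cadence, and flag anything stale or deprecated. The field names, application names, and 90-day window are assumptions for illustration, not Clearwater's actual configuration.

# Illustrative sketch only. Template fields, application names, and the
# 90-day review window are assumptions, not Clearwater's configuration.
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=90)  # a quarterly review cadence

# A role template as it might be exported: the applications a job type
# should be provisioned into, plus when the template was last reviewed.
ROLE_TEMPLATES = {
    "financial-analyst": {
        "apps": ["erp", "bi-dashboard", "legacy-client-portal"],
        "last_reviewed": date(2023, 1, 15),
    },
}

ACTIVE_APPS = {"erp", "bi-dashboard", "crm", "expense-tool"}
DEPRECATED_APPS = {"legacy-client-portal"}

def audit_template(name: str, today: date) -> list:
    """Flag stale templates and references to deprecated or unknown systems."""
    template = ROLE_TEMPLATES[name]
    findings = []
    age = today - template["last_reviewed"]
    if age > REVIEW_WINDOW:
        findings.append(f"{name}: not reviewed in {age.days} days")
    for app in template["apps"]:
        if app in DEPRECATED_APPS:
            findings.append(f"{name}: references deprecated system '{app}'")
        elif app not in ACTIVE_APPS:
            findings.append(f"{name}: references unknown system '{app}'")
    return findings

for finding in audit_template("financial-analyst", date(2024, 6, 1)):
    print(finding)

The check itself is trivial; the drift happened because nothing and nobody was scheduled to run it.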
PATH TAKEN: Redesign, then Automate
KEY OUTCOME: Zero over- and under-provisioning incidents in the following audit cycle, after role templates were rebuilt and a quarterly review cadence was established.
Read the walkthrough →

COMPANY: Vantara Capital Services
Financial services · 780 employees
DRIFT PATTERN: Invisible Execution · Disconnected from Drivers
PROCESS EVALUATED: 847 configured monitoring alerts, firing 1,200 notifications per day on average. The incident that caused a two-hour outage had fired 14 alerts before it became critical. All 14 were buried.
Over four years, Vantara's infrastructure team had built an alert coverage model that, by their own admission, they no longer trusted. Every system had alerts. Critical thresholds had been set during initial deployment and rarely revisited. The alert volume had grown to the point where engineers triaged selectively, reviewing the ones that looked serious based on pattern recognition, not on severity classification. The DRIFT assessment identified the core failure: the alert configuration had never been tied to what the business actually cared about. Thresholds were set to the monitoring platform's defaults, not to the conditions that preceded actual incidents. The Operational Truth mapping session ran the prior 12 months of incident data against the alert log. The pattern was consistent: real incidents were preceded by alerts that looked identical to dozens of routine notifications. The signal was there. It was surrounded by noise at a ratio that made response impossible.
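A minimal sketch of that mapping exercise: replay incident start times against the alert log and count how often each alert actually fired in the window before a real incident. The log format, alert names, and 30-minute lookback window are assumptions; the point is the precursor count that separates alerts worth keeping from noise.

# Illustrative sketch only. Alert names, timestamps, and the 30-minute
# lookback window are assumptions, not Vantara's data.
from collections import Counter
from datetime import datetime, timedelta

LOOKBACK = timedelta(minutes=30)

# (fired_at, alert_name) pairs from the alert log
alerts = [
    (datetime(2024, 3, 1, 9, 40), "db-connection-pool-high"),
    (datetime(2024, 3, 1, 9, 55), "disk-usage-warning"),
    (datetime(2024, 3, 1, 10, 5), "db-connection-pool-high"),
    (datetime(2024, 3, 2, 14, 0), "disk-usage-warning"),
]

# Incident start times taken from the prior period's incident reviews
incidents = [datetime(2024, 3, 1, 10, 15)]

def precursor_counts(alerts, incidents, lookback):
    """Count how often each alert fired in the window before a real incident."""
    counts = Counter()
    for started in incidents:
        for fired_at, name in alerts:
            if started - lookback <= fired_at <= started:
                counts[name] += 1
    return counts

totals = Counter(name for _, name in alerts)
precursors = precursor_counts(alerts, incidents, LOOKBACK)
for name in totals:
    # Alerts that never precede an incident are candidates for retirement
    # or re-thresholding; the ones that do are kept and tuned.
    print(f"{name}: fired {totals[name]}x, preceded incidents {precursors[name]}x")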
PATH TAKEN: Redesign
KEY OUTCOME: 94 active alerts after the rebuild against real incident data, down from 847. Critical alert response time: under 8 minutes.
Read the walkthrough →

COMPANY: Arbor Logistics Technology
Logistics technology · 430 employees
DRIFT PATTERN: Shadow Process · Rules Undocumented
PROCESS EVALUATED: 38% of changes were being classified as "emergency" changes. Most of them were not emergencies: the standard workflow was just too slow to use.
Arbor's change management process had two paths: a standard approval workflow that averaged 4 days, and an emergency change process that required direct manager sign-off via email and averaged 6 hours. The standard process had been designed with the right governance intent: review, risk assessment, stakeholder notification. The emergency path had been designed for genuine emergencies. Over 18 months, the emergency path had quietly become the preferred path for anything time-sensitive. Engineers had learned the informal threshold: if a change needed to happen before the end of the week, route it as emergency. The compliance audit that triggered the Diagnostic found that 38% of emergency changes in the prior quarter did not meet the documented emergency classification criteria. The change management team was aware the standard process was slow. They had not tracked the workaround rate, and they had not connected the shadow process to the compliance exposure it was creating.
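One way to make that workaround rate visible is sketched below: score each emergency-routed change against the documented criteria and report the share that does not qualify. The criteria fields are hypothetical stand-ins, since this summary does not spell out Arbor's actual classification rules.

# Illustrative sketch only. The criteria fields are hypothetical stand-ins
# for Arbor's documented emergency classification rules.
from dataclasses import dataclass

@dataclass
class ChangeRecord:
    change_id: str
    routed_as_emergency: bool
    restores_service: bool      # fixes an active outage or security exposure
    deferral_risk_high: bool    # waiting for standard approval causes material harm

def meets_emergency_criteria(change: ChangeRecord) -> bool:
    """A change qualifies only if it restores service or carries high deferral risk."""
    return change.restores_service or change.deferral_risk_high

def workaround_rate(changes: list) -> float:
    """Share of emergency-routed changes that do not meet the documented criteria."""
    emergencies = [c for c in changes if c.routed_as_emergency]
    if not emergencies:
        return 0.0
    misrouted = [c for c in emergencies if not meets_emergency_criteria(c)]
    return len(misrouted) / len(emergencies)

sample = [
    ChangeRecord("CHG-101", True, True, False),   # genuine emergency
    ChangeRecord("CHG-102", True, False, False),  # routed as emergency to beat the queue
    ChangeRecord("CHG-103", False, False, False), # standard change
]
print(f"workaround rate: {workaround_rate(sample):.0%}")

Tracking this number over time is what connects the shadow process to the compliance exposure it creates.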
PATH TAKEN: Redesign
KEY OUTCOME: 61% decrease in emergency change volume after standard approval was redesigned to complete in under 24 hours for low-risk changes.
Read the walkthrough →