COMPANY: Praxis Software Group
SaaS platform · 175 employees
DRIFT PATTERN: Invisible Execution · Technology-First Thinking
PROCESS EVALUATED: An AI routing tool that improved first response time while quietly sending the most complex tickets to the wrong queue at a 34% rate.
Praxis's support team purchased an AI routing tool to address tickets sitting in the general queue too long before assignment. First response time improved measurably within two months. CSAT moved in the opposite direction. The team attributed the decline to seasonal volume and deferred investigation. By month five, a manual audit surfaced the actual pattern: the classification model had been trained on ticket subject lines, the field customers filled in least accurately. Tickets with vague subject lines, which correlated strongly with complex or emotionally charged issues, were misrouted at a 34% rate. Agents receiving those misrouted tickets had neither the context nor the expertise the contact required. The tool had been configured before anyone mapped how customers actually wrote tickets, what the subject line field reliably communicated, or which queue assignment errors caused the most downstream damage.
PATH TAKEN: Redesign
KEY OUTCOME: Under 8% misrouting rate on complex tickets after classification logic was rebuilt on content analysis and structured intake questions.
COMPANY: Aldercroft Financial
Financial services · 460 employees
DRIFT PATTERN: Disconnected from Drivers · Invisible Execution
PROCESS EVALUATED: A chatbot that launched with 61% deflection and degraded to 38% because customers had learned that escalating to a human was faster than waiting for the bot to resolve anything.
Aldercroft deployed a chatbot to handle account balance inquiries, statement requests, and basic transaction questions. The launch metrics were strong: 61% deflection rate in month one exceeded the project goal. By month four, deflection had fallen to 38% and escalation-to-human rate had risen sharply. The operations team assumed NLP accuracy was degrading and opened a technical investigation. The actual cause was behavioral: customers who had contacted Aldercroft multiple times had learned to trigger escalation immediately because human-handled contacts were resolved faster for most query types the chatbot was designed to handle. The chatbot's resolution path for account balance inquiries required three steps for a result a human agent delivered in one. Customers who had learned this were doing the rational thing. The chatbot was technically performing correctly on a process nobody had validated against the customer's actual experience of using it.
PATH TAKEN: Redesign
KEY OUTCOME: Stable escalation rate after the chatbot was scoped to query types where it was genuinely faster than human resolution.
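The scoping rule Aldercroft landed on can be sketched as a simple comparison per query type. The function and the timing numbers below are invented for illustration; the idea is just that the chatbot keeps only the queries where its median resolution time beats the human path.

```python
# Hedged sketch: retain a query type in the chatbot's scope only when
# the bot's median resolution time beats the human-handled path.
# All timing data here is invented for the example.

def queries_to_keep(bot_seconds: dict, human_seconds: dict) -> set:
    """Return query types where the bot resolves faster than a human."""
    return {
        q for q, bot_t in bot_seconds.items()
        if q in human_seconds and bot_t < human_seconds[q]
    }

bot = {"balance": 180, "statement": 45, "transaction": 150}
human = {"balance": 60, "statement": 90, "transaction": 120}
# With these illustrative numbers, only "statement" stays with the bot.
```

Applied to Aldercroft's case, this is the check nobody ran at launch: the three-step balance-inquiry flow would have failed it immediately.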
COMPANY: Harwick Commerce
E-commerce retailer · 320 employees
DRIFT PATTERN: Fragmented Processes
PROCESS EVALUATED: The returns portal was fully automated. The refund that followed was a separate manual process. Average time from return receipt to refund was 9 days, generating more inbound contacts than the original orders.
Harwick had invested in a returns portal that automated the customer-facing portion of the return process: label generation, drop-off instructions, and return confirmation. The automation was reliable and well-reviewed. The problem was invisible until the refund did not arrive. Refund processing was handled by a single finance team member who received a daily batch report of received returns and processed refunds manually. The finance step had never been considered part of the returns process; it was treated as a downstream accounting function. The Operational Truth mapping session was the first time the two processes had been documented as a connected workflow. The 9-day average from return receipt to refund was the result of two fully separate processes with no direct handoff, no shared trigger, and no shared owner. Once mapped as a single workflow, the fix was straightforward: automate the refund trigger off the return receipt confirmation event in the shipping system.
PATH TAKEN: Automate
KEY OUTCOME: 2 days average refund time after return-receipt automation, down from 9 days. Refund-status contacts fell by 67%.
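The handoff Harwick built can be sketched as an event-driven trigger. Everything here is illustrative: the `ReturnReceived` event, the gateway interface, and the class names are assumptions, not Harwick's systems. What the sketch shows is the structural change: the shipping system's receipt-confirmation event directly drives the refund, with an idempotency guard so a re-delivered event cannot double-refund.

```python
# Illustrative sketch of the single-workflow fix: the refund fires off
# the return-receipt confirmation event, not a daily manual batch.
# Event shape, gateway API, and names are assumptions for the example.

from dataclasses import dataclass

@dataclass
class ReturnReceived:
    """Event emitted when the shipping system scans in a return."""
    order_id: str
    rma_id: str
    amount_cents: int

class RefundProcessor:
    def __init__(self, payment_gateway):
        self.gateway = payment_gateway
        self.processed: set[str] = set()

    def on_return_received(self, event: ReturnReceived) -> bool:
        # Idempotency guard: a re-delivered event must not double-refund.
        if event.rma_id in self.processed:
            return False
        self.gateway.refund(event.order_id, event.amount_cents)
        self.processed.add(event.rma_id)
        return True
```

The design choice worth noting is the shared trigger: once both processes hang off the same event, the workflow has one owner and one clock, which is what collapsed the 9-day gap.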
COMPANY: Calloway Systems
B2B software · 240 employees
DRIFT PATTERN: Invisible Execution · Fragmented Processes
PROCESS EVALUATED: Health scores dropped. A task was created. A CSM checked it six days later. Several accounts had already submitted cancellation requests. The system worked. The timing did not.
Calloway's customer success team had a health scoring model they trusted. The model pulled from product usage data, support contact frequency, and contract renewal timeline to produce a weekly score per account. When a score dropped below threshold, the customer success platform (CSP) created a task assigned to the account's CSM. The CSM reviewed their task queue weekly on Monday mornings. The gap between a health score dropping on a Tuesday and a CSM making contact the following Monday was five to six days on average. That gap was invisible in the CSP dashboard, which showed task creation as the completion event. During a quarterly business review, the CS director pulled cancellation request data against the health score timeline. In 23% of churned accounts from the prior two quarters, the health score had dropped below threshold more than five days before the account submitted a cancellation request. The CSM had been assigned a task. The task had been opened. The outreach had simply arrived after the customer had already decided to leave.
PATH TAKEN: Redesign, then Automate
KEY OUTCOME: 1.1 days average time to first CSM contact after escalation was redesigned to direct notification on threshold breach. Early intervention rate up 41%.
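The redesigned escalation can be sketched as a threshold-crossing check that fires a notification immediately, rather than creating a task that waits in a weekly-reviewed queue. The threshold value, function name, and notification callback below are assumptions for illustration, not Calloway's platform.

```python
# Hedged sketch of the redesigned escalation: notify the CSM the moment
# a health score crosses the threshold, instead of queuing a task.
# THRESHOLD and notify_csm are illustrative assumptions.

THRESHOLD = 60

def evaluate_score(account_id, previous, current, notify_csm) -> bool:
    """Fire a direct notification only on a downward threshold crossing."""
    if previous >= THRESHOLD and current < THRESHOLD:
        notify_csm(account_id, current)
        return True
    return False
```

Firing only on the crossing, not on every below-threshold score, keeps the CSM from being re-notified each week for an account already in active outreach.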