Client Results · E-commerce

Clearing the customer service backlog without doubling the team

A UK DTC brand cleared its support backlog by automating routine questions and routing the rest to humans with full context.

68%

Of tickets auto-resolved without human reply

22 hours

Reclaimed by the team each week


The situation

The business is a UK direct-to-consumer brand sitting in the middle of the SME bracket, turning over somewhere between three and eight million pounds a year on a physical product. Growth had been steady for eighteen months, and with growth came tickets. By the time the growth director got in touch, the support inbox was taking in around 1,400 emails a week across two channels, and a four-person team was working flat out just to keep the queue from doubling overnight.

The pattern was familiar. Roughly seven in ten messages were the same handful of questions repeated in slightly different words. Where is my order. Can I change the size. How do I start a return. Do you ship to the Channel Islands. The team knew the answers in their sleep, but each one still needed a human to read it, look up the order, copy a tracking link, and write a polite reply. The genuinely tricky tickets, the ones where a customer had been let down or where something had gone wrong with a bespoke item, were getting buried under the routine traffic. Average first response had drifted from under four hours to just over two days, and the team could feel the quality of their replies slipping on the cases that mattered most.

Hiring two more agents was on the table, but the growth director was not keen. The maths worked for this quarter and stopped working the moment the next promotion landed. She wanted a way to handle the easy volume without putting a chatbot wall between her customers and her team, and she had read enough horror stories to be properly suspicious of anyone promising a quick fix.

What we did

We started by reading tickets, not by writing code. Two of us spent a week going through six months of support history, tagging every message by intent and noting which ones the team handled in under a minute and which ones needed real thought. That gave us a clear picture of what was safe to automate and, more importantly, what was not. Order status, tracking lookups, returns initiation, sizing guidance, delivery windows, and basic product questions all sat firmly in the safe pile. Anything involving a complaint, a damaged item, a refund dispute, a wholesale enquiry, or a customer who sounded even slightly upset stayed in the human pile.

The system we built reads each incoming message, classifies it, and only acts when it is confident. For the routine questions it pulls the order data live from the shop platform, drafts a reply in the brand's voice, and sends it. For everything else it does something quieter but arguably more useful. It writes a short summary at the top of the ticket, pulls in the customer's order history and previous conversations, suggests two or three possible responses for the agent to pick from, and hands the whole thing to a human with the context already laid out. The agent stays in charge. Nothing goes out under their name unless they have read it.

We were deliberate about what the system does not touch. It does not handle refunds. It does not make goodwill gestures. It does not reply to anyone whose message contains words suggesting frustration, urgency, or a safety concern. It does not invent policy. When it is unsure, it routes to a human and says so. The growth director was clear from the start that she would rather the system did less and did it well than try to be clever and end up apologising to customers later.
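To make the routing rules concrete, here is a minimal sketch of the kind of guardrail logic described above. All names, intents, and thresholds are hypothetical illustrations, not the production system: the real classifier and its escalation list are more involved.

```python
# Hypothetical sketch of confidence-gated routing with escalation guardrails.
# Intent labels, keywords, and the 0.9 threshold are illustrative only.

SAFE_INTENTS = {
    "order_status", "tracking", "returns_init",
    "sizing", "delivery_window", "product_faq",
}

# Messages containing these terms always go to a human, whatever the intent.
ESCALATION_TERMS = {"refund", "damaged", "broken", "complaint", "urgent", "unsafe"}

def route(intent: str, confidence: float, message_text: str,
          threshold: float = 0.9) -> str:
    text = message_text.lower()
    if any(term in text for term in ESCALATION_TERMS):
        return "human"   # frustration, disputes, safety: never automated
    if intent in SAFE_INTENTS and confidence >= threshold:
        return "auto"    # routine question, high confidence: draft and send
    return "human"       # unsure, or outside the safe pile: hand over with context
```

The point of the structure is that automation is the narrow path and a human is the default: the system has to clear both an intent check and a confidence check before replying, and a single escalation keyword overrides both.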

The result

Three months in, the system is auto-resolving sixty-eight percent of incoming tickets without a human touching them. Average first response has dropped from just over two days to twelve minutes for automated replies and around four hours for the ones that still need a person. The team has reclaimed roughly twenty-two hours a week between them, and they are spending that time on the cases that actually need a careful human reply. CSAT on resolved tickets has nudged up from 4.2 to 4.6 out of five, and the repeat customer rate over a rolling ninety days is up by a few points, though we are not claiming all of that is down to support.

The honest bit. In the second week of the pilot the system started telling customers their orders had shipped when the warehouse had only printed the labels. We caught it within a day because we were watching the logs closely, paused the relevant intent, fixed the data source it was reading from, and apologised to the eleven customers who had received the wrong update. Nobody loved it, but the growth director said afterwards that watching us own it quickly was the moment she stopped worrying. The team's view, in their own words, is that they finally feel like they are doing the job they were hired to do rather than firefighting an inbox.

Sound familiar?

If your team is losing hours to work that should take minutes, a 45-minute conversation is all it takes to find out what is possible.

Book a discovery call