The situation
The agency runs around twenty consultants across a handful of specialist desks, placing mid-senior roles in a competitive UK market. Every live brief pulled in between 80 and 200 CVs, sometimes more when a role was posted on the bigger job boards. Consultants were spending five to seven hours per brief just to get to a first shortlist, reading each CV, copying notes into a tracker, and trying to remember who they had already seen for a similar role last quarter.
The growth director had been staring at the same pattern in the pipeline for months. Consultants were closing four to five briefs a month on average, and the ceiling was not effort; it was screening. When she worked the numbers back, roughly a third of billable-capable hours were being spent on CV triage rather than on client calls, candidate conversations, or business development. Hiring more consultants was the obvious lever, but onboarding in this market was slow and expensive, and she did not want to solve a workflow problem with headcount.
What we did
We started by sitting with two consultants for a full day each and watching them actually screen. That surfaced what the tracker never showed, which was that most of the time was spent re-reading the same CV three or four times because the criteria for a role lived in a consultant's head rather than on paper. The fix had to start there.
We built a lightweight intake step that turned a brief into a structured scorecard before any CVs were touched. Must-haves, nice-to-haves, deal-breakers, the salary band, and notes on soft signals the consultant had picked up from the client call. Only once that was agreed did we run CVs through a screening assistant built on top of Claude, which scored each candidate against the scorecard, flagged gaps, and pulled out the two or three lines of evidence that supported the score.
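To make the shape of this concrete, here is a minimal sketch of a scorecard like the one described, plus a crude keyword-matching stand-in for the model's scoring step. The field names, weights, and matching logic are all illustrative assumptions, not the agency's actual schema or prompt:

```python
from dataclasses import dataclass, field

@dataclass
class Scorecard:
    """Structured brief agreed with the consultant before any CVs are screened.
    Field names here are illustrative, not the agency's actual schema."""
    role: str
    salary_band: tuple[int, int]              # (min, max), e.g. in GBP
    must_haves: list[str] = field(default_factory=list)
    nice_to_haves: list[str] = field(default_factory=list)
    deal_breakers: list[str] = field(default_factory=list)
    soft_signals: str = ""                    # free-text notes from the client call

def score_candidate(card: Scorecard, cv_text: str) -> dict:
    """Keyword-based stand-in for the model's scoring: each must-have found
    adds 2 points, each nice-to-have adds 1, and any deal-breaker zeroes
    the score. The real system used an LLM, not substring matching."""
    text = cv_text.lower()
    if any(d.lower() in text for d in card.deal_breakers):
        return {"score": 0, "evidence": [], "flags": ["deal-breaker present"]}
    hits = [m for m in card.must_haves if m.lower() in text]
    nice = [n for n in card.nice_to_haves if n.lower() in text]
    missing = [m for m in card.must_haves if m not in hits]
    return {
        "score": 2 * len(hits) + len(nice),
        "evidence": hits + nice,              # supporting lines, crudely approximated
        "flags": [f"missing must-have: {m}" for m in missing],
    }
```

The point of the structure is that the score, the evidence, and the gap flags all trace back to criteria the consultant wrote down before screening started, rather than to criteria living in anyone's head.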
Bias and compliance were the first things the growth director raised, and rightly so. We were careful here. The assistant does not see names, addresses, photos, dates of birth, or university names during scoring, and we documented the prompt and the fields it reads so the team could show a client or a regulator exactly how a shortlist was produced. We ran a two-week parallel test where consultants screened the old way and the new way side by side, and we checked whether protected-characteristic proxies were creeping into the rankings. A couple did, and we adjusted the scorecard language before going live. The human still makes the final call. The assistant produces a ranked longlist with reasoning, and a consultant reviews, reorders, and cuts before anything reaches the client.
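One way to make the "assistant never sees these fields" guarantee enforceable rather than aspirational is an allowlist applied to the parsed CV before anything reaches the model. The sketch below assumes CVs are parsed into a flat dict first; the field names are hypothetical:

```python
# Allowlist, not blocklist: a field the parser later starts emitting
# (say, a LinkedIn photo URL) is dropped by default rather than
# silently passed through to the model.
SCORING_FIELDS = {
    "skills", "roles_held", "years_experience",
    "sector_experience", "qualifications_anonymised",
}

def redact_for_scoring(parsed_cv: dict) -> dict:
    """Return only the fields the scoring prompt is allowed to read.
    Names, addresses, photos, dates of birth, and university names
    never make it into the payload sent for scoring."""
    return {k: v for k, v in parsed_cv.items() if k in SCORING_FIELDS}
```

An allowlist was also what made the audit story simple: the documented list of fields the assistant reads is the same list the code enforces, so there is one artifact to show a client or a regulator.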
The result
Screening time per brief dropped from an average of six hours to around forty-five minutes, a reduction of roughly 87 percent. Time-to-shortlist, measured from brief sign-off to first CVs landing with the client, fell from just over four days to under a day and a half. Across the desk, consultants are now closing closer to seven briefs a month on average, up from five, without anyone working longer hours.
The more interesting question was what happened to the time the team got back. The honest answer is that it did not all go into more billable work, and the growth director was fine with that. About half of it went into more candidate calls and more client check-ins, which is where the extra placements came from. The rest went into things that had been getting squeezed for years: proper debriefs after interviews, tidier CRM records, and in one case a consultant finally having the space to build out a new sub-sector desk the agency had been talking about since the previous summer. Screening was never the work the agency was paid for. It was just the work that had been eating the work that mattered.
