I’ll be straight: I know next to nothing about UX research, but recently I sat down with a UX designer to explore whether AI could reshape their workflow. I’m sharing this not because we were clever, but because watching their eyes widen when they realised what becomes possible when you place AI strategically in the right part of the process was genuinely rewarding for both of us.
As an outsider to this particular workflow, rather than assume I knew where the friction lived, I explored it with them to understand how it actually worked. I asked questions, listened, and started to see where things slowed down. Through that exploration, synthesis emerged as a significant bottleneck, and that’s where I thought AI could help…
The Process and the Bottleneck
UX research follows a familiar pattern. You plan what you want to learn, run sessions with users, record conversations, capture observations. Then comes synthesis – the part where you turn raw data into patterns, themes and actionable insights. This is where the designer spent most of their time: hours of reviewing notes, working only from their recorded observations, manually sorting them into themes.
It’s worth considering that it’s not just UX research that works this way. Any process that collects data hits the same walls. Customer feedback, operational observations, market research, internal process audits – they all follow the same rhythm. The collection phase is structured: you know what you’re doing, you’ve got a process. The synthesis phase is different. It’s manual, time-consuming and resource-intensive. It’s where insights can disappear and momentum often stalls.
For this designer, the ceiling was real. They could do maybe one or two research sessions a week because synthesis took so long – and, counterproductively, the richer the data, the longer it took to make sense of it. There was only so much research they could actually do, and the objective was to keep feeding a hungry dev team.
Thinking Through Where AI Could Help
Many of you know that AI is brilliant at synthesis. Finding patterns in large amounts of text, organising information and surfacing themes that might take a human hours to spot manually. So synthesis felt like the right place to start. Not the interviews themselves, those need a human in the room, asking follow-up questions, reading the room. Not the interpretation either, that requires context, experience, and judgement about what actually matters for the research objectives. But the middle part – the heavy lifting of organising and synthesising raw data – that’s where AI could reshape things.
I suggested we test it with NotebookLM. Feed it the interview transcripts, ask it to help surface themes and take advantage of its inherent capability to build mind maps, create structured summaries and provide an interface to the rich data being collected.
The Live Test
We ran a real user research session: I was the participant and they conducted the interview. We recorded it, transcribed it, fed the transcript into NotebookLM, then started prompting – asking it to identify key themes, pain points, and opportunities.
The results were striking. NotebookLM generated an affinity map showing clusters of related observations. It built a mind map showing how different themes connected. It created a structured summary of the key insights. All of this in minutes. Work that would have taken them hours to do manually.
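NotebookLM’s internals aren’t public, but the affinity-mapping idea itself is simple to illustrate. As a toy sketch only – real tools use semantic clustering, not keyword matching, and the theme names and keywords below are made up for the example – here is what grouping raw observations into clusters might look like:

```python
from collections import defaultdict

# Hypothetical themes and keywords, purely for illustration.
THEME_KEYWORDS = {
    "navigation": ["menu", "find", "lost", "search"],
    "performance": ["slow", "load", "wait"],
    "trust": ["confus", "unsure", "error"],
}

def affinity_map(observations):
    """Group each observation under the first theme whose keywords match."""
    clusters = defaultdict(list)
    for obs in observations:
        text = obs.lower()
        theme = next(
            (t for t, kws in THEME_KEYWORDS.items()
             if any(kw in text for kw in kws)),
            "uncategorised",
        )
        clusters[theme].append(obs)
    return dict(clusters)

notes = [
    "Couldn't find the export option in the menu",
    "Page was slow to load on mobile",
    "Felt unsure whether the form had submitted",
]
print(affinity_map(notes))
```

The point isn’t the code – it’s that this kind of mechanical grouping is exactly the work that used to consume the designer’s hours, and exactly the work AI is well placed to absorb.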
But the real value wasn’t speed. It was what NotebookLM created, a knowledge base they could naturally prod and poke. They could ask it to highlight specific themes, switch perspectives, tease out concepts that weren’t obvious on the first read. Synthesising and exploring the data became naturally engaging in a way that heavy statistical work or report writing can’t replicate. They weren’t just reading a report. They were thinking through the data, asking questions and discovering things.
The Moment It Clicked
I watched their face as they worked through the outputs. Something shifted. They realised what this actually meant…
They could run more research sessions. Not one or two a week, but potentially three, four, more. Because synthesis wasn’t the bottleneck anymore. They could collect richer data, do deeper exploration, build a more complete picture of user needs. Not only does the workflow itself improve; the whole research process becomes supercharged. It’s not only about doing it quicker, it’s about achieving a higher degree of quality at the same time.
That’s the moment. Not excitement about the technology. Recognition of what becomes possible. Capacity they didn’t have before. Better thinking because the friction was gone.
What This Reveals
This isn’t automation. It’s amplification. The human-in-the-loop model in practice. AI handled the pattern-finding and organising. The designer kept the interpretation and decision-making. Each supports and sharpens the other.
This is how effective AI placement works. Not everywhere in the process, but strategically. Not replacing people, but amplifying what they can do. Not removing judgement, but freeing up space for better judgement because the friction is gone.
This pattern shows up in other processes too. Any workflow where you collect data and then spend time making sense of it, that’s a place where AI can reshape things. Customer feedback analysis. Operational observations. Market research. Internal audits. The synthesis bottleneck is universal and so is the opportunity.
So What?
Look for the places in your process where data gets collected but making sense of it is slow, manual, or lossy. That’s not a people problem, it’s a friction problem. And friction is where AI can help.
Effective AI use isn’t about replacing people. It’s about placing AI where it amplifies human capability and collapses friction. It’s about recognising what AI is actually good at – synthesis, pattern-finding, organising information – and asking where that capability could reshape your workflow.
What’s the synthesis bottleneck in your process? Where could AI help you think differently about what you’ve already discovered?