AI returns polished feedback in seconds. Learners still ask: “Did a human actually read this?”
That question — seemingly simple — points to something deeper. It’s not about whether the feedback is accurate or well-structured. It’s about whether learners feel taught.
Recently, I watched a bootcamp demo where an instructor had built an AI assistant to mark case study submissions. The workflow was impressive: refined prompts, standards aligned to Ofsted’s outstanding criteria, and human verification built into the process. The output was consistent, fast, and genuinely useful. By any measurable standard, it worked. Yet something lingered. The presenter himself raised it: “Does this feel like teaching, or just standardised output?”
That tension — between what we can measure and what we actually experience — is worth sitting with. Especially if you’re leading training or education and considering automation.
The Seduction of Efficiency
Speed and consistency are real. They’re also seductive. When you automate feedback, you solve genuine problems: turnaround time collapses from days to minutes. Quality becomes uniform — no variation based on an instructor’s mood or workload. You can scale to cohorts you couldn’t serve manually. For organisations under cost or capacity pressure, that’s compelling.
But efficiency and teaching value aren’t the same thing. A learner doesn’t just want correct feedback. They want to feel that someone has engaged with their work. That someone has noticed the specific choice they made, the particular struggle they faced. That someone cares enough to follow up if they have questions.
When feedback is automated — even excellent automated feedback — something intangible can go missing. The sense that a human expert has invested attention in your growth.
The Human-in-the-Experience
I’m calling this the “human-in-the-experience.” It’s not about having a human verify that the AI model is working correctly (that’s important, but it’s technical QA). It’s about making the human presence visible and felt as part of the learning experience itself.
Think about the difference between two scenarios:
- A learner receives an automated feedback report. It’s well-written, standards-aligned, and actionable. But there’s no indication that anyone has looked at their specific submission. They can’t ask a follow-up question to the person who marked it, because there wasn’t one.
- Compare that to: a learner receives the same automated report, but it includes a brief note from the instructor — maybe two or three lines — that picks out something distinctive about their work. Something that shows the instructor read it. Suddenly, the experience shifts. The feedback feels like teaching.
The difference isn’t in the quality of the output. It’s in the perception of care and ownership.
Why This Matters Now
This question becomes sharper depending on context. If you’re running a heavily subsidised bootcamp where learners pay little or nothing, maybe standardised feedback is acceptable. The value proposition is different. But if you’re positioning your course as taught by experienced practitioners — if learners are paying a premium, or if the course is university-level, or if you’re emphasising personal development and mentorship — then there’s a credibility risk. Learners will notice if the human presence disappears at assessment time.

There’s also a governance question: if the course leader hasn’t personally engaged with the work, can they meaningfully explain or defend the feedback? Can they follow up on a learner’s challenge? Can they adapt their approach based on what they’ve learned from marking?
If the answer is “not really,” then you’ve outsourced not just the labour but the expertise. That might be fine. But it’s worth naming.
The Questions Worth Asking
I don’t have answers here. This tension plays out differently at every level and depth of provision.
But if you’re piloting or considering automated marking, these feel like the right questions:
- What are your learners’ expectations? Does automation align with the experience you’ve promised them?
- Where could you add visible human touches without losing the efficiency gains?
- Can your instructors genuinely own and explain the feedback they’re delivering?
- What governance do you need around explainability and appeal?
- How will you measure success? Speed alone, or also learner trust and satisfaction?
I’d genuinely like to hear how others are thinking about this. Have you trialled automated marking? Did learners notice? What worked — or didn’t?
I’ll be exploring the practical side of this in a follow-up post: design patterns and templates for keeping the human-in-the-experience visible while you automate. But I’m curious what you think first.
What’s your instinct on this?