Acceptance criteria: The fine print that saves your bacon.
There's a special kind of dread that washes over you when a stakeholder says, "This isn't what I wanted" after your team has spent weeks building exactly what they asked for. I felt that sinking feeling last quarter when our payments feature launch went sideways because everyone had a different interpretation of "done."
The moving target problem.
You might think "done" is a simple concept. It's not. In product development, "done" exists in a quantum state: simultaneously complete and incomplete until someone important enough observes it and makes a judgment.
Working as a product manager taught me that acceptance criteria are less about technical specifications and more about human psychology. Our team had built exactly what we believed the requirements described. The development team had checked every box on their list. The QA team gave it their blessing. Yet when we demonstrated it, the customer stared at us like we'd presented them with a pizza when they'd ordered a salad.
The fundamental issue wasn't technical; it was perceptual. We'd defined "done" from our perspective, not theirs. Our acceptance criteria failed to capture the implicit expectations held by the people who would use the system daily.
The psychology behind the problem.
Understanding why acceptance criteria fail requires acknowledging the cognitive biases affecting product teams and stakeholders. When teams underestimate complexity due to optimism bias, they create criteria that seem comprehensive but miss crucial elements. Meanwhile, stakeholders often suffer from a "visualisation deficit" because they can't articulate what they want until they see what they don't like.
Our rational brains want to believe we can specify everything upfront. Our irrational brains consistently prove we can't.
Confusing objectives with quality.
One of the most common mistakes teams make is mixing up subjective goals with objective quality standards. In Scrum, the Definition of Done (DoD) should be an objective, measurable bar of quality, not a negotiable outcome that changes based on someone's mood that day.
A former colleague once added "customer approval" to our Definition of Done, turning what should have been clear success criteria into a moving target. Our sprint reviews transformed from collaborative feedback sessions into tense approval meetings where we spent more energy convincing stakeholders than demonstrating value.
How a team learned to pin down "done".
After several painful releases, our team gathered in a conference room with a simple mission: fix our acceptance criteria process. The whiteboard soon filled with competing perspectives:
The developer's view.
"Acceptance criteria should be binary; it either works or doesn't," argued our lead developer. "I need to know exactly when I can mark something complete and move on to the next task."
The designer's perspective.
"User experience isn't binary," countered User Experience. "How do you measure whether something feels intuitive? We need room for qualitative assessment."
The business analyst's insight.
"What if we separated functional and experiential criteria?". "We could have clear yes/no conditions for functionality and a separate evaluation framework for user experience elements."
This was the breakthrough moment. We realised we'd been trying to solve two different problems with the same tool.
The solution: layered acceptance criteria.
Tier 1: Functional criteria.
These are the binary, testable conditions that represent minimum viability. For example, "User can complete a transaction with valid card details" or "System locks account after three failed attempts". These criteria are non-negotiable and automatable.
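To make "automatable" concrete, here's a minimal sketch of the lockout criterion as an automated test. The AccountService class is a hypothetical stand-in for a real authentication backend, not code from our payments system:

```python
import pytest


class AccountService:
    """Toy in-memory stand-in for a real authentication backend."""

    MAX_FAILED_ATTEMPTS = 3

    def __init__(self) -> None:
        self._failed_attempts: dict[str, int] = {}
        self._locked: set[str] = set()

    def login(self, user: str, password_is_valid: bool) -> bool:
        if user in self._locked:
            raise PermissionError("account locked")
        if password_is_valid:
            self._failed_attempts[user] = 0
            return True
        self._failed_attempts[user] = self._failed_attempts.get(user, 0) + 1
        if self._failed_attempts[user] >= self.MAX_FAILED_ATTEMPTS:
            self._locked.add(user)
        return False


def test_account_locks_after_three_failed_attempts():
    service = AccountService()
    for _ in range(3):
        assert service.login("alice", password_is_valid=False) is False
    # The criterion is binary: a fourth attempt must be rejected even
    # with valid credentials, because the account is now locked.
    with pytest.raises(PermissionError):
        service.login("alice", password_is_valid=True)
```

Because the criterion is phrased as an observable behaviour, the test either passes or fails; there's nothing left to argue about in a review.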
Tier 2: UX requirements.
These address interaction design and visual elements that support the user experience. Instead of vague statements like "interface must be intuitive," we created specific conditions like "User completes first transaction within 60 seconds without help documentation."
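A criterion phrased this way can be checked against analytics data rather than debated in a meeting. Here's a rough sketch, assuming a hypothetical event stream; the event names (session_start, help_viewed, transaction_completed) are illustrative, not from any particular analytics tool:

```python
from datetime import datetime, timedelta


def meets_ux_criterion(events: list[tuple[datetime, str]]) -> bool:
    """True if the first transaction completes within 60 seconds of
    session start, with no visit to help documentation before it."""
    events = sorted(events)  # chronological; earliest event marks session start
    session_start = events[0][0]
    for timestamp, name in events:
        if name == "help_viewed":
            return False  # user needed the documentation
        if name == "transaction_completed":
            return timestamp - session_start <= timedelta(seconds=60)
    return False  # session ended without a completed transaction


t0 = datetime(2024, 1, 1, 12, 0, 0)
session = [
    (t0, "session_start"),
    (t0 + timedelta(seconds=45), "transaction_completed"),
]
assert meets_ux_criterion(session)  # 45 seconds, no help needed
```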
Tier 3: Business outcomes.
This tier connects the feature to measurable business results. For example, "Feature increases conversion rate by 2% within the first month." These aren't release blockers, but provide context for why we're building the feature and how we'll measure success.
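Checking a Tier 3 outcome is mostly arithmetic over before-and-after data. A sketch with made-up numbers follows; note that it reads the "2%" as two percentage points of absolute lift, which is exactly the kind of ambiguity the written criterion should resolve:

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    return conversions / visitors


# Hypothetical monthly figures, not real data.
baseline = conversion_rate(conversions=480, visitors=10_000)     # 4.8%
post_launch = conversion_rate(conversions=700, visitors=10_000)  # 7.0%

lift = post_launch - baseline
print(f"Absolute lift: {lift:.1%}")         # Absolute lift: 2.2%
print("Tier 3 outcome met:", lift >= 0.02)  # Tier 3 outcome met: True
```

In practice you'd want a significance test before declaring victory, but the point stands: this tier is measured, not demoed.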
Practical techniques that worked.
User story mapping.
We adopted user story mapping to visualise the customer journey and place acceptance criteria in context. This gave developers and stakeholders a shared understanding of how individual criteria contributed to the overall user experience.
The collaborative criteria workshop.
Before finalising acceptance criteria, we gathered key stakeholders for a criteria workshop. Having operations staff, developers, and product managers in the same room discussing what "done" meant prevented countless misunderstandings.
The "so what?" test.
For each criterion, we asked, "So what if this isn't met?" If no one could articulate meaningful consequences, we removed it. This ruthless pruning kept our criteria focused on what truly mattered.
What we learned.
Implementing these changes wasn't easy. Some team members resisted the additional upfront work. A few stakeholders were uncomfortable with the explicit boundaries. But the results spoke for themselves: our next major release went smoothly, with 92% of acceptance criteria met before the first review.
The most profound lesson was that good acceptance criteria aren't about perfection but about alignment. They create a shared understanding that makes collaboration possible even when things don't go exactly to plan.
Another insight was that psychological safety is crucial in defining good criteria. Team members must feel comfortable raising concerns about vague or unrealistic criteria without fear of being labelled "negative" or "not a team player."
When to embrace the moving target.
Sometimes, a moving target is exactly what you need. For exploratory features with unpredictable user behaviour, rigid criteria can be counterproductive. In these cases, we learned to embrace process-oriented criteria: "Feature will be released to 5% of users, data collected for two weeks, and refined based on usage patterns."
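Even a process-oriented criterion has a testable core: the staged rollout itself. One common pattern is deterministic hash-based bucketing, sketched below; this is a generic illustration, not any particular feature-flag library's API:

```python
import hashlib


def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically assign a user to one of 10,000 buckets and
    enable the feature for the first `percent` of them."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000  # stable bucket in 0..9999
    return bucket < percent * 100     # e.g. 5.0% -> buckets 0..499


# The same user always gets the same answer, so the 5% cohort stays
# stable across sessions while usage data is collected for two weeks.
enabled = sum(in_rollout(f"user-{i}", "new-checkout", 5.0) for i in range(100_000))
print(f"{enabled / 1000:.1f}% of simulated users in rollout")  # close to 5.0%
```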
The irony isn't lost on me that sometimes the best way to pin down "done" is to acknowledge that it will move. That's not a failure of planning; it's a recognition of reality.
Acceptance criteria, when done right, aren't constraints that limit creativity. They're the guardrails that give teams the confidence to move quickly, knowing exactly when they've delivered something valuable. They're the fine print that saves your bacon when memories fade and expectations drift.
And sometimes, they're the conversation starter that gets everyone aligned before we write a single line of code. That alone makes them worth getting right.