Velocity is a measure of how much work a team can complete in an iteration. It’s calculated by summing the story points (or man-hours) completed in a single sprint.
It’s sometimes expressed as a range rather than a single number. This helps the team account for a number of variables they can’t control. It’s important to remember that in either case, velocity is just an attempt at accurate estimation based on past performance. It’s never guaranteed.
The estimation techniques used in agile are a topic for another article, but I’m sure many of you will get hung up on the parentheses above. We won’t go in-depth on the viability of using man-hours (or days) in agile estimates. I hope we can agree that no estimation method produces accurate estimates, and move on.
Let’s take the following example of completed sprints:
There are a few methods to calculate the velocity of the upcoming sprints. If the number distribution seems odd, I can assure you it’s quite realistic. As mentioned, many things affect iterations and their velocity. The first sprint might start off “slow” compared to sprints 2-4 because the team is just getting warmed up. There might be technical considerations, perhaps even some gaps to fill.
Then the pace picks up… so what about the fifth iteration? It’s possible that there were only a few user stories with low priority left in the backlog. Why do we still need to know the velocity? The product owner can always come back with additional requests. That’s the beauty of flexible agile engagement.
With that said, what are the single number and the range of velocity in the above example?
The former is simple: the average of these five numbers is 10.6. Always round down, which gives us 10.
I highly recommend approaching it as a range, as mentioned. There are a few ways you can do that. The most “scientific” one that we use is to treat it as a range between 80% and 120% of the average. In this example, the velocity range would be between 8 and 12 story points.
In our case, we offset it by 20% because that’s about the average amount of time we spend working on bugs during sprints. A “perfect” iteration of 10 points would mean that we’ve completed 10 points of work, plus about 20% of working on bugs. Please note that you can’t estimate bugs – more on that later.
The “worst” case scenario of 8 points would mean we’ve had an unexpected amount of bugs or other events, i.e. issues with a third-party, dealing with unexpected technical debts, and so on.
In the case of 12 points, it might be that we’ve had an iteration where we chose not to work on bugs or didn’t have any known defects to work on.
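The calculation above can be sketched in a few lines. Note that the per-sprint totals below are hypothetical, chosen only so the average matches the 10.6 from the example; the 80%/120% offsets are the ones described above.

```python
import math

# Hypothetical per-sprint story-point totals (illustrative only,
# picked so that the average works out to the article's 10.6).
completed = [8, 12, 13, 14, 6]

def velocity_range(points, low=0.8, high=1.2):
    """Average the completed points, round down, then widen
    the result into a pessimistic/optimistic range."""
    avg = math.floor(sum(points) / len(points))
    return math.floor(avg * low), avg, math.ceil(avg * high)

print(velocity_range(completed))  # (8, 10, 12)
```

Rounding the average down first, then offsetting, reproduces the 8–12 range from the example.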
There isn’t a rule of thumb. In some niches, like ours, you will often work on similar projects that will inevitably have similar velocity.
But in most situations, you can’t accurately estimate velocity based on other projects. Truth be told, we prefer not doing it either. We only resort to such estimates if the client insists, with full disclosure of their accuracy – or lack thereof.
Another reason there’s no baseline for agile velocity is that velocity is relative. Each story is estimated relative to other tasks in the same project, so the point of reference is always different. It’s entirely possible for a sprint with a velocity of 10 to achieve more than a sprint with a velocity of 20 from a different project.
Comparing velocity between different teams or projects can be very harmful to the outcomes of your projects and the productivity of the development team. If you’ve ever been in such a scenario, comment to warn others.
I think it goes without saying that you can’t estimate velocity when your iterations have different durations. A 2-week sprint with a velocity of 20 will rarely translate into 10 story points in a 1-week sprint, or 40 in a monthly one.
Agile teams can rely on velocity only when all their sprints have a consistent duration. If two iterations have different lengths, they can’t be meaningfully compared.
You don’t have a velocity after just one sprint.
There, I said it.
You might not even have it after three, but that’s where it starts getting closer. It shouldn’t stop you from keeping track of your velocity, but just manage the expectations – both internal and external.
A team’s velocity will naturally vary over time, but it should remain fairly consistent as long as the sprint duration stays the same and the team’s composition doesn’t change.
Sometimes, we can calculate velocity based on past projects to give everyone some clarity. But that’s not the norm. Software development is an infinite, iterative process. You’re only limited by your imagination, so no two projects are the same.
However, they can be similar enough. In our case, web design falls into that category. Unless it’s a bespoke project (like a headless site on a brand-new architecture), we’ve done very similar tasks in the past. It’s a blessing in disguise. You’ll hate me for saying this again, but accurate estimates don’t exist. And if an accurate estimate is a unicorn, then I don’t know what to call an estimate based on a different project…
In short, it’s even less accurate. But we still need it. We need it as an agency, and you need it as a client. Agile is a different approach, so not everyone is comfortable committing to an “infinite” project. After all, the resources of every single company are finite.
Software development is complex. We communicate to our clients from the get-go that bugs are to be expected. They’re not the norm, and we obviously don’t write code with the intention of having bugs. We don’t take shortcuts that cause bugs, either – that would be against the principles of agile. Bugs happen, and we need to acknowledge them. But they aren’t included in a user story estimation.
Since bugs are unknown and unexpected, it’s almost impossible to estimate them, and you shouldn’t include them in the calculations. You can set time aside for bug fixes, or even have a dedicated developer working on them, but that can’t affect your velocity. If you’d like to know your “bugs per sprint” or something similar (which would help spot patterns and improve QA), use an alternative metric.
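One way to keep defects out of velocity is to record them as a separate metric alongside each sprint. The sketch below is illustrative only – the field names and numbers are hypothetical, not from any particular tool:

```python
# Hypothetical sprint records: story points feed velocity,
# while fixed bugs are tracked as a separate metric.
sprints = [
    {"name": "Sprint 1", "points_completed": 8,  "bugs_fixed": 1},
    {"name": "Sprint 2", "points_completed": 12, "bugs_fixed": 4},
    {"name": "Sprint 3", "points_completed": 13, "bugs_fixed": 2},
]

# Velocity counts only completed story points...
velocity = [s["points_completed"] for s in sprints]

# ...while "bugs per sprint" is reported next to it, never mixed in.
bugs_per_sprint = sum(s["bugs_fixed"] for s in sprints) / len(sprints)

print(velocity)         # [8, 12, 13]
print(bugs_per_sprint)  # about 2.3 bugs per sprint on average
```

Keeping the two series separate lets you spot QA patterns (e.g. a spike in `bugs_fixed` after a rushed sprint) without distorting the velocity numbers.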
This is the most common mistake people make regarding velocity. Velocity is a metric for the whole agile team and you should not measure it for individual team members. Never make that mistake.
If you are tracking velocity at the individual level, you’re missing the point. First, because all agile teams are self-organising, there’s no reason to compare one person’s productivity with another’s.
Second, each story point represents only a rough estimate of effort and complexity. It makes no sense to measure individuals against their estimates. You can work towards making their complexity estimates more accurate, but it’s not a performance metric.
Nobody can know how much work each person will do in a future sprint. If we could forecast what everyone will do in each sprint, then we wouldn’t need scrum or agile in the first place!
Velocity is such a useful estimation tool because it accounts for all the other things that can get in the way, like interruptions and team meetings. It focuses solely on telling you how many user stories you can complete in a typical iteration based on their complexity.
Velocity is not concerned with whether a single task took an hour or two. In the grand scheme of things, it doesn’t matter. Above anything else, velocity is an internal metric used to improve the performance of a team. It’s helpful to communicate it to stakeholders for various reasons – like transparency, budgeting and prioritisation. But in the end, it’s not a measure of success. It’s not a target for the team either. Velocity is a product of an iteration. This is an important distinction that should dot the i’s and cross the t’s in this article.
Have you ever encountered any issues while tracking velocity?
Originally published Apr 22, 2022 6:28:56 PM, updated June 9 2023.