I was coaching a rather large group of Scrum teams at an email marketing SaaS firm. The group was relatively mature, having practiced Scrum for over 4 years, and over that time the organization had embraced Agile principles and was well on its way to becoming a high-performance agile organization. Most of my efforts went toward “fine-tuning” from the perspective of an “external set of eyes”. It was a privilege working with this organization and its development teams.
But, as with anything in life, there were always challenges and room for improvement. I remember attending a noteworthy backlog refinement meeting with one of the teams. This particular team was incredibly strong, so I was simply attending to see how well their grooming sessions were going. To be honest, I was hoping to share some lessons from their approach with some of the less experienced teams.
Jon was one of the “lead engineers” on the team. He had been a Scrum Master for a while, so his agile chops were mature and balanced. However, I was surprised when the following happened:
Max, the Product Owner, introduced a User Story for the second time in refinement. The team had already seen it once and had realized two things:
- It was bigger than a Sprint’s worth of work for the team (call it an Epic or non-executable Story), and
- They needed more information about the legacy code surrounding the story before they could implement it.
So they created a Research Spike that represented technical investigation into the story.
This session was the first time the team had come back together after their “learning” from the Spike. Jon had taken the lead on the Spike, working with two other team members.
Jon started the discussion by walking through the implications from a legacy-code-base perspective. He and his small team recommended splitting the Epic into three sprint-digestible chunks for execution. Two of the chunks had a dependency on each other, so they needed to be worked in the same Sprint. The third needed to be worked in the subsequent Sprint in order to complete the original Epic.
Having reviewed the legacy code base, Jon said that doing the work properly would take a total of just over 40 Story Points. However, he pointed out that this might be perceived as excessive, because the bulk of those points would be spent on refactoring the older code base. The specific breakdown was 18 points for the “new functionality” and 25 points to refactor the related legacy code.
The Product Owner excitedly opted for the 18 points, deferring the refactoring bits. Jon and his small Spike team wholeheartedly agreed, and the entire team went along for the ride. From a backlog perspective, the 18 points’ worth of stories became high priority, and the refactoring work dropped to near the bottom of the list.
And the meeting ended with everyone being “happy” with the results.
I decided not to say anything, but I left the room absolutely deflated by the decision. It ran counter to everything we had been championing at the agile leadership level. Simply put, we wanted the teams to be doing solid, high-quality work that they could be proud of. In fact, all of our Definition-of-Done and Release Criteria centered on those notions.
If the cost of this Epic was approximately 40 points to do it “right”, then that was the cost – period. Splitting it into the parts you “agreed with” and the parts you “didn’t agree with” was not really an option. Sure, the team needed to make the case to the Product Owner for the “why” behind the cost, but this was not a product-level decision; it was a team-based, quality-first decision. De-coupling the two broke our quality rules, and that decision would haunt us later as technical debt.
To close this story, I used my not-so-inconsequential influencing capabilities to change this outcome. We decided that this Epic was important enough to do properly and that the approximately 40-point cost was worth the customer value. In other words, we made a congruent and sound business decision without cutting corners. And the team fully appreciated this opportunity, without second-guessing and guilt, to deliver a fully complete feature that included the requisite refactoring to make it whole.
Now, I only hope they continue to handle “refactoring opportunities” the same way.
Refactoring Versus Technical Debt
Any discussion of refactoring must also include the notion of technical debt. The two are inextricably linked in the agile space: refactoring is a way of removing or reducing technical debt. However, not all technical debt is a direct refactoring candidate. What I mean is that refactoring typically refers to software or code, while technical debt can surface even in documentation or test cases. So technical debt can be a bit broader, if you want to consider it in that context.
Broad Versus Narrow Consideration
Typically, any discussion of refactoring is embedded in “the code” – usually the application- or component-level code in which you are delivering your product. Sometimes, although much more rarely, the discussion extends to supporting code such as test automation, build infrastructure, and supporting scripts.
I would like to make the coupling between technical debt and refactoring even stronger. To me, you refactor away technical debt: you identify the debt, and the effort to remove it is refactoring. Code is the primary place for this, but I believe you can and should refactor “other things” as well, for example:
- The graphical design on the wall that no longer represents the design of your product;
- The test case (manual, automated, or even exploratory charter) that is no longer relevant given your product’s behavior;
- The wireframe that has iterated several times with the code and is now out of date;
- The wiki page that tells new team members how to build the application, or other team-based documentation that has gone stale;
- The test automation that the team broke during the last Sprint and failed to fix;
- The tooling that everyone uses to measure application performance, but that needs an update;
- The team’s throughput measures that have not been updated and no longer apply because the team moved from Scrum to Kanban;
- Or the current process a team is using for story estimation that is not serving them very well.
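To ground the code-level case, here is a minimal sketch of refactoring away one very common form of technical debt – duplicated business logic. The functions and the discount rule are hypothetical, invented purely for illustration; they are not from the Epic in the story above:

```python
# Before: the same discount rule lives in two functions -- a small,
# concrete piece of technical debt. Any rule change must now be made
# in two places, and the copies will inevitably drift apart.

def invoice_total(items):
    total = sum(price * qty for price, qty in items)
    if total > 100:          # discount rule, copy #1
        total *= 0.9
    return round(total, 2)

def quote_total(items):
    total = sum(price * qty for price, qty in items)
    if total > 100:          # discount rule, copy #2
        total *= 0.9
    return round(total, 2)

# After: the debt is refactored away by extracting the shared rule.
# Behavior is unchanged; the rule now has exactly one home.

def discounted_subtotal(items, threshold=100, discount=0.10):
    subtotal = sum(price * qty for price, qty in items)
    if subtotal > threshold:
        subtotal *= (1 - discount)
    return round(subtotal, 2)

def invoice_total_refactored(items):
    return discounted_subtotal(items)

def quote_total_refactored(items):
    return discounted_subtotal(items)
```

The Spike story applies here in miniature: the “cheap” path is to copy the rule a third time; the quality-first path is to pay the small refactoring cost now so the debt stops compounding.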
Clearly I lean toward a broad-brush view of refactoring responsibilities, connecting them to the various kinds of technical debt. From my perspective, I’d recommend that you deal with it as broadly as possible within your own context. But enough talking about what refactoring is – what we need next are strategies for dealing with it.
Wrapping Up
In the next and final installment of this article, I’ll share some strategies for how to handle your technical debt. Until then…
Stay agile my friends,
Bob.
References
- Technical debt definition –
- Managing Software Debt by Chris Sterling is a wonderful book dedicated to all aspects of technical software debt.
- Here’s a link to an article/whitepaper I wrote on Technical Test Debt – a variant of technical debt that focuses on the testing and automation aspects –
- A recent perspective by Henrik Kniberg –
- A fairly solid overview of technical debt with some solid references –
- Israel Gat of the Cutter Consortium has published several papers with his views on measuring and the ROI of Technical Debt. Searching for his works would be a good investment.