During my ScrumMaster certification, Mike Cohn recommended that the size of a cross-functional scrum team should be between 5 and 9. In a recent exercise, we tried splitting a scrum team three ways – 5–6 people in each team – for the next release.
At the end of last year I counted 16 people in a scrum. For most of the release the team size averaged 12, consisting of a mix of product managers, QA, front-end engineers, software engineers and the ScrumMaster. 16? It was never meant to get that big, but with an addition here and an addition there, it did. It was too big.
The scrums were starting to take too long, there were too many stories in the sprint for one team to truly keep track of, and impediments were flying around everywhere. Risk was starting to creep into each sprint and, eventually, the release.
I put it to the team that they should split in half for the next release and while some members were ok, others really objected to the idea. They felt it would generate unnecessary tension between a team that had grown strong together.
Due to the type of work within the first sprint of that next release the team worked as one again, but agreed tentatively to an unequal split (1/3 to 2/3) for the following two (and final) sprints. It went OK. Both teams had a couple of issues, mainly down to the fact that they were building three applications which all needed to communicate with one another, but the smaller team certainly preferred being small.
So now they’ve split into 3.
It’s brilliant. The teams are made up of 1–2 QA, 2–3 engineers and a product manager. The teams were encouraged to go through the agile scrum motions as they saw fit. The smaller size enabled intense collaboration during a short planning phase, resulting in comprehensive stories. Poker and sprint planning were a breeze, with fewer big personalities to get in the way of proceedings, and it was so much easier to keep on top of progress during the sprint. The software is of high quality. It’s done. It contains a good level of tests and the quality of the code is great.
Now that risk is greatly reduced. In the sprints before if something started to go wrong it could have affected the whole team. All members would get distracted by the issue and if it didn’t get resolved and the goal compromised, the team would be left demoralised by all the drama such failed sprints bring. Oh yeah, and the team members love it.
So if things were going well, then the team got bigger and the problems started – or if you’re trying things for the first time and some members aren’t engaging – break the team down.
I’ve been having a series of conversations lately with one of my engineers about learning on the job. By this I mean that technological problems get figured out during a sprint. These problems usually arise from implementing technologies that aren’t so well known by the engineering team. A recent example for us would be finding solutions for complex queries with Fluent NHibernate. The conversations were sparked by recent events during our last release where a sprint goal was compromised by the introduction of a new Domain Driven Design (DDD) architecture, and the previously mentioned object relational mapper (ORM), Fluent NHibernate.
The engineer’s argument for learning on the job centres on the satisfaction gained by solving the technical challenges presented by new technological ideas. As a manager, I consider engineers’ job satisfaction very important. In an unsettled economic climate such as the one we’ve just been (and perhaps are still going) through, it becomes even more important to use everything available to keep morale high. Our favourable stance on adopting new technologies over the past couple of years – we built a production app in .Net MVC right through its preview stages, releasing the app as MVC was released – has kept the engineers interested in their jobs. That said, introducing a new technology is accompanied by risk, risk that can turn into a high cost. Learning on the job during the sprint is risky because it can create impediments that are very difficult to remove.
In order to find the right balance it is important that all the engineers know what they’re doing throughout the sprint. If a sticking point arises – say you notice an engineer stuck with their head down for hours on end, or a couple of scrums pass where they say they’ve got one or two hours left and will be finished that day, but they never are – then you know you have a problem. Sometimes this can be solved by pairing, but more often it’s a sign the team hasn’t done enough homework.
We have just halted the introduction of a messaging technology into the next release and are midway through a two-week planning phase. The aim is to take a small sideways step in order to make sure the engineers are technically up to speed with what’s going into it.
The focus of the first planning week has been on story generation; next week we’ll be exploring those stories further to find potential engineering sticking points. When they are found, the whole engineering team will come together to investigate the solution. The object of this exercise is not to create solutions for all our potential problems up front, as any changes to the backlog during the release could render this a waste of time, but to use the potential problems as case studies to ensure the engineers have enough knowledge about the technologies we have introduced before embarking on the sprints.
This is not something we will do between every release. A series of weekly, hour-long sessions to contextually spike the implementation of the messaging technology using a current production app will get underway as soon as the release starts. We will only introduce it across our range of production apps when the team feels ready.
This is what I consider learning on the job. There will inevitably be some learning during the sprint, but the aim should be to control that as much as possible.
Change is good. That’s what we always say right? On this occasion it really was. Recently a new member of an engineering team brought with them from their previous organisation some great alternative ideas for agile approaches. One in particular was the format of their retrospective.
It goes like this:
The scrum team are all present.
There are two sets of post-it notes, each of a different colour; it’s agreed which colour is for good and which is for bad.
The team collectively set about writing, one point per post-it, what they felt was good and/or bad about the iteration they’ve just completed.
Each team member then gets a chance to stand up in front of the team and read out their notes, elaborating when either prompted or just when they feel like it, and stick them on a board.
When everyone has had their turn, the post-its are categorised into topics and grouped together.
It’s very easy to see what was good about the iteration, but also what was bad.
Team members then each have three votes for improvement, which they can add to whichever post-it, or post-its, they choose.
The votes are tallied and the two topics (either grouped or individual) with the highest number of votes are turned into backlog items for the next iteration. Of course, if there are other negatives that can be easily addressed, they are not excluded.
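The tallying step lends itself to a few lines of code. A minimal sketch in Python (the topic names are invented for illustration):

```python
from collections import Counter

# Hypothetical dot-voting results: each team member placed three votes
# on the post-it topics they most wanted improved.
votes = [
    "flaky builds", "flaky builds", "late stand-ups",
    "flaky builds", "unclear stories", "late stand-ups",
    "unclear stories", "flaky builds", "late stand-ups",
]

tally = Counter(votes)

# The two topics with the most votes become backlog items for the next iteration.
backlog_items = [topic for topic, _ in tally.most_common(2)]
print(backlog_items)  # ['flaky builds', 'late stand-ups']
```

In practice the post-its do this job perfectly well, but the sketch makes the rule explicit: only the top two topics become backlog items; the rest wait for the next retrospective unless they are trivial to fix.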
We’ve found this format to be very successful. The structured approach means we easily fit into the time box for the meeting. Every team member gets their chance to praise what they liked, or vent their frustration over what they didn’t; it’s actually quite a good bonding session. The focus on the big negatives means they never fester and are addressed promptly, so the team is continually improving.
As part of the transition to agile scrum at one company I worked for, some key software engineering considerations – code maintainability, test coverage, extensibility and scalability – came under review. Although products were being delivered, problems were being built into them.
As an engineering team we set ourselves the challenge to start practising TDD (Test Driven Development) within a year. This goal would force better code quality and the code would be future proof as it would be designed to accommodate change.
Such a goal isn’t easy to reach. If the engineers aren’t used to building software in a certain way they can’t just start doing TDD. There needs to be a period of education; engineers need to lose bad habits.
Sending the engineers on a course could be an option; however, aside from the fact that there may be no budget for it, adopting a sustained learning process is far more beneficial if you want to truly progress.
We set up a series of weekly, hour-long internal seminars. Both the engineers and the business made concessions: the seminar ran from 1.30 to 2.30, half over lunchtime and half during business hours. We booked a room away from the bustle of daily working life, organised a projector, and week by week the engineers presented on topics we believed would improve their standards of coding. We started by examining design patterns, but soon found we needed to revisit the core principles of OOP (Object Orientated Programming), with particular focus on Bob Martin’s PrinciplesOfOod, also known as the S.O.L.I.D. principles.
Armed with a couple of really good books – Bob Martin’s Agile Principles, Patterns, and Practices in C# is a fantastic teaching aid – and a lot of support for each other, over a period of six months we moved on to look at the TDD approach, and the team’s standards started to improve. After those six months, all new work had to be built using TDD. This was only partly successful during the following six months because it was not so easy to write tests against legacy code bases.
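To make the test-first rhythm concrete, here is a minimal sketch – in Python rather than the team’s C#, with an invented velocity helper as the subject – where the tests are written first (and watched fail) and the implementation is only what is needed to make them pass:

```python
import unittest

# Step 2: just enough implementation to make the tests below pass.
def average_velocity(sprints):
    """Mean story points completed per sprint; 0.0 for no history."""
    if not sprints:
        return 0.0
    return sum(sprints) / len(sprints)

# Step 1 (written first): the tests express the behaviour we want.
class AverageVelocityTests(unittest.TestCase):
    def test_empty_history_gives_zero(self):
        self.assertEqual(average_velocity([]), 0.0)

    def test_mean_of_recent_sprints(self):
        self.assertEqual(average_velocity([8, 13, 9]), 10.0)

if __name__ == "__main__":
    unittest.main(argv=["tests"], exit=False)
```

The order is the point: the failing test defines “done” before any production code exists, which is what forces the testable design the seminars were working towards.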
A year after the initial review, the team worked on a green-field project. After only four weeks the system was complete. Code maintainability, measured using Microsoft Visual Studio’s code metrics tool, averaged 80 points (which is good!). Test coverage was around 50% for the entire solution but, more importantly, 80–100% in the solution projects containing the business logic. The engineers were practising TDD 70% of the time, hindered by falling behind schedule due to problems communicating architectural design expectations.
This is a great achievement. Now the team pair programs most of the time, working with each other to strive for excellence. Subsequent topics for the seminars included “interfacing the ball of mud” in order to create testable code against legacy systems not designed for testability. The team designed strategies for refactoring legacy systems into a testable architecture and began adopting DDD. The internal seminar is a permanent fixture; anything can be discussed, although it is best to select topics relevant to the business.
It took the best part of a year to see real improvements; you may think that is too long, but I don’t. It’s difficult for a team to massively shift their practices whilst minimising the effect on their output for the business. The point is, if your business and team are willing to commit the time – the hour a week for the seminar – and your senior team design and constantly review a quarterly syllabus, the team will teach themselves and improve the quality of their work.
I wanted to post this as an update to the original post on how to estimate story points using the poker planning technique and involving your QA team.
In the first post I described how a poker planning session almost broke down when we included the quality assurers in estimating the relative size of each of our stories. Just to get through that session we ended up producing one or two estimates that didn’t include the testing time.
It’s amazing how adaptable people can be. On our second attempt, only two weeks later, I’m very pleased to say our session went brilliantly. The key to this success, I feel, was the ScrumMaster. They took the initiative, pre-empting another possibly disastrous session, and clarified some of our own rules.
Last time, because both engineering and QA were unused to thinking about delivering stories to done including each other’s time, we ended up settling on the highest value that either the engineers, or QA, or a hybrid could agree on. There was very little unanimity, nor was anyone really convinced we were providing estimates to the best of our ability.
This time we started our session by reviewing a very simple process, designed for our team, that the ScrumMaster had put together. If consensus could not be reached on the first attempt at laying cards, try again. If there was still no consensus after a round of discussion, try again. If, on the third attempt, there was still no consensus, we would settle on the highest value, as we had in the previous session. Mike Cohn suggests in his book Agile Estimating and Planning to “…continue the process as long as estimates are moving closer together…”, but in the past this hasn’t worked for us, hence the three-attempt cap.
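The ScrumMaster’s rule is simple enough to sketch in a few lines. A minimal illustration in Python, assuming each round’s laid cards are recorded as a list:

```python
def settle_estimate(rounds):
    """Apply the team's rule: stop at the first unanimous round;
    after a third round with no consensus, take the highest card laid."""
    for cards in rounds[:3]:
        if len(set(cards)) == 1:  # consensus: everyone laid the same card
            return cards[0]
    # Cap reached: settle on the highest value from the last round played.
    return max(rounds[min(len(rounds), 3) - 1])

# Consensus reached on the second round:
print(settle_estimate([[5, 8, 5], [8, 8, 8]]))     # 8
# No consensus after three rounds: highest card wins.
print(settle_estimate([[3, 5], [5, 8], [5, 13]]))  # 13
```

The cap is the only departure from Cohn’s “keep going while estimates converge” advice: after three rounds the discussion has usually surfaced everything useful, so the highest card is accepted and the session moves on.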
From there on in, the team set about story pointing. We had about nine stories to estimate and in about an hour and a half we were done. Estimating for each other’s disciplines still didn’t come easily to everyone, but when someone struggled the team worked with them so they could understand what they were thinking about. Only once did we not reach a consensus by the third attempt, and that story had such a high estimate it would need to be broken down further anyway. The mood of the session was very positive, leaving all present extremely satisfied with the result.
So, the first stage of this particular exercise to incorporate QA into our story point estimation sessions has been a success. Yet another leap forward in our journey.
There will be more to follow on sprint planning with QA and integrating the seating plan of teams who are already co-located.
We had just finished the first stage of a green-field application and missed our sprint goals. Damn! When we first saw a problem, should we have pulled some work out of the sprint?
This is not something anyone should be afraid of doing.
Let’s look at what happened.
The competent team had worked together for 18 months on various projects. A new style of architecture was to be introduced to improve testability, and in preparation for this shift we had invested in several internal group educational sessions, as well as some good debates during sprints. At the start of the sprint we didn’t commit to as much work as we had capacity to do; however, we were pretty close.
After four days we hadn’t delivered anything to test. Complications with the precise approach to the new architecture were holding up the project. At the end of the fifth day we still had nothing in QA, so we held a second daily scrum. We agreed that if we finished engineering a couple of stories so testing could start by the end of the next day, the engineers would finish the rest of their work during the week, and time could be made up by engineers pair testing with QA. A little ‘waterfallacy’, but it was only one sprint, so never mind!
Can you guess what happened? We didn’t meet the sprint goals and were left with about 40 hours worth of work.
I’ve been in several sprints like this where, for various reasons, the goals haven’t been met. The end of the sprint is unbearable. The team gets micromanaged; frustration grows and turns into panic – a desperate rush to get things done during the last couple of days – then there’s the realisation, at the 11th hour, of failure.
I don’t like that word, nor does the team who’ve worked so hard to try to reach their goals. The effect it has on morale is a real pain to manage and it doesn’t encourage better quality work. To avoid ‘failure’, teams cut corners and in my experience the tests – so important in modern-day software engineering – go first.
So what should we have done?
The real focus should have been right at the beginning, in sprint planning. The potential unknowns associated with incorporating a new technical design into a green-field project should have been taken into consideration by the team when confirming their commitment. It’s easy to say in hindsight, but the team should have committed to less in the first place.
That said, if the team are confident that their commitment is achievable, as they were in this instance, and you find yourself in the same place as us, you have three options. The first is to carry on building, accepting the risk of digging yourself a deeper hole. The second: abort the sprint. The third is to pull some stories from the sprint with the approval of the stakeholders.
We know the first option is a BIG risk, as you may well end up compromising the quality of your product. As for the second, I don’t think the situation was serious enough for an abnormal termination. I prefer the third option of resetting expectations and pulling something out of the sprint; it’s simply more practical. There is no need for all the drama the other two options create, and the sooner the stakeholders are informed of any problems the better.
I don’t think this solution is right for every situation. What’s the point of having a goal if you can just move the goal posts when you feel like it? But sometimes, such as the example Mike Cohn alluded to during my ScrumMaster certification where you have a team new to scrum who are finding their velocity, or this situation where there was a good reason for changing the architecture, then I think it’s the best solution as long as everyone agrees.
Previously in our sprints we included QA time by adding a buffer to the engineers’ task estimates. From now on, to provide greater visibility of the challenges QA might face whilst testing each story, we have started to include them in both the story point estimating and tasking sessions.
We started out by suggesting that the team (QA and engineering) collectively estimate a story point value covering not only how long a story takes to engineer, but also how long it may take to test (relative to other stories, of course).
This didn’t go as expected.
Some members got it and were able to take into account the advice or suggestions from their peers and re-evaluate accordingly. Others found it very difficult to accept how QA and engineering could effectively estimate each other’s time.
Such reservations were confirmed during the estimation of the second story. The engineers agreed on a 5; QA initially put down a 1. The engineers then suggested to QA what might be involved in the story, but QA re-evaluated to only a 2. This resulted in a lively debate about what we were doing, how “pointless” this all was, etc., and the session pretty much broke down.
The strongest opinion on what to do next was for the engineers to story point their value, QA to story point theirs, and then add the two together. For the sake of moving on, we tried it. We went back to the first story: engineers a 2, QA a 1, so we got 3. Fine. Next: engineers 5, QA 3, and we have 8. This is easy! Story 3: the engineers wavered between 13 and 20 but settled on 20; QA 5. OK, so we suggested either re-evaluating again collectively to see if we accepted 20, or rounding up to 40. Nope. Some of the engineers wanted to stick with the story point value of 25, and the previous debate flared up again.
Both engineers and QA understand why we story point and that we use the Fibonacci sequence to balance out the estimations. They’d heard the bucket of water metaphor Mike Cohn used during my ScrumMaster certification: “…you have 25 litres of water, that’s not going to fit in a 20 litre bucket, you should use a 40”. They want (and agree with the need) to be more integrated, but just struggled with the concept of collectively settling on a figure. There was a lot of confusion over the level of accuracy required.
After a looong debate, a consensus was reached just to get through the session. In order to obtain our team velocity we simply needed to be consistent over time. The team settled on this: QA story point their value, the engineers theirs, and we accept the highest. From that point the session finished quite quickly; everyone was a bit frazzled but content with the result.
What we have done is wrong. Settling on the highest value is fine if QA vote 1 and the engineers vote 13. But what if the engineers vote 13 and QA vote 8? Or QA vote 20 and the engineers vote 13? The real estimates for those last two examples would surely be nearer 20 and 40. We are not providing an accurate estimate to the best of our knowledge at the time, and that matters for management to plan a release. It’s difficult to estimate accurately for a long release period, but we should not knowingly underestimate. This will need to be addressed again during the next session.
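One way to see the gap: a minimal Python sketch, assuming we sum the two disciplines’ votes and snap the total to the nearest card in the standard deck (one possible alternative rule, not the one the team adopted):

```python
DECK = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def nearest_card(total):
    """Snap a combined engineering + QA estimate to the nearest card in the deck."""
    return min(DECK, key=lambda card: abs(card - total))

# Taking the highest single vote understates the combined effort:
print(max(13, 8), nearest_card(13 + 8))    # 13 vs 20
print(max(20, 13), nearest_card(20 + 13))  # 20 vs 40
```

Whatever rule the team eventually lands on, the comparison shows why “take the highest” only works when one discipline’s vote dwarfs the other’s.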
The goal is that the team will naturally start considering each other during estimating sessions. They will get there. Even the senior engineer who raised the most questions about the process at one point put down a 40 and justified it by saying “well there’s a lot to test”.
We have just started with test automation, which will help to bring QA and engineering together further. We might also get there more quickly with smaller teams; yesterday’s session was 3 QA and 7 engineers.
The positives from the story pointing session? We can now clearly see the different challenges that QA face from story to story, which sometimes have no direct correlation with the engineering time for the story. This is definitely going to help us with both planning and execution.