Posts Tagged ‘Improvement’
During my ScrumMaster certification, Mike Cohn recommended that the size of a cross-functional scrum team should be between 5 and 9. In a recent exercise, we tried splitting a scrum team three ways – 5-6 people in each team – for the next release.
At the end of last year I counted 16 people in a single scrum. For most of the release the team size averaged 12, consisting of a mix of product managers, QA, front-end engineers, software engineers and the ScrumMaster. 16? It was never meant to get that big, but with an addition here and an addition there, it did. It was too big.
The daily scrums were starting to take too long, there were too many stories in the sprint for one team to truly keep track of, and impediments were flying around everywhere. Risk was creeping into each sprint and, eventually, the release.
I put it to the team that they should split in half for the next release, and while some members were OK with the idea, others really objected. They felt it would generate unnecessary tension within a team that had grown strong together.
Due to the type of work in the first sprint of that next release the team worked as one again, but tentatively agreed to an unequal split (one third to two thirds) for the following two (and final) sprints. It went OK; both teams had a couple of issues, mainly down to the fact that they were building three applications which all needed to communicate with one another, but the smaller team certainly preferred being small.
So now they’ve split into 3.
It’s brilliant. The teams are made up of 1-2 QA, 2-3 engineers and a product manager. The teams were encouraged to go through the agile scrum motions as they saw fit. The smaller size enabled intense collaboration during a short planning phase, resulting in comprehensive stories. Planning poker and sprint planning were a breeze, with fewer big personalities to get in the way of proceedings, and it was so much easier to keep on top of progress during the sprint. The software is of high quality. It’s done. It has a good level of tests and the quality of the code is great.
That risk is now greatly reduced. In the sprints before, if something started to go wrong it could affect the whole team: all members would get distracted by the issue, and if it didn’t get resolved and the goal was compromised, the team would be left demoralised by all the drama such failed sprints bring. Oh yeah, and the team members love it.
So if things were going well until the team got bigger and problems started to appear, or if you’re trying things for the first time and some members aren’t engaging, break the team down.
I’ve been having a series of conversations lately with one of my engineers about learning on the job. By this I mean that technological problems get figured out during a sprint. These problems usually arise from implementing technologies that aren’t so well known by the engineering team. A recent example for us would be finding solutions for complex queries with Fluent NHibernate. The conversations were sparked by recent events during our last release, where a sprint goal was compromised by the introduction of a new Domain-Driven Design (DDD) architecture and the previously mentioned object-relational mapper (ORM), Fluent NHibernate.
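For a flavour of what “complex queries” meant in practice, here is a minimal, hypothetical sketch – the Order and Customer entities are invented for illustration, not taken from our codebase. Fluent NHibernate supplies the mapping; the querying itself is plain NHibernate, in this case QueryOver across an association, which is exactly the sort of thing that stalls a team still learning the ORM mid-sprint.

```csharp
using System.Collections.Generic;
using FluentNHibernate.Mapping;
using NHibernate;

// Hypothetical entities, invented for illustration.
public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Region { get; set; }
}

public class Order
{
    public virtual int Id { get; set; }
    public virtual Customer Customer { get; set; }
    public virtual decimal Total { get; set; }
}

// Fluent NHibernate handles the mapping side...
public class OrderMap : ClassMap<Order>
{
    public OrderMap()
    {
        Id(x => x.Id);
        Map(x => x.Total);
        References(x => x.Customer).Not.Nullable();
    }
}

// ...while the query itself goes through NHibernate's QueryOver API,
// joining across the Order -> Customer association.
public static class OrderQueries
{
    public static IList<Order> LargeOrdersInRegion(
        ISession session, string region, decimal minTotal)
    {
        Customer customer = null;
        return session.QueryOver<Order>()
            .JoinAlias(o => o.Customer, () => customer)
            .Where(o => o.Total >= minTotal)
            .And(() => customer.Region == region)
            .List();
    }
}
```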
The engineer’s argument for learning on the job centres on the satisfaction gained from solving the technical challenges presented by new technological ideas. As a manager, I consider engineers’ job satisfaction very important. In an unsettled economic climate such as the one we’ve just been through (and perhaps are still going through), it becomes even more important to use everything available to keep morale high. Our favourable stance on adopting new technologies over the past couple of years – we built a production app in .Net MVC right through its preview stages, releasing the app as MVC was released – has kept the engineers interested in their jobs. That said, introducing a new technology is accompanied by risk, risk that can turn into a high cost. Learning on the job during the sprint is risky because it can create impediments that are very difficult to remove.
In order to find the right balance it is important that all the engineers know what they’re doing throughout the sprint. If a sticking point arises – say you notice an engineer stuck with their head down for hours on end, or a couple of scrums pass where they say they’ve got one or two hours left and will be finished that day, but never are – then you know you have a problem. Sometimes this can be solved by pairing, but more often it’s a sign the team hasn’t done enough homework.
We’ve just halted the introduction of a messaging technology into the next release and are midway through a two-week planning phase. The aim is to take a small sideways step in order to make sure the engineers are technically up to speed with what’s going into the release.
The focus of the first planning week has been on story generation; next week we’ll explore those stories further to find potential engineering sticking points. When they are found, the whole engineering team will come together to investigate the solution. The object of this exercise is not to create solutions for all our potential problems up front – any changes to the backlog during the release could render that a waste of time – but to use the potential problems as case studies to ensure the engineers have enough knowledge of the technologies we have introduced before embarking on the sprints.
This is not something we will do between every release. A series of weekly, hour-long sessions to contextually spike the implementation of the messaging technology using a current production app will get underway as soon as the release starts. We will only introduce it across our range of production apps when the team feels ready.
This is what I consider learning on the job. There will inevitably be some learning during the sprint, but the aim should be to control that as much as possible.
Change is good. That’s what we always say, right? On this occasion it really was. Recently a new member of an engineering team brought with them, from their previous organisation, some great alternative ideas for agile approaches. One in particular was the format of their retrospective.
It goes like this:
The scrum team are all present.
There are two sets of post-it notes, each a different colour; it’s agreed which colour is for good and which is for bad.
The team collectively set about writing, one point per post-it, what they felt was good and/or bad about the iteration they’ve just completed.
Each team member then gets a chance to stand up in front of the team, read out their notes – elaborating when prompted or just when they feel like it – and stick them on a board.
When everyone has had their turn, the post-its are categorised into topics and grouped together.
It’s very easy to see what was good about the iteration, but also what was bad.
Team members then each have three votes for improvement, which they can add to whichever post-it, or post-its, they choose.
The votes are tallied and the two topics (either grouped or individual) with the highest number of votes are turned into backlog items for the next iteration. Of course, if there are other negatives that can be easily addressed, they are not excluded.
We’ve found this format to be very successful. The structured approach means we easily fit into the time box for the meeting. Every team member gets their chance to praise what they liked, or vent their frustration over what they didn’t; it’s actually quite a good bonding session. The focus on the big negatives means they never fester and are addressed promptly, so the team is continually improving.
As part of the transition to agile scrum at one company I worked for, some key software engineering considerations – code maintainability, test coverage, extensibility and scalability – came under review. Although products were being delivered, problems were being built into them.
As an engineering team we set ourselves the challenge of practising TDD (Test-Driven Development) within a year. This goal would force better code quality, and the code would be future-proof because it would be designed to accommodate change.
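To make the target concrete, here is a minimal sketch of the rhythm we were aiming for – the class names are invented for illustration. The NUnit test is written first and fails (red), then just enough production code is added to make it pass (green), and then both are refactored.

```csharp
using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    // Written before PriceCalculator exists: the test fails first (red)...
    [Test]
    public void ApplyDiscount_TenPercent_ReducesPriceByTenth()
    {
        var calculator = new PriceCalculator();

        var discounted = calculator.ApplyDiscount(100m, 0.10m);

        Assert.AreEqual(90m, discounted);
    }
}

// ...then just enough production code is written to go green,
// and finally both test and code are refactored.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal rate)
    {
        return price * (1m - rate);
    }
}
```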
Such a goal isn’t easy to reach. If the engineers aren’t used to building software in a certain way, they can’t just start doing TDD. There needs to be a period of education; engineers need to lose bad habits.
Sending the engineers on a course could be an option; however, aside from the fact that there may be no budget for it, adopting a sustained learning process is far more beneficial if you want to make real progress.
We set up a series of weekly, hour-long internal seminars. Both the engineers and the business made concessions; the seminar ran from 1.30 to 2.30, half over lunchtime and half during business hours. We booked a room away from the bustle of daily working life, organised a projector, and week by week the engineers presented on topics we believed would improve their standards of coding. We started by examining design patterns, but soon found we needed to revisit the core principles of OOP (Object-Oriented Programming), with particular focus on Bob Martin’s PrinciplesOfOod, also known as the S.O.L.I.D. principles.
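As a taste of the seminar material, a small invented example of the “D” in S.O.L.I.D. (dependency inversion) might look like the following: the high-level class depends on an abstraction rather than a concrete mail client, which conveniently also makes it testable.

```csharp
// Dependency inversion: ReportGenerator depends on an abstraction,
// so the concrete notifier can be swapped for a fake in tests.
public interface INotifier
{
    void Send(string message);
}

public class SmtpNotifier : INotifier
{
    public void Send(string message)
    {
        // Real SMTP details would live here.
    }
}

public class ReportGenerator
{
    private readonly INotifier _notifier;

    public ReportGenerator(INotifier notifier)
    {
        _notifier = notifier;
    }

    public void Run()
    {
        // ... build the report ...
        _notifier.Send("Report complete");
    }
}
```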
Armed with a couple of really good books – Bob Martin’s Agile Principles, Patterns, and Practices in C# is a fantastic teaching aid – and a lot of support for each other, over a period of six months we moved on to look at the TDD approach, and the team’s standards started to improve. After those six months, all new work had to be built using TDD. This was only partly successful during the following six months, because it was not so easy to write tests against legacy code bases.
A year on from that initial review, the team worked on a greenfield project. After only four weeks the system was complete. Code maintainability, measured using Microsoft Visual Studio’s code metrics tool, averaged 80 points on the 0-100 maintainability index, where higher is better (which is good!). Test coverage was around 50% for the entire solution, but more importantly 80-100% in the solution projects containing the business logic (a good post on how to write good unit tests). The engineers were practising TDD 70% of the time, hindered by falling behind schedule due to problems communicating architectural design expectations.
This is a great achievement. The team now pair programs most of the time, working with each other to strive for excellence. Subsequent seminar topics included “interfacing the ball of mud” in order to create testable code against legacy systems not designed for testability. The team designed strategies for refactoring legacy systems into a testable architecture and began adopting DDD. The internal seminar is a permanent fixture; anything can be discussed, although it is best to select topics relevant to the business.
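“Interfacing the ball of mud” boils down to placing a thin, interface-backed adapter in front of the untestable legacy calls. A hypothetical sketch, with the legacy class invented as a stand-in:

```csharp
// Stand-in for the real ball of mud: static, no seams, hard to fake.
public static class LegacyDataLayer
{
    public static string FetchCustomerName(int customerId)
    {
        // Imagine hundreds of lines of database and session state here.
        return "customer-" + customerId;
    }
}

// The seam: new code depends on this interface, not the static class.
public interface ICustomerStore
{
    string GetCustomerName(int customerId);
}

// Thin adapter over the legacy call; covered by a few integration tests,
// while everything consuming ICustomerStore can be unit tested with a fake.
public class LegacyCustomerStore : ICustomerStore
{
    public string GetCustomerName(int customerId)
    {
        return LegacyDataLayer.FetchCustomerName(customerId);
    }
}
```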
It took the best part of a year to see real improvements; you may think that is too long, but I don’t. It’s difficult for a team to massively shift their practices whilst minimising the effect on their output for the business. The point is, if your business and team are willing to commit the time – the hour a week for the seminar – and your senior team design and constantly review a quarterly syllabus, the team will teach themselves and improve the quality of their work.