How to Increase Team Velocity by 50% II
If you missed the first post in the series, you can find it here! Last time, I opened with the hook of increasing a team's velocity by 50%. I introduced an automation project that would generate integration tests for us. Before that system, the testers, myself included, had trouble keeping pace with the rest of the team. Worse still, we found out later that some of the entities we released had bugs in them! But I had an idea. I assembled a rough outline and a demonstration for the Team Lead. After some discussion, she gave it a green light. She also gave me one month to set up the necessary scaffolding, while she got the team ahead of schedule.

The core of this automation system was T4 templates. For those unfamiliar, T4 is a file-generation framework created by Microsoft. By writing .NET code in a .tt file, you control the contents of the generated text files, which can be C# code or any other file type. We used these templates to generate partial test classes containing the predefined test cases.

Not every entity would get the same kind of tests. For example, some entities had doubles that could not be negative; others had strings that had to be populated. There were even different edge cases supported within the same data type. A database containing various flags dictated which tests to generate. To review, the database housed two kinds of tables: the Main table and an entity-specific table per entity. The Main table controlled whether tests were generated and linked to the entity tables. The entity tables housed information on the properties to test, as well as the boundary conditions and other requirements for testing.

One challenge I discovered while scaffolding was ensuring that parent-child relationships were honored. I couldn't just assign a random ID to the ParentID field; the program database would reject it with a constraint violation. I discussed and brainstormed this problem with the Senior Tester.
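To make the flag-driven generation concrete, here is a minimal sketch in plain C# of the kind of logic our .tt file ran. The `PropertyFlags` record, its column names, and the `TestClassGenerator` type are hypothetical stand-ins for illustration, not the real schema:

```csharp
using System.Collections.Generic;
using System.Text;

// Hypothetical flag row, mirroring one record from an entity table.
record PropertyFlags(string Entity, string Property, bool Required, bool NonNegative);

static class TestClassGenerator
{
    // Sketch of what the template did: for each enabled flag, emit one
    // predefined test method into a partial test class.
    public static string Generate(IEnumerable<PropertyFlags> rows)
    {
        var sb = new StringBuilder();
        foreach (var row in rows)
        {
            sb.AppendLine($"public partial class {row.Entity}Tests");
            sb.AppendLine("{");
            if (row.Required)
                sb.AppendLine($"    [TestMethod] public void {row.Property}_IsNotNull() {{ /* ... */ }}");
            if (row.NonNegative)
                sb.AppendLine($"    [TestMethod] public void {row.Property}_IsNotNegative() {{ /* ... */ }}");
            sb.AppendLine("}");
        }
        return sb.ToString();
    }
}
```

Because the generated classes are `partial`, hand-written one-off tests could live in a separate file of the same class without touching the generated code.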
We finally decided to create a helper class that could act as a factory for the tested entities. The factory would assign appropriate values to all of an entity's required fields; for the most part, these were randomly generated numbers or strings. The helper's factory functions were called to create the entity-under-test's parents, and following this logic, the helper would create the entire entity tree. This worked at any depth of the hierarchy, leaving our database in a valid state.

In database testing there are four basic operations to cover: Create, Read, Update, and Delete. To support these cases, one must control when an entity is saved to the database. To that end, we added alternative factory functions to the helper, selected by parameter flags.

Up to this point we had written the helper functions manually. That became difficult to maintain, so we automated them as well, again using T4 templates. Unlike the test generators, though, the helper templates could not honor the generate flags in the database: there were cases where an entity was not ready to test itself, but a child or a parent needed to test with it. Instead, we opted to generate factory functions for every listed entity.

By the time I had finished this level of scaffolding, it was time to bring the team on board the project. We delegated by test case: 'not equal null' tests to this developer, 'less than the specified max length' to another, and so on. The size of the system, in comparison to its scaffold, exploded during this time. I spent much less time coding the system, and much more time helping and directing the other developers. I sought guidance from the Team Lead often, to ensure that I was not ruffling feathers or otherwise harming my effectiveness as a leader. With grace and patience, she guided me toward better practices and offered ideas for helping the developers understand. Many of her ideas made it into loose documentation that I sent to the developers for reference.
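The helper's factory pattern can be sketched roughly like this. `Parent`, `Child`, `EntityHelper`, and the in-memory `Database` list are hypothetical simplifications for illustration; the real factories were generated per entity and wrote to the actual program database:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical entities with a foreign-key style relationship.
class Parent { public int Id; }
class Child  { public int Id; public int ParentId; }

static class EntityHelper
{
    static readonly Random Rng = new Random();
    static readonly List<object> Database = new();   // stand-in for the real DB

    // The 'save' parameter flag is what let the tests exercise
    // Create/Read/Update/Delete separately: an unsaved entity
    // supports a Create test, a saved one supports Read/Update/Delete.
    public static Parent CreateParent(bool save = true)
    {
        var p = new Parent { Id = Rng.Next(1, int.MaxValue) };
        if (save) Database.Add(p);
        return p;
    }

    // Creating a Child first creates its Parent, so the constraint
    // on ParentId is always satisfied; applied recursively, this
    // builds the whole entity tree top-down.
    public static Child CreateChild(bool save = true)
    {
        var parent = CreateParent();
        var c = new Child { Id = Rng.Next(1, int.MaxValue), ParentId = parent.Id };
        if (save) Database.Add(c);
        return c;
    }
}
```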
But the developers weren't the only ones who had to understand the system. I also had to find a way to communicate its value and usage to the Product Manager. I was blessed with an understanding PM. She allowed me to walk her through the basics of the system and what it meant. In the end, she decided that it was the perfect place for her to define the acceptance criteria for any new or modified entities. Once she understood the structure of the tables, she happily filled in the requirements. Moreover, she was able to provide them in greater detail than we'd been able to achieve before. Instead of a handful of loose requirements, we had detailed expectations, or in other cases a description of the desired end state of a modified entity. This greatly reduced confusion and resulted in far fewer follow-up meetings with the PM. This alone would have increased our team's speed. The test database provided our team, PM included, a common medium to communicate in, and on top of that, it provided enough detail for all parties to understand!

Back to the practical use of the database: I crafted template SQL queries that allowed the PM to add new entities or change existing ones, and with her existing skills she easily found the information she wanted. These tools, including the database, allowed the team to accommodate the PM's availability. Some weeks she would be out with customers, while on others she was free most of the day for discussion. With the test database, she could tell us what she wanted without having to be present for every one of our meetings!

After a month of expeditious work by our team, we had the core of our automation system ready to use! The developers returned to new development. The testers moved to round out the automation, and to maintain it. Our first process change was adding another step to our storyboard: the developers would now generate the core integration tests for an entity and run them.
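As a rough idea of what those template queries may have looked like, here is a hedged sketch. The table and column names (`Main`, `OrderProperties`, `GenerateTests`, and so on) are invented for illustration; the real schema differed:

```sql
-- Enable test generation for a new entity in the Main table...
INSERT INTO Main (EntityName, GenerateTests)
VALUES ('Order', 1);

-- ...then describe the properties to test in its entity table.
INSERT INTO OrderProperties (PropertyName, Required, MinValue, MaxLength)
VALUES ('Price',   1, 0,    NULL),   -- double: must be populated, non-negative
       ('Comment', 0, NULL, 200);    -- string: optional, capped at 200 chars
```

Filling in a row like this was, in effect, the PM writing acceptance criteria in a form the generator could consume directly.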
If those tests didn't pass, the developer would fix their entity before it ever went into QA. This extra step saved the testers a great deal of time, since the developers would catch the common bugs themselves, and it reduced the back-and-forth between Development and QA immensely! With the extra time, the testers could focus on maintaining the system. We could also pursue exploratory testing!

One drawback of this system was that every time developers wanted to run their entity tests, they had to change a flag in the database. That flag change affected everyone, which led to some confusion in the first week. My first iteration of improvement added an override list on each user's box, which allowed a developer to test without modifying the shared database.

On the topic of maintenance, our automation system was great at handling standard cases, but it was somewhat ornery about special cases, and especially so for one-offs. We had to add a couple of tables to identify specific special relationships, so that we could test them without disrupting the existing structure. Further, we had to carefully manage access to the database to protect it against accidental corruption, which meant allowing the developers to read, but not write to, the database. Both of these requirements were non-ideal, but in hindsight we should have expected them, considering the tools used to create the system.

The required maintenance did encourage the team to adopt a better development process. Instead of immediately going to work on new entities, we would start with a thorough review of the specifications. We adopted the habit of always having the test database open during these meetings, and we kept it up to date with the discussion. When the meeting finished, the database accurately reflected our expectations, and the developer could immediately and confidently begin their work. The automation system was beneficial for all.
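The per-box override might be sketched like this; the file-based lookup and the names `GenerateFlags` and `ShouldGenerate` are my illustrative assumptions, not the exact implementation:

```csharp
using System;
using System.IO;
using System.Linq;

static class GenerateFlags
{
    // A plain text file on the developer's own machine lists entities to
    // generate locally; the shared database flag is only a fallback, so
    // a local test run no longer affects anyone else.
    public static bool ShouldGenerate(string entity, bool sharedDbFlag, string overridePath)
    {
        if (File.Exists(overridePath))
        {
            var overrides = File.ReadAllLines(overridePath)
                                .Select(l => l.Trim())
                                .Where(l => l.Length > 0);
            if (overrides.Contains(entity, StringComparer.OrdinalIgnoreCase))
                return true;            // local override wins
        }
        return sharedDbFlag;            // otherwise honor the database flag
    }
}
```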
Though it did not completely free the testers from test maintenance, it did free up our time for exploratory testing. The developers saved time through rapid feedback, and for the PM it provided a fertile communication medium. All together, the team was able to achieve a 50% increase in our velocity for a given iteration. It was a good way to end an internship. In the next post, the last in this series, I'll cover what happened by the time I returned as a full-time developer. That will include an exact quantification of the team's new stable velocity, the improvements we made to the system, and even a scion system based on the same idea!