I am currently on a project that provides configuration management support for more than 350 production applications. Many are internal, but dozens of them are client-facing.
One project in particular has been especially problematic for everyone involved. The project consists of 155 Java files that make up an enterprise web service for data processing. A decision was made to let the developers skip unit testing due to deadline constraints. Yet 6 weeks into the initial QA testing, the QA team was already 2 weeks behind schedule. Worse, that 2-week shortfall produced a staggering 350 high and critical defects. My review of this project started around the 12-week mark, at which point the defects had been reduced to around 100 open high or critical defects. Even more disturbing, a thorough analysis showed that 5% to 10% of all the defects opened were repeat defects, already opened 1 to 3 times earlier in this same QA cycle.
Over the years, I have heard almost every excuse for NOT testing. I have yet to come across one that I felt was even remotely valid. I personally think it takes more (wasted) time to come up with an excuse than it does to actually test your code. So let's look at some of the wonderful and creative excuses I have seen:
- I don’t know JUnit.
- Testing takes too long, I am on a tight deadline.
- I did not originally create this code, so I am afraid to change anything so that it can be tested.
- The application is small, and I can manually test it in less time than it would take to create JUnit tests.
- The application requires a database, JMS, or other enterprise services, so it can only be tested in-container.
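That last excuse deserves a closer look. In my experience, code that talks to a database or JMS can usually be made testable outside the container by hiding the service behind an interface and substituting an in-memory stub in tests. A minimal sketch (every class and method name below is hypothetical, not taken from the real project):

```java
// Sketch: hide the enterprise service behind an interface so the
// business logic can be tested outside the container.
interface OrderStore {
    void save(String orderId);
    int count();
}

// Production code would back this with JDBC or JMS; tests use this
// in-memory stub instead, so no container is required.
class InMemoryOrderStore implements OrderStore {
    private final java.util.List<String> orders = new java.util.ArrayList<String>();
    public void save(String orderId) { orders.add(orderId); }
    public int count() { return orders.size(); }
}

public class OrderProcessor {
    private final OrderStore store;

    // The dependency is injected, so a test can pass in the stub.
    public OrderProcessor(OrderStore store) { this.store = store; }

    public void process(String orderId) {
        if (orderId == null || orderId.length() == 0) {
            throw new IllegalArgumentException("orderId required");
        }
        store.save(orderId);
    }
}
```

With this shape, a plain JUnit test injects the stub through the constructor; only the true integration tests need the real JDBC/JMS implementation running in the container.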
On my current project, I have found only 2 projects out of the 350+ that are willing to perform JUnit testing. The rest all test manually.
Admittedly, the QA team now regrets allowing the development team to shave 2 weeks off their development schedule by NOT writing unit tests. The 2 weeks saved were simply added to the QA schedule. Even that number is greatly understated, because those 2 weeks of QA time were worked by 7 people averaging 60 hours per week (420 man-hours, or 52.5 man-days, or 10.5 man-weeks), and this only accounts for 250 of the 350 defects at the time of analysis. On current trends, I project the defect count to exceed 450 and the application to need an additional 6 weeks of testing to finalize. Note: I have not even begun to dive into the developers' time spent fixing these defects.
One of the largest consequences of omitting unit testing altogether is that stopping the process to create the missing unit tests can no longer be absorbed in the development life-cycle. That leaves a choice:
- Stay the course and continue to develop with no unit tests, while burning massive resources for Application Developers (AD) and Quality Assurance Testers (QA)?
- Stop development and add proper and complete unit testing as originally specified?
- Something in between?
None of these options is easy, given all the excuses, burdens, and time constraints (real and perceived).
The best solution is going to be the one that causes the least pain for all teams involved, yet has the biggest return on investment for them.
I started by thinking about where the biggest pain point currently was in the process. That seemed easy to me: it was the 350+ serious defects logged to this point, compounded by the 5% to 10% defect reincarnation rate.
Next I wanted to sit down and work out the actual cost of these defects, so I diagrammed the current life-cycle of a defect. I found 3 teams involved in the life-cycle of each defect.
Starting with AD, I estimated that in the current no-unit-test scenario, each developer performs 5 basic steps for each defect:
- Acknowledge the submitted defect.
- Assess/validate the defect.
- Create/update code.
- Create a label/baseline of the code.
- Create a new build/deploy request.
These 5 basic steps take anywhere from 4 hours to 83 hours; the high end assumes a defect takes 1 week to assess and 1 week to fix.
On top of these costs, it takes the Configuration Management team (CM) 3 to 5 hours to produce each additional build. These numbers are fairly straightforward, and they are incurred each time we create a build to validate fixed defects. Note, however, that CM does not incur this cost per defect, only per build, whereas most defects cost AD and QA on a per-defect basis.
Ending with QA, they also have 5 basic tasks to perform for each defect:
- Acknowledge the QA request.
- Run tests to validate the defect.
- Run regression tests.
- Assess the outcome.
- Send a success or failure response to AD.
These 5 basic steps take anywhere from 5.5 hours to 11 hours of QA time.
So let's recap. It takes:
- AD 4 to 83 hours per defect
- CM 3 to 5 hours per build
- QA 5.5 to 11 hours per defect
That is a total of 9.5 to 94 hours per defect, counting only AD and QA and excluding the CM build time. At the minimum, completing all 350 defects would cost 3,325 hours (about 415 man-days). These numbers seem in line with my other calculations: with 7 AD members and 7 QA members, that works out to roughly 29 man-days of elapsed time at a minimum.
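To keep the arithmetic honest, the recap above can be reduced to a few lines of code (figures as stated in the text; 8-hour man-days assumed):

```java
// Back-of-the-envelope model of the per-defect cost figures above.
public class DefectCostEstimate {

    // Combined AD + QA hours for one defect. CM build time is excluded
    // because it is incurred per build, not per defect.
    static double perDefect(double adHours, double qaHours) {
        return adHours + qaHours;
    }

    static double totalHours(double perDefectHours, int defects) {
        return perDefectHours * defects;
    }

    public static void main(String[] args) {
        double best = perDefect(4, 5.5);          // 9.5 hours
        double worst = perDefect(83, 11);         // 94.0 hours
        double minTotal = totalHours(best, 350);  // 3,325 hours
        System.out.println(best + " to " + worst + " hours per defect");
        System.out.println("Minimum for 350 defects: " + minTotal
                + " hours (" + (minTotal / 8) + " man-days)");
    }
}
```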
If the numbers do not lie, how can you make a difference in a broken process already in full swing?
I always prefer a proactive approach to software development, but in this case we are well past the point where proactive practices alone will help. To achieve some success while minimizing the impact on AD, QA, and the overall schedule, I have coined the term Defect Driven Development (DDD), or Defect Driven Testing (DDT).
DDD: The concept is simple (IMHO):
For every defect AD fixes, a mandatory JUnit test must be created to prove that the defect is resolved.
This seems simple enough, but what does it mean in practice? It simply means that, organically, the AD team is now required to create a unit test for every defect they fix before the fix can be re-submitted for re-test.
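As a concrete sketch, a DDD test is just an ordinary JUnit test named after the defect it closes (JUnit 4 is assumed on the classpath; the defect number, class under test, and figures below are all illustrative, not from the real project):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Minimal stand-in for the class under test (names are illustrative).
class InvoiceCalculator {
    double total(int quantity, double unitPrice) {
        // The fix for hypothetical defect #1234: round to cents once, at the end.
        return Math.round(quantity * unitPrice * 100.0) / 100.0;
    }
}

public class Defect1234Test {

    // Named after the defect it proves fixed. If the defect reincarnates,
    // this test fails the build instead of burning another QA cycle.
    @Test
    public void defect1234_totalRoundsToCentsOnce() {
        assertEquals(10.35, new InvoiceCalculator().total(3, 3.449), 0.0001);
    }
}
```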
Simple enough, but how long will that take? I estimate a JUnit test takes 30 to 60 minutes to create. I estimate an additional 30 to 2,400 minutes will be required to refactor code to make it unit-testable. Remember, there are zero tests now, so the likelihood that this code is in good health and not procedural is very low. Still, all I am talking about is 1 to 40 hours to fold unit testing into the process. Look at the numbers again: if a defect costs a minimum of 9.5 hours, and there is 10% defect duplication, then eliminating that duplication saves roughly 1 hour per defect. It is funny to revisit the excuse "it takes too long" when, at a minimum, we actually save about 30 minutes per defect on duplication alone. I have not finished the project assessment, but I am fairly certain the 10% saving is not the only saving this effort will produce. I expect an additional 10% to 30% of the projected defects to be diverted entirely, creating an even greater return on investment.
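The duplication arithmetic above can be sanity-checked the same way (the 9.5-hour minimum, the 10% duplication rate, and the 30-minute test cost are as stated in the text; the class is illustrative):

```java
// Sanity check of the claimed per-defect saving from eliminating duplication.
public class DddSavings {

    // Hours saved per defect if duplicate defects at the given rate vanish.
    static double savedPerDefect(double minHoursPerDefect, double duplicationRate) {
        return minHoursPerDefect * duplicationRate;
    }

    public static void main(String[] args) {
        double saved = savedPerDefect(9.5, 0.10); // ~0.95 hours, call it 1 hour
        double testCost = 0.5;                    // 30 minutes to write one JUnit test
        double net = saved - testCost;            // ~0.45 hours, the "30 minutes" saved
        System.out.println("Saved per defect: ~" + saved + " hours");
        System.out.println("Net saving after writing the test: ~" + net + " hours");
    }
}
```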
So if the numbers are this simple and this overwhelming, why can this not be incorporated into the process immediately? Because mandates come from above…
Selling this to Management
Initially, management allowed the dismissal of unit tests to save time in order to reach a deployment milestone. The numbers show this was a very poor decision: it is costing far more time and materials to hit the same milestone, while code quality degrades with each release.
DDD is a process change for developers, and developers, like most people, do not like change. If a change is simple to adopt, the hurdle is smaller to overcome, but it is still a hurdle. Management is the only entity that can enforce a change at this point. I only hope the numbers are convincing.
In conclusion, I find that:
- Reintroduced defects can cost 5% to 10% additional effort.
- A DDD implementation can eliminate that 5% to 10% of effort.
- My numbers show that creating unit tests under DDD can actually save 5% to 35% of AD and QA effort.
- DDD is simple to implement and can be absorbed into the current task of assessing and fixing defects.