Monday, May 21, 2007

100% Unit Test Code coverage using Mock, does it really matter?

I have always believed that delivering comprehensive integration/functional test code is far more important and productive than blindly trying to achieve 100% unit test code coverage. I believe:
a) Integration/functional test code will always deliver more test coverage than unit test code.
b) Integration test code allows us to catch program errors as early as possible, and avoid a "big bang" integration failure at the end.

Thus, I always have "heated and interesting" discussions with developers who believe:
a) Achieving a high percentage of unit test code coverage is very important.
b) Unit test code must be "unit testable": all dependent classes/resources should be "mocked", even when it is very difficult to mock such objects.
c) If our code is not testable, there must be something wrong with our design, and we have to change the code to make it testable. Thus, making a class final, not having a class implement an interface, or using static methods are all bad!

Don't get me wrong, I am not 100% against the points above, and I understand the importance of unit testing. But I believe the unit test code we deliver must give a good "return on investment" (ROI). The investment here is our effort in delivering and maintaining the test code. Unit test code with good ROI has the following characteristics:
a) It uses a "black box" approach: we pass parameters to a method, exercise the method, and verify the return value against the expected result. We should not care how the method executes internally, or what underlying resources it calls.
b) The test code should be easily promoted to functional/integration test code; why reinvent the wheel?
c) The test code must be drafted based on agreed use cases, and tied back to your application's features. Using an HR application as an example, a unit test verifying that your annual leave application is always approved by your manager produces good ROI value, but unit testing your database connection pooling utilities, your properties-file readers, or the parsing of a fake XML file does not.
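Point (a), in the style of the HR example from (c), might be sketched like this. This is a minimal sketch: the LeaveManager class and its auto-approval rule are hypothetical, invented only to show the black-box idea of checking inputs against expected outputs without caring about internals.

```java
// Black-box test of a hypothetical HR rule: annual leave of up to 14 days
// is approved. We only check inputs against expected outputs; how the
// method is implemented internally is deliberately not inspected.
public class BlackBoxLeaveTest {
    static class LeaveManager {
        boolean approveAnnualLeave(int days) {
            return days > 0 && days <= 14;  // assumed business rule
        }
    }

    public static void main(String[] args) {
        LeaveManager mgr = new LeaveManager();
        // Pass parameters in, verify the return value, nothing else.
        if (!mgr.approveAnnualLeave(5))
            throw new AssertionError("5 days should be approved");
        if (mgr.approveAnnualLeave(30))
            throw new AssertionError("30 days should be rejected");
        System.out.println("black-box test passed");
    }
}
```

Because the test never references the method's internals, it survives any refactoring of approveAnnualLeave() that preserves the rule.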

To illustrate what I mean by good ROI unit testing code, consider the following examples:

public class OrderManager {
    public boolean proceedOrder(OrderVO anOrder) {
        // check stock, customer credit, and shipment
        return (inventoryManager.hasStock(anOrder) &&
                accountManager.customerGoodCredit(anOrder.getCustomer()) &&
                shipmentManager.canShip(anOrder));
    }
}

To unit test the proceedOrder() method using mocks:

1. First, create InventoryManager, AccountManager, and ShipmentManager mocks.
2. Instruct our mock framework which mock methods will be called, and for each mock method call, specify the return value it should produce.
3. Finally, test the return value against our expected result, and ensure the mock methods were called.
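The three steps above can be sketched with hand-rolled stubs instead of a mock framework. Everything here is an assumption for illustration: the three interfaces, the OrderManager wiring, and the method signatures are invented, since the original snippet does not show them.

```java
// A hand-rolled version of the mock-based test described above.
public class MockStyleTest {
    interface InventoryManager { boolean hasStock(String product, int qty); }
    interface AccountManager   { boolean customerGoodCredit(String customer); }
    interface ShipmentManager  { boolean canShip(String product); }

    static class OrderManager {
        private final InventoryManager inventory;
        private final AccountManager account;
        private final ShipmentManager shipment;
        OrderManager(InventoryManager i, AccountManager a, ShipmentManager s) {
            inventory = i; account = a; shipment = s;
        }
        boolean proceedOrder(String customer, String product, int qty) {
            return inventory.hasStock(product, qty)
                && account.customerGoodCredit(customer)
                && shipment.canShip(product);
        }
    }

    public static void main(String[] args) {
        // Steps 1 and 2: create the mocks and fix their return values up front.
        final boolean[] inventoryCalled = {false};
        InventoryManager inv = (p, q) -> { inventoryCalled[0] = true; return true; };
        AccountManager acc = c -> true;
        ShipmentManager shp = p -> true;

        // Step 3: verify the return value, and that the mock was really called.
        OrderManager mgr = new OrderManager(inv, acc, shp);
        if (!mgr.proceedOrder("alice", "A", 10))
            throw new AssertionError("order should proceed when every mock says yes");
        if (!inventoryCalled[0])
            throw new AssertionError("hasStock() should have been called");
        System.out.println("mock-style test passed");
    }
}
```

Note how the test encodes which collaborators get called and what they return; that coupling is exactly the point the article argues about next.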

Does the above test give us good ROI? IMHO, it does not...

a) It breaks the black-box testing approach: we are telling our mock framework, step by step, which mock methods will be called, specifically asking them to return the values we set, and then testing proceedOrder()'s return value against an expected result that we already knew in the first place.

What happens if we want to change the implementation? We have to change our testing code as well.

b) It does not bring any business value...

A good-ROI test for the above use case:
1. Set an InventoryManager product (say product "A") quantity to a specific amount (say 50).

2. Test proceedOrder() by placing orders for product A with various quantities.
3. Verify the result against the expected result.
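The three steps above might look like this. The in-memory InventoryManager and the simplified OrderManager are assumptions for illustration; only the stock check is shown, with the credit and shipment checks elided.

```java
import java.util.HashMap;
import java.util.Map;

// An integration-style ("good ROI") test of proceedOrder(): set real stock,
// place orders of various quantities, and verify the results.
public class RoiStyleTest {
    static class InventoryManager {
        private final Map<String, Integer> stock = new HashMap<>();
        void setQuantity(String product, int qty) { stock.put(product, qty); }
        boolean hasStock(String product, int qty) {
            return stock.getOrDefault(product, 0) >= qty;
        }
    }

    static class OrderManager {
        private final InventoryManager inventory;
        OrderManager(InventoryManager inventory) { this.inventory = inventory; }
        boolean proceedOrder(String product, int qty) {
            return inventory.hasStock(product, qty);  // credit/shipment checks elided
        }
    }

    public static void main(String[] args) {
        // Step 1: set product "A" quantity to 50.
        InventoryManager inventory = new InventoryManager();
        inventory.setQuantity("A", 50);
        OrderManager mgr = new OrderManager(inventory);

        // Steps 2 and 3: place orders with various quantities and verify.
        if (!mgr.proceedOrder("A", 50))
            throw new AssertionError("ordering 50 of 50 should succeed");
        if (mgr.proceedOrder("A", 51))
            throw new AssertionError("ordering 51 of 50 should fail");
        System.out.println("ROI-style test passed");
    }
}
```

The test knows nothing about how proceedOrder() decides; it only sets up real state and checks outcomes, so it doubles as a functional test.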

So, to conclude: if a piece of unit test code does not produce good ROI, do spend more time delivering functional/integration test code, and deliver good-ROI, usable code coverage.

Applying my points to the automobile industry: we will not hear of an engineer who creates a mock of car tires, or a mock of the road surface, to test a car's braking system. He or she will actually "integrate" the braking system with a set of tires (made from different rubbers), and conduct braking tests on various surfaces (maybe on a rolling board with different surfaces).

Share your thoughts...


Paul Ho said...

Testing is expensive. Not testing, well - perhaps even more expensive!

And I agree with you: the ROI of a test should be one of the guiding factors in our investment of time (and not to forget, we need to maintain that test code too). Gosh, these days they talk as if there is no maintenance required in that part of the world!

As to whether 100% coverage is required: I believe it is a target to aim for. Some seem to use it as a "trophy" of sorts. There is more than just simple line coverage; there is branch coverage, etc. All of this adds to the burden of meeting a real-world application development project timeline. It is not as if we have all the time in the world to finish a project; time is money in the real world. There is a time-to-market factor for people who are building a product, and academia-minded earthlings (not all) might not appreciate it.

I'm surprised that "making our class final... is bad!". There are many reasons why we make a class final. It could be for security reasons (like most security classes)... but well... I guess someone has got to take the lead here, and I believe the architect has to have the say.

I have a philosophy: if you cannot test it, you should not develop it. However, sometimes we can go to extremes. Yet I think this is part and parcel of our growing up, our journey to learn, and I can agree to disagree.

I notice also that you have put much emphasis on integration testing. That's good. However, that's not the agenda for a developer in general; that's perhaps an architect's perspective, a view of the overall system. A developer might not have the luxury of this view of the problem. His view could be limited (but very focused), and therefore his mind is set on the microscopic level [unit testing and individual module code coverage]. Your role demands that you look at testing from another perspective.

Both are needed. But with limited resources (and unlimited imagination to test out the system?)... I tend to stand where you stand, as much as I would love to hold the trophy of 101% coverage :)

Edward Yakop said...

Given your example:
Shouldn't proceedOrder() test its argument, i.e. whether it is null, etc.?
Do you think that doing null-argument testing on every method in the code through integration testing is easy?

Everyone should design by contract. Unit testing is the easiest way to ensure that the contract is being met.

To make the story short: isn't it the contract of proceedOrder() (shouldn't this be called processOrder()?) that it has to check whether stock is available, whether the customer of the order has good credit, and whether the shipment date is OK, regardless of how these operations are performed?

Now let's list candidates for, and benefits of, unit testing:
* Simulating errors in the system. For example, what happens if any of the operations fails for whatever reason (DB not available, JMS server down, etc.)?
* Ensuring contracts are being held and that implementations reflect the contract (including invalid-argument checking, etc.).
* Providing another form of documentation on how to use a particular class/implementation.
* A better sleep at night, because regressions are harder to introduce when high coverage is achieved.
* A safer environment for new team members to learn, catching potential mistakes caused by a lack of understanding of the code.
* A faster way to run tests. Waiting for a JEE server to test code that is meant to be unit tested is not fun.
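The first bullet, simulating a failure such as the database being unavailable, can be sketched with a stub that throws. All class and method names here are assumptions for illustration, including the policy that a failed lookup rejects the order:

```java
// Simulating "DB not available" with a throwing stub: the test forces an
// error path that would be hard to trigger against a real database.
public class ErrorSimulationTest {
    interface InventoryManager { boolean hasStock(String product, int qty); }

    static class OrderManager {
        private final InventoryManager inventory;
        OrderManager(InventoryManager inventory) { this.inventory = inventory; }
        boolean proceedOrder(String product, int qty) {
            try {
                return inventory.hasStock(product, qty);
            } catch (RuntimeException dbDown) {
                return false;  // assumed policy: reject the order when the DB is down
            }
        }
    }

    public static void main(String[] args) {
        // A stub that behaves as if the database were unreachable.
        InventoryManager brokenDb = (p, q) -> {
            throw new RuntimeException("DB not available");
        };
        OrderManager mgr = new OrderManager(brokenDb);
        if (mgr.proceedOrder("A", 1))
            throw new AssertionError("order should be rejected when the DB is down");
        System.out.println("error-simulation test passed");
    }
}
```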

In summary:
* Unit testing is important.
* Integration testing is important.
* One can't replace the other.
* Make unit tests fun and integration tests fast.

Lastly, to reply to your analogy:
* People don't build real bridges just to validate a design. People now use software to test bridge designs.
* People don't build buildings (scale models or not) to test resistance against earthquakes. Instead, they design in software and simulate the earthquake in software. Designs that score well in software are then tested again on small-scale models.
* And in regard to tire braking :) if you watch "How Do They Do It?", they don't test how many kilometres a tire will last by building a 10 km track and getting a driver to drive around it 10,000 times to simulate 100,000 km. They simply spin the wheel (without the car attached to it) on a rotating platform, applying downward pressure to simulate the car's weight, to simulate the 100,000 km track. Wet conditions? Apply water. Icy conditions? Apply water plus freezing temperatures.

James Khoo said...

This is a reply to Edward Yakop's comments.
"1. Shouldn't the proceedOrder() test the argument. I.e. whether it is null etcs?"

I only gave a simple example to illustrate my point about ROI. And I would use "assert" to ensure that the passed parameters are not null.
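A minimal sketch of that guard is below. The OrderVO class is an assumption for illustration, and the check is written as an explicit exception because a Java assert statement only fires when the JVM runs with the -ea flag:

```java
// Guarding proceedOrder()'s parameter against null, as described above.
public class AssertGuardExample {
    static class OrderVO { }

    static boolean proceedOrder(OrderVO anOrder) {
        // The "assert anOrder != null" idea, written explicitly so it also
        // fires when assertions are disabled (the JVM default).
        if (anOrder == null)
            throw new IllegalArgumentException("anOrder must not be null");
        return true;  // inventory, credit, and shipment checks elided
    }

    public static void main(String[] args) {
        System.out.println(proceedOrder(new OrderVO()));
        try {
            proceedOrder(null);
            throw new AssertionError("a null order should have been rejected");
        } catch (IllegalArgumentException expected) {
            System.out.println("null order rejected as expected");
        }
    }
}
```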

"Should not this be called processOrder() that it has to perform checking whether stock is valid, the customer of the order has good credit and the shipment date is ok? Regardless on how these operations are performed"

This is exactly what I mean: using unit testing and mocks, you can't perform black-box testing. You have to know exactly what gets called, and the order of each call matters.

"* Simulate error in the system"
We could only simulate error via functional/integration testing. Via Unit Testing, we have to "mock" the underlying resources, and thus we can't really "dig" out all possible errors when doing unit testing, since we are mocking the resources.

I fully agree with all of your summary points.

Regarding your comments about my analogy:
I am aware that people use software to emulate the environment. But that is provided they have captured enough data to simulate it: if I increase the temperature by 1°C, will my tire expand, by how much will it expand, how does that vary across different rubbers, and so on. All this data needs to be captured before they can run an emulation test.

About my comment on testing the wheel on a rotating platform: that's exactly what I mean. They actually put the wheel against a real surface and test the braking system. For me, this is functional testing, since you are testing against a real wheel and a real surface, isn't it?