
Poll: What sort of unit testing does your team do?

Posted by johnsmart on July 28, 2008 at 11:34 AM PDT

Unit testing is generally considered a key part of software development. But how is unit testing really practiced in the industry at the moment? Do you test your code at all? Do you write your tests before or alongside your code (as in a TDD- or BDD-style approach), or do you wait until the application is written before coming back to the unit tests (e.g. "Yes boss, it's all done, I just have to write the unit tests")? Do you use test coverage metrics to see how much of your code is exercised by your tests, and to isolate untested code?
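For anyone unfamiliar with the test-first style, here is a minimal sketch of the idea using JUnit 4 (the PriceCalculator class and its method are invented for illustration): you write the test first, watch it fail, then write just enough code to make it pass.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // In a test-first (TDD) workflow, this test is written before the
    // class it exercises, and fails until the hypothetical
    // PriceCalculator is actually implemented.
    public class PriceCalculatorTest {

        @Test
        public void shouldApplyTenPercentDiscountToBulkOrders() {
            PriceCalculator calculator = new PriceCalculator();
            // 100 units at $2.00 each, with a 10% bulk discount expected
            assertEquals(180.00, calculator.totalPrice(100, 2.00), 0.001);
        }
    }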

Note that in this survey the question is about practices in your team as a whole, not just your own. This is to take into account the fact that people who hang around web sites like this are probably amongst the more IT-literate members of their teams, and the idea is to get a general picture of industry practices. Still, as usual, it's not a scientific survey by any means ;-).

Anyway, you can check out the poll here.

Comments

Unfortunately, as a whole, most projects and teams at my company do not do any JUnit testing. One of our three larger projects does write JUnit tests, but only when time permits. The other two have some, but they are no longer run or maintained. I am on a team writing smaller applications and services, and a couple of us are pushing for having JUnit tests for everything, but even with our manager on board it doesn't seem to be working too well. When it comes down to it, our deadlines and the number of projects we need to deliver outweigh the time it takes to write and maintain JUnit tests.

We don't do any code coverage, or use anything to check how much of our code is exercised by our JUnit tests (do you have any recommendations?). All that being said, we are pushing very hard for checking code complexity. All projects are to run PMD (with our custom ruleset) on every server build, and on my team it is "enforced" that no _new_ code with a violation can be checked in without approval (though this isn't verified).

Finally -- individually, for personal projects (I know, you said not as an individual) where deadlines aren't an issue, I typically write tests beforehand when I know exactly how I want something to work, or alongside the code if I know it's going to change as I refine the requirements. I don't write any tests afterwards for personal work.

Our coding standard is to write JUnit tests for all non-visual classes and a test app for all Swing classes before they're checked in. The unit tests then evolve along with the software. We don't have a policy about whether to write the tests first or last, so long as they are complete.

More interesting is how we've structured our project. On past projects I've found that the more classes there are in each project file, the less unit testing gets done. As a result, we adopted a hyper-modular architecture, where every class that can stand alone is in its own project. The result is that we have hundreds of project files in this app. That approach has its strengths and weaknesses (the weaknesses are mainly how tools like NetBeans cope with hundreds of open projects), but from a design and testing standpoint it's been a big win. Since each class is developed in isolation, we aren't tempted to take shortcuts regarding modularity, and it's a lot easier to develop good unit-level tests.

Our tests are meant to provide:

1. Verification that the class works.
2. The ability to refactor the code.
3. Complete working sample code for how that component is used. (Extremely important when developers need to support others' code; see the sketch below.)
4. An environment other than the app where the code can be used, developed, and debugged. (The app environment is often complex, and therefore not a good place to work on a component.)

Most of our developers don't like to write test cases. They don't see the value, and it requires constant pressure to make sure the tests get written. A few of us live and breathe by our test cases.
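To illustrate point 3, a sketch of the kind of test we aim for: it reads as usage documentation for the component as much as a verification of it (the CsvParser class and its methods are invented for the example).

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // A unit test in this style doubles as working sample code: it shows,
    // in isolation, exactly how the hypothetical CsvParser is used.
    public class CsvParserTest {

        @Test
        public void shouldSplitALineIntoFields() {
            CsvParser parser = new CsvParser(',');
            String[] fields = parser.parseLine("a,b,c");
            assertEquals(3, fields.length);
            assertEquals("a", fields[0]);
            assertEquals("c", fields[2]);
        }

        @Test
        public void shouldTreatAnEmptyLineAsNoFields() {
            CsvParser parser = new CsvParser(',');
            assertEquals(0, parser.parseLine("").length);
        }
    }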