
Introducing Behavior Driven Development

Posted by manning_pubs on June 10, 2013 at 8:18 PM PDT







By John Ferguson Smart, author of BDD in Action

Behavior Driven Development (BDD) is a software engineering practice designed to help teams build and deliver more valuable, higher-quality software faster. In this article, based on chapter 1 of BDD in Action, author John Ferguson Smart explains what BDD is, where it came from, and how it helps teams deliver software that meets real business goals. Save 42% on BDD in Action with Promotional Code bddjn, only at manning.com.

The main challenge in managing scope and requirements in modern software development projects is not to eliminate uncertainty by defining and locking down requirements as early as possible, but to manage this uncertainty in a way that helps us progressively discover and deliver an effective solution to the business goals underlying a project. Behavior Driven Development (BDD) provides techniques that can help us manage this uncertainty, and reduce the risk that comes with it.

BDD draws on Agile and Lean practices such as Test Driven Development (TDD), Domain Driven Design (DDD), and pull-based feature definition, and provides a common language that allows more effective communication and feedback between project team members and business stakeholders.

The origins of BDD

Behavior Driven Development was designed by Dan North in the early-to-mid 2000s as an easier way to teach and practice Test Driven Development (TDD). TDD is a simple but incredibly powerful technique that encourages better-designed code and results in substantially lower defect counts. However, to this day, many teams have difficulty adopting it and using it effectively. Dan observed that a few simple practices, such as naming unit tests as full sentences and using the word "should", can help developers write more focused tests, which in turn helps them write higher-quality code more efficiently. For example, a traditional unit test for a banking application might read like this:

public class BankAccountTest extends TestCase {
  public void testTransfer() {…}
  public void testDeposit() {…}
}

But, using this style, developers often have trouble knowing where to start, or what tests they should write next. Dan found it easier to think in terms of what the class should do:

public class WhenTransferringInternationalFunds {
  public void should_transfer_funds_to_destination_account() {…}
  public void should_deduct_fees_as_a_separate_transaction() {…}
  …
}

Tests that are written this way end up reading more like specifications than unit tests. They focus on the behavior of the application, using tests simply as a means to express and verify that behavior. Dan also noted that tests written this way were much easier to maintain, because their intent was so clear. The impact of this approach was so significant that Dan started referring to what he was doing no longer as Test Driven Development, but as Behavior Driven Development.
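To make the idea concrete, here is a minimal sketch of the kind of class a specification like the one above might drive out. The class name, method signatures, and flat-fee policy are illustrative assumptions for this article, not code from the book:

```java
import java.math.BigDecimal;

// A minimal account class whose behavior matches the two specifications
// above: funds are transferred to the destination account, and the fee
// is deducted from the source account as a separate transaction.
class BankAccount {
    private BigDecimal balance;

    BankAccount(BigDecimal openingBalance) {
        this.balance = openingBalance;
    }

    BigDecimal getBalance() {
        return balance;
    }

    void deposit(BigDecimal amount) {
        balance = balance.add(amount);
    }

    void withdraw(BigDecimal amount) {
        if (balance.compareTo(amount) < 0) {
            throw new IllegalStateException("Insufficient funds");
        }
        balance = balance.subtract(amount);
    }

    void transferTo(BankAccount destination, BigDecimal amount, BigDecimal fee) {
        withdraw(amount);
        destination.deposit(amount);
        withdraw(fee); // fee deducted as a separate transaction
    }
}
```

Each should_ method in the specification class would exercise one of these behaviors, so the class's public API ends up shaped directly by the sentences in the specification.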

However, describing a system's behavior turns out to be what business analysts do every day. Working with business analyst colleague Chris Matts, Dan set out to apply what he had learnt so far to the requirements analysis space. Around this time, Eric Evans introduced the idea of Domain Driven Design, which promotes the use of a ubiquitous language that business people can understand to describe and model a system. The vision of Dan and Chris was to create a ubiquitous language that BAs could use to define requirements unambiguously, and that also could be easily transformed into automated acceptance tests. To achieve this vision, they started expressing the acceptance criteria for user stories in the form of loosely structured "scenarios" like this one:

Given a customer has a current account
When the customer transfers funds from this account to an overseas account
Then the funds should be deposited in the overseas account
And the transaction fee should be deducted from the current account

A business owner can easily understand a scenario written like this. It gives clear and objective goals for each story, both in terms of what needs to be developed and of what needs to be tested. And, with appropriate tools, scenarios like this one can be turned into automated acceptance tests that can be executed whenever required. Dan North himself wrote the first dedicated BDD test automation library, JBehave, in the mid 2000s, and since then many others have emerged for different languages, at both the unit-testing and the acceptance-testing levels.
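To show how a scenario like this maps onto automation, here is a hand-rolled sketch of the step-to-method mapping that tools such as JBehave perform. The Account class, the amounts, and the flat transfer fee are illustrative assumptions, not the book's code:

```java
import java.math.BigDecimal;

// Each method corresponds to one step of the scenario. A real BDD tool
// parses the scenario text and invokes matching step methods; here the
// mapping is written out by hand just to show the shape of the glue code.
class TransferScenario {

    static class Account {
        BigDecimal balance = BigDecimal.ZERO;
        void deposit(BigDecimal amount)  { balance = balance.add(amount); }
        void withdraw(BigDecimal amount) { balance = balance.subtract(amount); }
    }

    private Account currentAccount;
    private Account overseasAccount;
    private final BigDecimal transferFee = new BigDecimal("25"); // assumed flat fee

    // Given a customer has a current account
    void givenACustomerHasACurrentAccount(BigDecimal openingBalance) {
        currentAccount = new Account();
        currentAccount.deposit(openingBalance);
        overseasAccount = new Account();
    }

    // When the customer transfers funds from this account to an overseas account
    void whenTheCustomerTransfersFunds(BigDecimal amount) {
        currentAccount.withdraw(amount);
        overseasAccount.deposit(amount);
        currentAccount.withdraw(transferFee);
    }

    // Then the funds should be deposited in the overseas account
    BigDecimal overseasBalance() { return overseasAccount.balance; }

    // And the transaction fee should be deducted from the current account
    BigDecimal currentBalance() { return currentAccount.balance; }
}
```

In a real project the tool supplies this wiring, so business analysts edit only the scenario text while developers maintain the step methods behind it.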

BDD today

Today BDD is successfully practiced in a large number of organizations of all sizes around the world. In his book Specification by Example, Gojko Adzic provides case studies for over 50 such organizations.

Figure 1 gives a high-level overview of the way Behavior Driven Development sees the world. BDD practitioners start by identifying business goals and looking for features that will help deliver these goals. Collaborating with the user, they use concrete examples to illustrate these features. Wherever possible these examples are automated in the form of executable specifications, which both validate the software and provide automatically updated technical and functional documentation. BDD principles are also used at the coding level, where they help developers write code that is of higher quality, better tested, better documented, and easier to use and maintain.

Figure 1 The principal activities and outcomes of Behavior Driven Development. Note that these are activities that occur repeatedly and continuously throughout the process; this is not a single linear waterfall-style process, but a sequence of activities that we practice for each feature we implement.

In the following sections, we will look at how these activities work in a little more detail.

Only build features that deliver business value

As we have seen earlier, heavy upfront specifications don't work particularly well for software projects. So rather than attempting to nail down all of the requirements once and for all, teams practicing BDD engage in ongoing conversations with the end users and other stakeholders to build a common understanding of what features they should build. Rather than working upfront to design a complete solution for the developers to implement, users explain what they need to get out of the system, and how it might help them achieve their objectives. And rather than accepting, no questions asked, a list of feature requests from the users, teams try to understand the core business goals underlying the project, and propose only the features that can be demonstrated to support these business goals. This constant focus on delivering business value means that teams can deliver more useful features earlier and with less wasted effort.

Work together to specify features

"A complex problem, like discovering ways to delight clients, is best solved by a cognitively diverse group of people that is given responsibility for solving the problem, self-organizes, and works together to solve it."


Stephen Denning, The Leader's Guide to Radical Management

BDD is a highly collaborative practice, both between users and the development team, and within the team itself. Business analysts, developers and testers work together with the end users to define and specify features, with team members drawing ideas from their individual experience and know-how.

This approach is highly efficient. In a more traditional approach, where business analysts relay their understanding of the user requirements to the rest of the team, there is a very high risk of misinterpretation and lost information. Developers can't use their technical know-how to help deliver a technically superior design, and the QA folk don't get the opportunity to comment on the testability of the specifications until the end of the project. When teams practice BDD, on the other hand, team members build up a shared appreciation of the user needs, as well as a sense of common ownership and engagement in the solution.

Embrace uncertainty

A BDD team knows that they will not know everything upfront, no matter how long they spend writing specifications. The biggest thing slowing us down in a software project is actually understanding what we need to build.

So rather than attempting to lock down the specifications at the start of the project, BDD practitioners assume that the requirements, or, more precisely, the team's understanding of the requirements, will evolve and change throughout the life of the project. Rather than waiting until the end of the project to see whether their assumptions about the business requirements were correct, they try to get early feedback from the users and stakeholders to ensure that they are on track, and change tack accordingly.

Very often the most effective way to see if users like a feature is to build it, and show it to them as early as possible. With this in mind, experienced BDD teams prioritize the features that will both deliver value and improve their understanding of what features the users really need, and of how best to build and deliver these features.

Illustrate features with concrete examples

When a team practicing BDD decides to implement a particular feature, the team works together with users to define stories and scenarios of what users expect this feature to deliver. In particular, the user helps define a set of concrete examples that illustrate key outcomes of a feature. These examples use the same common business vocabulary, and can be readily understood by end users and by members of the development team (see figure 2).

Figure 2 Examples play a primary role in BDD to help understand the requirements more clearly.

Examples play a primary role in BDD, simply because they are an extremely effective way of communicating clear, precise, and unambiguous requirements. Specifications written in natural language are, as it turns out, a terribly poor way of communicating requirements, simply because there is so much space for ambiguity, assumptions and misunderstandings. Examples, on the other hand, are a great way to expose and clarify the assumptions, ambiguities and misunderstandings that plague traditional software development.

Examples are also a great way to explore and expand our knowledge. When a user proposes an example of how a feature should behave, project team members will often ask for extra examples to illustrate corner cases, explore edge cases, or clarify assumptions. Testers are particularly good at this, which is why it is so valuable for them to be involved at this stage of the project.

Don't write automated tests, write executable specifications

These stories and examples form the basis of the specifications that developers use to build the system. They act as both acceptance criteria, determining when a feature is done, and as guidelines for the developers, giving them a clear picture of what they are supposed to build.

Acceptance criteria give the team a way to objectively judge whether a feature has been implemented correctly or not. However, to check this manually for each new code change would be time-consuming and inefficient. This would also slow down feedback, which would in turn slow down the development process. So wherever feasible, teams turn these acceptance criteria into automated acceptance tests, or, more precisely, into executable specifications.

These automated tests are executed as part of the build process, and run whenever a change is made to the application. In this way, they serve both as acceptance tests, determining which new features are complete, and as regression tests, ensuring that new changes have not broken any existing features. You can see an example of this process in figure 3.

Figure 3 Executable specifications are expressed using a common business vocabulary that the whole team can understand, and produces readable reports available to all.

But unlike conventional unit or integration tests, and unlike the automated functional tests many QA teams are used to, executable specifications are expressed in something close to natural language. They use precisely the examples that the users and development team members proposed and refined earlier on, using exactly the same terms and vocabulary. Executable specifications are about communication as much as they are about validation, and the test reports they generate are easily understandable by everyone involved with the project.

These executable specifications also become a single source of truth, providing reference documentation for how features should be implemented. This makes maintaining the requirements much easier. If specifications are stored in the form of a Word document or on a Wiki page, as is done on many traditional projects, any changes to the requirements will need to be reflected both in the requirements document and in the acceptance tests and test scripts, which introduces a high risk of inconsistency. But for teams practicing BDD, the requirements and the executable specifications are the same thing: when the requirements change, the executable specifications are simply updated directly in a single place.

Don't write unit tests, write low-level specifications

Behavior Driven Development does not stop at the acceptance tests. BDD also helps developers write higher quality code that is more reliable, more maintainable and better documented.
Developers practicing BDD typically use an "outside-in" approach. When they implement a feature, they start from the acceptance criteria and work down, building whatever is needed to make those acceptance criteria pass. The acceptance criteria define the expected outcomes: the developer's job is to write the code that produces these outcomes. This is a very efficient, focused way of working. Just as no feature is implemented unless it contributes to an identified business goal, no code is written unless it contributes to making an acceptance test pass, and therefore to implementing a feature.

But it doesn't stop there, either. Before a BDD developer writes any code, she will reason about what this code should actually do, and express this in the form of a low-level executable specification. She will not think in terms of writing unit tests for a particular class, but of writing technical specifications describing how the application should behave: for example, how it should respond to certain inputs or what it should do in a given situation.

These executable specifications are a little like conventional unit tests, but written in a way that both communicates the intent of the code, and gives a worked example of how the code should be used. Writing low-level executable specifications this way is a little like writing detailed design documentation, with lots of examples, but using a tool that is easy, even fun, for developers to use.

At a more technical level, this approach encourages a clean, modular design with well-defined interactions (or APIs, if you prefer a more technical term) between the modules. It also results in code that is reliable, accurate and extremely well tested.

These low-level executable specifications also facilitate code maintenance. When a developer adds a new feature, or changes an existing one, some of the existing executable specifications may fail. When this happens, it can mean one of two things. If the broken specification is still valid, then the developer has introduced a bug, and it needs fixing. And if the requirements have changed and the specification is no longer valid, this specification can be updated to reflect the new requirements, or deleted if it is no longer applicable. Thinking in terms of executable specifications, rather than conventional unit tests, makes this process a great deal easier.
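As an illustration, here is what such a low-level executable specification might look like in plain Java, with hand-rolled assertions rather than a BDD-flavored test library. The Account class and its deposit rules are invented for this example:

```java
import java.math.BigDecimal;

// A low-level executable specification in the naming style described above.
// Each method reads as a sentence stating one behavior of the class.
class WhenDepositingFunds {

    static class Account {
        private BigDecimal balance = BigDecimal.ZERO;

        void deposit(BigDecimal amount) {
            if (amount.signum() <= 0) {
                throw new IllegalArgumentException("Deposit must be positive");
            }
            balance = balance.add(amount);
        }

        BigDecimal getBalance() { return balance; }
    }

    void should_increase_the_balance_by_the_deposited_amount() {
        Account account = new Account();
        account.deposit(new BigDecimal("100"));
        assertEquals(new BigDecimal("100"), account.getBalance());
    }

    void should_reject_a_non_positive_deposit() {
        Account account = new Account();
        boolean rejected = false;
        try {
            account.deposit(BigDecimal.ZERO);
        } catch (IllegalArgumentException expected) {
            rejected = true;
        }
        if (!rejected) throw new AssertionError("Expected the deposit to be rejected");
    }

    private static void assertEquals(BigDecimal expected, BigDecimal actual) {
        if (expected.compareTo(actual) != 0) {
            throw new AssertionError("Expected " + expected + " but was " + actual);
        }
    }
}
```

Read aloud, the method names form a short behavioral description of the class, which is exactly what makes a broken specification so easy to diagnose: the failing sentence tells you which expected behavior has changed.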

Deliver living documentation

The reports produced this way are no longer simply technical reports for developers, but effectively become a form of product documentation for the whole team, expressed in the same common vocabulary familiar to the users (see figure 4). This documentation is always up to date, and requires little or no manual maintenance. It is automatically produced from the latest version of the application. Each application feature is described in readable terms, and illustrated by a few key examples. For web applications, this sort of living documentation often also includes screenshots of the application for each feature.

Figure 4 Both high-level and low-level executable specifications generate different sorts of living documentation for the system.

Experienced teams organize this documentation so that it is easy to read and easy to use for everyone involved in the project. Developers can consult it to see how existing features work. Testers and business analysts can see how the features they specified have been implemented. Product owners and project managers can use summary views to judge the current state of the project, view progress, and decide what features can be released into production. And users can even use it to see what the application can do and how it works.

And just as automated acceptance criteria provide great documentation for the whole team, low-level executable specifications also provide excellent technical documentation for other developers. This documentation is always up to date, cheap to maintain, contains working code samples, and expresses the intent behind each specification.

Summary

Successful software projects need both to build the software right and to build the right software. We need to build the features that users really need to achieve their business goals, and we need to do so by writing reliable, maintainable code. BDD gives us the means to do both.


Here are some other Manning titles you might be interested in:

The Art of Unit Testing, Second Edition
Roy Osherove

Dependency Injection in .NET
Mark Seemann

Continuous Integration in .NET
Marcin Kawalerowicz and Craig Berntson

