Now the scenario is organized consistently, with enough detail for reliable manual or automated testing. But we still have the problem of the many "what ifs" identified earlier:

Note how certain words in the login steps are underlined:

Given the manager is on the Log in page

And no one is currently logged in

When the manager enters his user ID

And the manager enters his correct password

And the manager clicks "Log in"

Then the manager is taken to his My Projects page

And the page displays all of the projects the manager manages

And the page displays all of the projects the manager belongs to but does not manage

These underlined words can be called "keywords."

By changing one or more preconditions and actions (keywords in the Given and When clauses), you get different results. For example:

Given the manager is on the Log in page

And no one is currently logged in

When the manager enters his user ID

And the manager enters the wrong password

And the manager clicks "Log in"

Then the message "Invalid login attempt" is displayed

or

Given the manager is on the Log in page

And no one is currently logged in

When the manager enters no user ID

And the manager enters his correct password

And the manager clicks "Log in"

Then the message "The user ID field is required" is displayed

Examining these changes systematically is a good way to fully understand all of the possible alternatives. Combining alternatives into distinct "scenarios" produces different test cases, and walking through those scenarios can be useful in planning and scoping development.
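That systematic examination can be sketched in code. Assuming just two alternatives for each keyword in the When clauses (the expected outcomes below are taken from the three scenarios above), `itertools.product` enumerates every combination:

```python
from itertools import product

# Alternatives for each keyword slot in the When clauses.
user_id_options = ["his user ID", "no user ID"]
password_options = ["his correct password", "the wrong password"]

def expected_result(user_id, password):
    # Outcomes as stated in the scenarios above; the precedence of the
    # "user ID required" check over the password check is an assumption.
    if user_id == "no user ID":
        return 'the message "The user ID field is required" is displayed'
    if password == "the wrong password":
        return 'the message "Invalid login attempt" is displayed'
    return "the manager is taken to his My Projects page"

scenarios = []
for user_id, password in product(user_id_options, password_options):
    scenarios.append({
        "when": [f"the manager enters {user_id}",
                 f"the manager enters {password}",
                 'the manager clicks "Log in"'],
        "then": expected_result(user_id, password),
    })

print(len(scenarios))  # 2 alternatives x 2 alternatives = 4 combinations
```

With only two keywords varied, four scenarios fall out automatically; every added keyword or alternative multiplies the count, which is exactly why enumerating them by hand becomes impractical.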

But forming combinations and writing test cases manually is boring and error-prone, and it often relies on fragile copy-and-paste techniques. Try building sufficient variations on the expense example and you'll see that it's a terrible challenge.

Current test automation solutions only make this problem more apparent. Automated test execution can make it faster to run tests, but automation doesn't help with the problem of building tests. Most automated tests are still written one at a time, and this "serial" style of testing often treats modularity as an afterthought.
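To see what that serial style looks like, here is a minimal Python sketch, with a hypothetical `FakeLoginPage` standing in for a real page object. Each scenario is a separate, hand-copied test, so the shared setup and click steps are duplicated in every copy, and a change to the login flow must be edited everywhere:

```python
class FakeLoginPage:
    """Hypothetical stand-in for a page object driving a real browser."""
    def __init__(self):
        self.user_id = ""
        self.password = ""
        self.title = ""
        self.message = ""

    def enter_user_id(self, uid):
        self.user_id = uid

    def enter_password(self, pw):
        self.password = pw

    def click_log_in(self):
        # Assumed rules, matching the scenarios in the text.
        if not self.user_id:
            self.message = "The user ID field is required"
        elif self.password != "secret":
            self.message = "Invalid login attempt"
        else:
            self.title = "My Projects"

# The "serial" style: one hand-written test per combination,
# with the same steps copy-pasted into each one.
def test_valid_login():
    page = FakeLoginPage()
    page.enter_user_id("manager1")
    page.enter_password("secret")
    page.click_log_in()
    assert page.title == "My Projects"

def test_wrong_password():
    page = FakeLoginPage()          # duplicated setup
    page.enter_user_id("manager1")  # duplicated step
    page.enter_password("oops")
    page.click_log_in()
    assert page.message == "Invalid login attempt"
```

Multiply this duplication across every "what if" combination and the maintenance burden grows with each new alternative, which is the problem the rest of this discussion addresses.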