
Test Design Techniques: BVA, State Transition, and more

We can all agree that writing test cases simplifies the testing process and gives structure to the testing effort. Yet when it comes to actually writing these test cases, one often ends up in analysis paralysis, plagued by questions like “Where should I start? Have I covered enough ground? Am I missing some important scenarios?” Moreover, the build-and-ship cadence followed in Agile demands a structured approach to creating test cases. This is where test design comes into play. A systematic approach to deciding which areas need test cases streamlines the testing effort and ensures that the team’s work is not wasted. Let’s take a look at what test design entails and the techniques available to achieve it.

What is test design?

Test design focuses on various aspects of the test cases that need to be created. The key activities in test design include:

  • Test Case Design: Creating individual test cases that cover specific scenarios, functions, or features of the system. Test cases outline the inputs, expected outputs, and any preconditions or dependencies required for testing (a minimal sketch of such a record follows this list).
  • Test Data Design: Identifying and creating the necessary test data to execute the test cases effectively. This involves selecting relevant input values, configuring the system in specific states, and creating test databases or files.
  • Test Environment Design: Establishing the environment and infrastructure needed to conduct testing. This involves setting up the necessary hardware, software, network configurations, and any other dependencies required for testing.
  • Test Procedure Design: Defining the step-by-step instructions and sequences for executing the test cases. Test procedures outline how to set up the test environment, input the test data, execute the test cases, and record the results.
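
To make the first activity concrete, here is a minimal Python sketch of what a test case record might hold. The field names and the example values are illustrative assumptions, not a prescribed format.

from dataclasses import dataclass

# A minimal sketch of a test case record; the fields mirror the elements
# named above: preconditions, steps, input data, and expected result.
@dataclass
class TestCase:
    identifier: str
    description: str
    preconditions: list[str]    # states or dependencies required up front
    steps: list[str]            # the procedure to execute
    input_data: dict[str, str]  # the test data fed into the system
    expected_result: str        # what the system should do

login_test = TestCase(
    identifier="TC-001",
    description="Valid login with registered credentials",
    preconditions=["A user account 'john123' already exists"],
    steps=["Open the login page", "Enter the credentials", "Click Login"],
    input_data={"username": "john123", "password": "s3cret!pass"},
    expected_result="User lands on the dashboard",
)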

Test design needs to be a conscious activity that maps out the trajectory of testing. The resulting design acts as a guiding framework for your testing team, since the ways of coming up with test cases for a system under test are otherwise endless.

Why do you need a test design?

The biggest reason to invest in test design is better ROI: it keeps resources from being wasted. Knowing what to write test cases for and which data parameters to consider, all within a pre-defined scope, lets the team write tests for required scenarios only while still ensuring test coverage. Because test design is a thought-out activity, it also ensures that error-prone or business-critical scenarios are covered by the test cases.

Benefits of test design

Test design offers several benefits in the software testing process. Some key benefits of test design include:

  • Improved Test Coverage: Test design ensures comprehensive coverage of the system under test. By systematically designing test cases and test procedures, you can identify the specific functionalities, scenarios, or conditions that need to be tested. This comprehensive coverage increases the likelihood of detecting defects or issues that could arise in different parts of the system.
  • Early Defect Detection: Test design activities help uncover defects or errors in the system at an early stage. Designing test cases that target specific functionalities or scenarios increases the chances of identifying issues that could affect the system’s performance, reliability, or usability. Early defect detection allows for timely debugging and resolution, minimizing the impact of defects on later stages of development.
  • Efficient Testing: Test design helps optimize the testing effort by focusing on critical areas and avoiding redundant or unnecessary tests. Adopting appropriate test design techniques enables the creation of efficient test suites that cover a wide range of scenarios with a reduced number of test cases. This efficiency leads to cost and time savings in the testing process.
  • Validation of Requirements: Test design ensures that the system meets the specified requirements. Mapping test cases to the system requirements helps validate whether the system functions as intended and satisfies the stakeholders’ expectations. Test design activities facilitate traceability, allowing for clear visibility into which requirements have been covered by the tests.
  • Documentation and Reproducibility: Test design involves creating well-documented test cases, test data, and test procedures. These artifacts serve as valuable testing process documentation, enabling clear communication within the testing team and facilitating knowledge transfer. Additionally, well-documented test procedures allow for the reproducibility of tests, ensuring consistency and accuracy when rerunning tests in the future.
  • Risk Mitigation: Test design helps identify and address risks associated with the system. By considering potential failure points, critical functionalities, and user interactions, test design activities contribute to risk analysis and mitigation strategies. This proactive approach allows for targeted testing of high-risk areas, reducing the likelihood of defects or failures in production.
  • Enhanced Software Quality: Ultimately, test design contributes to the overall improvement of software quality. Using test design can help organizations identify and resolve issues early in the development lifecycle, increasing customer satisfaction and reducing support and maintenance costs.

Test design techniques

There are various test design techniques that can be used to create test cases and ensure comprehensive test coverage. Some commonly used test design techniques include:

Equivalence Partitioning

This technique divides the input data into groups or partitions, where each partition is expected to exhibit similar behavior. Test cases are then designed to represent each partition, ensuring that representative values from each partition are tested. This technique helps reduce redundant test cases.

Let’s consider an example of equivalence partitioning in test design for a login functionality of a web application. In this case, the input field is the username, which is expected to accept alphanumeric characters. Equivalence partitioning involves dividing the input domain into partitions that exhibit similar behavior. Here, we can divide the input domain of the username into three partitions:

  • Valid Alphanumeric Characters: This partition represents valid alphanumeric characters that should be accepted by the system. For example, “john123” or “alice34”.
  • Invalid Characters: This partition represents invalid characters that should be rejected by the system. It includes special characters, spaces, or any characters outside the alphanumeric range. For example, “@#$%” or “john doe”.
  • Boundary Values: This partition focuses on the boundaries of the input domain, that is, the shortest and longest usernames that should still be accepted. For example, “john” or “john123456789”.

Now, we can design test cases based on these partitions to achieve test coverage:

1. Valid Alphanumeric Characters:
  • Test Case 1: Input: “john123”, Expected Result: Accepted
  • Test Case 2: Input: “alice34”, Expected Result: Accepted
2. Invalid Characters:
  • Test Case 3: Input: “@#$%”, Expected Result: Rejected
  • Test Case 4: Input: “john doe”, Expected Result: Rejected
3. Boundary Values:
  • Test Case 5: Input: “john”, Expected Result: Accepted
  • Test Case 6: Input: “john123456789”, Expected Result: Accepted
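
These partitions translate naturally into an automated check. The Python/pytest sketch below is illustrative: validate_username is a hypothetical stand-in for the application's real check, assumed here to accept any non-empty alphanumeric string.

import re
import pytest

# Hypothetical stand-in for the application's real username check;
# assumed to accept non-empty alphanumeric strings only.
def validate_username(username: str) -> bool:
    return re.fullmatch(r"[A-Za-z0-9]+", username) is not None

# Representative values from each partition, mirroring the test cases above.
@pytest.mark.parametrize("username,accepted", [
    ("john123", True),        # valid alphanumeric partition
    ("alice34", True),        # valid alphanumeric partition
    ("@#$%", False),          # invalid characters partition
    ("john doe", False),      # space is outside the alphanumeric range
    ("john", True),           # short edge of the valid partition
    ("john123456789", True),  # long edge of the valid partition
])
def test_username_partitions(username, accepted):
    assert validate_username(username) == accepted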

Boundary Value Analysis

Boundary value analysis complements equivalence partitioning by focusing on the boundaries and edges of the partitions. Test cases are designed to test the minimum, maximum, and values just below and above the boundaries. These values are more likely to uncover defects, as boundary conditions often introduce errors.

Let’s continue with the login functionality example, focusing on the password field this time. The password field is expected to accept a minimum of 8 characters and a maximum of 16 characters. Boundary Value Analysis (BVA) involves selecting test cases at the boundaries or edges of partitions. Here, we can consider the following boundaries for the password length:

  • Lower Bound: Minimum allowed password length (8 characters)
  • Upper Bound: Maximum allowed password length (16 characters)
  • Just Below Lower Bound: Password length of 7 characters
  • Just Above Upper Bound: Password length of 17 characters

Based on these boundaries, we can design the following test cases:

  1. Lower Bound:
    Test Case 1: Input: “abcdefgh”, Expected Result: Accepted
  2. Upper Bound:
    Test Case 2: Input: “abcdefghijklmnop”, Expected Result: Accepted
  3. Just Below Lower Bound:
    Test Case 3: Input: “abcdefg”, Expected Result: Rejected
  4. Just Above Upper Bound:
    Test Case 4: Input: “abcdefghijklmnopq”, Expected Result: Rejected

In this example, we ensure that the system accepts passwords that meet the minimum and maximum length requirements while rejecting passwords that fall just below or above those boundaries.
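
Expressed as code, the same boundary checks might look like the Python/pytest sketch below; validate_password is a hypothetical stand-in that enforces the 8-to-16-character rule from the example.

import pytest

# Hypothetical stand-in for the application's password-length rule:
# 8 to 16 characters, as stated in the example above.
def validate_password(password: str) -> bool:
    return 8 <= len(password) <= 16

# The boundary values: the limits themselves plus one character either side.
@pytest.mark.parametrize("password,accepted", [
    ("abcdefgh", True),            # lower bound (8 characters)
    ("abcdefghijklmnop", True),    # upper bound (16 characters)
    ("abcdefg", False),            # just below lower bound (7 characters)
    ("abcdefghijklmnopq", False),  # just above upper bound (17 characters)
])
def test_password_length_boundaries(password, accepted):
    assert validate_password(password) == accepted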

Decision Table Testing

Decision tables are valuable for illustrating intricate business rules or logical conditions. They assist in designing comprehensive test cases that encompass all possible combinations of conditions and actions outlined within the decision table. This approach guarantees thorough test coverage by examining and validating each conceivable scenario.

Let’s consider an example of decision table testing in the context of a user registration form for a website. The registration form requires certain fields to be filled in based on the user’s selected account type. Decision table testing involves documenting the business rules or logical conditions in a tabular format. Here’s an example decision table for the user registration form:

Account Type | Username Required? | Email Required? | Phone Required?
Basic        | Yes                | Yes             | No
Premium      | Yes                | No              | Yes
Admin        | Yes                | Yes             | Yes

In this decision table, the account type determines the requirements for the username, email, and phone fields. Each column represents a condition or requirement, and each row represents a combination of conditions and the resulting actions. Based on this decision table, we can design the following test cases:

1. Basic Account:
  • Test Case 1: Account Type: Basic, Username: Provided, Email: Provided, Phone: Not Provided
  • Expected Result: Registration is successful.
2. Premium Account:
  • Test Case 2: Account Type: Premium, Username: Provided, Email: Not Provided, Phone: Provided
  • Expected Result: Registration is successful.
3. Admin Account:
  • Test Case 3: Account Type: Admin, Username: Provided, Email: Provided, Phone: Provided
  • Expected Result: Registration is successful.
4. Missing Required Fields:
  • Test Case 4: Account Type: Basic, Username: Not Provided, Email: Not Provided, Phone: Not Provided
  • Expected Result: Registration fails with appropriate error messages indicating the missing fields.
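
A decision table can also live directly in the test code as data, so each row can be checked mechanically. The Python sketch below is a minimal illustration; the field names and the missing_fields helper are assumptions made for the example.

# The decision table above, expressed as data: for each account type,
# which fields are required.
REQUIRED_FIELDS = {
    "Basic":   {"username": True,  "email": True,  "phone": False},
    "Premium": {"username": True,  "email": False, "phone": True},
    "Admin":   {"username": True,  "email": True,  "phone": True},
}

def missing_fields(account_type: str, provided: dict[str, bool]) -> list[str]:
    """Return the required fields that the submission failed to provide."""
    rules = REQUIRED_FIELDS[account_type]
    return [f for f, required in rules.items() if required and not provided.get(f)]

# Test Case 4 from the list above: a Basic registration with nothing provided
# should fail on username and email (phone is optional for Basic accounts).
assert missing_fields("Basic", {"username": False, "email": False, "phone": False}) == ["username", "email"]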

State Transition Testing

State transition testing is useful for systems that have different states and undergo transitions based on inputs or events. Test cases are designed to cover various transitions between states, including valid and invalid transitions. This technique helps identify defects related to incorrect state transitions.

Let’s consider an example of state transition testing in the context of an online shopping cart functionality. The shopping cart can be in three states: “Empty,” “Active,” and “Checked Out.”

State transition testing focuses on testing the system’s behavior as it transitions between different states based on inputs or events. In this case, we can design test cases to cover various state transitions. Here’s an example:

1. Transition from Empty to Active:
  • Test Case 1: Add an item to an empty shopping cart.
  • Expected Result: The shopping cart transitions from the “Empty” state to the “Active” state.
2. Transition from Active to Checked Out:
  • Test Case 2: Proceed to checkout with items in the shopping cart.
  • Expected Result: The shopping cart transitions from the “Active” state to the “Checked Out” state.
3. Transition from Checked Out to Empty:
  • Test Case 3: Complete the checkout process and empty the cart.
  • Expected Result: The shopping cart transitions from the “Checked Out” state to the “Empty” state.
4. Transition from Active to Empty:
  • Test Case 4: Remove all items from the shopping cart.
  • Expected Result: The shopping cart transitions from the “Active” state to the “Empty” state.
5. Transition from Checked Out to Active (Invalid transition):
  • Test Case 5: Attempt to add items to the shopping cart after it has been checked out.
  • Expected Result: The system should not allow the transition from the “Checked Out” state to the “Active” state.
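
One convenient way to test this is to encode the allowed transitions as a table, so that every valid transition, and the rejection of invalid ones, can be asserted. The Python sketch below is illustrative; the event names are assumptions.

# Allowed transitions of the cart, keyed by (current state, event).
TRANSITIONS = {
    ("Empty", "add_item"): "Active",
    ("Active", "checkout"): "Checked Out",
    ("Checked Out", "complete_and_clear"): "Empty",
    ("Active", "remove_all_items"): "Empty",
}

class ShoppingCart:
    def __init__(self):
        self.state = "Empty"

    def fire(self, event: str) -> None:
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"illegal transition: '{event}' in state '{self.state}'")
        self.state = TRANSITIONS[key]

cart = ShoppingCart()
cart.fire("add_item")       # Empty -> Active (Test Case 1)
cart.fire("checkout")       # Active -> Checked Out (Test Case 2)
try:
    cart.fire("add_item")   # invalid in Checked Out (Test Case 5)
except ValueError as error:
    print(error)            # the system correctly rejects the transition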

Use Case Testing

Use cases describe interactions between actors and the system to accomplish specific goals. Test cases are designed based on these use cases, focusing on testing the system’s functionality from an end-user perspective. This technique helps ensure that the system meets the requirements and supports the intended user actions.

Let’s consider an example of use case testing in the context of an online shopping application. Use case testing focuses on testing the system’s functionality from an end-user perspective by designing test cases based on the interactions between actors and the system to achieve specific goals. Here’s an example use case for the “Add to Cart” functionality:

Use Case: Add Item to Cart
Actor: Customer
Goal: Add a selected item to the shopping cart

Test Case:
1. Precondition: Customer is logged in and browsing the product catalog.
2. Steps:
  a. Select a desired item from the product catalog.
  b. Click on the "Add to Cart" button.
3. Expected Result: The selected item is added to the shopping cart.
4. Postcondition: The shopping cart reflects the added item with the correct quantity.

This approach helps validate the system's behavior in real-life scenarios and ensures that end users can achieve their goals effectively.
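
Automated, such a use case test follows the same precondition-steps-result structure. The Python/pytest sketch below is purely illustrative; the Cart class is a hypothetical stand-in for the real application, which in practice would be driven through its UI or API.

# Hypothetical stand-in for the application's cart.
class Cart:
    def __init__(self):
        self.items = {}

    def add(self, item: str, quantity: int = 1) -> None:
        self.items[item] = self.items.get(item, 0) + quantity

def test_customer_adds_item_to_cart():
    # Precondition: customer is logged in and browsing (simulated here).
    cart = Cart()
    # Steps: select a desired item and add it to the cart.
    cart.add("Wireless Mouse")
    # Expected result and postcondition: the cart reflects the added item
    # with the correct quantity.
    assert cart.items == {"Wireless Mouse": 1}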

Pairwise Testing

Pairwise, or combinatorial, testing is an efficient technique for covering the interactions between input parameters. Rather than testing every full combination of values, it creates a much smaller set of test cases in which every possible pair of parameter values appears at least once. The tool used to generate the pairs ensures that the selection is systematic and optimized for coverage.

Let’s consider an example of pairwise testing in the context of a registration form for an online banking application. The registration form requires the user to provide their personal information, such as name, address, email, and phone number. Let’s assume we have four input parameters for the registration form:

  1. Name: Accepts alphanumeric characters.
  2. Address: Accepts alphanumeric characters.
  3. Email: Accepts a valid email format.
  4. Phone Number: Accepts a 10-digit number.

To apply pairwise testing, we identify the possible values for each input parameter and generate a test set in which every pair of values across any two parameters appears in at least one test case. The pairs are generated with a pairwise (combinatorial) tool, which uses algorithms to find a near-minimal set of test cases that covers all the pairs. Here’s an example:

Parameter Values:

  1. Name: [“John”, “Alice”]
  2. Address: [“123 Main St”, “456 Elm St”]
  3. Email: [“john@example.com”, “alice@example.com”]
  4. Phone Number: [“1111111111”, “2222222222”]

Pairwise Test Cases:

  1. Test Case 1: Name: “John”, Address: “123 Main St”, Email: “john@example.com”, Phone Number: “1111111111”
  2. Test Case 2: Name: “John”, Address: “456 Elm St”, Email: “alice@example.com”, Phone Number: “2222222222”
  3. Test Case 3: Name: “Alice”, Address: “123 Main St”, Email: “alice@example.com”, Phone Number: “2222222222”
  4. Test Case 4: Name: “Alice”, Address: “456 Elm St”, Email: “john@example.com”, Phone Number: “2222222222”
  5. Test Case 5: Name: “Alice”, Address: “456 Elm St”, Email: “alice@example.com”, Phone Number: “1111111111”

In this example, five test cases are enough to cover every pairwise combination of parameter values; note that with four two-valued parameters, no set of only four test cases can cover all 24 value pairs. By testing these five combinations instead of all 16 full combinations, we aim to uncover potential issues or interactions between the parameters at a fraction of the cost.
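
Whichever tool generates the test set, the pair-coverage criterion itself is easy to verify by brute force. The Python sketch below checks that the five test cases above leave no parameter-value pair uncovered.

from itertools import combinations, product

# The parameter values from the example above.
parameters = {
    "name": ["John", "Alice"],
    "address": ["123 Main St", "456 Elm St"],
    "email": ["john@example.com", "alice@example.com"],
    "phone": ["1111111111", "2222222222"],
}

def uncovered_pairs(tests):
    """Return every (parameter, value, parameter, value) pair that no test covers."""
    missing = []
    for p1, p2 in combinations(parameters, 2):
        for v1, v2 in product(parameters[p1], parameters[p2]):
            if not any(t[p1] == v1 and t[p2] == v2 for t in tests):
                missing.append((p1, v1, p2, v2))
    return missing

tests = [
    {"name": "John",  "address": "123 Main St", "email": "john@example.com",  "phone": "1111111111"},
    {"name": "John",  "address": "456 Elm St",  "email": "alice@example.com", "phone": "2222222222"},
    {"name": "Alice", "address": "123 Main St", "email": "alice@example.com", "phone": "2222222222"},
    {"name": "Alice", "address": "456 Elm St",  "email": "john@example.com",  "phone": "2222222222"},
    {"name": "Alice", "address": "456 Elm St",  "email": "alice@example.com", "phone": "1111111111"},
]
assert uncovered_pairs(tests) == []  # all 24 parameter-value pairs are covered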

Error Guessing

Error guessing relies on the experience and intuition of the testers to identify potential areas where defects might exist. Test cases are designed based on common errors, past experiences, or knowledge of the system. This technique can be used alongside other techniques to uncover defects that might be missed using more structured approaches.

Here’s an example of error guessing in test design for a file upload functionality in a web application:

1. Test Case: Upload File with Invalid Format
  • Description: Attempt to upload a file with an unsupported format.
  • Expected Result: The system should display an error message indicating the unsupported file format.
2. Test Case: Upload File with Large Size
  • Description: Attempt to upload a file that exceeds the maximum file size limit.
  • Expected Result: The system should display an error message indicating the file size limit has been exceeded.
3. Test Case: Upload File with Special Characters in the Filename
  • Description: Attempt to upload a file with special characters in the filename.
  • Expected Result: The system should handle the special characters properly and not encounter any issues during the upload process.
4. Test Case: Cancel File Upload
  • Description: Start uploading a file and then cancel the upload process.
  • Expected Result: The system should stop the upload and return to the original state without any errors or unexpected behavior.
5. Test Case: Concurrent File Uploads
  • Description: Simulate multiple users uploading files simultaneously.
  • Expected Result: The system should handle concurrent file uploads without conflicts or data corruption.
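
Error-guessed scenarios can still be captured as repeatable automated checks. The Python/pytest sketch below encodes the first three guesses; validate_upload, the allowed formats, and the 5 MB limit are all assumptions standing in for the real upload handler.

import pytest

# Hypothetical stand-in for the upload handler's validation; the allowed
# extensions and size limit are assumptions made for illustration.
ALLOWED_EXTENSIONS = {".pdf", ".png", ".jpg"}
MAX_SIZE_BYTES = 5 * 1024 * 1024  # assumed 5 MB limit

def validate_upload(filename: str, size_bytes: int) -> str:
    if not any(filename.lower().endswith(ext) for ext in ALLOWED_EXTENSIONS):
        return "unsupported file format"
    if size_bytes > MAX_SIZE_BYTES:
        return "file size limit exceeded"
    return "ok"

# The first three error guesses from the list above, encoded as checks.
@pytest.mark.parametrize("filename,size,expected", [
    ("report.exe", 1024, "unsupported file format"),               # invalid format
    ("report.pdf", 10 * 1024 * 1024, "file size limit exceeded"),  # too large
    ("quarterly report (v2)!.pdf", 1024, "ok"),                    # special characters
])
def test_error_guessed_uploads(filename, size, expected):
    assert validate_upload(filename, size) == expected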

These are just a few examples of test design techniques. The selection of techniques depends on factors such as the nature of the system, requirements, available resources, and the expertise of the testing team. Often, a combination of techniques is used to achieve comprehensive test coverage.

How to choose the best-suited test design technique?

Choosing a test design technique depends on several factors, including the characteristics of the system under test, the testing objectives, available resources, and project constraints. Here are some considerations to help you choose the appropriate test design technique:

  • System Complexity: Consider the complexity of the system and the level of detail required in testing. If the system is relatively simple, you may opt for techniques like boundary value analysis or equivalence partitioning. For complex systems, techniques like decision tables, state transition testing, or use case testing may be more suitable.
  • Test Objectives: Clearly define the test objectives, including what aspects of the system you want to verify or focus on. If the objective is to maximize coverage with limited resources, techniques like pairwise testing or combinatorial testing can be useful. If the objective is to test specific functionality or business rules, techniques like decision tables or use case testing may be more appropriate.
  • Available Resources: Consider the resources available, such as time, budget, and skilled testers. Some test design techniques may require more effort and resources than others. Assess the available resources and choose a technique that fits within those constraints. For example, exploratory testing may be a good choice when resources are limited, as it allows for flexible and ad hoc testing.
  • Industry Best Practices: Consider industry best practices and standards. Some industries or domains may have specific recommended test design techniques or guidelines. For example, safety-critical systems may require techniques like fault tree analysis or hazard analysis. Research and understand the best practices relevant to your industry or domain.
  • Previous Experience: Consider the previous experience and knowledge of the testing team. Testers with experience in a particular technique may be more proficient and efficient in using it. Leveraging the expertise and experience of the team can help in selecting the most suitable technique.
  • Combination of Techniques: Remember that you can combine multiple test design techniques to achieve better coverage and address different aspects of testing. A combination of techniques can provide a more comprehensive approach and help identify different types of defects.

Ultimately, the choice of test design technique should be based on a careful analysis of the project requirements, system characteristics, available resources, and testing objectives. It’s important to assess the trade-offs and select the technique that best aligns with the testing goals and constraints of the specific project.

What are test design tools?

Now that we have familiarized ourselves with test design and the techniques available for it, let us go a step further and look at what a test design tool does. A test design tool supports the creation and management of test designs. Some tools support just a single technique, like equivalence partitioning, while others support multiple techniques. These tools may offer test design support through templates or wizards. Other common features include test case creation and organization, test data generation, traceability and linkage with requirements, collaboration and sharing to facilitate teamwork, reporting and documentation, and integration with test execution or automation tools. These tools benefit both manual and automated testing, helping testers design effective test cases regardless of the testing approach used.

Test automation tools to supplement test design tools

Once the test design is ready, the creation of test cases should ideally be straightforward. To truly simplify this process, it’s important to choose an automation tool that is user-friendly. Ultimately, the goal is to make your testing process as efficient as possible. Therefore, using a powerful automation tool like testRigor can help your testers accomplish their tasks quickly. testRigor is an AI-powered, cloud-based tool that can assist your team in several ways:

  • It simplifies the writing of test cases by allowing you to use plain English statements. For instance, if you wish to verify something on a page, you can phrase it as ‘Check that the page contains “Hello Mike!”’
  • Locating UI elements is easy. All you need to do is describe its relative location. For example, if the login button is present below certain text and in the center of the page, you can state it as ‘Click “Login” below “See you on the other side” to the left of “Don’t miss this opportunity”’
  • With testRigor, you can generate test data quickly. Take a look at this video to understand how it works.
  • You have the ability to organize your test cases in test suites and execute them across different devices. testRigor also supports web, mobile, and desktop testing.
  • Visual testing is also straightforward, along with testing scenarios involving emails, login, phone calls, SMS messages, and 2-factor authentication.
  • testRigor allows the creation of reusable rules in plain English, which can be applied across the entire test suite. For instance, if there’s a form-filling step common to multiple test cases, you can write a reusable rule for it.

This is just the tip of the iceberg. testRigor has a lot more to offer.

Conclusion

Effective test design is crucial for achieving comprehensive testing and improving the chances of identifying defects before the application is deployed. It determines which conditions should be tested, what approach will be adopted for identifying the scenarios to test, whether any test data needs to be considered, and what level of detail will go into each test case. Choosing the right test design techniques for your project helps your testers optimize both manual and automated testing.
