Anushree
Software testing is a non-negotiable part of the software development lifecycle (SDLC). Though automation testing is beneficial and the primary focus of technological advancement, one cannot dismiss the importance and effectiveness of manual testing. For example, when the product is in its nascent stages, manual testing is often the better choice. Only after the product stabilizes and matures does it make sense to invest time and effort into setting up automation testing.
With manual testing still an integral part of software testing, it makes sense to familiarize oneself with the commonly used terms and procedures. Read on for a comprehensive compilation of the key concepts needed by anyone intending to manually test a product.
SDLC
Stages of the software development life cycle (SDLC)
Types of SDLC models
Waterfall Model

Pros:
- Suitable for stable projects with clear requirements
- Sequential, easy-to-follow approach
- Documentation at every stage, hence easy tracking

Cons:
- No flexibility to accommodate changing requirements or feedback
- The customer must wait until the final stage to review the product
Spiral Model

Pros:
- Suitable for clear and complete requirements, but with room for future enhancements
- The iterative nature allows for continuous improvement based on feedback
- Focus on risk assessment and mitigation

Cons:
- Effective risk assessment and management are required for project success
- Can be costly and resource-intensive
- Difficult to maintain and track the stages of this model
RAD Model

Pros:
- Suitable for projects with clear but presently incomplete requirements
- Components developed can be reused easily

Cons:
- Small cycles may not add substantial functionality to the product, leading to significant delays in timelines
- Experienced developers are needed to build products with this model
Prototype Model

Pros:
- Prototypes help identify missing features or functionalities in the product in advance
- Since the customer is involved from the initial stages, there is no ambiguity about requirements

Cons:
- Because the customer is involved early on, an unsatisfactory prototype sends the process into a loop until it succeeds, which can prove time-consuming, effort-intensive, and costly
Agile Model

Pros:
- The product is broken down into smaller cycles, leading to a working product quickly
- Easy to add new features
- Customer feedback is utilized at every stage

Cons:
- Requires skilled resources to work in such an environment
- If the customer is unsure of the requirements, the project will be impacted
Testing methods
Black box testing
- No knowledge of the internal structure or code
- The focus remains on inputs and outputs
- Examples include higher levels of testing, like system or acceptance testing

White box testing
- Testing based on the internal logic, structure, and code of the software
- Examples include lower levels of testing, like unit testing

Grey box testing
- A combination of black box and white box testing
- Partial knowledge of the product’s internal structure is available
- Examples include integration testing
Testing methods for integration testing
Top-down approach
- Starts by testing the higher-level modules or components first and gradually integrates lower-level modules
- Stubs (simplified implementations of lower-level components) may be used to simulate the behavior of lower-level modules
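As a sketch, the stub idea can be shown in Python using `unittest.mock` (the `OrderService` and payment-gateway names here are hypothetical, purely for illustration):

```python
from unittest.mock import Mock

# Hypothetical higher-level module under test (illustrative names).
class OrderService:
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def place_order(self, amount):
        # Delegates payment to a lower-level component that may
        # not be implemented yet during top-down integration.
        return "confirmed" if self.payment_gateway.charge(amount) else "declined"

# Stub: a canned stand-in for the missing lower-level module.
stub_gateway = Mock()
stub_gateway.charge.return_value = True

service = OrderService(stub_gateway)
print(service.place_order(100))  # confirmed
```

Once the real payment module is ready, the stub is simply replaced by the actual implementation.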
Bottom-up approach
- Begins by testing the lower-level modules or components first and gradually integrates higher-level modules
- Drivers (programs that simulate higher-level components) may be used to provide inputs and exercise the lower-level modules
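Conversely, a driver is a throwaway harness that plays the role of the absent higher-level module. A minimal Python sketch (the `calculate_tax` function and its flat 10% rate are made up for illustration):

```python
# Hypothetical lower-level component under test.
def calculate_tax(amount, rate=0.1):
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * rate, 2)

# Driver: supplies the inputs a real higher-level module would
# eventually provide, and checks the lower-level module's outputs.
def tax_driver():
    cases = [(100, 10.0), (0, 0.0), (59.99, 6.0)]
    for amount, expected in cases:
        result = calculate_tax(amount)
        assert result == expected, f"{amount}: got {result}"
    return "all driver checks passed"

print(tax_driver())
```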
Sandwich approach
- Also known as mixed integration testing; combines elements of both top-down and bottom-up approaches
- Allows for faster testing of critical modules by integrating lower-level and higher-level modules simultaneously
Levels of testing
Unit tests
- Focus on individual units of code, like functions and classes
- Make up the major chunk of test cases
- Require programming knowledge
- Lightweight tests that can run independently
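For instance, a unit test targets one function in isolation. A minimal pytest-style sketch (the `apply_discount` function is hypothetical):

```python
# Hypothetical unit under test: small, pure, and dependency-free.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests: fast, isolated, and runnable independently.
def test_typical_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_zero_discount():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percent_is_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected a ValueError")

# Normally a runner like pytest collects these; run directly here.
test_typical_discount()
test_zero_discount()
test_invalid_percent_is_rejected()
print("all unit tests passed")
```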
Integration tests
- Focus on the integrations between different modules and systems
- May have dependencies on other modules and systems, and hence are not as lightweight as unit tests

End-to-end tests
- Testing is done from the customer’s perspective rather than a developer’s
- UI-based, and hence take longer to run
- Fewer in number compared to unit and integration tests
Types of testing
Functional testing
- Testing functionalities for correct behavior

Usability testing
- Evaluating the product’s ease of use and intuitiveness

Security testing
- Checking the security of the application across the network, data, APIs, and other components
- Includes testing for issues like unauthorized access, data breaches, and injection attacks

Exploratory testing
- Unscripted testing focused on finding issues in the system
- Relies heavily on the tester’s understanding of the product

Smoke testing
- A preliminary round of testing to quickly assess if the software is stable enough for further testing

Regression testing
- Ensures that existing functionalities are not impacted by the introduction of new features or fixes

Alpha testing
- A kind of acceptance testing conducted by the internal QA team to identify defects before release

Beta testing
- Involves real users testing the software in their own environment before it is widely released

Sanity testing
- Validates at a high level whether the functionalities are working as expected
- Further testing is done based on these results

Performance testing
- Checks the performance of the application, such as the product’s responsiveness, scalability, and stability under varying load conditions
- Includes targeted testing types like load testing and stress testing

Load testing
- Evaluates the product under peak load to identify bottlenecks and scalability issues

Stress testing
- Tests the product’s stability and performance under extreme conditions
Phases of software testing lifecycle
Bug life cycle
- Defect: the product is not working as per the requirement
- Bug: the results are not as expected
- Error: a problem in the code that causes incorrect behavior
- Failure: can occur due to any of the above
Note: the terms “bug”, “defect”, and “error” are sometimes used interchangeably.
Testing metrics and artifacts
Defect Density
- The number of defects found in a specific component or area of the software, normalized by the size of that component
- Defect Density = Number of Defects / Size of Component (e.g., lines of code, function points)
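The formula translates directly into code. A sketch, assuming size is measured in KLOC (thousands of lines of code):

```python
def defect_density(num_defects, size_kloc):
    """Defects per thousand lines of code (KLOC)."""
    if size_kloc <= 0:
        raise ValueError("component size must be positive")
    return num_defects / size_kloc

# e.g. 30 defects found in a 15 KLOC component:
print(defect_density(30, 15))  # 2.0 defects per KLOC
```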
Defect Removal Efficiency (DRE)
- Measures the proportion of defects found internally, before the product reaches production
- DRE = (Total Defects Found – Total Defects Found in Production) / Total Defects Found
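In code, the DRE formula above becomes:

```python
def defect_removal_efficiency(total_found, found_in_production):
    """Fraction of all known defects caught before release."""
    internal = total_found - found_in_production
    return internal / total_found

# 100 defects found in total, 5 of which escaped to production:
print(defect_removal_efficiency(100, 5))  # 0.95
```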
Test Coverage
- Measures the extent to which the code or requirements are exercised by test cases

Defect Age
- Measures how long a defect remains unresolved
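Defect age is typically reported as the number of days between the report date and the resolution date (or today, if the defect is still open). A sketch:

```python
from datetime import date

def defect_age_days(reported_on, resolved_on=None):
    """Days a defect stayed (or has stayed) unresolved."""
    end = resolved_on if resolved_on is not None else date.today()
    return (end - reported_on).days

# Reported Jan 10, resolved Jan 25 -> 15 days old at resolution.
print(defect_age_days(date(2024, 1, 10), date(2024, 1, 25)))  # 15
```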
QA
- Refers to the process of assuring quality throughout the SDLC
- Helps in creating preventive measures against issues

QC
- Refers to verifying that the product meets the expected standards
- Helps in detecting issues in the product

Priority
- The importance of resolving the issue from the customer’s perspective

Severity
- The seriousness of the issue in terms of the product’s functionality

Test Plan
- Outlines the overall strategy, scope, objectives, and resources for testing
- Also specifies the testing approaches, deliverables, methodologies, schedule, and the roles and responsibilities of team members

Test Strategy
- Defines the high-level approach and guidelines for testing activities
- Covers aspects such as test levels, test techniques, environments, and entry/exit criteria
Requirement Traceability Matrix (RTM)
- Links requirements to their corresponding test cases, ensuring comprehensive coverage
- Helps track the progress of testing against the specified requirements
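In its simplest form, an RTM is just a mapping from requirement IDs to test cases, which makes coverage gaps easy to spot. A sketch with made-up IDs:

```python
# Hypothetical RTM: requirement IDs mapped to their test case IDs.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # not yet covered by any test case
}

# Requirements with no linked test cases are coverage gaps.
uncovered = [req for req, cases in rtm.items() if not cases]
coverage_pct = (len(rtm) - len(uncovered)) / len(rtm) * 100

print(f"requirement coverage: {coverage_pct:.0f}%")  # 67%
print("uncovered requirements:", uncovered)  # ['REQ-003']
```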
Responsibilities of a tester
Leveraging manual testing expertise through automation
The myth that manual testers cannot automate test cases as effectively as automation engineers is debunked by automation tools like testRigor. This powerful test automation tool allows you to create scripts as easily as writing test cases in plain English, eliminating the need to translate those statements into a programming language. Its advanced AI engine not only simplifies the scripting process but also ensures minimal test maintenance overhead and rapid, precise test execution. With its rich set of commands, you can automate across platforms such as web, mobile, and even native desktop applications, ensuring optimal test coverage.
A test case in testRigor would look like this:
login
click on “Admin”
add a new contact
logout
Organizations using testRigor report a significant improvement in their quality control efforts due to increased test coverage, fewer issues reaching production, and quicker release cycles. All this while empowering in-house resources like manual testers to leverage their testing experience without being sidelined by the technicalities of automation testing. This, in turn, benefits the company through lower overall costs and better collaboration between teams.
Join the next wave of functional testing now.
A testRigor specialist will walk you through our platform with a custom demo.