
The Ultimate Testing Terms Glossary: A Reference for Testing Professionals

Testing is a critical phase in software development that ensures the quality, reliability, and functionality of a product. However, the field of testing is filled with a multitude of technical terms and jargon that can be overwhelming, especially for beginners. This glossary is designed to demystify testing terminology and provide you with clear and concise definitions of key terms.

Whether you are studying for a certification, preparing for an interview, or simply seeking to expand your testing vocabulary, the Testing Terms Glossary is here to support you on your testing journey. Let’s dive in and unlock the power of testing terminology together!

Testing Terms Glossary

Acceptance Criteria

The set of conditions that a user, customer, or any authorized entity requires a component or system to meet before it can be considered approved or accepted.

Acceptance Testing

The procedure in which users test a software product or system and decide to approve or reject it based on the outcomes. It involves evaluating the program against its initial requirements and the current needs of its end users. Unusually among test types, it is typically carried out by the software's customer or end user rather than by the development organization, and its purpose is to assess the system's readiness for implementation or use. It is a formal testing process that determines whether the system meets its acceptance criteria and allows the customer to decide whether to accept it.

Accessibility Testing

Ensuring that a product is usable and available to individuals with disabilities.

Accuracy

The ability of the software product to deliver the intended or agreed-upon outcomes or effects with the needed degree of precision.

Ad-Hoc Testing

Searching for software bugs spontaneously and improvisationally. This type of testing is conducted without formal preparation, without following established test design techniques, and without specific expectations about results; test execution is guided by random or arbitrary choices.

Agile Testing

Agile Testing integrates testing throughout development, emphasizing continuous feedback and collaboration. It employs test-driven development and automation, and adapts to changing requirements. The approach catches defects early, maintains code quality, and improves efficiency, delivering high-quality software through rapid iterations and effective communication.

All-Pairs Testing or Pairwise Testing

A combinatorial software testing technique in which every possible pair of input parameter values for a system, usually a software algorithm, is tested at least once, rather than every possible combination of all parameters.

Alpha Testing

Alpha testing is an internal testing process carried out by the manufacturer to verify the functionality and capabilities of a new product. It involves following a set of testing procedures to assess the product’s performance. This testing phase occurs before beta testing and is conducted by the manufacturer’s test team, along with potentially other interested individuals within the company.

Anomaly

An anomaly refers to any situation that deviates from the expected outcome as defined by requirements specifications, design documents, user documents, standards, or personal perception and experience. These deviations can be identified during various activities such as reviews, testing, analysis, compilation, or utilization of software products and related documentation. Anomalies encompass any discrepancies or deviations encountered throughout these processes, signaling deviations from the norm or expected behavior.

API (Application Programming Interface)

In computer science, an application programming interface (API) is a defined interface that outlines how an application program can interact and request services from libraries or operating systems. It specifies the methods and protocols for communication, enabling seamless integration between different software components.

AUT (Application Under Test)

The software or application under consideration for the testing process.

Audit

A separate assessment of software products or processes to determine adherence to standards, guidelines, specifications, and procedures using objective criteria. This evaluation includes documents that outline the required form or content of the products, the process for their production, and the methods for measuring compliance with standards or guidelines.

Audit Trail

A traceability path that allows the original input to a process (for example, data) to be traced back through the process, starting from the process output. This traceability aids defect analysis and makes process audits possible.

Automated Testing

Automated testing refers to using computer-based tools and systems to perform testing tasks, reducing or eliminating the need for manual intervention.

Availability

The level of readiness and availability of a component or system to be used as intended. This is commonly represented as a percentage and indicates how operational and accessible it is.

Back-to-Back Testing

Back-to-Back testing involves running multiple versions or variants of a component or system using the same inputs, comparing their outputs, and analyzing any potential differences.

Baseline

A formally reviewed or agreed upon specification or software product that serves as the foundation for further development and can only be modified through a formal change control process. It can also refer to a set of established values or observations representing the background level of a measurable quantity.

Behavioral Testing

A testing approach where tests are defined in terms of externally observable inputs, outputs, and events. The design of these tests can utilize various sources of information.

Benchmark Test

1. A reference standard used for measurements or comparisons.
2. A test that compares components or systems to each other or a predefined standard.

Big-Bang Testing

An integration testing method where software elements, hardware elements, or both are combined all at once into a component or system without staged integration.

Beta Testing

Testing conducted at customer sites by end users before the software product or system is made widely available. Typically, friendly users participate in this testing phase.

Bespoke Software

Software developed specifically for a set of users or customers, as opposed to off-the-shelf software.

Black Box Testing

Testing performed without knowledge of the internal workings of the product. Testers rely on external sources for information about how the product should run, potential risks, expected errors, and error handling.

Blink Testing

A testing approach that focuses on identifying overall patterns of unexpected changes, such as rapidly comparing web pages or scrolling through log files.

Blocked Test Case

A test case that cannot be executed due to unfulfilled preconditions.

Bottom-Up Testing

An integration testing approach where lower-level components are tested first, facilitating the testing of higher-level components.

Boundary Value

A specific value at the extreme edges of a variable or equivalence class subset.

Boundary Value Analysis

Guided testing that explores values at and near the minimum and maximum allowed values for a particular input or output.
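
For illustration, a minimal sketch in Python, assuming a hypothetical validator is_valid_age() that accepts ages in the inclusive range 18 to 65; the boundary values sit at and just beyond each edge:

```python
# Hypothetical validator: accepts ages in the inclusive range 18..65.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

# Boundary value analysis: test just below, at, and just above each edge.
cases = [(17, False), (18, True), (19, True), (64, True), (65, True), (66, False)]

for value, expected in cases:
    assert is_valid_age(value) == expected, f"age {value}: expected {expected}"
print("all boundary cases passed")
```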

Branch Coverage

The percentage of branches covered by a test suite.

Buddy Drop

A private build of a product used to verify code changes before they are integrated into the main code base.

Bug

An error, flaw, or fault in a computer program that hinders its correct functioning or produces incorrect results.

Bug Bash

In-house testing involving various individuals from different roles to identify bugs before the software release.

Bug Triage or Bug Crawl or Bug Scrub

A review meeting to assess and prioritize active bugs reported against the system under test.

Build Verification Test (BVT) or Build Acceptance Test (BAT)

Tests performed on each new build to verify its testability and mainstream functionality before it is handed to the testing team.

Business Process Testing

A testing approach based on descriptions or knowledge of business processes to design test cases.

Capability Maturity Model (CMM)

A staged framework that outlines the essential elements of an effective software process. It covers best practices for software development and maintenance.

Capability Maturity Model Integration (CMMI)

A framework that defines the key elements of an efficient product development and maintenance process. It encompasses best practices for planning, engineering, and managing product development.

Capacity Testing

Testing conducted to determine the maximum number of users a computer or a set of computers can support.

Capture/Playback Tool (Capture/Replay Tool)

A test execution tool that records inputs during manual testing to generate automated test scripts for later execution. Often used for automated regression testing.

Cause-And-Effect Diagram

A visual diagram used to analyze factors contributing to an overall effect. Also known as a Fishbone Diagram or Ishikawa Diagram.

Churn

A term describing the frequency of changes that occur in a file or module over a specific period.

Code Complete

The stage in which a developer considers all the necessary code for implementing a feature to be checked into source control.

Code Coverage

An analysis method used to determine which parts of the software have been executed by the test suite. It includes statement coverage, decision coverage, or condition coverage.

Code Freeze

The point in the development process where no changes are permitted to the source code of a program.

Command Line Interface (CLI)

A user interface where commands are entered via keyboard input, and the system provides output on the monitor.

Commercial Off-The-Shelf Software (COTS)

Software products developed for the general market and delivered in an identical format to multiple customers.

Compliance

The ability of a software product to adhere to standards, conventions, regulations, or legal requirements.

Compliance Testing

Testing conducted to determine the extent to which a component or system complies with specified requirements.

Component Testing

The process of subdividing an object-oriented software system into units and validating their responses to stimuli applied through their interfaces.

Computer-Aided Software Testing (CAST)

Testing partially or fully automated by another program.

Concurrency Testing

Testing conducted to assess how a component or system handles the occurrence of two or more activities within the same time interval, either through interleaving or simultaneous execution.

Configuration Management

A discipline that applies technical and administrative control to identify, document, and manage changes to the characteristics of a configuration item.

Consistency

The degree of uniformity, standardization, and absence of contradiction among documents or parts of a system or component.

Correctness

The degree to which software is free from errors.

Critical Path

A series of dependent tasks in a project that must be completed as planned to keep the entire project on schedule.

Cross Browser Testing

Compatibility testing aimed at ensuring a web application functions correctly across different browsers and versions.

Cross-Site Scripting

A computer security exploit where information from an untrusted context is inserted into a trusted context to launch an attack.
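
As a minimal, hedged illustration (not a complete defense), the Python sketch below shows how unescaped user input placed into HTML would execute as script, whereas escaping renders it as inert text:

```python
import html

user_comment = '<script>alert("xss")</script>'   # untrusted input

unsafe_page = f"<p>{user_comment}</p>"              # the script tag would execute in a browser
safe_page = f"<p>{html.escape(user_comment)}</p>"   # rendered as harmless text
print(safe_page)
```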

Custom Software

Software developed specifically for a set of users or customers, as opposed to off-the-shelf software.

Cyclomatic Complexity

A measure of a program module's structural complexity, assessed as the number of linearly independent paths through the module; higher values indicate code that is harder to test and maintain.
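
A common rule of thumb is that a single function's cyclomatic complexity equals its number of decision points plus one. A minimal sketch, using a hypothetical shipping_cost() function:

```python
def shipping_cost(weight: float, express: bool) -> float:
    if weight <= 0:        # decision 1
        raise ValueError("weight must be positive")
    cost = 5.0
    if weight > 10:        # decision 2
        cost += 2.5
    if express:            # decision 3
        cost *= 2
    return cost

# 3 decision points + 1 = cyclomatic complexity of 4, suggesting at least
# four test cases to exercise every independent path.
```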

Daily Build

A development practice where a complete system is compiled and linked on a daily basis to ensure a consistent system with the latest changes is available at any time.

Data-Driven Testing

Testing approach where test cases are parameterized by external data values, often stored in files or spreadsheets. It is commonly used in automated testing to execute multiple tests using a single control script.
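
A minimal sketch, assuming a hypothetical login() function under test; in practice the rows would normally live in an external CSV file or spreadsheet rather than an in-memory string:

```python
import csv
import io

def login(username: str, password: str) -> bool:   # hypothetical system under test
    return bool(username) and bool(password)

# One control script, many data rows.
test_data = io.StringIO("username,password,expected\nalice,secret,True\nbob,,False\n")

for row in csv.DictReader(test_data):
    expected = row["expected"] == "True"
    assert login(row["username"], row["password"]) == expected
print("all data rows passed")
```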

Debugging

The process in which developers identify the root cause of a bug and propose potential fixes. Debugging is performed to resolve known bugs either during subsystem or unit development or in response to bug reports.

Decision Coverage

The percentage of decision outcomes exercised by a test suite. Achieving 100% decision coverage implies complete branch coverage and statement coverage.
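
A minimal sketch with a hypothetical classify() function containing a single decision; one test reaches only the True outcome (50% decision coverage), and a second test brings coverage to 100%:

```python
def classify(n: int) -> str:
    if n >= 0:              # one decision with two outcomes
        return "non-negative"
    return "negative"

assert classify(5) == "non-negative"   # covers the True outcome only: 50%
assert classify(-3) == "negative"      # adds the False outcome: 100% decision coverage
```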

Defect

A flaw in a component or system that can cause it to fail in performing its intended function, such as an incorrect statement or data definition. Encountering a defect during execution may lead to a failure of the component or system.

Defect Density

The number of defects identified in a component or system divided by its size, typically measured in terms of lines of code, number of classes, or function points.
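
A worked example with assumed numbers (42 defects in a 12 KLOC component):

```python
defects_found = 42
kloc = 12.0                                       # size in thousands of lines of code
defect_density = defects_found / kloc
print(f"{defect_density:.1f} defects per KLOC")   # 3.5 defects per KLOC
```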

Defect Leakage Ratio (DLR)

The ratio of undetected defects that make their way into production to the total number of defects.

Defect Masking

Occurs when one defect prevents the detection of another defect.

Defect Prevention

Activities involved in identifying and preventing the introduction of defects into a product.

Defect Rejection Ratio (DRR)

The ratio of rejected defect reports (for example, reports that turn out not to describe actual bugs) to the total number of defect reports.

Defect Removal Efficiency (DRE)

The ratio of defects found during development to the total number of defects, including those discovered in the field after release.
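
A worked example with assumed counts (180 defects found before release, 20 found afterwards):

```python
found_before_release = 180
found_after_release = 20
dre = found_before_release / (found_before_release + found_after_release)
print(f"DRE = {dre:.0%}")   # DRE = 90%
```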

Defect Triggering

Something that causes a defect to occur or become observable. For example, low disk space often triggers the appearance of defects.

Deviation

A noticeable departure from the norm, plan, standard, procedure, or reviewed variable.

Direct Metric

A metric that does not rely on the measurement of any other attribute.

Dirty System

A complex and frequently modified system that is often inadequately documented. It may or may not be considered a “legacy” system.

Distributed Testing

Testing conducted at multiple locations, involving multiple teams, or both.

Domain

The set of valid input and/or output values that can be selected.

Driver

A software component or test tool that replaces another component, handling the control and/or calling of a component or system.

Eating Your Own Dogfood

The practice of a company using pre-release versions of its own software for day-to-day operations, ensuring reliability before selling. It promotes early feedback on product value and usability.

End User

The individual or group who will use the system for its intended operational use in its deployed environment.

Entry Criteria

Decision-making guidelines used to determine if a system under test is ready to progress to a specific testing phase. Entry criteria become more stringent as the test phases advance.

Equivalence Partition

A portion of input or output where the behavior of a component or system is assumed to be the same based on specifications.
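
A minimal sketch, assuming a hypothetical discount rule that is valid for order totals from 100 to 500; one representative value is chosen from each partition on the assumption that all values in a partition behave the same:

```python
def discount_applies(total: float) -> bool:   # hypothetical system under test
    return 100 <= total <= 500

partitions = {
    "below range (invalid)": 50,    # representative of all values < 100
    "in range (valid)": 250,        # representative of all values in 100..500
    "above range (invalid)": 900,   # representative of all values > 500
}

for name, representative in partitions.items():
    print(f"{name}: discount_applies({representative}) = {discount_applies(representative)}")
```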

Error

An incorrect result produced by a computer or a software flaw that deviates from user expectations.

Error Guessing

A test design technique that utilizes tester experience to anticipate and design tests specifically to expose defects resulting from errors made during development.

Error Seeding

Intentionally adding known defects to those already present in a component or system in order to monitor the rate of detection and removal, and to estimate the number of defects remaining.
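
A worked estimate with assumed numbers, using the common simplifying assumption that seeded and real defects are found at the same rate:

```python
seeded_total = 25     # known defects deliberately inserted
seeded_found = 20     # seeded defects rediscovered by testing
real_found = 40       # genuine (unseeded) defects found by the same testing

# Estimated total real defects, assuming equal detection rates.
estimated_total = real_found * seeded_total / seeded_found
estimated_remaining = estimated_total - real_found
print(estimated_total, estimated_remaining)   # 50.0 estimated in total, 10.0 still latent
```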

Escalate

To report a problem to higher management for resolution.

Exhaustive Testing

An approach where the test suite encompasses all combinations of input values and preconditions.

Exit Criteria

Decision-making guidelines used to determine if a system under test is ready to exit a specific testing phase. When exit criteria are met, the system proceeds to the next phase or the test project is considered complete.

Expected Results

Predicted output data and file conditions associated with a specific test case.

Exploratory Testing

Test design and execution conducted simultaneously, allowing testers to adapt and optimize test-related activities throughout the project.

Extreme Programming (XP)

An agile software development methodology emphasizing collaboration, customer involvement, and iterative development.

Failure

Deviation of a component or system from its expected delivery, service, or result.

Failure Mode

A specific way in which a failure manifests itself, such as symptoms, behaviors, or internal state changes.

Failure Mode and Effect Analysis (FMEA)

A systematic approach to identify and analyze potential failure modes to prevent their occurrence.

Falsification

The process of evaluating an object to demonstrate that it fails to meet requirements.

Fault

An incorrect step, process, or data definition in a computer program.

Fault Injection

Intentionally introducing errors into code to evaluate the ability to detect such errors.

Fault Model

An engineering model depicting potential errors that could occur in a system under test.

Fault Tolerance

The ability of a software product to maintain a specified level of performance in the presence of software faults or interface infringements.

Feature

A desirable behavior, computation, or value produced by an object. Requirements are composed of features.

Feature Freeze

The stage in the development process where work on adding new features is halted, focusing on bug fixing and enhancing the user experience.

First Customer Ship (FCS)

The phase marking the completion of a project, indicating the product is ready for purchase and usage by customers before general availability.

Fishbone Diagram

A visual diagram illustrating factors contributing to an overall effect. Also known as a Cause-And-Effect Diagram or Ishikawa Diagram.

Formal Testing

The process of conducting testing activities and reporting test results according to an approved test plan.

Functional Requirements

The initial definition of a proposed system, documenting goals, objectives, user requirements, management requirements, operating environment, and design methodology.

Functional Testing

Testing performed to verify that the functions of a system are present as specified, without considering the source code structure.

Fuzz Testing

A software testing technique that inputs random data (“fuzz”) into a program to detect failures, such as crashes or violated code assertions.
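
A minimal fuzzing sketch, assuming a hypothetical parse_quantity() function under test; expected rejections (ValueError) are ignored, while any other exception is flagged as a potential defect:

```python
import random
import string

def parse_quantity(text: str) -> int:   # hypothetical function under test
    return int(text.strip())

for _ in range(1000):
    fuzz = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    try:
        parse_quantity(fuzz)
    except ValueError:
        pass                             # expected rejection of malformed input
    except Exception as exc:             # anything else is worth investigating
        print(f"unexpected failure for {fuzz!r}: {exc!r}")
```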

Gap Analysis

An evaluation of the disparity between required or desired conditions and the current state of affairs.

General Availability (GA)

The stage in which the product is fully complete and manufactured in sufficient quantity for purchase by all anticipated customers.

Glass Box Testing (Clear Box Testing)

Thoroughly testing the code with in-depth knowledge, often performed by the programmer.

Graphical User Interface (GUI)

User interfaces that accept input via devices like a keyboard and mouse, providing graphical output on a monitor.

Gray Box Testing

Combining elements of both Black Box and White Box testing, testing software against its specification while having partial knowledge of its internal workings.

Happy Path

A default scenario without exceptional or error conditions, representing a well-defined test case that executes without exceptions and produces an expected output.

Horizontal Traceability

Tracing requirements for a test level across various layers of test documentation, such as test plan, test design specification, test case specification, and test procedure specification.

IEEE 829

An IEEE standard specifying the format of software test documentation, including various documents like test plan, test design specification, test case specification, and more.

Incident

An operational event outside the norm of system operation, with potential impact on the system.

Incident Report

A document reporting an event occurring during testing that requires investigation.

Incremental Testing

A methodical approach to testing interfaces between unit-tested programs or system components, which can be done in top-down or bottom-up fashion.

Independent V&V

Verification and validation of a software product performed by an independent organization separate from the designer.

Input Masking

When a program encounters an error condition on the first invalid variable, subsequent values are not tested.

Inspection

A formal evaluation technique involving a detailed examination of a system or component by individuals other than the author to detect faults and problems.

Installability

The capability of the software to be successfully installed in a specified environment.

Integration Testing

Testing the combined functionality of individual software modules as a group, following unit testing and preceding system testing.

Interface Testing

Testing conducted to ensure proper passing of information and control between program or system components.

Internationalization (I18N)

The process of designing and coding a product to function properly when adapted for use in different languages and locales.

Ishikawa Diagram

A diagram used to analyze factors contributing to an overall effect, also known as a Fishbone Diagram or Cause-And-Effect Diagram.

Keyword-Driven Testing

A scripting technique using data files containing test inputs, expected outcomes, and keywords related to the application being tested.
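
A minimal sketch: each row of the data table names a keyword (an action) plus its arguments, and a small interpreter maps keywords to functions. All names here are illustrative assumptions, not a particular tool's API:

```python
def open_page(url):          print(f"opening {url}")
def type_text(field, text):  print(f"typing {text!r} into {field}")
def click(button):           print(f"clicking {button}")

keywords = {"open_page": open_page, "type_text": type_text, "click": click}

test_table = [                                  # would normally live in a data file
    ("open_page", "https://example.com/login"),
    ("type_text", "username", "alice"),
    ("click", "Sign in"),
]

for keyword, *args in test_table:
    keywords[keyword](*args)
```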

Kludge (or kluge)

A temporary or substandard solution applied to address an urgent problem, often in the belief that it will keep a project moving forward.

Latent Bug

A bug that exists in the system under test but has not yet been discovered.

Link Rot

The process where links on a website become irrelevant or broken over time due to linked websites disappearing, content changes, or redirects.

Load Testing

Subjecting a system to a statistically representative load to assess software reliability and performance.

Localization (L10N)

The process of translating messages, documentation, and modifying locale-specific files on an internationalized base product.

Low-resource Testing

Testing the system’s behavior when critical resources like memory or disk space are low or depleted.

Maintainability

The effort required to make changes in software without introducing errors.

Mean Time Between Failure (MTBF)

The average time elapsed between system failures.

Mean Time To Repair (MTTR)

The average time needed to fix bugs.

MEGO

Acronym for “My Eyes Glazed Over,” referring to a loss of focus and attention while attempting to comprehend dense technical documents.

Memory Leak

Unintentional consumption of memory by a program where it fails to release memory it no longer needs.

Method

A set of rules and criteria providing a precise and repeatable approach to perform a task and achieve a desired outcome.

Methodology

A collection of methods, procedures, and standards that integrate engineering approaches to product development.

Metric

A quantitative measure of the extent to which a system, component, or process possesses a specific attribute.

Milestone

A scheduled checkpoint in a project, used to measure progress and hold the team accountable.

Monkey Testing

Testing by randomly inputting strings or pushing buttons to identify product breakages or vulnerabilities.

Mutation Testing

Modifying the system/program under test to create faulty versions (mutants) and running them through test cases to identify failures, thereby assessing test coverage and identifying additional tests.
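
A minimal sketch of a single mutant: a mutation tool changes the comparison ">" to ">=". The example tests pass for both versions, so the mutant survives, revealing that the suite lacks a boundary test (such as age 17) that would kill it. Function names are illustrative assumptions:

```python
def is_adult(age: int) -> bool:          # original code
    return age > 17

def is_adult_mutant(age: int) -> bool:   # mutant: ">" changed to ">="
    return age >= 17

tests = [(20, True), (10, False)]
for age, expected in tests:
    assert is_adult(age) == expected
    assert is_adult_mutant(age) == expected   # also passes -> the mutant survives
```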

Negative Testing

Testing primarily focused on falsification or intentionally breaking the software, also known as Dirty Testing.

Non-functional Testing

Testing the attributes of a component or system that do not relate to functionality, such as reliability, efficiency, usability, maintainability, and portability.

Oracle

Any means used to predict the outcome of a test, including comparing actual and expected outputs or using a tool to determine if the program passed the test.

Orthogonal Array Testing

A systematic, statistical approach that uses orthogonal arrays to test combinations of variables, greatly reducing the number of combinations that must be executed while still covering the pairwise interactions.

Operational Testing

Testing performed by end users in the normal operating environment of the software.

Pairwise Testing

A black box test design technique in which test cases are designed to exercise every possible pair of input parameter values at least once.
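
A minimal illustration with three parameters of two values each: exhaustive testing needs 2 × 2 × 2 = 8 combinations, while the hand-picked set of 4 rows below already covers every pair of values for every two parameters:

```python
from itertools import product

browsers = ["Chrome", "Firefox"]
systems = ["Windows", "macOS"]
locales = ["en", "de"]

exhaustive = list(product(browsers, systems, locales))   # 8 combinations

pairwise = [                                             # 4 rows cover all value pairs
    ("Chrome", "Windows", "en"),
    ("Chrome", "macOS", "de"),
    ("Firefox", "Windows", "de"),
    ("Firefox", "macOS", "en"),
]
print(len(exhaustive), "exhaustive vs", len(pairwise), "pairwise test cases")
```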

Pareto Analysis

Analyzing defects by ranking causes from most to least significant, based on the principle that a majority of effects come from a small number of causes (80% of effects from 20% of causes).

Performance Evaluation

Assessing a system or component to determine how effectively it achieves operational objectives.

Performance Testing

Testing conducted to evaluate a system's compliance with specified performance requirements; in practice, such tests are often designed to show where the program fails to meet its performance objectives.

Pesticide Paradox

The phenomenon where the more a software is tested, the more immune it becomes to those tests, similar to how insects develop resistance to pesticides.

Positive Testing

Testing primarily focused on validation or demonstrating the correct functioning of the software, also known as Clean Testing.

Priority

Indicates when a particular bug should be fixed, with priorities changing as the project progresses.

Prototype

An incomplete implementation of software that mimics the expected behavior of the final product.

Pseudo-random

A series of values that appear random but are actually generated according to a predetermined sequence.

Quality

The ability of a product, system, or process to fulfill customer requirements and expectations, conform to standards, and meet user needs.

Quality Assurance (QA)

The process of maintaining a desired level of quality in a service or product throughout its delivery or production.

Quality Control (QC)

Testing aimed at finding bugs and ensuring the quality of the product.

Quality Factor

A management-oriented attribute of software that contributes to its overall quality.

Quality Gate

A milestone in a project where a specific level of quality must be achieved before proceeding.

Quick Confidence Tests (QCT)

A subset of planned or defined test cases that cover the essential functionality of a component or system. It aims to verify that the critical functions of a program are working, without focusing on finer details. Daily build and Quick Confidence Tests are considered industry best practices.

Rainy-Day Testing

Checking whether a system effectively prevents, detects, and recovers from operational problems such as network failures, database unavailability, equipment issues, and operator errors.

Recoverability

The capability of the software to restore a specified level of performance and recover affected data in case of failure.

Regression Testing

Testing performed after making functional improvements or repairs to the program, aiming to ensure that the changes have not caused unintended adverse effects.

Release Candidate

A product build undergoing final testing before shipment, where all code is complete and known bugs have been fixed.

Release Test (or Production Release Test)

Tests conducted to ensure that thoroughly tested and approved code is correctly installed and configured in the production environment.

Reliability

The ability of the software product to perform its required functions under stated conditions for a specified period or number of operations.

Repetition Testing (Duration Testing)

A technique that involves repeating a function or scenario until reaching a specified limit or threshold, or until an undesirable action occurs.

Requirement

A condition or capability needed by a user to solve a problem or achieve an objective.

Requirements

Descriptions of what an object should do and the characteristics it should have, including consistency, completeness, implementability, and falsifiability.

Re-testing

Running test cases that previously failed to verify the success of corrective actions.

Risk

The possibility of experiencing loss.

Robustness

The degree to which a system or component can function correctly in the presence of invalid inputs or challenging environmental conditions.

Root Cause

The underlying reason for a bug, as opposed to the observable symptoms of the bug.

Sanity Testing

A quick test of the main components of a system to determine if it is functioning as expected, without conducting in-depth testing. Often used interchangeably with Smoke Testing.

Scalability

The ability of a software system to handle larger or smaller volumes of data and to accommodate more or fewer users. It implies that size or capacity can be adjusted incrementally and cost-effectively, with minimal impact on unit costs or the procurement of additional services, and describes how well the system performs as the problem size grows.

Scenario

A description of a task performed by an end user, which may or may not be implemented in the current product. Scenarios often involve the use of multiple features.

Severity

The relative impact or consequence of a bug on the system under test, regardless of how likely it is to occur or the extent to which it hinders use of the system. Severity typically remains unchanged unless further insights into hidden consequences are discovered. Contrasts with priority.

Shakeout Test / Shakedown Test

A quick test of the primary components of a system to determine if it is operating as expected, without conducting detailed testing. Often used interchangeably with Smoke Testing.

Smart Monkey Testing

In Smart Monkey Testing, inputs are generated based on probability distributions that reflect expected usage statistics, such as user profiles. It can involve different levels of intelligence (IQ), considering the correlation between input distributions. The input is treated as a single event.

Smoke Test

A subset of planned or defined test cases that cover the essential functionality of a component or system. It aims to verify that the critical functions of a program are working, without focusing on finer details. Daily build and smoke testing are considered industry best practices.

Soak Testing

Testing a system under a significant load over an extended period to observe its behavior under sustained use.

Software Audit

An independent review conducted to assess compliance with requirements, specifications, standards, procedures, codes, contracts, and licensing requirements, among others.

Software Error

Human action resulting in software containing a fault that may cause a failure.

Software Failure

A deviation of system operation from specified requirements due to a software error.

Software Quality

The collection of features and characteristics of a software product that affect its ability to meet given needs.

Software Quality Assurance

The function responsible for ensuring that project standards, processes, and procedures are appropriate and correctly implemented.

Software Reliability

The probability that software will not cause system failure for a specified time and under specified conditions.

Specification

A tangible expression of requirements, such as a document, feature list, prototype, or test suite. Specifications are typically incomplete, as many requirements are implicitly understood. It is a mistake to assume that all requirements are expressed in the specification.

SQL Injection

A hacking technique that attempts to pass SQL commands through a web application’s user interface for execution by the backend database.
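
A minimal sketch using Python's built-in sqlite3 module: string concatenation lets a crafted input change the query's meaning, while a parameterized query treats the same input as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"   # classic injection payload

vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())            # returns every row

safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # returns nothing
```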

Standard

Mandatory requirements enforced to establish a disciplined and uniform approach to software development.

Statement Coverage

The percentage of executable statements that have been executed by a test suite.

Statement Of Work

A description of all the work required to complete a project, provided by the customer.

Static Analysis

Analysis of software artifacts, such as requirements or code, without executing them.

Static Testing

A form of software testing where the software is not actually used. It involves checking the code, algorithm, or document for syntax errors or manually reviewing them for errors.

Straw Man Plan

A lightweight or incomplete plan, such as the initial draft of a test plan or hardware allocation plan, serving as a starting point for discussion and providing a framework for developing a more concrete plan.

Stress Testing

Subjecting a program to heavy loads or stresses to observe its behavior. Stress testing differs from volume testing, as it involves a peak volume of data encountered over a short period.

String Testing

A test phase that focuses on finding bugs in typical usage scripts and operational or control-flow “strings” of a software development project. This test phase is relatively uncommon.

Structural Testing

Testing that focuses on the flow of control within the program, testing different execution paths and data relationships. It is sometimes confused with glass box testing, which encompasses other considerations beyond program structure.

Structural Tests

Tests based on the operational behavior of a computer system, either at the code, component, or design level. They aim to find bugs in operations at different levels, such as lines of code, chips, subassemblies, and interfaces. Also referred to as white-box tests, glass-box tests, code-based tests, or design-based tests.

Stub

A simplified or specialized implementation of a software component used during development or testing to replace a dependent component and facilitate testing.
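
A minimal sketch: a stub stands in for a real payment gateway so the ordering logic can be tested without network calls. Class and function names are illustrative assumptions:

```python
class PaymentGatewayStub:
    """Replaces the real gateway; always reports success and makes no network call."""
    def charge(self, amount: float) -> bool:
        return True

def place_order(amount: float, gateway) -> str:   # hypothetical code under test
    return "confirmed" if gateway.charge(amount) else "failed"

assert place_order(19.99, PaymentGatewayStub()) == "confirmed"
print("order flow verified against the stub")
```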

Sunny-Day Testing

Positive tests conducted to demonstrate the correct functioning of the system.

SWAG

Acronym for Scientific Wild-Ass Guess, referring to an educated guess or estimate made during test scheduling activities in the early stages of development.

System Reliability

The probability that a system will perform a required task or mission for a specified time and under specified environmental conditions.

System Testing

Testing conducted to explore system behaviors that cannot be adequately tested through unit, component, or integration testing. It includes testing performance, installation, data integrity, storage management, security, and reliability. System testing assumes that all components have been successfully integrated and is often performed by independent testers.

System Under Test (SUT)

The system that is the target of the testing process. It refers to the specific software system being tested.

Test

The word “test” is derived from the Latin word for an earthen pot or vessel, which was used to assay materials and measure their weight. This led to the expression “to put to the test,” meaning to determine the presence of something or to measure it. In software, a test is an activity in which a system or component is executed under specific conditions and the results are observed or recorded to evaluate a particular aspect of that system or component.

Test and Evaluation

In the context of the Department of Defense (DOD), test and evaluation (T&E) is the overall process of independent evaluation conducted throughout the system acquisition process. Its purpose is to assess and mitigate acquisition risks and estimate the operational effectiveness and suitability of the system being developed.

Test Asset

Any work product generated during the testing process, such as test plans, test scripts, and test data.

Test Automation

The process of controlling testing activities using a computer, rather than manual execution. It involves automating test execution and related tasks, also known as automated testing.

Test Bed

A configured execution environment used for testing, consisting of specific hardware, operating system, network topology, configuration of the product under test, and other software applications or systems.

Test Case

A collection of test inputs, execution conditions, and expected results created to achieve a specific objective, such as exercising a particular program path or verifying compliance with a requirement.

Test Case Specification

A document that specifies the test data to be used in executing the test conditions identified in the Test Design Specification.

Test Condition

A specific behavior or aspect of the system under test that needs to be verified.

Test Data

Data used to run through a computer program for testing purposes. It includes input data and file conditions associated with a particular test case.

Test Design

The process of selecting and specifying a set of test cases to meet the testing objectives or coverage criteria.

Test Design Specification

A document that provides detailed information about the test conditions, expected results, and test pass criteria, serving as a guideline for executing the tests.

Test-Driven Development

A programming technique emphasized in Extreme Programming (XP) where tests are written before implementing the code. The goal is to achieve rapid feedback and follow an iterative approach to software development.
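
A minimal sketch of the red-green cycle with Python's unittest: the test below is written first (it fails while slugify() is missing), and only then is the simplest passing implementation added. Names are illustrative assumptions:

```python
import unittest

def slugify(title: str) -> str:            # minimal code written to make the test pass
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):  # written before the implementation
        self.assertEqual(slugify("Hello World"), "hello-world")

if __name__ == "__main__":
    unittest.main()
```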

Test Environment

The hardware and software environment in which tests will be conducted, including any other software or systems with which the software under test interacts. It may involve the use of stubs and test drivers.

Test Harness

A test environment consisting of stubs and drivers required to execute a test.

Test Incident Report

A document that provides detailed information about any failed tests, including the actual versus expected results, and offers insights into the reasons for the failure.

Test Item Transmittal Report

A document reporting the progress of tested software components from one testing stage to the next.

Test Log

A chronological record containing relevant details related to testing activities, such as the order of test case execution, the individuals who performed the tests, and the pass/fail status of each test.

Test Plan

A document that describes the scope, approach, resources, and schedule of intended testing activities. It identifies the test items, features to be tested, testing tasks, responsible individuals, and any risks requiring contingency planning.

Test Procedure

A document that defines the steps required to execute a part of a test plan or a set of test cases.

Test Procedure Specification

A document that provides detailed instructions on how to run each test, including any preconditions for setup and the sequence of steps to be followed.

Test Script

A document, program, or object that specifies the details of each test and subtest in a test suite. It includes the object to be tested, requirements, initial state, inputs, expected outcomes, and validation criteria.

Test Strategy

A high-level description of the test levels and approaches to be followed for testing activities within an organization or program.

Test Suite

A collection of one or more tests aimed at a specific object, sharing a common purpose and database, and often executed as a set.

Test Summary Report

A management report that highlights important findings from the conducted tests. It includes an assessment of the quality of the testing effort, the software system under test, and statistics derived from incident reports.

Test Tool

A software product used to support various testing activities, such as planning, control, specification, test execution, and test analysis.

Test Validity

The degree to which a test achieves its specified goal.

Testability

The ease or cost of testing a software artifact, considering the available tools, processes, and requirements clarity. It can also refer to the degree to which a software artifact supports testing in a specific test context.

Tester

A skilled professional involved in the testing of a software component or system.

Testing

The process of executing a program with the intention of finding errors. It involves activities such as designing, debugging, and executing tests to evaluate software attributes and ensure compliance with requirements.

Testware

Software, and sometimes data, used for testing purposes.

Thread Testing

Testing that demonstrates key functional capabilities by testing a sequence of units that collectively perform a specific function in the application.

Top Down Testing

An integration testing approach where higher-level components are tested first, while lower-level components are replaced by stubs.

Total Defect Containment Effectiveness (TDCE)

A metric that measures the effectiveness of defect detection techniques in identifying defects before the product is released. It is calculated as the number of defects found prior to release divided by the total number of defects found, including those found after release.

Traceability

The ability to identify related items, such as requirements and tests, in documentation and software.

Unit

The smallest testable entity, often corresponding to a subroutine or a component. It is a compilable program segment that does not include called subroutines or functions.

Unit Testing

The testing of individual software components or a collection of components. It involves testing units in isolation, replacing called components with simulators or stubs.

Unreachable Code

Code that can never be executed because no control-flow path in the program leads to it.

Usability Testing

Testing performed to assess the extent to which a software product is understood, easy to learn, easy to operate, and attractive to users under specified conditions.

Use Case

A technique for capturing potential requirements of a new system or software change. It includes scenarios that convey how the system should interact with users or other systems to achieve specific business goals.

User Acceptance Testing (UAT)

A formal evaluation performed by customers as a condition of purchase. It is carried out to determine whether the software satisfies acceptance criteria and should be accepted by the customer.

User Interface (UI)

The means by which people interact with a system, providing input and output functionalities.

User Interface Freeze

The point in the development process where no changes are permitted to the user interface. It is essential for creating documentation, screenshots, and marketing materials.

V-Model

A framework that illustrates the software development lifecycle activities from requirements specification to maintenance. It emphasizes the integration of testing activities at each phase of the development process.

Validation Testing

The process of evaluating software at the end of the development process to ensure compliance with requirements.

Verification

The process of evaluating a system or component to determine whether it satisfies the conditions imposed at the start of the development phase. It involves reviewing, inspecting, testing, and checking to ensure compliance with requirements.

Vertical Traceability

The tracing of requirements through layers of development documentation to components.

Volume Testing

Subjecting the program to heavy volumes of data to assess its behavior under such conditions.

Walkthrough

A review process where the designer guides others through a segment of design or code they have written, explaining its details and intent.

White Box Testing

Testing conducted based on the internal structure and implementation details of the software. It involves using instrumentation to systematically test every aspect of the product.

Zero Bug Bounce (ZBB)

The moment in a project where all features are complete, and every work item is resolved, indicating that the end is within sight. However, this moment is often short-lived as new issues may arise during extended system testing, requiring further work.
