The Ultimate Testing Terms Glossary: A Reference for Testing Professionals
Testing is a critical phase in software development that ensures the quality, reliability, and functionality of a product. However, the field of testing is filled with a multitude of technical terms and jargon that can be overwhelming, especially for beginners. This glossary is designed to demystify testing terminology and provide you with clear and concise definitions of key terms.
Whether you are studying for a certification, preparing for an interview, or simply seeking to expand your testing vocabulary, the Testing Terms Glossary is here to support you on your testing journey. Let’s dive in and unlock the power of testing terminology together!
Testing Terms Glossary
The set of conditions that a user, customer, or any authorized entity requires a component or system to meet before it can be considered approved or accepted.
The procedure in which users test a software product or system and decide whether to approve or reject it based on the outcomes. It involves evaluating the program against its initial requirements and the current needs of its end users. This type of test is unusual in that it is typically carried out by the software’s customer or end users rather than by the development organization. Its purpose is to assess the system’s readiness for implementation or use. It is a formal testing process that determines whether the system meets its acceptance criteria and allows the customer to decide whether to accept or reject it.
Ensuring that a product is usable and available to individuals with disabilities.
The ability of the software product to deliver the intended or agreed-upon outcomes or effects with the required level of accuracy.
The act of searching for software bugs spontaneously and in an ad hoc manner. This type of testing is conducted without formal preparation, without following established test design techniques, and without any specific expectations for results; test execution is guided by randomness or arbitrary choices.
Agile Testing integrates testing throughout development, emphasizing continuous feedback and collaboration. It employs test-driven development, automation and adapts to changing requirements. The approach catches defects early, maintains code quality, and improves efficiency. Agile Testing delivers high-quality software through rapid iterations and effective communication.
A combinatorial software testing technique that involves testing all possible combinations of input parameters in pairs for a system, usually a software algorithm.
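To make the idea concrete, here is a minimal Python sketch (not a production pairwise tool; the parameter names and values are illustrative assumptions) that enumerates every required pair of parameter values and then greedily picks full combinations until all pairs are covered:

```python
# Minimal pairwise-coverage sketch; parameters and values are illustrative.
from itertools import combinations, product

parameters = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "Linux", "macOS"],
    "locale": ["en", "de"],
}
names = list(parameters)

# Every pair of values (across two different parameters) that must appear
# together in at least one test case.
required_pairs = set()
for a, b in combinations(names, 2):
    for va, vb in product(parameters[a], parameters[b]):
        required_pairs.add(((a, va), (b, vb)))

# Greedily pick full combinations until all pairs are covered; the result is
# usually far smaller than the full Cartesian product.
tests, uncovered = [], set(required_pairs)
for candidate in product(*(parameters[n] for n in names)):
    case = dict(zip(names, candidate))
    covered = {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}
    if covered & uncovered:
        tests.append(case)
        uncovered -= covered
    if not uncovered:
        break

print(f"{len(tests)} test cases cover all {len(required_pairs)} pairs")
```

For the three parameters above, the full Cartesian product contains 12 combinations, while a pair-covering subset is typically noticeably smaller.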
Alpha testing is an internal testing process carried out by the manufacturer to verify the functionality and capabilities of a new product. It involves following a set of testing procedures to assess the product’s performance. This testing phase occurs before beta testing and is conducted by the manufacturer’s test team, along with potentially other interested individuals within the company.
An anomaly refers to any situation that deviates from the expected outcome as defined by requirements specifications, design documents, user documents, standards, or personal perception and experience. These deviations can be identified during various activities such as reviews, testing, analysis, compilation, or utilization of software products and related documentation. Anomalies encompass any discrepancies or deviations encountered throughout these processes, signaling deviations from the norm or expected behavior.
In computer science, an application programming interface (API) is a defined interface that outlines how an application program can interact and request services from libraries or operating systems. It specifies the methods and protocols for communication, enabling seamless integration between different software components.
The software or application under consideration for the testing process.
A separate assessment of software products or processes to determine adherence to standards, guidelines, specifications, and procedures using objective criteria. This evaluation includes documents that outline the required form or content of the products, the process for their production, and the methods for measuring compliance with standards or guidelines.
A traceability path that enables tracking the original input of a process, such as data, back through the process using the process output as a starting point. This traceability aids in defect analysis and supports process audits.
Automated testing refers to using computer-based tools and systems to perform testing tasks, reducing or eliminating the need for manual intervention.
The level of readiness and availability of a component or system to be used as intended. This is commonly represented as a percentage and indicates how operational and accessible it is.
Back-to-Back testing involves running multiple versions or variants of a component or system using the same inputs, comparing their outputs, and analyzing any potential differences.
A formally reviewed or agreed upon specification or software product that serves as the foundation for further development and can only be modified through a formal change control process. It can also refer to a set of established values or observations representing the background level of a measurable quantity.
A testing approach where tests are defined in terms of externally observable inputs, outputs, and events. The design of these tests can utilize various sources of information.
1. A reference standard used for measurements or comparisons.
2. A test that compares components or systems to each other or a predefined standard.
An integration testing method where software elements, hardware elements, or both are combined all at once into a component or system without staged integration.
Testing conducted at customer sites by end users before the software product or system is made widely available. Typically, friendly users participate in this testing phase.
Software developed specifically for a set of users or customers, as opposed to off-the-shelf software.
Testing performed without knowledge of the internal workings of the product. Testers rely on external sources for information about how the product should run, potential risks, expected errors, and error handling.
A testing approach that focuses on identifying overall patterns of unexpected changes, such as rapidly comparing web pages or scrolling through log files.
A test case that cannot be executed due to unfulfilled preconditions.
An integration testing approach where lower-level components are tested first, facilitating the testing of higher-level components.
A specific value at the extreme edges of a variable or equivalence class subset.
Guided testing that explores values at and near the minimum and maximum allowed values for a particular input or output.
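As an illustration, the following sketch assumes a hypothetical validate_age() function that accepts integers from 18 to 65 inclusive, and exercises values at and just outside those boundaries:

```python
# Minimal boundary-value sketch; validate_age() is an illustrative assumption.
import pytest

def validate_age(age: int) -> bool:
    return 18 <= age <= 65

@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (64, True),   # just below the upper boundary
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert validate_age(age) == expected
```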
The percentage of branches covered by a test suite.
A private build of a product used to verify code changes before they are integrated into the main code base.
An error, flaw, or fault in a computer program that hinders its correct functioning or produces incorrect results.
In-house testing involving various individuals from different roles to identify bugs before the software release.
A review meeting to assess and prioritize active bugs reported against the system under test.
Tests performed on each new build to confirm its testability and mainstream functionality before it is handed to the testing team.
A testing approach based on descriptions or knowledge of business processes to design test cases.
A staged framework that outlines the essential elements of an effective software process. It covers best practices for software development and maintenance.
A framework that defines the key elements of an efficient product development and maintenance process. It encompasses best practices for planning, engineering, and managing product development.
Testing conducted to determine the maximum number of users a computer or a set of computers can support.
A test execution tool that records inputs during manual testing to generate automated test scripts for later execution. Often used for automated regression testing.
A visual diagram used to analyze factors contributing to an overall effect. Also known as a Fishbone Diagram or Ishikawa Diagram.
A term describing the frequency of changes that occur in a file or module over a specific period.
The stage in which a developer considers all the necessary code for implementing a feature to be checked into source control.
An analysis method used to determine which parts of the software have been executed by the test suite. It includes statement coverage, decision coverage, or condition coverage.
The point in the development process where no changes are permitted to the source code of a program.
A user interface where commands are entered via keyboard input, and the system provides output on the monitor.
Software products developed for the general market and delivered in an identical format to multiple customers.
The ability of a software product to adhere to standards, conventions, regulations, or legal requirements.
Testing conducted to determine the extent to which a component or system complies with specified requirements.
The process of subdividing an object-oriented software system into units and validating their responses to stimuli applied through their interfaces.
Testing partially or fully automated by another program.
Testing conducted to assess how a component or system handles the occurrence of two or more activities within the same time interval, either through interleaving or simultaneous execution.
A discipline that applies technical and administrative control to identify, document, and manage changes to the characteristics of a configuration item.
The degree of uniformity, standardization, and absence of contradiction among documents or parts of a system or component.
The degree to which software is free from errors.
A series of dependent tasks in a project that must be completed as planned to keep the entire project on schedule.
Compatibility testing aimed at ensuring a web application functions correctly across different browsers and versions.
A computer security exploit where information from an untrusted context is inserted into a trusted context to launch an attack.
Software developed specifically for a set of users or customers, as opposed to off-the-shelf software.
A measure of the soundness and confidence in a program, assessing the number of independent paths through a program module.
A development practice where a complete system is compiled and linked on a daily basis to ensure a consistent system with the latest changes is available at any time.
Testing approach where test cases are parameterized by external data values, often stored in files or spreadsheets. It is commonly used in automated testing to execute multiple tests using a single control script.
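A minimal sketch of the approach, assuming a hypothetical discount_price() function under test and an in-memory stand-in for the external data file:

```python
# Minimal data-driven testing sketch: one test procedure runs once per data row.
import csv
import io

# Stand-in for an external CSV or spreadsheet of inputs and expected results.
test_data = io.StringIO("price,discount,expected\n100,0.1,90\n200,0.25,150\n")

def discount_price(price: float, discount: float) -> float:
    return price * (1 - discount)

for row in csv.DictReader(test_data):
    actual = discount_price(float(row["price"]), float(row["discount"]))
    assert actual == float(row["expected"]), row
```

The control logic stays in one script, while new cases are added simply by appending rows to the data source.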
The process in which developers identify the root cause of a bug and propose potential fixes. Debugging is performed to resolve known bugs either during subsystem or unit development or in response to bug reports.
The percentage of decision outcomes exercised by a test suite. Achieving 100% decision coverage implies complete branch coverage and statement coverage.
A flaw in a component or system that can cause it to fail in performing its intended function, such as an incorrect statement or data definition. Encountering a defect during execution may lead to a failure of the component or system.
The number of defects identified in a component or system divided by its size, typically measured in terms of lines of code, number of classes, or function points.
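As a worked illustration (the figures are assumptions, not taken from any real project), defect density is often expressed per thousand lines of code:

```latex
\text{defect density} = \frac{\text{number of defects found}}{\text{size of the component}}
\quad\text{e.g.}\quad \frac{30\ \text{defects}}{15\ \text{KLOC}} = 2\ \text{defects per KLOC}
```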
The ratio of undetected defects that make their way into production to the total number of defects.
Occurs when one defect prevents the detection of another defect.
Activities involved in identifying and preventing the introduction of defects into a product.
The ratio of rejected defect reports, which may be due to them not being actual bugs, to the total number of defects.
Defect Removal Efficiency (DRE): The ratio of defects found during development to the total number of defects, including those discovered in the field after release.
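Expressed as a formula, with purely illustrative figures:

```latex
\text{DRE} = \frac{\text{defects found before release}}{\text{defects found before release} + \text{defects found after release}}
\quad\text{e.g.}\quad \frac{90}{90 + 10} = 0.9\ (90\%)
```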
Something that causes a defect to occur or become observable. For example, low disk space often triggers the appearance of defects.
A noticeable departure from the norm, plan, standard, procedure, or reviewed variable.
A metric that does not rely on the measurement of any other attribute.
A complex and frequently modified system that is often inadequately documented. It may or may not be considered a “legacy” system.
Testing conducted at multiple locations, involving multiple teams, or both.
The set of valid input and/or output values that can be selected.
Driver: A software component or test tool that replaces another component, handling the control and/or calling of a component or system.
The practice of a company using pre-release versions of its own software for day-to-day operations, ensuring reliability before selling. It promotes early feedback on product value and usability.
The individual or group who will use the system for its intended operational use in its deployed environment.
Decision-making guidelines used to determine if a system under test is ready to progress to a specific testing phase. Entry criteria become more stringent as the test phases advance.
A portion of input or output where the behavior of a component or system is assumed to be the same based on specifications.
An incorrect result produced by a computer or a software flaw that deviates from user expectations.
A test design technique that utilizes tester experience to anticipate and design tests specifically to expose defects resulting from errors made during development.
Intentionally adding known defects to existing ones in a component or system to monitor detection, removal rates, and estimate the remaining defects.
To report a problem to higher management for resolution.
An approach where the test suite encompasses all combinations of input values and preconditions.
Decision-making guidelines used to determine if a system under test is ready to exit a specific testing phase. When exit criteria are met, the system proceeds to the next phase or the test project is considered complete.
Predicted output data and file conditions associated with a specific test case.
Test design and execution conducted simultaneously, allowing testers to adapt and optimize test-related activities throughout the project.
An agile software development methodology emphasizing collaboration, customer involvement, and iterative development.
Deviation of a component or system from its expected delivery, service, or result.
A specific way in which a failure manifests itself, such as symptoms, behaviors, or internal state changes.
A systematic approach to identify and analyze potential failure modes to prevent their occurrence.
The process of evaluating an object to demonstrate that it fails to meet requirements.
An incorrect step, process, or data definition in a computer program.
Intentionally introducing errors into code to evaluate the ability to detect such errors.
An engineering model depicting potential errors that could occur in a system under test.
The ability of a software product to maintain a specified level of performance in the presence of software faults or interface infringements.
A desirable behavior, computation, or value produced by an object. Requirements are composed of features.
The stage in the development process where work on adding new features is halted, focusing on bug fixing and enhancing the user experience.
The phase marking the completion of a project, indicating the product is ready for purchase and usage by customers before general availability.
A visual diagram illustrating factors contributing to an overall effect. Also known as a Cause-And-Effect Diagram or Ishikawa Diagram.
The process of conducting testing activities and reporting test results according to an approved test plan.
The initial definition of a proposed system, documenting goals, objectives, user requirements, management requirements, operating environment, and design methodology.
Testing performed to verify that the functions of a system are present as specified, without considering the source code structure.
A software testing technique that inputs random data (“fuzz”) into a program to detect failures, such as crashes or violated code assertions.
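A minimal sketch of the idea, assuming a hypothetical parse_record() function as the code under test; random byte strings are fed in, and only unexpected exceptions are reported:

```python
# Naive fuzzing sketch; parse_record() is an illustrative stand-in for the
# code under test, not part of any real library.
import random

def parse_record(data: bytes) -> dict:
    name, _, age = data.partition(b",")
    return {"name": name.decode("ascii"), "age": int(age)}

random.seed(0)
for _ in range(1000):
    fuzz = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
    try:
        parse_record(fuzz)
    except (ValueError, UnicodeDecodeError):
        pass                      # expected, handled failure modes
    except Exception as exc:      # anything else is a finding worth triaging
        print(f"input {fuzz!r} raised {exc!r}")
```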
An evaluation of the disparity between required or desired conditions and the current state of affairs.
The stage in which the product is fully complete and manufactured in sufficient quantity for purchase by all anticipated customers.
Thoroughly testing the code with in-depth knowledge, often performed by the programmer.
User interfaces that accept input via devices like a keyboard and mouse, providing graphical output on a monitor.
Combining elements of both Black Box and White Box testing, testing software against its specification while having partial knowledge of its internal workings.
A default scenario without exceptional or error conditions, representing a well-defined test case that executes without exceptions and produces an expected output.
Tracing requirements for a test level across various layers of test documentation, such as test plan, test design specification, test case specification, and test procedure specification.
An IEEE standard specifying the format of software test documentation, including various documents like test plan, test design specification, test case specification, and more.
An operational event outside the norm of system operation, with potential impact on the system.
A document reporting an event occurring during testing that requires investigation.
A methodical approach to testing interfaces between unit-tested programs or system components, which can be done in top-down or bottom-up fashion.
Verification and validation of a software product performed by an independent organization separate from the designer.
Occurs when a program encounters an error condition on the first invalid variable, so subsequent values are never tested.
A formal evaluation technique involving a detailed examination of a system or component by individuals other than the author to detect faults and problems.
The capability of the software to be successfully installed in a specified environment.
Testing the combined functionality of individual software modules as a group, following unit testing and preceding system testing.
Testing conducted to ensure proper passing of information and control between program or system components.
The process of designing and coding a product to function properly when adapted for use in different languages and locales.
A diagram used to analyze factors contributing to an overall effect, also known as a Fishbone Diagram or Cause-And-Effect Diagram.
A scripting technique using data files containing test inputs, expected outcomes, and keywords related to the application being tested.
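A minimal sketch of a keyword-driven runner; the keywords, actions, and table rows are illustrative assumptions rather than any particular tool's syntax:

```python
# Minimal keyword-driven sketch: each row pairs a keyword with its arguments,
# and the runner dispatches to a matching action.
actions = {
    "open_page":  lambda url: print(f"opening {url}"),
    "enter_text": lambda field, value: print(f"typing '{value}' into {field}"),
    "click":      lambda element: print(f"clicking {element}"),
}

test_table = [
    ("open_page",  ["https://example.com/login"]),
    ("enter_text", ["username", "alice"]),
    ("click",      ["login_button"]),
]

for keyword, args in test_table:
    actions[keyword](*args)
```

Non-programmers can then express new tests as table rows, while the keyword implementations are maintained separately.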
A temporary or substandard solution applied to address an urgent problem, often believed to keep a project moving forward.
A bug that exists in the system under test but has not yet been discovered.
The process where links on a website become irrelevant or broken over time due to linked websites disappearing, content changes, or redirects.
Subjecting a system to a statistically representative load to assess software reliability and performance.
The process of translating messages, documentation, and modifying locale-specific files on an internationalized base product.
Testing the system’s behavior when critical resources like memory or disk space are low or depleted.
The effort required to make changes in software without introducing errors.
The average time elapsed between system failures.
The average time needed to fix bugs.
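For illustration, the two averages above are commonly computed as simple ratios over an observation period:

```latex
\text{MTBF} = \frac{\text{total operating time}}{\text{number of failures}}
\qquad
\text{MTTR} = \frac{\text{total repair time}}{\text{number of repairs}}
```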
Acronym for “My Eyes Glazed Over,” referring to a loss of focus and attention while attempting to comprehend dense technical documents.
Unintentional consumption of memory by a program where it fails to release memory it no longer needs.
A set of rules and criteria providing a precise and repeatable approach to perform a task and achieve a desired outcome.
A collection of methods, procedures, and standards that integrate engineering approaches to product development.
A quantitative measure of the extent to which a system, component, or process possesses a specific attribute.
A scheduled event used to measure progress and hold the project accountable.
Testing by randomly inputting strings or pushing buttons to identify product breakages or vulnerabilities.
Modifying the system/program under test to create faulty versions (mutants) and running them through test cases to identify failures, thereby assessing test coverage and identifying additional tests.
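A minimal sketch of the underlying idea, using a hand-written mutant rather than a real mutation tool; the add() function and its test are illustrative assumptions:

```python
# Mutation-testing sketch: a single "mutant" is created by swapping an
# operator, and the existing test is run against it.
def add(a, b):          # original implementation
    return a + b

def add_mutant(a, b):   # mutant: '+' replaced by '-'
    return a - b

def test_add(fn):
    return fn(2, 3) == 5

print("original passes:", test_add(add))             # True
print("mutant killed:  ", not test_add(add_mutant))  # True: the test detects the mutation
```

A mutant that survives the test suite points to a gap in coverage and suggests an additional test.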
Testing primarily focused on falsification or intentionally breaking the software, also known as Dirty Testing.
Testing the attributes of a component or system that do not relate to functionality, such as reliability, efficiency, usability, maintainability, and portability.
Any means used to predict the outcome of a test, including comparing actual and expected outputs or using a tool to determine if the program passed the test.
A systematic approach to test all possible combinations of variables using orthogonal arrays, reducing the number of combinations to test.
Testing performed by end users in the normal operating environment of the software.
A black box test design technique where test cases are designed to cover all possible combinations of pairs of input parameters.
Analyzing defects by ranking causes from most to least significant, based on the principle that a majority of effects come from a small number of causes (80% of effects from 20% of causes).
Assessing a system or component to determine how effectively it achieves operational objectives.
Testing conducted to evaluate a component or system's compliance with specified performance requirements, or to identify where it fails to meet its performance objectives.
The phenomenon where the more a software is tested, the more immune it becomes to those tests, similar to how insects develop resistance to pesticides.
Testing primarily focused on validation or demonstrating the correct functioning of the software, also known as Clean Testing.
Indicates when a particular bug should be fixed, with priorities changing as the project progresses.
An incomplete implementation of software that mimics the expected behavior of the final product.
A series of values that appear random but are actually generated according to a predetermined sequence.
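A minimal sketch using a linear congruential generator: the output looks random, but a fixed seed reproduces exactly the same sequence, which is what makes such values useful for repeatable tests. The constants are common illustrative choices, not a recommendation:

```python
# Linear congruential generator sketch; values are deterministic given the seed.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen = lcg(seed=42)
print([next(gen) for _ in range(3)])  # identical sequence on every run with seed 42
```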
The ability of a product, system, or process to fulfill customer requirements and expectations, conform to standards, and meet user needs.
The process of maintaining a desired level of quality in a service or product throughout its delivery or production.
Testing aimed at finding bugs and ensuring the quality of the product.
A management-oriented attribute of software that contributes to its overall quality.
A milestone in a project where a specific level of quality must be achieved before proceeding.
A subset of planned or defined test cases that cover the essential functionality of a component or system. It aims to verify that the critical functions of a program are working, without focusing on finer details. Daily build and Quick Confidence Tests are considered industry best practices.
Checking whether a system effectively prevents, detects, and recovers from operational problems such as network failures, database unavailability, equipment issues, and operator errors.
The capability of the software to restore a specified level of performance and recover affected data in case of failure.
Testing performed after making functional improvements or repairs to the program, aiming to ensure that the changes have not caused unintended adverse effects.
A product build undergoing final testing before shipment, where all code is complete and known bugs have been fixed.
Tests conducted to ensure that thoroughly tested and approved code is correctly installed and configured in the production environment.
The ability of the software product to perform its required functions under stated conditions for a specified period or number of operations.
A technique that involves repeating a function or scenario until reaching a specified limit or threshold, or until an undesirable action occurs.
A condition or capability needed by a user to solve a problem or achieve an objective.
Descriptions of what an object should do and the characteristics it should have, including consistency, completeness, implementability, and falsifiability.
Running test cases that previously failed to verify the success of corrective actions.
The possibility of experiencing loss.
The degree to which a system or component can function correctly in the presence of invalid inputs or challenging environmental conditions.
The underlying reason for a bug, as opposed to the observable symptoms of the bug.
A quick test of the main components of a system to determine if it is functioning as expected, without conducting in-depth testing. Often used interchangeably with Smoke Testing.
The ability of a software system to handle larger or smaller volumes of data and accommodate more or fewer users. It involves the capability to adjust size or capacity incrementally and cost-effectively with minimal impact on unit costs and additional services procurement. It refers to how well the system performs as the problem size increases.
A description of a task performed by an end user, which may or may not be implemented in the current product. Scenarios often involve the use of multiple features.
Refers to the relative impact or consequence of a bug, and typically remains unchanged unless further insights into hidden consequences are discovered. It indicates the impact of a bug on the system under test, regardless of its likelihood of occurrence or the extent to which it hinders system usage. Contrasts with priority.
A quick test of the primary components of a system to determine if it is operating as expected, without conducting detailed testing. Often used interchangeably with Smoke Testing.
In Smart Monkey Testing, inputs are generated based on probability distributions that reflect expected usage statistics, such as user profiles. It can involve different levels of intelligence (IQ), considering the correlation between input distributions. The input is treated as a single event.
A subset of planned or defined test cases that cover the essential functionality of a component or system. It aims to verify that the critical functions of a program are working, without focusing on finer details. Daily build and smoke testing are considered industry best practices.
Testing a system under a significant load over an extended period to observe its behavior under sustained use.
An independent review conducted to assess compliance with requirements, specifications, standards, procedures, codes, contracts, and licensing requirements, among others.
Human action resulting in software containing a fault that may cause a failure.
A deviation of system operation from specified requirements due to a software error.
The collection of features and characteristics of a software product that affect its ability to meet given needs.
The function responsible for ensuring that project standards, processes, and procedures are appropriate and correctly implemented.
The probability that software will not cause system failure for a specified time and under specified conditions.
A tangible expression of requirements, such as a document, feature list, prototype, or test suite. Specifications are typically incomplete, as many requirements are implicitly understood. It is a mistake to assume that all requirements are expressed in the specification.
A hacking technique that attempts to pass SQL commands through a web application’s user interface for execution by the backend database.
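A minimal sketch contrasting an injectable query with a parameterized one, assuming an in-memory sqlite3 database with a hypothetical users table:

```python
# SQL injection sketch: string-built query vs. parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"

# Vulnerable: attacker-controlled input becomes part of the SQL statement.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print("injected query returns:", conn.execute(unsafe).fetchall())

# Safe: the driver treats the input strictly as data, not as SQL.
safe = "SELECT * FROM users WHERE name = ?"
print("parameterized query returns:", conn.execute(safe, (user_input,)).fetchall())
```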
Mandatory requirements enforced to establish a disciplined and uniform approach to software development.
The percentage of executable statements that have been executed by a test suite.
A description of all the work required to complete a project, provided by the customer.
Analysis of software artifacts, such as requirements or code, without executing them.
A form of software testing where the software is not actually used. It involves checking the code, algorithm, or document for syntax errors or manually reviewing them for errors.
A lightweight or incomplete plan, such as the initial draft of a test plan or hardware allocation plan, serving as a starting point for discussion and providing a framework for developing a more concrete plan.
Subjecting a program to heavy loads or stresses to observe its behavior. Stress testing differs from volume testing, as it involves a peak volume of data encountered over a short period.
A test phase that focuses on finding bugs in typical usage scripts and operational or control-flow “strings” of a software development project. This test phase is relatively uncommon.
Testing that focuses on the flow of control within the program, testing different execution paths and data relationships. It is sometimes confused with glass box testing, which encompasses other considerations beyond program structure.
Tests based on the operational behavior of a computer system, either at the code, component, or design level. They aim to find bugs in operations at different levels, such as lines of code, chips, subassemblies, and interfaces. Also referred to as white-box tests, glass-box tests, code-based tests, or design-based tests.
A simplified or specialized implementation of a software component used during development or testing to replace a dependent component and facilitate testing.
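A minimal sketch, assuming a hypothetical payment-gateway dependency; the stub returns a canned response so the calling logic can be tested in isolation:

```python
# Stub sketch: a stand-in for a real payment gateway dependency.
class PaymentGatewayStub:
    def charge(self, amount):
        return {"status": "approved", "amount": amount}  # canned response

def place_order(gateway, amount):
    result = gateway.charge(amount)
    return result["status"] == "approved"

def test_place_order_with_stub():
    assert place_order(PaymentGatewayStub(), 100) is True
```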
Positive tests conducted to demonstrate the correct functioning of the system.
Acronym for Scientific Wild-Ass Guess, referring to an educated guess or estimate made during test scheduling activities in the early stages of development.
The probability that a system will perform a required task or mission for a specified time and under specified environmental conditions.
Testing conducted to explore system behaviors that cannot be adequately tested through unit, component, or integration testing. It includes testing performance, installation, data integrity, storage management, security, and reliability. System testing assumes that all components have been successfully integrated and is often performed by independent testers.
The system that is the target of the testing process. It refers to the specific software system being tested.
The word “test” is derived from the Latin word for an earthen pot or vessel, which was used to assess materials and measure their weight. This led to the expression “to put to the test,” meaning to determine the presence or measure the elements. It refers to an activity where a system or component is executed under specific conditions, and the results are observed or recorded to evaluate a particular aspect of the system or component.
In the context of the Department of Defense (DOD), test and evaluation (T&E) is the overall process of independent evaluation conducted throughout the system acquisition process. Its purpose is to assess and mitigate acquisition risks and estimate the operational effectiveness and suitability of the system being developed.
Any work product generated during the testing process, such as test plans, test scripts, and test data.
The process of controlling testing activities using a computer, rather than manual execution. It involves automating test execution and related tasks, also known as automated testing.
A configured execution environment used for testing, consisting of specific hardware, operating system, network topology, configuration of the product under test, and other software applications or systems.
A collection of test inputs, execution conditions, and expected results created to achieve a specific objective, such as exercising a particular program path or verifying compliance with a requirement.
A document that specifies the test data to be used in executing the test conditions identified in the Test Design Specification.
A specific behavior or aspect of the system under test that needs to be verified.
Data used to run through a computer program for testing purposes. It includes input data and file conditions associated with a particular test case.
The process of selecting and specifying a set of test cases to meet the testing objectives or coverage criteria.
A document that provides detailed information about the test conditions, expected results, and test pass criteria, serving as a guideline for executing the tests.
A programming technique emphasized in Extreme Programming (XP) where tests are written before implementing the code. The goal is to achieve rapid feedback and follow an iterative approach to software development.
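A minimal sketch of the test-first rhythm, assuming a hypothetical slugify() function: the test is written first (and fails), then the simplest implementation is added to make it pass:

```python
# Test-driven development sketch: the test below is written before slugify()
# exists ("red"); the implementation is then added to make it pass ("green").
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Testing Terms Glossary") == "testing-terms-glossary"

def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")
```

In practice the cycle then continues with refactoring while keeping the test green.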
The hardware and software environment in which tests will be conducted, including any other software or systems with which the software under test interacts. It may involve the use of stubs and test drivers.
A test environment consisting of stubs and drivers required to execute a test.
A document that provides detailed information about any failed tests, including the actual versus expected results, and offers insights into the reasons for the failure.
A document reporting the progress of tested software components from one testing stage to the next.
A chronological record containing relevant details related to testing activities, such as the order of test case execution, the individuals who performed the tests, and the pass/fail status of each test.
A document that describes the scope, approach, resources, and schedule of intended testing activities. It identifies the test items, features to be tested, testing tasks, responsible individuals, and any risks requiring contingency planning.
A document that defines the steps required to execute a part of a test plan or a set of test cases.
A document that provides detailed instructions on how to run each test, including any preconditions for setup and the sequence of steps to be followed.
A document, program, or object that specifies the details of each test and subtest in a test suite. It includes the object to be tested, requirements, initial state, inputs, expected outcomes, and validation criteria.
A high-level description of the test levels and approaches to be followed for testing activities within an organization or program.
A collection of one or more tests aimed at a specific object, sharing a common purpose and database, and often executed as a set.
A management report that highlights important findings from the conducted tests. It includes an assessment of the quality of the testing effort, the software system under test, and statistics derived from incident reports.
A software product used to support various testing activities, such as planning, control, specification, test execution, and test analysis.
The degree to which a test achieves its specified goal.
The ease or cost of testing a software artifact, considering the available tools, processes, and requirements clarity. It can also refer to the degree to which a software artifact supports testing in a specific test context.
A skilled professional involved in the testing of a software component or system.
The process of executing a program with the intention of finding errors. It involves activities such as designing, debugging, and executing tests to evaluate software attributes and ensure compliance with requirements.
Software, and sometimes data, used for testing purposes.
Testing that demonstrates key functional capabilities by testing a sequence of units that collectively perform a specific function in the application.
An integration testing approach where higher-level components are tested first, while lower-level components are replaced by stubs.
A metric that measures the effectiveness of defect detection techniques in identifying defects before the product is released. It is calculated as the number of defects found prior to release divided by the total number of defects found, including those found after release.
The ability to identify related items, such as requirements and tests, in documentation and software.
The smallest testable entity, often corresponding to a subroutine or a component. It is a compilable program segment that does not include called subroutines or functions.
The testing of individual software components or a collection of components. It involves testing units in isolation, replacing called components with simulators or stubs.
Code that cannot be executed because it cannot be reached during program execution.
Testing performed to assess the extent to which a software product is understood, easy to learn, easy to operate, and attractive to users under specified conditions.
A technique for capturing potential requirements of a new system or software change. It includes scenarios that convey how the system should interact with users or other systems to achieve specific business goals.
A formal evaluation performed by customers as a condition of purchase. It is carried out to determine whether the software satisfies acceptance criteria and should be accepted by the customer.
The means by which people interact with a system, providing input and output functionalities.
The point in the development process where no changes are permitted to the user interface. It is essential for creating documentation, screenshots, and marketing materials.
A framework that illustrates the software development lifecycle activities from requirements specification to maintenance. It emphasizes the integration of testing activities at each phase of the development process.
The process of evaluating software at the end of the development process to ensure compliance with requirements.
The process of evaluating a system or component to determine whether it satisfies the conditions imposed at the start of the development phase. It involves reviewing, inspecting, testing, and checking to ensure compliance with requirements.
The tracing of requirements through layers of development documentation to components.
Subjecting the program to heavy volumes of data to assess its behavior under such conditions.
A review process where the designer guides others through a segment of design or code they have written, explaining its details and intent.
Testing conducted based on the internal structure and implementation details of the software. It involves using instrumentation to systematically test every aspect of the product.
The moment in a project where all features are complete, and every work item is resolved, indicating that the end is within sight. However, this moment is often short-lived as new issues may arise during extended system testing, requiring further work.