Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable users, customers, or other authorized entities to determine whether or not to accept the system.
The degree to which a product or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use.
The capability of the software product to provide the right or agreed results or effects with the needed degree of precision.
A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
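For illustration, a minimal keyword-driven sketch in Python (the keywords, data rows, and supporting scripts here are all hypothetical):

    # Each data row holds a keyword, test data, and an expected result; the
    # control script looks the keyword up and calls the supporting script.
    def add_item(data):
        return "added " + data

    def remove_item(data):
        return "removed " + data

    supporting_scripts = {"add": add_item, "remove": remove_item}

    rows = [("add", "book", "added book"),
            ("remove", "book", "removed book")]

    for keyword, data, expected in rows:
        actual = supporting_scripts[keyword](data)
        print(keyword, "PASS" if actual == expected else "FAIL")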
The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered.
Testing based on a systematic analysis of, e.g., product risks or requirements.
The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified.
A tool that carries out static analysis.
Repeated action, process, structure or reusable solution that initially appears to be beneficial and is commonly used but is ineffective and/or counterproductive in practice.
A condition that cannot be decomposed, i.e., a condition that does not contain two or more single conditions joined by a logical operator (AND, OR, XOR).
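For example, in Python:

    x, y = 3, 7
    a = x > 0                    # an atomic condition
    compound = x > 0 and y < 5   # compound: two atomic conditions joined by AND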
The capability of the software product to be attractive to the user.
The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage.
The process of intentionally adding defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. Fault seeding is typically part of development (pre-release) testing and can be performed at any test level (component, integration, or system).
A basic block that can be selected for execution based on a program construct in which one of two or more alternative program paths is available, e.g., case, jump, go to, if-then-else.
A logical expression that can be evaluated as True or False.
The coverage of all possible combinations of all single condition outcomes within one statement.
A white-box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).
The coverage of condition outcomes.
The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.
A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g., an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.
An abstract representation of calling relationships between subroutines in a program.
A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e., replayed). These tools are often used to support automated regression testing.
The process of confirming that a component, system or person complies with its specified requirements.
The capability of the software product to enable specified modifications to be implemented.
Testing based on an analysis of the internal structure of the component or system.
Acronym for Command-Line Interface.
The capability of the software product to co-exist with other independent software in a common environment sharing common resources.
Testing based on an analysis of the internal structure of the component or system.
A standard that describes the characteristics of a design or a design description of data or program components.
A black-box test design technique in which test cases are designed to execute specific combinations of values of several parameters.
A software product that is developed for the general market, i.e., for a large number of customers, and that is delivered to many customers in identical format.
The degree to which a component or system can exchange information with other components or systems, and/or perform its required functions while sharing the same hardware or software environment.
The set of generic and specific conditions, agreed upon with the stakeholders for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.
The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify.
The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions.
A minimal software item that can be tested in isolation.
Testing performed to expose defects in the interfaces and interactions between integrated components.
The testing of individual software components.
A logical expression that can be evaluated as True or False.
The coverage of all possible combinations of all single condition outcomes within one statement.
A white-box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).
The coverage of condition outcomes.
The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test suite. 100% modified condition / decision coverage implies 100% decision condition coverage.
A white-box test technique in which test cases are designed to exercise single condition outcomes that independently affect a decision outcome.
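A minimal sketch for the decision "A and B" (illustrative only): relative to the test (True, True), flipping either condition alone changes the outcome, which is exactly what each test pair must demonstrate.

    # Three tests achieve MC/DC for "A and B": each condition is flipped once
    # while the other is held at True, changing the decision outcome.
    def decision(a, b):
        return a and b

    for a, b in [(True, True), (True, False), (False, True)]:
        print(a, b, "->", decision(a, b))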
A white-box test design technique in which test cases are designed to execute condition outcomes.
The composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.
A discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.
Testing to determine the portability of a software product.
A sequence of events (paths) in the execution through a component or system.
A form of static analysis based on a representation of unique paths (sequences of events) in the execution through a component or system. Control flow analysis evaluates the integrity of control flow structures, looking for possible control flow anomalies such as closed loops or logically unreachable process steps.
An abstract representation of all possible sequences of events (paths) in the execution through a component or system.
A sequence of consecutive edges in a directed graph.
An approach to structure-based testing in which test cases are designed to execute specific sequences of events. Various techniques exist for control flow testing, e.g., decision testing, condition testing, and path testing, that each have their specific approach and level of control flow coverage.
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
A vulnerability that allows attackers to inject malicious code into an otherwise benign website.
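A minimal defensive sketch in Python, assuming the countermeasure is output escaping (one common defense among several):

    # Escaping untrusted input before it is rendered as HTML neutralizes an
    # injected script tag.
    from html import escape

    hostile = "<script>alert('xss')</script>"
    print(escape(hostile))   # &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;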
The maximum number of linearly independent paths through a program. Cyclomatic complexity may be computed as L - N + 2P, where L = the number of edges/links in a graph, N = the number of nodes in a graph, and P = the number of disconnected parts of the graph (e.g., a called graph or subroutine).
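A worked example on a hypothetical control flow graph:

    # A single routine (P = 1) whose control flow graph has 7 edges and
    # 6 nodes has cyclomatic complexity 7 - 6 + 2*1 = 3.
    L, N, P = 7, 6, 1
    print(L - N + 2 * P)   # -> 3 linearly independent paths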
The sequence of possible changes to the state of data objects.
A form of static analysis based on the definition and usage of variables.
A white-box test technique in which test cases are designed to execute definition-use pairs of variables.
A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools.
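A minimal data-driven sketch in Python (the function under test and the table are hypothetical):

    # One control script executes every (input, expected) row in the table.
    def square(n):            # stand-in for the application under test
        return n * n

    table = [(2, 4), (3, 9), (10, 100)]
    for value, expected in table:
        assert square(value) == expected, (value, expected)
    print("all rows passed")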
Code that cannot be reached and therefore is impossible to execute.
A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.
A program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.
The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.
A white-box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.
The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
The result of a decision (which therefore determines the branches to be taken).
A white-box test design technique in which test cases are designed to execute decision outcomes.
A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g., an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact.
A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.
The set of generic and specific conditions, agreed upon with the stakeholders for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.
The association of a definition of a variable with the subsequent use of that variable. Variable uses are either computational (e.g., multiplication) or predicate uses, which direct the execution of a path.
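An annotated Python fragment, for illustration:

    total = 0             # definition of total
    total = total + 5     # computational use (and a fresh definition)
    if total > 0:         # predicate use: it directs the execution path
        print("positive")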
A security attack that is intended to overload the system with requests such that legitimate requests cannot be serviced.
Any event occurring that requires investigation.
Testing a component or system in a way for which it was not intended to be used.
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
The process of evaluating behavior, e.g., memory performance, CPU usage, of a system or component during execution.
Testing that involves the execution of the software of a component or system.
(1) The capability of the software product to provide appropriate performance, relative to the amount of resources used, under stated conditions. (2) The capability of a process to produce the intended outcome, relative to the amount of resources used.
The process of encoding information so that only authorized parties can retrieve the original information, usually by means of a specific decryption key or process.
An executable statement or process step which defines a point at which a given process is intended to begin.
A human action that produces an incorrect result.
The process of intentionally adding defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. Fault seeding is typically part of development (pre-release) testing and can be performed at any test level (component, integration, or system).
A tool for seeding (i.e., intentionally inserting) faults in a component or system.
The set of generic and specific conditions, agreed upon with the stakeholders for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.
The behavior predicted by the specification, or another source, of the component or system under specified conditions.
A test is deemed to fail if its actual result does not match its expected result.
The status of a test result in which the actual result does not match the expected result.
The backup operational mode in which the functions of a system that becomes unavailable are assumed by a secondary system.
Testing by simulating failure modes or actually causing failures in a controlled environment. Following a failure, the failover mechanism is tested to ensure that data is not lost or corrupted and that any agreed service levels are maintained (e.g., function availability or response times).
Deviation of the component or system from its expected delivery, service or result.
The physical or functional manifestation of a failure.
A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g., an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
The process of intentionally adding defects to a system for the purpose of finding out whether the system can detect, and possibly recover from, a defect. Fault injection is intended to mimic failures that might occur in the field.
The process of intentionally adding defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. Fault seeding is typically part of development (pre-release) testing and can be performed at any test level (component, integration, or system).
A tool for seeding (i.e., intentionally inserting) faults in a component or system.
The capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface.
A distinguishing characteristic of a component or system.
A result of an evaluation that identifies some important issue, problem, or opportunity.
A computational model consisting of a finite number of states and transitions between those states, possibly with accompanying actions.
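A minimal sketch in Python using the classic turnstile example (the states and events are illustrative):

    # Two states, two events; unknown (state, event) pairs leave the state
    # unchanged in this sketch.
    transitions = {("locked", "coin"): "unlocked",
                   ("unlocked", "push"): "locked"}

    state = "locked"
    for event in ["coin", "push", "push"]:
        state = transitions.get((state, event), state)
        print(event, "->", state)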
A review characterized by documented procedures and requirements, e.g., inspection.
An integration approach that combines the components or systems for the purpose of getting a basic functionality working early.
Testing based on an analysis of the specification of the functionality of a component or system.
The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions.
Testing based on an analysis of the internal structure of the component or system.
A type of interface that allows users to interact with a component or system through graphical icons and visual indicators.
A pointer within a web page that leads to other web pages.
A tool used to check that no broken hyperlinks are present on a web site.
Any event occurring that requires investigation.
A measure that can be used to estimate or predict another measure.
Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data.
A variable (whether stored within a component or outside) that is read by a component.
A type of peer review that relies on visual examination of documents to detect defects, e.g., violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure.
The capability of the software product to be installed in a specified environment.
Supplied software on any suitable media which leads the installer through the installation procedure.
The process of combining components or systems into larger assemblies.
Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
The capability of the software product to interact with one or more specified components or systems.
Testing to determine the interoperability of a software product.
Testing a component or system in a way for which it was not intended to be used.
A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
The capability of the software product to enable the user to learn its application.
A test plan that typically addresses one test level.
Testing performed to expose defects in the interfaces and interactions between integrated components.
Documentation defining a designated number of virtual users who process a defined set of transactions in a specified time period that a component or system being tested may experience in production.
A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g., numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.
Testing based on an analysis of the internal structure of the component or system.
The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.
Testing to determine the maintainability of a software product.
Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment.
Testing the changes to an operational system or the impact of a changed environment to an operational system.
The interception, mimicking and/or altering and subsequent relaying of communications (e.g., credit card transactions) by a third party such that a user remains unaware of that third party's presence.
A test plan that typically addresses multiple test levels.
(1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. (2) The capability of the software product to avoid failure as a result of defects in the software.
Any model used in model-based testing.
The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test suite. 100% modified condition / decision coverage implies 100% decision condition coverage.
The average time between failures of a component or system.
The average time a component or system will take to recover from a failure.
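Worked arithmetic on hypothetical figures:

    # 900 hours of operation with 3 failures, repaired in 2, 1, and 3 hours.
    uptime_hours, failures = 900, 3
    repair_hours = [2, 1, 3]
    print("MTBF:", uptime_hours / failures)                 # 300.0 hours
    print("MTTR:", sum(repair_hours) / len(repair_hours))   # 2.0 hours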
The number or category assigned to an attribute of an entity by making a measurement.
The process of assigning a number or category to an entity to describe an attribute of that entity.
A memory access failure due to a defect in a program's dynamic store allocation logic that causes it to fail to release memory after it has finished using it, eventually causing the program and/or other concurrent processes to fail due to lack of memory.
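A Python sketch of the pattern; in a garbage-collected language the defect usually shows up as references retained forever rather than a missing free():

    # Each call allocates a large buffer and parks a reference in a
    # module-level list that is never cleared, so memory use only grows.
    _retained = []

    def handle_request(payload):
        buffer = bytearray(1_000_000)   # dynamically allocated store
        _retained.append(buffer)        # defect: the reference is never released
        return len(payload)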
A measurement scale and the method used for measurement.
A human action that produces an incorrect result.
Testing based on or involving models.
The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test suite. 100% modified condition / decision coverage implies 100% decision condition coverage.
A white-box test technique in which test cases are designed to exercise single condition outcomes that independently affect a decision outcome.
A minimal software item that can be tested in isolation.
The testing of individual software components.
A software tool or hardware device that runs concurrently with the component or system under test and supervises, records and/or analyzes the behavior of the component or system.
The coverage of all possible combinations of all single condition outcomes within one statement.
A white-box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).
Testing a component or system in a way for which it was not intended to be used.
A form of integration testing where all of the nodes that connect to a given node are the basis for the integration testing.
Testing the attributes of a component or system that do not relate to functionality, e.g., reliability, efficiency, usability, maintainability and portability.
A software product that is developed for the general market, i.e., for a large number of customers, and that is delivered to many customers in identical format.
The capability of the software product to enable the user to operate and control it.
Operational testing in the acceptance test phase, typically performed in a (simulated) operational environment by operations and/or systems administration staff focusing on operational aspects, e.g., recoverability, resource-behavior, installability and technical compliance.
The representation of a distinct set of tasks performed by the component or system, possibly based on user behavior when interacting with the component or system, and their probabilities of occurrence. A task is logical rather than physical and can be executed over several machines or be executed in non-contiguous time segments.
A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).
The consequence/outcome of the execution of a test.
A variable (whether stored within a component or outside) that is written by a component.
A form of integration testing that targets pairs of components that work together, as shown in a call graph.
A black-box test design technique in which test cases are designed to execute all possible discrete combinations of each pair of input parameters.
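A sketch that enumerates the pair combinations such a test set must cover (the parameters are hypothetical; algorithms that pick a minimal covering set of test cases, e.g., IPOG, are beyond a glossary example):

    from itertools import combinations, product

    params = {"browser": ["chrome", "firefox"],
              "os": ["linux", "windows"],
              "locale": ["en", "de"]}

    # Every value pair, for every pair of parameters, must appear in at
    # least one test case of a pairwise test set.
    for (p1, values1), (p2, values2) in combinations(params.items(), 2):
        for v1, v2 in product(values1, values2):
            print(f"must cover: {p1}={v1}, {p2}={v2}")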
A sequence of consecutive edges in a directed graph.
A white-box test design technique in which test cases are designed to execute paths.
The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.
Testing to determine the performance of a software product.
A data item that specifies the location of another data item.
The ease with which the software product can be transferred from one hardware or software environment to another.
Testing to determine the portability of a software product.
Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.
A logical expression which evaluates to true or false to direct the execution path.
The behavior predicted by the specification, or another source, of the component or system under specified conditions.
A set of interrelated activities, which transform inputs into outputs.
A risk directly related to the test object.
Operational testing in the acceptance test phase, typically performed in a (simulated) operational environment by operations and/or systems administration staff focusing on operational aspects, e.g., recoverability, resource-behavior, installability and technical compliance.
A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources.
A risk related to management and control of the (test) project, e.g., lack of staffing, strict deadlines, changing requirements, etc.
A series which appears to be random but is in fact generated according to some prearranged sequence.
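A sketch of a linear congruential generator in Python (the constants are common textbook choices):

    # The output series looks random but is completely determined by the
    # seed and the constants; the same seed always yields the same series.
    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        while True:
            seed = (a * seed + c) % m
            yield seed

    gen = lcg(seed=42)
    print([next(gen) % 100 for _ in range(5)])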
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
Part of quality management focused on providing confidence that quality requirements will be fulfilled.
A feature or characteristic that affects an item's quality.
A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e., replayed). These tools are often used to support automated regression testing.
The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure.
Testing to determine the recoverability of a software product.
The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.
A model that shows the growth in reliability over time during continuous testing of a component or system as a result of the removal of defects that result in reliability failures.
Testing to determine the reliability of a software product.
The capability of the software product to be used in place of another specified software product for the same purpose in the same environment.
A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.
The capability of the software product to use appropriate amounts and types of resources, for example the amounts of main and secondary memory used by the program and the sizes of required temporary or overflow files, when the software performs its function under stated conditions.
The process of testing to determine the resource-utilization of a software product.
The consequence/outcome of the execution of a test.
An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough.
A factor that could result in future negative consequences.
The process of assessing identified project or product risks to determine their level of risk, typically by estimating their impact and probability of occurrence (likelihood).
The process of identifying and subsequently analyzing the identified project or product risk to determine its level of risk, typically by assigning likelihood and impact ratings.
The importance of a risk as defined by its characteristics impact and likelihood. The level of risk can be used to determine the intensity of testing to be performed. A risk level can be expressed either qualitatively (e.g., high, medium, low) or quantitatively.
The process of identifying risks using techniques such as brainstorming, checklists and failure history.
The importance of a risk as defined by its characteristics impact and likelihood. The level of risk can be used to determine the intensity of testing to be performed. A risk level can be expressed either qualitatively (e.g., high, medium, low) or quantitatively.
The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.
An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and the use of risk levels to guide the test process.
The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.
The degree to which a component or system can be adjusted for changing capacity.
Testing to determine the scalability of the software product.
Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data.
An attempt to gain unauthorized access to a component or system, resources, information, or an attempt to compromise system integrity.
A high-level document describing the principles, approach and major objectives of the organization regarding security.
Testing to determine the security of the software product.
A tool that supports operational security.
Testing to determine the maintainability of a software product.
The degree of impact that a defect has on the development or operation of a component or system.
A programming language/interpreter technique for evaluating compound conditions in which a condition on one side of a logical operator may not be evaluated if the condition on the other side is sufficient to determine the final outcome.
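Python's "and" and "or" behave this way, for example:

    def right_side():
        print("right side evaluated")
        return True

    result = False and right_side()  # prints nothing: False settles the AND
    result = True or right_side()    # prints nothing: True settles the OR
    result = True and right_side()   # prints "right side evaluated"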
The representation of selected behavioral characteristics of one physical or abstract system by another system.
A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs.
Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system.
A distinguishing characteristic of a component or system.
The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software lifecycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase. Note these phases may overlap or be performed iteratively.
A feature or characteristic that affects an item's quality.
A feature or characteristic that affects an item's quality.
Any event occurring that requires investigation.
An entity in a programming language, which is typically the smallest indivisible unit of execution.
Documentation that provides a detailed description of a component or system for the purpose of developing and testing it.
A type of code injection in the structured query language (SQL).
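A self-contained sketch with Python's sqlite3 module, contrasting a vulnerable query with a parameterized one (the table and data are illustrative):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    user_input = "' OR '1'='1"   # hostile input

    # Vulnerable: the input is spliced into the SQL text and changes the query.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
    print("concatenated query returned", len(rows), "row(s)")   # 1 row leaks

    # Safer: a parameterized query treats the input purely as data.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print("parameterized query returned", len(rows), "row(s)")  # 0 rows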
Formal, possibly mandatory, set of requirements developed and used to prescribe consistent approaches to the way of working or to provide guidelines (e.g., ISO/IEC standards, IEEE standards, and organizational standards).
An entity in a programming language, which is typically the smallest indivisible unit of execution.
The percentage of executable statements that have been exercised by a test suite.
A white-box test design technique in which test cases are designed to execute statements.
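A worked example of the percentage (hypothetical counts, as a coverage tool might report them):

    executed_statements, total_statements = 45, 50
    print(f"statement coverage: {executed_statements / total_statements:.0%}")
    # -> statement coverage: 90%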
Analysis of software development artifacts, e.g., requirements or code, carried out without execution of these software development artifacts. Static analysis is usually carried out by means of a supporting tool.
A tool that carries out static analysis.
The capability of the software product to use appropriate amounts and types of resources, for example the amounts of main and secondary memory used by the program and the sizes of required temporary or overflow files, when the software performs its function under stated conditions.
The process of testing to determine the resource-utilization of a software product.
A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers.
Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
Testing based on an analysis of the internal structure of the component or system.
Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
Testing based on an analysis of the internal structure of the component or system.
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
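A minimal Python sketch (all names hypothetical): the stub stands in for a called component so its caller can be tested in isolation.

    # The real rate lookup might call a live service; the stub returns a
    # canned answer instead.
    def exchange_rate_stub(currency):
        return 1.25

    def to_usd(amount, currency, rate_lookup):
        return amount * rate_lookup(currency)

    assert to_usd(100, "GBP", exchange_rate_stub) == 125.0
    print("caller tested in isolation")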
The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives.
A collection of components organized to accomplish a specific function or set of functions.
Testing the integration of systems and packages; testing interfaces to external organizations (e.g., Electronic Data Interchange, Internet).
Testing an integrated system to verify that it meets specified requirements.
A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken.
A set of one or more test cases.
The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria, and the test types to be performed.
The use of software to perform or support test activities, e.g., test management, test design, test execution and results checking.
A tool that provides an environment for test automation. It usually includes a test harness and test libraries.
An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
Procedure used to derive and/or select test cases.
A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.
The set of generic and specific conditions, agreed upon with the stakeholders for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.
The process of transforming general test objectives into tangible test conditions and test cases.
Procedure used to derive and/or select test cases.
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
The process of running a test on the component or system under test, producing actual result(s).
The use of software, e.g., capture/playback tools, to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.
A test tool that executes tests against a designated test item and evaluates the outcomes against expected results and postconditions.
A test environment comprised of stubs and drivers needed to execute a test.
The process of developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.
Any event occurring that requires investigation.
The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.
The data received from an external source by the test object during test execution. The external source can be hardware, software or human.
A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test.
The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.
A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, the logging of results, progress tracking, incident management and test reporting.
The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.
The component or system to be tested.
The consequence/outcome of the execution of a test.
A document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
The activity of establishing or updating a test plan.
The consequence/outcome of the execution of a test.
An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
Commonly used to refer to a test procedure specification, especially an automated one.
An uninterrupted period of time spent in executing tests.
A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.
Procedure used to derive and/or select test cases.
A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test.
A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).
A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.
A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis.
The capability of the software product to enable modified software to be tested.
A skilled professional who is involved in the testing of a component or system.
The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
The ability to identify related items in documentation and software, such as requirements with associated tests.
The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use.
A minimal software item that can be tested in isolation.
The testing of individual software components.
Code that cannot be reached and therefore is impossible to execute.
The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.
Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.
Acceptance testing carried out by future users in a (simulated) operational environment focusing on user requirements and needs.
A person's perceptions and responses resulting from the use or anticipated use of a software product.
All components of a system that provide information and controls for the user to accomplish specific tasks with the system.
An element of storage in a computer that is accessible by a software program by referring to it by a name.
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
A static analyzer that is used to detect particular security vulnerabilities in the code.
Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
Testing based on an analysis of the internal structure of the component or system.
A pointer that references a location that is out of scope for that pointer or that does not exist.