Terms related to Foundation V3.1 2018

Testing to determine the ease by which users with disabilities can use a component or system.
User or any other person or system that interacts with the test object in a specific way.
The behavior produced/observed when a component or system is tested.
A review technique performed informally without a structured process.
Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc., or from someone's perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation.
The response of a component or system to a set of input values and preconditions.
A logical expression that can be evaluated as True or False.
A publicly displayed chart that depicts the outstanding effort versus time in an iteration. It shows the status and trend of completing the tasks of the iteration. The X-axis typically represents days in the sprint, while the Y-axis is the remaining effort (usually either in ideal engineering hours or story points).
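For illustration, the following sketch (hypothetical sprint data, not part of the glossary) shows how such a chart can be derived: remaining story points per sprint day are rendered as a simple text burndown.

    # Hypothetical 10-day sprint: remaining story points recorded at the end of each day.
    remaining = [40, 36, 34, 30, 28, 25, 20, 14, 8, 3]

    # Minimal text burndown chart: X-axis = sprint day, Y-axis = remaining effort.
    for day, points in enumerate(remaining, start=1):
        print(f"Day {day:2d} | {'#' * points} {points} points left")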
Acronym for Computer Aided Software Engineering.
A type of testing initiated by modification to a component or system.
A review technique guided by a list of questions or required attributes.
Testing based on an analysis of the internal structure of the component or system.
A standard that describes the characteristics of a design or a design description of data or program components.
The degree to which a component or system can exchange information with other components or systems, and/or perform its required functions while sharing the same hardware or software environment.
A test approach in which the test suite comprises all combinations of input values and preconditions.
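A minimal sketch of what this approach implies, assuming a hypothetical component with three independent inputs; even this small example produces every combination, which is why the approach is rarely practical for real systems.

    from itertools import product

    # Hypothetical input domains for a component under test.
    payment_methods = ["card", "cash", "voucher"]
    customer_types = ["new", "returning"]
    amounts = [0, 1, 999, 1000]

    # Exhaustive testing: the test suite covers every combination of input values.
    all_combinations = list(product(payment_methods, customer_types, amounts))
    print(len(all_combinations), "test cases")  # 3 * 2 * 4 = 24 combinations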
The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify.
The simultaneous execution of multiple independent threads by a component or system.
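A small Python illustration (hypothetical worker function, for clarification only): two independent threads execute within the same component at the same time.

    import threading
    import time

    def worker(name: str) -> None:
        # Each thread runs this function independently.
        for i in range(3):
            print(f"{name}: step {i}")
            time.sleep(0.1)

    # Two independent threads executed simultaneously by the component.
    threads = [threading.Thread(target=worker, args=(f"thread-{n}",)) for n in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()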
The composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.
A software development procedure in which all changes are merged, integrated and tested by an automated process as soon as they are committed.
A sequence of consecutive edges in a directed graph.
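For illustration, such a sequence can be represented as a list of edges; the sketch below (hypothetical graph) checks that each edge starts where the previous one ended.

    # A directed graph as an edge set: (from_node, to_node).
    edges = {("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")}

    def is_path(edge_sequence):
        # True if every edge exists and each edge starts where the previous one ends.
        return all(e in edges for e in edge_sequence) and all(
            edge_sequence[i][1] == edge_sequence[i + 1][0]
            for i in range(len(edge_sequence) - 1)
        )

    print(is_path([("A", "B"), ("B", "C"), ("C", "D")]))  # True
    print(is_path([("A", "B"), ("C", "D")]))              # False: edges are not consecutive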
The total costs incurred on quality activities and issues and often split into prevention costs, appraisal costs, internal failure costs and external failure costs.
The sequence of possible changes to the state of data objects.
A large user story that cannot be delivered as defined within a single iteration or is large enough that it can be split into smaller user stories.
A human action that produces an incorrect result.
A source code statement that, when translated into object code, can be executed in a procedural manner.
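A minimal Python illustration (example mine, not part of the glossary): comments and blank lines are not executable statements, while assignments and calls are.

    # This comment is not an executable statement.
    price = 100                 # executable statement: assignment
    discounted = price * 0.9    # executable statement: assignment using an expression
    print(discounted)           # executable statement: function call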
Testing based on the tester's experience, knowledge and intuition.
A software engineering methodology used within Agile software development whose core practices are pair programming, extensive code review, unit testing of all code, and simplicity and clarity in code.
A test is deemed to fail if its actual result does not match its expected result.
The status of a test result in which the actual result does not match the expected result.
A distinguishing characteristic of a component or system.
A result of an evaluation that identifies some important issue, problem, or opportunity.
An integration approach that combines the components or systems for the purpose of getting a basic functionality working early.
The degree to which a component or system provides functions that meet stated and implied needs when used under specified conditions.
Separation of responsibilities, which encourages the accomplishment of objective testing.
Supplied instructions on any suitable media that guide the installer through the installation process. This may be a manual guide, a step-by-step procedure, an installation wizard, or any other similar process description.
The process of combining components or systems into larger assemblies.
The degree to which a component or system allows only authorized access and modification to a component, a system or data.
Testing to determine the interoperability of a software product.
A type of software development lifecycle model in which the component or system is developed through a series of repeated cycles.
A metric that supports the judgment of process performance.
On large projects, the person who reports to the test manager and is responsible for project management of a particular test level or a particular set of testing activities.
The activities performed at each stage in software development, and how they relate to one another logically and chronologically.
Testing the changes to an operational system or the impact of a changed environment to an operational system.
The number or category assigned to an attribute of an entity by making a measurement.
The process of assigning a number or category to an entity to describe an attribute of that entity.
A measurement scale and the method used for measurement.
A point in time in a project at which defined (intermediate) deliverables and results should be ready.
Testing based on or involving models.
The degree to which a system is composed of discrete components such that a change to one component has minimal impact on other components.
The intended environment for a component or system to be used in production.
A high-level document describing the principles, approach and major objectives of the organization regarding testing.
The consequence/outcome of the execution of a test.
A test is deemed to pass if its actual result matches its expected result.
The status of a test result in which the actual result matches the expected result.
The degree to which a component or system uses time, resources and capacity when accomplishing its designated functions.
A test tool that generates load for a designated test item and that measures and records its performance during test execution.
A consensus-based estimation technique, mostly used to estimate effort or relative size of user stories in Agile software development. It is a variation of the Wideband Delphi method using a deck of cards with values representing the units in which the team estimates.
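A rough sketch of a single estimation round (hypothetical team estimates, simplified so that consensus means all revealed cards match); in practice the outliers are discussed and the team re-estimates.

    # Typical planning poker deck (modified Fibonacci values).
    DECK = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]

    def poker_round(estimates):
        # One round: all cards are revealed at once and the spread is inspected.
        lowest, highest = min(estimates), max(estimates)
        return lowest, highest, lowest == highest

    # Each team member picks a card for the same user story.
    cards = [3, 5, 5, 8]
    low, high, agreed = poker_round(cards)
    if not agreed:
        # The holders of the lowest and highest cards explain their reasoning,
        # then the team re-estimates in the next round.
        print(f"Discuss: why {low}? why {high}? Then re-estimate.")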
A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.
The level of (business) importance assigned to an item, e.g., defect.
An unknown underlying cause of one or more incidents.
A set of interrelated activities, which transform inputs into outputs.
A program of activities designed to improve the performance and maturity of the organization's processes, and the result of such a program.
A framework in which processes of the same nature are classified into an overall model.
A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources.
A set of conventions that govern the interaction of processes, devices, and other components within a system.
A set of activities designed to evaluate the quality of a component or system.
Coordinated activities to direct and control an organization with regard to quality that include establishing a quality policy and quality objectives, quality planning, quality control, quality assurance, and quality improvement.
A proprietary adaptable iterative software development process framework consisting of four project lifecycle phases: inception, elaboration, construction and transition.
A degradation in the quality of a component or system due to a change.
A tool that supports the recording of requirements, requirements attributes (e.g., priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and violations to pre-defined requirements rules.
The degree to which a work product can be used in more than one system, or in building other work products.
A factor that could result in future negative consequences.
The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.
A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed.
A review technique in which a work product is evaluated to determine its ability to address specific scenarios.
An iterative incremental framework for managing projects commonly used with Agile software development.
Testing to determine the security of the software product.
A technique to enable virtual delivery of services which are deployed, accessed and managed remotely.
An approach in which test activities are planned as test sessions.
The degree of impact that a defect has on the development or operation of a component or system.
The representation of selected behavioral characteristics of one physical or abstract system by another system.
A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs.
Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system.
The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software lifecycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase. Note these phases may overlap or be performed iteratively.
An entity in a programming language, which is typically the smallest indivisible unit of execution.
Documentation that provides a detailed description of a component or system for the purpose of developing and testing it.
Formal, possibly mandatory, set of requirements developed and used to prescribe consistent approaches to the way of working or to provide guidelines (e.g., ISO/IEC standards, IEEE standards, and organizational standards).
A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another.
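One way to capture the information in such a diagram is a state transition table; the sketch below (hypothetical door component, names mine) maps (current state, event) pairs to the resulting state.

    # (current state, event) -> next state, for a hypothetical door component.
    transitions = {
        ("closed", "open"): "opened",
        ("opened", "close"): "closed",
        ("closed", "lock"): "locked",
        ("locked", "unlock"): "closed",
    }

    def next_state(state, event):
        # Events not defined for a state cause no state change.
        return transitions.get((state, event), state)

    state = "closed"
    for event in ["lock", "unlock", "open", "close"]:
        state = next_state(state, event)
        print(event, "->", state)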
Coverage measures based on the internal structure of a component or system.
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
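A minimal Python sketch (hypothetical names): the called payment gateway is replaced by a skeletal implementation so that the component depending on it can be tested in isolation.

    class PaymentGatewayStub:
        # Replaces the called payment gateway component during testing.
        def charge(self, amount: float) -> str:
            # Returns a fixed, controlled response instead of contacting a real gateway.
            return "approved" if amount <= 100 else "declined"

    def place_order(amount: float, gateway) -> bool:
        # Component under test: depends on the gateway the stub stands in for.
        return gateway.charge(amount) == "approved"

    assert place_order(50, PaymentGatewayStub()) is True
    assert place_order(500, PaymentGatewayStub()) is False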
A set of one or more test cases.
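For illustration, Python's unittest module allows several test cases to be grouped and run together (hypothetical tests shown).

    import unittest

    class TestAddition(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(2 + 3, 5)

        def test_negative_numbers(self):
            self.assertEqual(-2 + -3, -5)

    # A set of test cases collected and executed together.
    suite = unittest.TestSuite()
    suite.addTest(TestAddition("test_positive_numbers"))
    suite.addTest(TestAddition("test_negative_numbers"))
    unittest.TextTestRunner().run(suite)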
An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
An instance of the test process against a single identifiable version of the test object.
A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.
A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, e.g., requirements management tool, from specified test conditions held in the tool itself, or from code.
A test tool that executes tests against a designated test item and evaluates the outcomes against expected results and postconditions.
The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.
The data received from an external source by the test object during test execution. The external source can be hardware, software or human.
The activity of establishing or updating a test plan.
A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap up activities post execution.
A program of activities undertaken to improve the performance and maturity of the organization's test processes.
Documentation summarizing test activities and results.
Collecting and analyzing data from testing activities and subsequently consolidating the data in a report to inform stakeholders.
A list of activities, tasks or events of the test process, identifying their intended start and finish dates and/or times, and interdependencies.
An uninterrupted period of time spent in executing tests.
An approach to software development in which the test cases are designed and implemented before the associated component or system is developed.
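A small sketch of this approach (hypothetical example, names mine): the test case is designed and implemented first, and only then is just enough production code written to make it pass.

    import unittest

    # Step 1: design and implement the test case before the production code exists.
    class TestLeapYear(unittest.TestCase):
        def test_century_not_divisible_by_400_is_not_leap(self):
            self.assertFalse(is_leap_year(1900))

        def test_year_divisible_by_400_is_leap(self):
            self.assertTrue(is_leap_year(2000))

    # Step 2: implement just enough code to make the tests pass.
    def is_leap_year(year: int) -> bool:
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    if __name__ == "__main__":
        unittest.main()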
All components of a system that provide information and controls for the user to accomplish specific tasks with the system.
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
An element of storage in a computer that is accessible by a software program by referring to it by a name.
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
An expert-based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.