Terms related to the ISTQB Advanced Test Automation Engineer 2016 syllabus

A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
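A minimal sketch of this keyword-driven technique, assuming a hypothetical banking application; the keywords, supporting scripts, and table rows are illustrative, not from any real framework:

```python
# Supporting scripts: one small function per keyword (illustrative).
def open_account(state, name):
    state[name] = 0

def deposit(state, name, amount):
    state[name] += int(amount)

def check_balance(state, name, expected):
    assert state[name] == int(expected), f"{name}: {state[name]} != {expected}"

KEYWORDS = {"open_account": open_account, "deposit": deposit,
            "check_balance": check_balance}

# Data-file contents: each row is a keyword followed by its arguments.
TEST_TABLE = [
    ("open_account", "alice"),
    ("deposit", "alice", "100"),
    ("check_balance", "alice", "100"),
]

def control_script(table):
    """Interprets each row by dispatching the keyword to its supporting script."""
    state = {}
    for keyword, *args in table:
        KEYWORDS[keyword](state, *args)
    return state

print(control_script(TEST_TABLE))  # → {'alice': 100}
```

The point of the technique is that new tests can be written as data rows, without touching the control script.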
Testing performed by submitting commands to the software under test using programming interfaces of the application directly.
Permission given to a user or process to access resources.
Defect density of a component of the test automation code.
A test automation approach in which inputs to the test object are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e., replayed).
A tree showing equivalence partitions hierarchically ordered, which is used to design test cases in the classification tree method.
CLI
Acronym for Command-Line Interface.
Testing performed by submitting commands to the software under test using a dedicated command-line interface.
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g., statement coverage, decision coverage or condition coverage.
A standard that describes the characteristics of a design or a design description of data or program components.
A test suite that covers the main functionality of a component or system to determine whether it works properly before planned testing begins.
Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
A sequence of events, e.g., executable statements, of a component or system from an entry point to an exit point.
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools.
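A minimal sketch of data-driven scripting, assuming a trivial stand-in for the software under test (`add`) and an in-memory table; in practice the rows would be read from a spreadsheet or data file:

```python
def add(a, b):
    # Illustrative stand-in for the software under test.
    return a + b

# Table of test inputs and expected results (would normally be external data).
TEST_DATA = [
    (1, 2, 3),
    (-1, 1, 0),
    (0, 0, 0),
]

def run_data_driven(rows):
    """Single control script that executes every row of the table."""
    results = []
    for a, b, expected in rows:
        actual = add(a, b)
        results.append("pass" if actual == expected else "fail")
    return results

print(run_data_driven(TEST_DATA))  # → ['pass', 'pass', 'pass']
```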
The process of finding, analyzing and removing the causes of failures in software.
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
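A sketch of such a driver, assuming a hypothetical component under test (`parse_price`); the driver stands in for the component's real caller during component testing:

```python
def parse_price(text):
    # Hypothetical component under test.
    return round(float(text.strip("$")), 2)

def driver():
    """Replaces the real calling component: supplies inputs, checks outputs."""
    cases = {"$1.50": 1.5, "2": 2.0}
    for text, expected in cases.items():
        assert parse_price(text) == expected
    return "all calls verified"

print(driver())
```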
A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
Effort required for running tests manually.
A test is deemed to fail if its actual result does not match its expected result.
The process of intentionally adding defects to a system for the purpose of finding out whether the system can detect, and possibly recover from, a defect. Fault injection is intended to mimic failures that might occur in the field.
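A sketch of fault injection in this sense, under the assumption of a transient I/O failure; all names are illustrative. A wrapper deliberately makes a dependency fail so we can observe whether the system detects and recovers:

```python
def flaky_read(path, inject_fault=False):
    if inject_fault:
        # The intentionally injected defect, mimicking a field failure.
        raise IOError("injected disk failure")
    return "data"

def read_with_retry(path, attempts=2, fault_on_first=True):
    """System under test: expected to recover from one transient failure."""
    for i in range(attempts):
        try:
            return flaky_read(path, inject_fault=(fault_on_first and i == 0))
        except IOError:
            continue  # recovery path exercised by the injected fault
    return None

print(read_with_retry("/tmp/x"))  # → data
```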
A distinguishing characteristic of a component or system.
A method that aims to measure the size of the functionality of an information system. The measurement is independent of the technology. It may be used as a basis for measuring productivity, estimating the needed resources, and project control.
Representation of the layers, components, and interfaces of a test automation architecture, allowing for a structured and modular approach to implement test automation.
A type of interface that allows users to interact with a component or system through graphical icons and visual indicators.
Testing performed by interacting with the software under test via the graphical user interface.
The degree to which a test object is modified by adjusting it for testability.
A simple scripting technique without any control structure in the test scripts.
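As an illustrative contrast with the structured techniques above, a linear script is just a flat recorded sequence of actions with no loops, branches, or shared functions (the actions here are invented):

```python
# Linear script: every step written out in order, nothing reused.
log = []
log.append("open login page")
log.append("type user 'alice'")
log.append("type password '***'")
log.append("click 'Sign in'")
log.append("verify text 'Welcome'")
print(len(log))  # → 5
```

The lack of any control structure is what makes linear scripts quick to record but expensive to maintain.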
The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.
Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment.
A measurement scale and the method used for measurement.
Testing based on or involving models.
Hardware and software products installed at users' or customers' sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.
The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out.
A test is deemed to pass if its actual result matches its expected result.
A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.
A consensus-based estimation technique, mostly used to estimate effort or relative size of user stories in Agile software development. It is a variation of the Wideband Delphi method using a deck of cards with values representing the units in which the team estimates.
The ease with which the software product can be transferred from one hardware or software environment to another.
A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.
A scripting technique where scripts are structured into scenarios which represent use cases of the software under test. The scripts can be parameterized with test data.
A risk directly related to the test object.
A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources.
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.
Testing to determine the reliability of a software product.
The capability of the software product to use appropriate amounts and types of resources, for example the amounts of main and secondary memory used by the program and the sizes of required temporary or overflow files, when the software performs its function under stated conditions.
A factor that could result in future negative consequences.
The process of identifying and subsequently analyzing the identified project or product risk to determine its level of risk, typically by assigning likelihood and impact ratings.
The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.
Testing to determine the robustness of the software product.
A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs.
Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system.
The activities performed at each stage in software development, and how they relate to one another logically and chronologically.
Documentation that provides a detailed description of a component or system for the purpose of developing and testing it.
Formal, possibly mandatory, set of requirements developed and used to prescribe consistent approaches to the way of working or to provide guidelines (e.g., ISO/IEC standards, IEEE standards, and organizational standards).
A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers.
A scripting technique that builds and utilizes a library of reusable (parts of) scripts.
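A sketch of this structured-scripting technique with an invented script library: individual test scripts call shared, reusable parts instead of repeating recorded actions:

```python
# --- reusable script library (illustrative) ---
def login(session, user):
    session.append(f"login:{user}")

def logout(session):
    session.append("logout")

# --- individual test scripts reuse the library ---
def test_view_profile():
    session = []
    login(session, "alice")
    session.append("open profile")
    logout(session)
    return session

def test_change_password():
    session = []
    login(session, "alice")
    session.append("change password")
    logout(session)
    return session

print(test_view_profile())  # → ['login:alice', 'open profile', 'logout']
```

If the login procedure changes, only the library function is updated, not every script.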
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
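A sketch of a stub, assuming a hypothetical currency converter: the stub replaces a *called* dependency (an exchange-rate service) so the calling component can be tested in isolation:

```python
def rate_service_stub(currency):
    """Skeletal stand-in for a real exchange-rate service (canned answers)."""
    return {"EUR": 1.1, "GBP": 1.3}.get(currency, 1.0)

def convert(amount, currency, rate_lookup=rate_service_stub):
    # Component under test: normally calls the live rate service.
    return round(amount * rate_lookup(currency), 2)

print(convert(10, "EUR"))  # uses the stub, not the live service → 11.0
```

Note the contrast with a driver: a stub replaces a component that is called, a driver replaces the caller.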
See test object.
The layer in a test automation architecture which provides the necessary code to adapt test scripts on an abstract level to the various components, configuration or interfaces of the SUT.
(1) A person who provides guidance and strategic direction for a test organization and for its relationship with other disciplines. (2) A person who defines the way testing is structured for a given system, including topics such as test tools and test data management.
The use of software to perform or support test activities, e.g., test management, test design, test execution and results checking.
An instantiation of the generic test automation architecture to define the architecture of a test automation solution, i.e., its layers, components, services and interfaces.
A person who is responsible for the design, implementation and maintenance of a test automation architecture as well as the technical evolution of the resulting test automation solution.
A tool that provides an environment for test automation. It usually includes a test harness and test libraries.
A person who is responsible for the planning and supervision of the development and evolution of a test automation solution.
A realization/implementation of a test automation architecture, i.e., a combination of components implementing a specific test automation assignment. The components may include commercial off-the-shelf test tools, test automation frameworks, as well as test hardware.
A high-level plan to achieve long-term objectives of test automation under given boundary conditions.
The disproportionate growth of the number of test cases with growing size of the test basis, when using a certain test design technique. Test case explosion may also happen when applying the test design technique systematically for the first time.
The layer in a generic test automation architecture which supports test implementation by supporting the definition of test suites and/or test cases, e.g., by offering templates or guidelines.
The process of running a test on the component or system under test, producing actual result(s).
The layer in a generic test automation architecture which supports the execution of test suites and/or test cases.
A test tool that executes tests against a designated test item and evaluates the outcomes against expected results and postconditions.
The layer in a generic test automation architecture which supports manual or automated design of test suites and/or test cases.
A test environment comprised of stubs and drivers needed to execute a test.
A customized software interface that enables automated testing of a test object.
The process of recording information about tests executed into a test log.
A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, the logging of results, progress tracking, incident management and test reporting.
The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.
A model describing testware that is used for testing a component or a system under test.
A reason or purpose for designing and executing a test.
Collecting and analyzing data from testing activities and subsequently consolidating the data in a report to inform stakeholders.
A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis.
The capability of the software product to enable modified software to be tested.
The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.
A two-dimensional table, which correlates two entities (e.g., requirements and test cases). The table allows tracing back and forth the links of one entity to the other, thus enabling the determination of coverage achieved and the assessment of impact of proposed changes.
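A minimal sketch of such a matrix as a mapping between requirements and test cases (the IDs are invented), showing both directions of tracing:

```python
# Rows: requirements; columns: the test cases linked to each (illustrative IDs).
MATRIX = {
    "REQ-1": {"TC-1", "TC-2"},
    "REQ-2": {"TC-2"},
    "REQ-3": set(),  # not yet covered by any test case
}

def uncovered(matrix):
    """Requirements with no linked test case — a coverage gap."""
    return sorted(r for r, tcs in matrix.items() if not tcs)

def impacted(matrix, test_case):
    """Requirements affected if this test case changes (tracing back)."""
    return sorted(r for r, tcs in matrix.items() if test_case in tcs)

print(uncovered(MATRIX))          # → ['REQ-3']
print(impacted(MATRIX, "TC-2"))   # → ['REQ-1', 'REQ-2']
```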
A tool that provides an environment for unit or component testing in which a component can be tested in isolation or with suitable stubs and drivers. It also provides other support for the developer, such as debugging capabilities.
A sequence of transactions in a dialogue between an actor and a component or system with a tangible result, where an actor can be a user or anything that can exchange information with the system.
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.