Terms related to the ISTQB Foundation Extension - Usability 2018 syllabus

acceptance criteria: The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity.
acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
accessibility: The degree to which a product or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use.
accessibility testing: Testing to determine the ease with which users with disabilities can use a component or system.
accuracy: The capability of the software product to provide the right or agreed results or effects with the needed degree of precision.
agile software development: A group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams.
attractiveness: The capability of the software product to be attractive to the user.
availability: The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage.
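As an illustration of the percentage view (not part of the glossary definition), availability can be computed from observed uptime and downtime; the function name and hour-based units below are assumptions for the example:

```python
def availability(uptime_hours: float, downtime_hours: float) -> float:
    """Availability as the percentage of required time the system was operational."""
    total = uptime_hours + downtime_hours
    return 100.0 * uptime_hours / total

# e.g., 719 hours up and 1 hour down in a 720-hour month:
print(round(availability(719, 1), 2))  # 99.86
```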
best practice: A superior method or innovative practice that contributes to the improved performance of an organization under given context, usually recognized as "best" by other peer organizations.
bug: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g., an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
checker: The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.
component: A minimal software item that can be tested in isolation.
context of use: Users, tasks, equipment (hardware, software and materials), and the physical and social environments in which a software product is used.
decision: A program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.
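A minimal sketch of a decision point (an illustrative example, not taken from the glossary): the `if` below is a program point with two alternative routes, each `return` lying on a separate branch. The function name and threshold are assumptions for the example:

```python
def classify(temperature_c: float) -> str:
    # Decision point: control flow has two alternative routes (branches).
    if temperature_c >= 100.0:
        return "boiling"      # branch 1
    else:
        return "not boiling"  # branch 2

print(classify(120.0))  # boiling
print(classify(20.0))   # not boiling
```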
defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g., an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
error: A human action that produces an incorrect result.
error tolerance: The degree to which a component or system can continue normal operation despite the presence of erroneous inputs.
expert usability review: An informal usability review in which the reviewers are experts. Experts can be usability experts or subject matter experts, or both.
exploratory testing: An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.
failure: Deviation of the component or system from its expected delivery, service or result.
fault: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g., an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
feature: A distinguishing characteristic of a component or system.
finding: A result of an evaluation that identifies some important issue, problem, or opportunity.
formative evaluation: A type of evaluation designed and used to improve the quality of a component or system, especially when it is still being designed.
functionality: The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions.
heuristic: A generally recognized rule of thumb that helps to achieve a goal.
heuristic evaluation: A usability review technique that targets usability problems in the user interface or user interface design. With this technique, the reviewers examine the interface and judge its compliance with recognized usability principles (the "heuristics").
human-centered design: An approach to design that aims to make software products more usable by focusing on the use of the software products and applying human factors, ergonomics, and usability knowledge and techniques.
input: A variable (whether stored within a component or outside) that is read by a component.
inspector: The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.
interoperability: The capability of the software product to interact with one or more specified components or systems.
learnability: The capability of the software product to enable the user to learn its application.
lifecycle model: A partitioning of the life of a product or project into phases.
life cycle: The activities performed at each stage in software development, and how they relate to one another logically and chronologically.
maintenance: Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment.
management review: A systematic evaluation of software acquisition, supply, development, operation, or maintenance process, performed by or on behalf of management that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose.
maturity: (1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. (2) The capability of the software product to avoid failure as a result of defects in the software.
measurement: The process of assigning a number or category to an entity to describe an attribute of that entity.
milestone: A point in time in a project at which defined (intermediate) deliverables and results should be ready.
mistake: A human action that produces an incorrect result.
module: A minimal software item that can be tested in isolation.
outcome: The consequence/outcome of the execution of a test.
output: A variable (whether stored within a component or outside) that is written by a component.
performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.
post-project meeting: A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.
precondition: Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.
priority: The level of (business) importance assigned to an item, e.g., defect.
process: A set of interrelated activities, which transform inputs into outputs.
product risk: A risk directly related to the test object.
project: A unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources.
project risk: A risk related to management and control of the (test) project, e.g., lack of staffing, strict deadlines, changing requirements, etc.
quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
quality control: A set of activities designed to evaluate the quality of a component or system.
requirement: A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.
result: The consequence/outcome of the execution of a test.
retrospective meeting: A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.
review: An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough.
reviewer: The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.
risk: A factor that could result in future negative consequences.
risk assessment: The process of identifying and subsequently analyzing the identified project or product risk to determine its level of risk, typically by assigning likelihood and impact ratings.
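One common way to combine likelihood and impact ratings into a risk level is a simple multiplicative matrix; the 1-3 scales and thresholds below are illustrative assumptions, not a prescribed scheme:

```python
def risk_level(likelihood: int, impact: int) -> str:
    """Map 1-3 likelihood and impact ratings to a risk level (illustrative scales)."""
    score = likelihood * impact  # ranges from 1 to 9
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(risk_level(3, 3))  # high
print(risk_level(1, 2))  # low
```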
risk identification: The process of identifying risks using techniques such as brainstorming, checklists and failure history.
severity: The degree of impact that a defect has on the development or operation of a component or system.
software: Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system.
software development lifecycle: The activities performed at each stage in software development, and how they relate to one another logically and chronologically.
software feature: A distinguishing characteristic of a component or system.
Software Usability Measurement Inventory (SUMI): A questionnaire-based usability test technique for measuring software quality from the end user's point of view.
source statement: An entity in a programming language, which is typically the smallest indivisible unit of execution.
specification: Documentation that provides a detailed description of a component or system for the purpose of developing and testing it.
standard: Formal, possibly mandatory, set of requirements developed and used to prescribe consistent approaches to the way of working or to provide guidelines (e.g., ISO/IEC standards, IEEE standards, and organizational standards).
statement: An entity in a programming language, which is typically the smallest indivisible unit of execution.
summative evaluation: A type of evaluation designed and used to gather conclusions about the quality of a component or system, especially when a substantial part of it has completed design.
system: A collection of components organized to accomplish a specific function or set of functions.
system testing: Testing an integrated system to verify that it meets specified requirements.
System Usability Scale (SUS): A simple, ten-item attitude scale giving a global view of subjective assessments of usability.
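The System Usability Scale is conventionally scored by adjusting each of the ten 1-5 responses (odd-numbered items contribute the score minus 1, even-numbered items contribute 5 minus the score) and multiplying the sum by 2.5, giving a 0-100 value. A minimal sketch:

```python
def sus_score(responses: list[int]) -> float:
    """Compute the System Usability Scale score from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9 vs 2,4,6,8,10
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```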
test: A set of one or more test cases.
test execution: The process of running a test on the component or system under test, producing actual result(s).
test manager: The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.
test outcome: The consequence/outcome of the execution of a test.
test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
test reporting: Collecting and analyzing data from testing activities and subsequently consolidating the data in a report to inform stakeholders.
test result: The consequence/outcome of the execution of a test.
test session: An uninterrupted period of time spent in executing tests.
tester: A skilled professional who is involved in the testing of a component or system.
testing: The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
think aloud testing: A usability testing technique where test participants share their thoughts with the moderator and observers by thinking aloud while they solve usability test tasks. Thinking aloud is useful for understanding the test participant.
unit: A minimal software item that can be tested in isolation.
usability evaluation: A process through which information about the usability of a system is gathered in order to improve the system (known as formative evaluation) or to assess the merit or worth of a system (known as summative evaluation).
usability requirement: A requirement on the usability of a component or system.
usability test participant: A representative user who solves typical tasks in a usability test.
usability test script: A document specifying a sequence of actions for the execution of a usability test. It is used by the moderator to keep track of briefing and pre-session interview questions, usability test tasks, and post-session interview questions.
usability test session: A test session in usability testing in which a usability test participant executes tests, moderated by a moderator and observed by a number of observers.
usability test task: A usability test execution activity specified by the moderator that needs to be accomplished by a usability test participant within a given period of time.
user experience: A person's perceptions and responses resulting from the use or anticipated use of a software product.
user interface: All components of a system that provide information and controls for the user to accomplish specific tasks with the system.
user interface guideline: A low-level, specific rule or recommendation for user interface design that leaves little room for interpretation so designers implement it similarly. It is often used to ensure consistency in the appearance and behavior of the user interface of the systems produced by an organization.
user survey: A usability evaluation in which a representative sample of users are asked to report their subjective evaluation in a questionnaire, based on their experience in using a component or system.
V-model: A framework to describe the software development lifecycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development lifecycle.
Web Content Accessibility Guidelines (WCAG): A part of a series of web accessibility guidelines published by the Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C), the main international standards organization for the internet. They consist of a set of guidelines for making content accessible, primarily for people with disabilities.
Website Analysis and Measurement Inventory (WAMMI): A questionnaire-based usability test technique for measuring web site software quality from the end user's point of view.