Software Testing and Quality Assurance by Naik, Tripathy

From CNM Wiki

Software Testing and Quality Assurance by Naik, Tripathy is the Software Testing and Quality Assurance: Theory and Practice book authored by Kshirasagar Naik, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, and Priyadarshi Tripathy, NEC Laboratories America, Inc., and published by John Wiley & Sons, Inc., Hoboken, New Jersey, in 2008.

  • 1xEvolution-data optimized (1xEV-DO). Communication standard for transmitting and receiving data frames over a wireless radio channel using CDMA technology.
  • Abstract Syntax Notation One (ASN.1). Notation to formally define the syntax of messages to be exchanged among an extensive range of applications involving the Internet.
  • Acceptance criteria. Criteria a system must satisfy to be accepted by a customer and to enable the customer to determine whether to accept the system.
  • Acceptance test. Formal testing conducted to determine whether a system satisfies its acceptance criteria.
  • Access terminal. Can be a mobile phone, laptop, or personal digital assistant (PDA) with a wireless modem.
  • Accuracy. Degree of conformity of a measured or calculated quantity to its actual (true) value.
  • Adaptive random testing. In adaptive random testing, test inputs are selected from a randomly generated set in such a way that these test inputs are evenly spread over the entire input domain.
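The even-spreading idea behind adaptive random testing can be sketched as follows. This is a hypothetical one-dimensional, fixed-size-candidate-set variant, not code from the book: each new test input is the random candidate farthest from all inputs chosen so far.

```python
import random

def adaptive_random_tests(domain_lo, domain_hi, n_tests, n_candidates=10, seed=0):
    """Fixed-size-candidate-set ART over a 1-D numeric input domain:
    each new test is the candidate whose nearest already-chosen test
    is farthest away, spreading tests evenly over the domain."""
    rng = random.Random(seed)
    tests = [rng.uniform(domain_lo, domain_hi)]  # first test is purely random
    while len(tests) < n_tests:
        candidates = [rng.uniform(domain_lo, domain_hi) for _ in range(n_candidates)]
        best = max(candidates, key=lambda c: min(abs(c - t) for t in tests))
        tests.append(best)
    return tests

points = adaptive_random_tests(0.0, 100.0, 5)
```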
  • Adjacent domain. Two domains are adjacent if they have a boundary inequality in common.
  • Asynchronous transfer mode (ATM). Cell relay network protocol which encodes data traffic into small, fixed-sized (53 bytes = 48 bytes of data and 5 bytes of header information) cells. A connection-oriented technology in which a connection is established between the two endpoints before an actual data exchange begins.
  • Attributes. Properties of the service delivered by the system to users.
  • Authentication. Process of verifying the claimed identity of someone or something.
  • Authentication, authorization, and accounting (AAA). Network server used for controlling access to a network. An authentication process identifies the user. An authorization process implements policies that determine which resources and services a valid user may access. An accounting process keeps track of time and data resources used for billing and usage analysis.
  • Authorization. Process of verifying whether an individual has permission to access a specific resource.
  • Automatable. Test case that is a good candidate for automation.
  • Availability. Measure of the readiness of a system. Simply put, availability is the proportion of time a system is in a functioning condition.
  • Backdoors. Mechanism created by a computer program that allows anyone with knowledge of its existence to gain access to the system.
  • Backup/recoverability test. Verifies that a system can be recouped after a failure. It is done by backing up to a point in the processing cycle before any error occurred and reprocessing all transactions that occurred after that point.
  • Bad fix. Fix causing collateral damage.
  • Basic interconnection test. Verifies whether the implementation can establish a basic interconnection before thorough tests are performed.
  • Basic test. Provides a prima facie indication that the system is ready for more rigorous tests.
  • Behavior test. Verifies the dynamic communication systems requirements of an implementation. These are the requirements and options that define the observable behavior of a protocol. A large part of behavior tests, which constitutes the major portion of communication system tests, can be generated from the protocol standards.
  • Beta testing. Testing conducted by potential buyers prior to the release of the product. The purpose of beta testing is not to find defects but to obtain feedback from the field to the developers about the usability of the product.
  • Big-bang integration. Integration testing technique in which all the software modules are put together to construct the complete system so that the system as a whole can be tested.
  • Bit error test (BERT). Involves transmitting a known bit pattern over a channel and then verifying the received pattern for errors.
  • Black-box testing. Also called functional testing, a testing technique that ignores the internal details of a system and focuses solely on the inputs accepted, outputs generated, and execution conditions.
  • Boot test. Verifies that the system can boot up its software image from the supported boot options.
  • Bot. Software agent in Internet parlance. A bot interacts with network services intended for people as if it were a person. One typical use of bots is to gather information. Another, more malicious use of bots is the coordination and operation of an automated attack on networked computers, such as a distributed denial-of-service attack.
  • Bottom-up integration. Integration testing technique in which testing starts from the modules at the outermost branches of a module visibility tree and moves toward the modules making up the "main program."
  • Boundary inequality. From a geometrical viewpoint, a domain is defined by a set of boundary inequalities, where each inequality defines a boundary of the domain.
  • Boundary value analysis (BVA). The aim of BVA is to select elements that are close to the boundaries of an input domain so that both the upper and lower edges of an equivalence class are covered by test cases.
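The selection rule described above can be sketched for an integer input range. The range 18–65 below is a hypothetical example, not one from the book:

```python
def boundary_values(lo, hi):
    """For an integer input range [lo, hi], BVA picks values on and
    just beyond both edges of the equivalence class."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# e.g. a field that accepts ages 18..65
tests = boundary_values(18, 65)  # [17, 18, 19, 64, 65, 66]
```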
  • Branch coverage. Selecting program paths in such a manner that certain branches (i.e., outgoing edges of nodes) of a control flow graph are covered by the execution of those paths. Complete branch coverage means selecting some paths such that their execution causes all the branches to be covered.
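A minimal illustration of complete branch coverage, using a hypothetical unit rather than an example from the book: the function below has two decisions, hence four branches, and the two inputs noted in the comment already execute all of them.

```python
def classify(x):
    # two decisions -> four branches in the control flow graph
    if x < 0:
        sign = "negative"
    else:
        sign = "non-negative"
    if x % 2 == 0:
        parity = "even"
    else:
        parity = "odd"
    return sign, parity

# The inputs {-2, 3} cover all four branches: x = -2 takes the true branch
# of the first decision and the "even" branch; x = 3 takes the false
# branch of the first decision and the "odd" branch.
```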
  • Build. Interim software image for internal testing within the organization. Eventually, the final build will be a candidate for system testing, and such a system may be released to customers.
  • Business acceptance testing (BAT). Undertaken within the supplier's development organization to ensure that the system will eventually pass the user acceptance testing.
  • Capability maturity model (CMM). Gives guidelines for improving a software development process. The model facilitates the evaluation of the maturity levels of processes on a scale of 1–5. Level 5 is the highest level of process maturity.
  • Capability test. Checks that the implementation provides the observable capabilities based on the static communication system requirements. The static requirements describe the options, ranges of values for parameters, and timers.
  • Category partition method (CPM). Systematic, specification-based methodology that uses an informal functional specification to produce formal test specification.
  • Causal analysis. A kind of analysis conducted to identify the root cause of a defect and initiate actions so that the source of the defect is eliminated.
  • Change request (CR). Formal request by a code reviewer to make a change to the code.
  • Characterizing sequence. The sequences in the W-set of an FSM are called the characterizing sequences of the FSM.
  • Check-in request form. For each fix that is checked into a build, a check-in request form is filled out by software developers and reviewed by the build engineering group.
  • Clean-room process. Model introduced by IBM in the late 1980s. The process involves two cooperating teams -- development and quality assurance teams -- and five major activities: specification, planning, design and verification, quality certification, and feedback. The following ideas form the foundation of the clean-room process: (i) incremental development under statistical quality control (SQC), (ii) software development based on mathematical principles, and (iii) software testing based on statistical principles.
  • Closed boundary. A boundary is closed if the data points on the boundary are a part of the domain of interest.
  • Closed domain. A domain with all its boundaries closed.
  • Closure error. Occurs if a boundary is closed when the intention is to have an open boundary or vice versa.
  • Collateral damage. What occurs when a new feature or a defect fix in one part of the system causes a defect (damage) to another, possibly unrelated part of the system.
  • Combinatorial testing. Test case selection method in which test cases are identified by combining values of several test input parameters based on some combinatorial strategy.
  • Command line interface test. Verifies that the system can be configured in a specific way by using the command line interface.
  • Commercial off-the-shelf components (COTS components). Software components produced by third-party vendor organizations that can be reused in a system. Often, these types of components are delivered without their source code.
  • Compatibility test. Verifies that the system can work in the same manner across all platforms, operating systems, database management systems, and network operating systems.
  • Competent programmer hypothesis. Assumption for mutation analysis, which states that programmers are generally competent, and they do not create "random" programs.
  • Compliance testing. Also called conformance testing, the process of verifying whether a product meets the standard product specifications it was designed to meet.
  • Computation error. Occurs when specific input data cause the program to execute the correct path but the output value is wrong.
  • Confidentiality. Encrypting data by a sender such that only the intended receiver can decrypt it.
  • Configuration testing. Reconfiguration activities during interoperability tests.
  • Conformance testing. Process that verifies whether an implementation conforms to its specification.
  • Control flow graph (CFG). Graphical representation of the flow of control in a program unit.
  • Coordinated architecture. Enhanced version of the distributed architecture, where the upper and lower testers are coordinated by a test management protocol.
  • Coupling effect. Assumption for mutation analysis which states that if a test suite can reveal simple defects in a program, then it can also reveal more complicated combinations of simple defects.
  • Cross-functionality group. In an organization, the set of those groups that have different stakes in a product. For example, a marketing group, a customer support group, a development group, a system test group, and a product sustaining group are collectively referred to as a cross-functionality group in an organization.
  • Cyclomatic complexity (McCabe's complexity). Based on the graph theory concept and known as cyclomatic number, represents the complexity of a software module.
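The cyclomatic number is V(G) = E − N + 2P for a control flow graph with E edges, N nodes, and P connected components. A small sketch (the if-then-else graph below is an illustrative example, not taken from the book):

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's number V(G) = E - N + 2P for a control flow graph."""
    return edges - nodes + 2 * components

# A single if-then-else unit: 4 nodes (decision, then, else, join)
# and 4 edges, so V(G) = 4 - 4 + 2 = 2 (one decision -> complexity 2).
v = cyclomatic_complexity(edges=4, nodes=4)
```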
  • Data conversion acceptance criteria. Used to measure and report the capability of the software to convert existing application data to new formats.
  • Data flow anomaly. Sequence of "unusual" actions on a data variable, for example, two successive assignments of values to a data variable or referencing an undefined variable.
  • Data flow graph (DFG). Graphical representation of a program, where nodes represent computations and branches represent predicates, that is, conditions.
  • Debugging. Process of determining the cause of a defect and correcting it; occurs as a consequence of a test revealing a defect.
  • Decision table. Comprises a set of conditions and a set of effects. For each combination of conditions, a rule exists. Each rule comprises a Y (yes), N (no), or -- (don't care) response and contains an associated list of effects or expected results.
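A decision table can be represented directly as data. The loan-check rules below are a hypothetical example, not from the book; each rule maps Y/N/don't-care answers to the conditions onto an effect.

```python
# Each rule: ((answer to cond 1, answer to cond 2), effect); '-' = don't care.
RULES = [
    (("Y", "Y"), "approve"),
    (("Y", "N"), "review"),
    (("N", "-"), "reject"),
]

def decide(good_credit, has_income):
    answers = ("Y" if good_credit else "N", "Y" if has_income else "N")
    for conditions, effect in RULES:
        if all(c in ("-", a) for c, a in zip(conditions, answers)):
            return effect
    raise ValueError("incomplete decision table")

decide(True, True)   # 'approve'
decide(False, True)  # 'reject' (second condition is a don't-care)
```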
  • Defect. Flaw in a software with a potential to cause a failure.
  • Defect age. Period of time from the introduction of a defect to its discovery.
  • Defect density. Number of defects per thousand lines of code.
  • Defect prevention. Preventive measures that can be taken during the development of code to reduce the errors in the program.
  • Defect priority. Measure of how soon the defect needs to be fixed.
  • Defect removal efficiency (DRE). Ratio of the number of defects discovered in an activity to the number of defects that should have been found.
  • Defect severity. Measure of the extent of the detrimental effect a defect can have on the operation of the product.
  • Definition of a variable. A variable is said to be defined if the variable's memory location explicitly gets a value.
  • Degraded node test. Verifies the operation of a system after a portion of the system becomes nonoperational.
  • Denial-of-service (DoS) attack. Flooding an information system, such as a server, with a large number of requests for service to the point where the information system cannot respond.
  • Design verification test (DVT). Written and executed by the hardware group before integrating the hardware with the software system. Types of DVTs are diagnostic, electrostatic discharge, electromagnetic emission, electrical, thermal, environmental, acoustics, equipment packaging, safety, and reliability.
  • Deterministic finite-state machine. FSM such that its output and the next state are a function of its current state and the input that is applied.
  • Device under test (DUT). Manufactured product undergoing testing.
  • Diagnostic tests. Verify that the hardware components of the system are functioning as desired. Examples are power-on self test, Ethernet loop-back test, and bit error test.
  • Digital signature. Encrypted message digest that is appended to the message. Producing a digital signature involves public key encryption and a hash function algorithm.
  • Distinguishing sequence. Input sequence which generates a unique output sequence for a state when the input sequence is applied to an FSM starting at the given state.
  • Distributed architecture. Test architecture where there is a PCO at the upper service boundary and another at the lower service boundary. The PCO at the lower service boundary is at the remote end of the N − 1 service provider to indirectly control and observe N ASPs and N PDUs. This allows the upper and lower testers to reside in physically separate locations.
  • Domain error. Occurs when specific input data cause the program to execute the wrong path in the program.
  • Dynamic unit testing. Execution-based testing methodology in which a program unit is actually executed and its outcomes are observed.
  • Element management system test (EMS test). Verifies EMS functionality, such as monitoring and managing the network elements.
  • Emulator. A software emulator allows computer programs to run on a platform (computer architecture and/or operating system) other than the one for which the programs were originally written. Unlike simulation, which only attempts to reproduce a program's behavior, an emulator attempts to model, to various degrees, the states of the device being emulated.
  • Encryption. Cryptographic technique used to provide confidentiality.
  • Engineering change document (EC document). Provides a brief description of the issues and describes what changes are needed to be done to the original requirement.
  • Engineering change order (ECO). Formal document that describes a change to the hardware or software that is to be delivered to the customers. This document includes the hardware/software compatibility matrix and is distributed to operation, customer support, and the sales teams of the organization.
  • Entry criteria. Criteria to be met before the start of a testing phase.
  • Error. When an event activates a fault in a program, it first brings the program into an intermediate unstable state, called error, which, if and when it propagates to the output, eventually causes a system failure.
  • Error guessing. Test design technique in which the experience of the testers is used to (i) guess the probable kinds and locations of faults in a system and (ii) design tests specifically to expose them. Designing test cases using the error guessing technique is primarily based on a tester's experience with code similar to the implementation under test.
  • Error seeding. Process of intentionally adding known defects in a computer program for the purpose of estimating the number of defects remaining in the program during the process of testing and fault removal.
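The estimation step can be sketched with the usual seeding ratio: if testing detected the same fraction of real defects as of seeded ones, the count of latent indigenous defects follows. The numbers below are an invented example, not from the book.

```python
def estimate_remaining(seeded_total, seeded_found, indigenous_found):
    """Seeding estimate: assume real defects are detected at the same
    rate as seeded ones, and estimate how many real defects remain."""
    estimated_total = indigenous_found * seeded_total / seeded_found
    return estimated_total - indigenous_found

# 25 defects seeded, 20 of them found, alongside 40 indigenous defects:
# estimated total = 40 * 25/20 = 50, so about 10 remain undiscovered.
remaining = estimate_remaining(25, 20, 40)
```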
  • Equivalence class partitioning. Divides the input domain of the system under test into classes (or groups) of test cases that have a similar effect on the system.
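One representative per class then suffices. A hypothetical validity check partitioned into three classes (below-range, in-range, above-range); the age range is an invented example, not from the book:

```python
def is_valid_age(age):
    # hypothetical unit under test: accepts ages 18..65
    return 18 <= age <= 65

# one representative from each equivalence class
representatives = {"below": 10, "inside": 40, "above": 90}
results = [is_valid_age(v) for v in representatives.values()]  # [False, True, False]
```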
  • Equivalent mutant. Mutant that is not distinguishable from the program under test. Determining whether or not a mutant is equivalent to a program is in general undecidable.
  • Exit criteria. Criteria specifying the conditions that must be met before the completion of a testing phase.
  • Extended finite-state machine. Extension of a finite-state machine (FSM). An EFSM has the capability to perform additional computations such as updating values of variables, manipulating timers, and making decisions. The Specification and Description Language (SDL) provides a framework for specifying a system as one or more EFSMs.
  • Extensible authentication protocol (EAP). Authentication protocol described in Request for Comments (RFC) 2284. For wireless LANs, the EAP is known as EAP over LAN (EAPOL).
  • Extreme point. Point where two or more boundaries cross.
  • Extreme programming (XP). Software development methodology which is self-adaptive and people-oriented. XP begins with five values: communication, feedback, simplicity, courage, and respect. It then builds up 12 rules/recommendations which XP projects should follow.
  • Failure. Manifested inability of a program to perform its required function. In other words, it is a system malfunction evidenced by incorrect output, abnormal termination, or unmet time and space constraints.
  • Failure intensity. Expressed as the number of failures observed per unit time.
  • False negative. Occurs when a potential or real attack is missed by an intrusion detection system. The more often this happens, the more doubtful the trustworthiness of the intrusion detection system and its technology becomes.
  • False positive. Commonly known as false alarm, occurs when intrusion detection system reads legitimate activity as being an attack.
  • Fault. Cause of a failure. For example, a missing or incorrect piece of code is a fault. A fault may remain undetected for a long time until some event activates it.
  • Fault-based testing. Testing technique used to show that a particular class of faults is not resident in a program. The test cases are aimed at revealing specific kinds of predefined faults, for example, error guessing, fault seeding, or mutation testing.
  • Fault seeding (error seeding). Process of intentionally adding known faults in a computer program for the purpose of monitoring the rate of detection and removal of faults and estimating the number of faults remaining in the program. Also used in evaluating the adequacy of tests.
  • Fault injection. Method by which faults are introduced into a program. An oracle or a specification is available to assert that what was inserted made the program incorrect.
  • Fault simulation. Process of inserting faults in a program. The inserted faults are not guaranteed to make the program incorrect. In fault simulation, one may modify an incorrect statement of a program and turn it into a correct program.
  • Feasible path. Path in which there exists an input to cause the path to execute.
  • Feature. Set of related requirements.
  • First customer shipment (FCS). New software build that is released to the first paying customer.
  • Finite-state machine (FSM). Automata with a finite number of states. The automata changes its state when an external stimulus is applied. The state of an FSM is defined as a stable condition in which the FSM rests until an external stimulus, called an input, is applied. An input causes an FSM to generate an observable output and to undergo a state transition from the current state to a new state where it stays until the next input occurs.
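An FSM of this kind is naturally encoded as a transition table mapping (state, input) to (next state, output). The two-state machine below is a hypothetical example, not one from the book:

```python
# Deterministic FSM: (state, input) -> (next_state, output)
FSM = {
    ("A", 0): ("A", "x"),
    ("A", 1): ("B", "y"),
    ("B", 0): ("A", "y"),
    ("B", 1): ("B", "x"),
}

def run(start, inputs):
    """Apply an input sequence and collect the observable outputs."""
    state, out = start, []
    for symbol in inputs:
        state, o = FSM[(state, symbol)]
        out.append(o)
    return state, "".join(out)

run("A", [1, 0, 1])  # -> ('B', 'yyy')
```

Note that applying the single input 1 yields output "y" from state A but "x" from state B, so for this machine the sequence ⟨1⟩ acts as a distinguishing sequence for both states.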
  • Frame relay (FR). Data link layer technique for moving data frames from one computer/router to another computer/router.
  • Full polling. Used to check the status and any configuration changes of the nodes that are managed by an EMS server.
  • Functional specification document. Requirements document produced by software developers to represent customer needs.
  • Functional testing. Testing in which a program P is viewed as a function that transforms the input vector Xi into an output vector Yi such that Yi = P(Xi). The two key concepts in functional testing are as follows: (i) precisely identify the domain of each input and output variable and (ii) select values from a data domain having important properties.
  • Function point (FP). Unit of measurement to express the amount of business functionality an information system provides to a user. Function points were defined in 1977 by Alan Albrecht at IBM.
  • Gantt chart. Popular bar chart to represent a project schedule.
  • Gold standard oracle. Scheme in which a previous version of an existing application system is used to generate expected results.
  • Graphical user interface test. Verifies the look-and-feel interface of an application system.
  • Handoff. Procedure for transferring the handling of a call from one base station to another base station.
  • Hash function. Algorithm that takes an input message of arbitrary length and produces a fixed-length code. The fixed-length output is called a hash, or a message digest, of the original input message.
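The fixed-length property is easy to observe with a standard library hash. The sketch below uses SHA-256 as the hash function; it is an illustrative choice, not one prescribed by the book:

```python
import hashlib

# SHA-256 maps messages of arbitrary length to a fixed 256-bit digest.
digest_short = hashlib.sha256(b"hi").hexdigest()
digest_long = hashlib.sha256(b"a" * 10_000).hexdigest()

# Both digests have the same fixed length: 64 hex characters = 256 bits.
```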
  • Hazard. State of a system or a physical situation which, when combined with certain environmental conditions, could lead to an accident or mishap. A hazard is a prerequisite for an accident.
  • High-availability tests. Verify the redundancy of individual hardware and software modules. The goal here is to verify that the system recovers gracefully and quickly from hardware and software failure without impacting the operation of the system. It is also known as fault tolerance.
  • High-level design document. Describes the overall system architecture.
  • Ideal test. If we can conclude, from the successful execution of a sample of the input domain, that there are no faults in the program, then the input sample constitutes an ideal test.
  • Implementation under test (IUT). Implementation subject to tests. An IUT can be a complete system or a component thereof.
  • Inappropriate action. Calculating a value in a wrong way, failing to assign a value to a variable, or calling a procedure with a wrong argument list.
  • Inappropriate path selection. If there is a faulty association between a program condition and a path, then a wrong path is selected, and this is called inappropriate path selection.
  • Infeasible path. Program path that can never be executed.
  • Inferred requirement. Anything that a system is expected to do but is not explicitly stated in the specification.
  • In-parameter-order testing (IPO testing). Combinatorial testing technique for the generation of test cases that satisfy pairwise coverage.
  • In-process metrics. Metrics that monitor the progress of the project and are used to steer its course.
  • Input vector. Collection of all data entities read by a program whose values must be fixed prior to entering the unit.
  • Inspection. Step-by-step peer group review of a work product, with each step checked against predetermined criteria.
  • Installability test. Ensures that the system can be correctly installed in the customer environment.
  • Integrity checking. Verifying whether or not data have been modified in transit.
  • Internet Protocol (IP). Routing protocol used for moving data across a packet-switched internetwork. The IP is a network layer protocol in the Internet protocol suite.
  • Internet Protocol Security (IPSec). Network layer security protocol which provides security features, including confidentiality, authentication, data integrity, and protection against data replay attacks.
  • Interoperability test. Verifies that the system can interoperate with third-party products.
  • Intersystem testing. Integration testing in which all the systems are connected together and tests are conducted from end to end.
  • Intrasystem testing. Low-level integration testing with the objective of putting the modules together to build a cohesive system. Intrasystem testing requires combining modules together within a system.
  • Ishikawa diagram. Also known as a fishbone diagram or cause-and-effect diagram, shows the causes of a certain event. It was first used by Kaoru Ishikawa in the 1960s and is considered one of the seven basic tools of quality management: histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram.
  • JUnit. Automated testing framework used by developers who implement program units and unit tests in the Java programming language.
  • Key process area (KPA). A CMM maturity level contains key process areas. KPAs are expected to achieve goals and are organized by common features.
  • Lean. Methodology that is used to speed up and reduce the cost of a manufacturing process by removing waste. The principle to eliminate waste has been borrowed from the ideas of Taiichi Ohno -- the father of the Toyota Production System. The lean development methodology is summarized by the following seven principles: eliminate waste, amplify learning, decide as late as possible, deliver as fast as possible, empower the team, build integrity, and see the whole. The lean process is a translation of the lean manufacturing principles and practices to the software development domain.
  • Light emitting diode (LED) test. Verifies the functioning of the LED indicator status. The LED tests are designed to ensure that the visual operational status of the system and the submodules is correct.
  • Lightweight Directory Access Protocol (LDAP). Protocol derived from the X.500 standard and defined in Request for Comments (RFC) 2251. LDAP is similar to a database but can contain more descriptive information. LDAP is designed to provide fast response to high-volume lookups.
  • Lightweight Extensible Authentication Protocol (LEAP). Cisco's wireless EAP, which provides username/password-based authentication between a wireless client and an access control server.
  • Load and scalability test. Exercises the system with multiple actual or virtual users and verifies whether it functions correctly under tested traffic levels, patterns, and combinations.
  • Local architecture. Test architecture where the PCOs are defined at the upper and lower service boundaries of the IUT.
  • Logging and tracing test. Verifies the configuration and operation of logging and tracing functionalities.
  • Logic fault. Fault that causes a program to produce incorrect results independent of the resources available; it stems from inherent deficiencies in the program, not from a lack of resources. The deficiencies take the form of requirement faults, design faults, and construction faults.
  • Lower tester. Tester entity responsible for the control and observation at the appropriate PCO either below the IUT or at a remote site.
  • Low-level design document. Detailed specification of the software modules within the architecture.
  • Maintainability. Aptitude of a system to undergo repair and evolution.
  • Management information base (MIB). Database used to manage the devices in a communication network.
  • Manufacturing view of quality. Quality is seen as conforming to requirements. The concept of process plays a key role in the manufacturing view.
  • Marketing beta. Beta testing that builds early awareness and interest in the product among potential buyers.
  • Mean time between failure (MTBF). Expected time between two successive failures of a system. Technically, MTBF should be used only in reference to repairable items, while MTTF should be used for nonrepairable items. However, MTBF is commonly used for both repairable and nonrepairable items.
  • Mean time to failure (MTTF). Mean time expected until the first failure of a piece of equipment. MTTF is a statistical value and is meant to be the mean over a long period of time and a large number of units. MTTF is a basic measure of reliability for nonrepairable systems.
  • Mean time to repair (MTTR). Amount of time between when something breaks and when it has been repaired and is fully functional again. MTTR represents the amount of time that the device was unable to provide service.
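The measures above combine into the common steady-state availability formula, availability = MTTF / (MTTF + MTTR). The figures below are an invented example, not from the book:

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability: the proportion of time the system
    is in a functioning condition."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A system that runs 990 hours between failures and takes 10 hours
# to repair is available 99% of the time.
a = availability(990, 10)
```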
  • Milestone. Major checkpoint, or a subgoal, identified on the project or testing schedule.
  • Mishap. Also called an accident, an unintended event that results in death, injury, illness, damage or loss of property, or harm to the environment.
  • Missing control flow paths. There is no code to handle a certain condition. This occurs when we fail to identify the condition and, thereby, fail to specify a computation in the form of a path.
  • Module test. Verifies that all the modules function individually as desired within the system. The intent here is to verify that the system along with the software that controls these modules operates as specified in the requirement specification.
  • Mutation analysis. Involves the mutation of source code by introducing statements or modifying existing statements in small ways. The idea is to help the tester develop effective tests or locate weaknesses in the test data or in the code that are seldom or never accessed during execution.
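A minimal sketch of the idea, using an invented unit rather than an example from the book: a single operator is mutated, and a test case "kills" the mutant if the original and the mutant disagree on it.

```python
def max_of_two(a, b):   # program under test
    return a if a >= b else b

def mutant(a, b):       # '>=' mutated to '<='
    return a if a <= b else b

# The test input (3, 1) kills the mutant: the original returns 3,
# the mutant returns 1, so a test suite containing it detects the change.
```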
  • Network element. Network node residing on a managed network and running an SNMP agent.
  • Network management station. Executes management applications that monitor and control network elements.
  • New technology LAN manager (NTLM). Authentication protocol used in various Microsoft network protocol implementations. NTLM employs a challenge–response mechanism for authentication in which clients are able to prove their identities without sending a password to the server.
  • Nondeterministic finite-state machine. FSM in which the next-state function is not solely determined by its present state and an input. An internal event too can cause a state transition. In addition, given an external input in some states, the next state of the FSM cannot be uniquely determined.
  • Off point. Given a boundary, an off point is a point away from the boundary. One must consider a domain of interest and its relationship with the boundary while identifying an off point.
  • On-line insertion and removal test. Verifies the individual module redundancy including the software that controls these modules.
  • On point. Given a domain boundary, an on point is a point on the boundary or very near the boundary but still satisfying the boundary inequality.
  • Open boundary. Boundary with data points that are not a part of the domain of interest.
  • Open domain. Domain with all its boundaries open with respect to the domain.
  • Operational profile. Set of operations supported by a system and their probability of occurrence. An operational profile is organized in the form of a tree structure, where each arc is labeled with an action and its occurrence probability.
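Test selection driven by an operational profile can be sketched as weighted random sampling. The operations and probabilities below are a hypothetical profile, not one from the book:

```python
import random

# Hypothetical operational profile: operations and occurrence probabilities.
PROFILE = {"query": 0.6, "update": 0.3, "delete": 0.1}

def next_operation(rng):
    """Draw the next test operation according to the profile, so that
    testing effort mirrors expected field usage."""
    ops, weights = zip(*PROFILE.items())
    return rng.choices(ops, weights=weights, k=1)[0]

rng = random.Random(42)
sample = [next_operation(rng) for _ in range(1000)]
# 'query' dominates the sample, at roughly 60% of the draws
```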
  • Oracle. Mechanism that verifies the correctness of program outputs. An oracle can be a specification, an expert, a body of data, or another program.
  • Original equipment manufacturer (OEM). Company that builds products or components which are used in other products sold by another company, often called a value-added reseller, or VAR. An OEM typically builds a product to an order based on the designs of the VAR. For example, hard drives in a computer system may be manufactured by a corporation separate from the company that assembles and markets computers.
  • Orthogonal array (OA) testing. Combinatorial testing technique for selecting a set of test cases from a universe of tests and making testing efficient and effective. OA testing is based on a special matrix called a Latin square, in which the same symbol occurs exactly once in each row and column.
  • Orthogonal defect classification (ODC). Scheme for classifying software defects and guidance for analyzing the classified aggregate defect data.
  • Packet data serving node (PDSN). Provides access to the Internet, intranets, and application servers for mobile stations utilizing a CDMA2000 radio access network (RAN). Acting as an access gateway, a PDSN entity provides simple IP and mobile IP access, foreign agent support, and packet transport for virtual private networking. It acts as a client for an authentication, authorization, and accounting (AAA) server and provides mobile stations with a gateway to the IP network.
  • Pairwise coverage. Requires that, for a given number of input parameters to the system, each possible combination of values for any pair of parameters be covered by at least one test case. It is a special case of combinatorial testing.
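A minimal sketch of checking pairwise coverage (the parameter names and values are hypothetical): every pair of values for every pair of parameters must appear in at least one test case. Note that four tests suffice for three two-valued parameters, whereas exhaustive testing would need eight.

```python
from itertools import combinations, product

# Hypothetical system parameters, each with a small set of values.
params = {
    "os": ["linux", "windows"],
    "browser": ["chrome", "firefox"],
    "proto": ["ipv4", "ipv6"],
}

def uncovered_pairs(tests, params):
    # Pairwise coverage requires every (param_i, value_i, param_j, value_j)
    # combination to appear in at least one test case.
    names = list(params)
    required = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            required.add((a, va, b, vb))
    for t in tests:
        for a, b in combinations(names, 2):
            required.discard((a, t[a], b, t[b]))
    return required

# Four tests cover all pairs for 2x2x2 parameters.
tests = [
    {"os": "linux",   "browser": "chrome",  "proto": "ipv4"},
    {"os": "linux",   "browser": "firefox", "proto": "ipv6"},
    {"os": "windows", "browser": "chrome",  "proto": "ipv6"},
    {"os": "windows", "browser": "firefox", "proto": "ipv4"},
]
```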
  • Pairwise testing. Integration testing technique in which only two interconnected systems are tested in an overall system. The purpose of pairwise testing is to ensure that the two systems under consideration can function together, assuming that other systems within the overall environment behave as expected.
  • Parametric oracle. Scheme in which an algorithm is used to extract some parameters from the actual outputs and compare them with the expected parameter values.
  • Pareto principle. States that 80% of the problems can be fixed with 20% of the entire effort. It is also known as the 80–20 rule.
  • Partition testing. Testing technique in which the input domain of the program is divided into nonoverlapping subdomains; next, one test input is selected from each subdomain. The basic assumption is that all the elements within a subdomain essentially cause the system to behave the same way, so that any element of a subdomain is as likely to expose an error in the program as any other element of the same subdomain.
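As an illustrative sketch (the subdomains chosen here are hypothetical), partition testing of `abs()` might divide its input domain into negatives, zero, and positives, and select one input from each:

```python
import random

# Hypothetical nonoverlapping subdomains of the input domain of abs().
# Under the partition-testing assumption each subdomain is homogeneous,
# so one test input per subdomain suffices.
subdomains = {
    "negative": lambda: random.randint(-100, -1),
    "zero":     lambda: 0,
    "positive": lambda: random.randint(1, 100),
}

def partition_tests():
    # Select exactly one test input from each subdomain.
    return {name: pick() for name, pick in subdomains.items()}

tests = partition_tests()
results = {name: abs(x) >= 0 for name, x in tests.items()}
```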
  • Path. Sequence of statements in a program or a program unit. Structurally, a path is a sequence of statements from the initial node of a CFG to one of the terminating nodes.
  • Path predicate. Set of predicates associated with a path.
  • Path predicate expression. Interpreted path predicate.
  • Perfect oracle. Scheme in which the system (IUT) is tested in parallel with a trusted system that accepts every input specified for the IUT and "always" produces the correct result.
  • Performance fault. Causes a program to fail to produce the desired output within specified resource limitations.
  • Performance test. Determines how actual system performance compares to predicted system performance. Tests are designed to verify response time, execution time, throughput, resource utilization, and traffic rate.
  • Ping. Computer network tool used to test whether a particular host is reachable across an IP network. Ping works by sending ICMP "echo request" packets to the target host and listening for ICMP "echo response" replies. Using interval timing and response rate, ping estimates the round-trip time and packet loss rate between hosts.
  • Point of control and observation (PCO). Well-designated point of interaction between a system and its users.
  • Point-to-point protocol (PPP). Data link protocol commonly used to establish a direct connection between two nodes over serial cable, phone line, trunk line, and cellular telephone.
  • Power cycling test. Verifies that a system consistently boots and becomes operational after a power cycle.
  • Power of test methods. Used to compare test methods. The notion of at least as good as is an example of comparing the power of test methods. A test method M is at least as good as a test method N if, whenever N reveals a fault in a program P by generating a test, method M reveals the same fault by generating the same test or another test.
  • Power-on self-test (POST). Determines whether or not the hardware components are in their proper states to run the software.
  • Predicate. Logical function evaluated at a decision point.
  • Predicate coverage. For each selected path, exploring all possible combinations of truth values of the conditions affecting that path.
  • Predicate interpretation. Symbolically substituting operations along a path in order to express the predicates solely in terms of the input vector and a constant vector.
  • Product view of quality. The central hypothesis in this view is: if a product is manufactured with good internal properties, then it will have good external qualities.
  • Program mutation. Making a small change to a program to obtain a new program called a mutant. A mutant can be equivalent or inequivalent to the original program. Program mutation is used in evaluating the adequacy of tests.
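A minimal sketch of program mutation (hypothetical functions, not from the book): a single relational operator is changed to produce a mutant, and a test set is judged adequate only if some test distinguishes ("kills") the mutant.

```python
def max_of(a, b):
    # Original program.
    return a if a > b else b

def max_of_mutant(a, b):
    # Mutant: the relational operator '>' replaced by '<'
    # (a small, single-token change to the original).
    return a if a < b else b

tests = [(1, 2), (2, 1), (0, 0)]

# A test kills the mutant if the mutant's output differs from the
# original's on that test; mutants that survive every test indicate
# an inadequate test set (or an equivalent mutant).
killed = any(max_of(a, b) != max_of_mutant(a, b) for a, b in tests)
```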
  • Protected extensible authentication protocol (PEAP). Method to securely transmit authentication information, including passwords, over a wireless network.
  • Quality assurance (QA). (i) A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements and (ii) a set of activities designed to evaluate the process by which products are developed or manufactured.
  • Quality circle (QC). Volunteer group of workers, usually members of the same department, who meet regularly to discuss the problems and make presentations to management with their ideas to solve the problems. Quality circles were started in Japan in 1962 by Kaoru Ishikawa as another method of improving quality. The movement in Japan was coordinated by the Union of Japanese Scientists and Engineers (JUSE).
  • Quality control. Set of activities designed to evaluate the quality of developed or manufactured products.
  • Quality criterion. Attribute of a quality factor that is related to software development. For example, modularity is an attribute of the architecture of a software system.
  • Quality factor. Behavioral characteristic of a system. Some examples of high-level quality factors are correctness, reliability, efficiency, testability, portability, and reusability.
  • Quality management. The focus of a quality management group is to ensure process adherence and customize software development processes.
  • Quality metric. Measure that captures some aspect of a quality criterion. One or more quality metrics are associated with each criterion.
  • Quick polling. Used to check whether a network element is reachable by doing a ping on the node using the SNMP Get() operation.
  • Radio access network (RAN). Part of a mobile telecommunication system. It implements a radio access technology. Conceptually, it lies between the mobile phones and the core network (CN).
  • Random testing. Test inputs are selected randomly from the input domain of the system.
  • Referencing a variable. A variable is said to be referenced if the value held in the variable's memory location is fetched.
  • Regression testing. Selective retesting of a system or a component to verify that modifications have not caused unintended effects and that the system or the component still complies with its specified requirements.
  • Regulatory test. Ensures that the system meets the requirements of government regulatory bodies.
  • Release note. Document that accompanies a build or a released software. A release note contains the following information: changes since the last build or release, known defects, defects fixed, and added features.
  • Reliability test. Measures the ability of the system to keep operating over an extended period of time.
  • Reliable criterion. A test selection criterion is reliable if and only if either all tests selected by the criterion are successful or no test selected by the criterion is successful.
  • Remote architecture. Architecture where the IUT does not have a PCO at the upper service boundary and no direct access to the lower service boundary is available.
  • Remote authentication dial-in user service (RADIUS). AAA protocol for applications such as network access and IP mobility.
  • Requirement. Description of the needs or desires of users that a system is supposed to implement.
  • Reset sequence. Input sequence that puts an implementation to its initial state independent of the state that the implementation is in before the reset sequence is applied.
  • Rework cost. Cost of fixing the known defects.
  • Robustness test. Verifies how robust a system is, that is, how gracefully it behaves in error situations or how it handles a change in its operational state.
  • Root cause analysis (RCA). Class of problem solving methods aimed at identifying the root causes of problems. The practice of RCA is predicated on the belief that problems are best solved by attempting to correct or eliminate root causes, as opposed to merely addressing the immediately obvious symptoms.
  • Safety assurance. A safety assurance program is established in an organization to eliminate hazards or reduce their associated risks to an acceptable level.
  • Safety critical software system. Software system whose failure can cause loss of life.
  • Sandwich integration. Testing technique in which the software modules are integrated using a mix of top-down and bottom-up techniques.
  • Scaffolding. Computer programs and data files built to support software development and testing but not intended to be included in the final product. Scaffolding code simulates the functions of components that do not exist yet and allows the program to execute. Scaffolding code involves the creation of stubs and test drivers.
  • Scalability test. Verifies whether the system can scale up to its engineering limits.
  • Scrum. Project management method for software development. The approach was first described by Takeuchi and Nonaka in "The New Product Development Game" (Harvard Business Review, January/February 1986). It is an iterative, incremental process for developing any product or managing any work.
  • Secure shell (SSH). Set of standards and an associated network protocol that allow establishing a secure channel between a local and a remote computer. SSH is typically used to log in to a remote machine and execute commands.
  • Secure socket layer (SSL). Protocol that provides endpoint authentication and communication privacy over the Internet using cryptography.
  • Security. Branch of computer science which deals with protecting computers, network resources, and information against unauthorized access, modification, and/or destruction.
  • Serviceability. Ability of technical support personnel to debug or perform root cause analysis in pursuit of solving a problem with a product. Serviceability is also known as supportability.
  • Shewhart cycle. Continuous improvement cycle known as plan, do, check, and act (PDCA). It is named after Walter Shewhart, who introduced the concept in his book Statistical Method from the Viewpoint of Quality Control (reprinted by Dover Publications, New York, 1986), and is also referred to as the Deming cycle after W. Edwards Deming.
  • Shifted boundary. Shifted boundary error is said to occur if the actual boundary is parallel to but not the same as the boundary of interest.
  • Shrink wrap. Material made of polymer plastic with a mix of polyesters. When heat is applied to this material, it decreases in size so that it forms a seal over whatever it was covering. The shrink wrap provides a tamper-evident seal that helps ensure freshness and discourage pilfering. Shrink wrap is commonly found on CDs, DVDs, software packages, and books.
  • Simple Network Management Protocol (SNMP). Part of the IP suite as defined by the Internet Engineering Task Force. The protocol is used by network management systems for monitoring network-attached devices for conditions that warrant administrative attention.
  • Simulator. Imitation of some real thing, state of affairs, or process. The act of simulating something generally entails representing certain key characteristics or behaviors of a selected physical or abstract system.
  • Six Sigma. Set of practices originally developed by Motorola to systematically improve processes by eliminating defects. The term Six Sigma refers to the ability of highly capable processes to produce output within specification. In particular, processes that operate with Six Sigma quality produce at defect levels below 3.4 defects per (one) million opportunities.
  • Softer handoff. Handoff procedure in which a user-level communication uses two sectors of a single base station simultaneously.
  • Soft handoff. Handoff procedure in which a user-level communication uses two base stations simultaneously.
  • Software image. Compiled software binary.
  • Software reliability. Failure intensity of a software system operating in a given environment.
  • Specification and Description Language (SDL). High-level specification language which is built around the following concepts: system, which is described hierarchically by elements called systems, blocks, channels, processes, services, signals, and signal routes; behavior, which is described using an extension of the FSM concept; data, which are described using the concept of abstract data types and commonly understood program variables and data structures; and communication, which is asynchronous.
  • Spiral model. Also known as the spiral life-cycle model, a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model was defined by Barry Boehm in his 1988 article "A Spiral Model of Software Development and Enhancement" (IEEE Computer, May 1988, pp. 61-72).
  • Spoilage. Metric that uses defect age and distribution to measure the effectiveness of testing.
  • Stakeholder. Person or organization that influences a system's behavior or that is impacted by the system.
  • Statement coverage. Selecting paths in such a manner that certain statements are covered by the execution of those paths. Complete statement coverage means selecting some paths such that their execution causes all statements to be covered.
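As a sketch of measuring statement coverage (the unit under test and the tracing approach are illustrative, not the book's), Python's `sys.settrace` hook can record which lines of a unit execute; one path covers only some statements, and adding paths completes the coverage:

```python
import sys

def triangle_kind(a, b, c):
    # Hypothetical unit under test.
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

executed = set()

def tracer(frame, event, arg):
    # Record each executed line (as an offset) inside triangle_kind.
    if event == "line" and frame.f_code.co_name == "triangle_kind":
        executed.add(frame.f_lineno - frame.f_code.co_firstlineno)
    return tracer

sys.settrace(tracer)
triangle_kind(3, 3, 3)   # executes only the equilateral path
sys.settrace(None)
partial = len(executed)

sys.settrace(tracer)
triangle_kind(3, 3, 4)   # isosceles path
triangle_kind(3, 4, 5)   # scalene path; together, all statements covered
sys.settrace(None)
```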
  • Static unit testing. Non-execution-based unit testing. In static unit testing, a programmer does not execute the unit; rather, it involves formal review or verification of code.
  • Statistical oracle. Special case of parametric oracle in which statistical characteristics of the actual test results are verified.
  • Statistical testing. Testing technique which uses a formal experimental paradigm for random testing according to a usage model of the software. In statistical testing a model is developed to characterize the population of uses of the software, and the model is used to generate a statistically correct sample of all possible uses of the software.
  • Stress test. Evaluates and determines the behavior of a software component when the offered load is in excess of its designed capacity.
  • Stub. Dummy subprogram that replaces a module that is called by the module to be tested. A stub does minimal data manipulation, such as print verification of the entry, and returns control to the unit under test.
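A minimal sketch of a stub paired with a driver (all names hypothetical): the stub stands in for a database module the unit depends on, logs the entry, returns a canned value, and hands control back to the unit under test.

```python
def lookup_price(item_id):
    # Stub replacing a database module called by the unit under test.
    # Minimal data manipulation: print verification of the entry,
    # return a canned value, and return control to the unit.
    print(f"stub: lookup_price({item_id!r}) called")
    return 10.0

def total_cost(item_id, quantity):
    # Unit under test: its dependency is satisfied by the stub.
    return lookup_price(item_id) * quantity

def test_driver():
    # Driver: invokes the unit, passes inputs, compares the actual
    # outcome with the expected outcome, and reports the result.
    actual = total_cost("widget", 3)
    expected = 30.0
    return "pass" if actual == expected else "fail"
```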
  • Sustaining phase. Optimizing and refining software that is working and focusing much more solidly on customers and competitors to ensure that one does not lose what has been acquired.
  • Sustaining test engineer. Test engineer responsible for testing the product in its sustaining phase.
  • System integration test (SIT). Testing phase in which software components, hardware components, or both are combined and tested to evaluate their interactions.
  • System resolution test. Probes to provide definite diagnostic answers to specific requirements.
  • System testing. Comprehensive testing undertaken to validate an entire system and its characteristics based on the requirements and the design.
  • Technical beta. Testing conducted to obtain feedback about the usability of the product in a real environment with different configurations. The idea is to obtain feedback from a limited number of users who commit considerable amount of time and thought to their evaluation.
  • Telnet. Network-based application that is used to provide user-oriented command line login sessions between hosts on the Internet.
  • Testability requirement. Requirement that it is possible to construct a test objective which will determine if a system property has been satisfied.
  • Test adequacy. Goodness of a test. If a test does not reveal any fault in a program, it does not mean that there are no faults in the program. Therefore, it is important to evaluate the goodness of a test.
  • Test architecture. Abstract architecture described by identifying the points closest to the IUT at which control and observation are specified. The abstract test architectures can be classified into four major categories: local, distributed, coordinated, and remote.
  • Test automation. Using test tools to execute tests with little or no human intervention.
  • Test case. Pair of input and the expected outcome. A test case covers a specific test objective.
  • Test case design yield (TCDY). Commonly used metric to measure the test case design effectiveness.
  • Test case effectiveness. Measure of the quality of test cases in terms of their fault revealing capability.
  • Test case escaped. Sometimes defects are found in the testing cycle for which no test cases were designed. New test cases designed for those defects are called escaped test cases.
  • Test case library. Compiled library of reusable test steps of basic utilities that are used as building blocks to facilitate the development of automated test scripts.
  • Test coordination procedure. Set of rules to coordinate the actions of the upper and the lower testers.
  • Test cycle. Partial or total execution of all the test suites planned for a given system testing phase. System testing involves at least one test cycle.
  • Test data. Element of the input domain of a program. Test data are selected by considering some selection criteria.
  • Test-driven development (TDD). Software development methodology in which programmers write unit tests before the production code.
  • Test driver. Program that invokes a unit under test, passes inputs to the unit under test, compares the actual outcome with the expected outcome from the unit, and reports the ensuing test result.
  • Test effectiveness. Measure of the quality of the testing effort.
  • Test effort. Metric specifying the cost and the time required to create and execute a test case in person-days.
  • Test environment. Setting in which system tests are executed. It is also known as a test bed.
  • Test event. Atomic interaction between the IUT and an upper or lower tester.
  • Test first. Software development methodology in which the programmers write unit tests before the code.
  • Testing maturity model (TMM). Gives guidance concerning how to improve a test process. The maturity of a test process is represented in five levels, or stages, namely, 1–5. Each stage is characterized by the concepts of maturity goals, supporting maturity goals, and activities, tasks, and responsibilities (ATRs).
  • Testing and test control notation (TTCN). Programming language dedicated to testing of communication protocols and web services. Up to version 2 the language was unconventionally written in the form of tables, and the language used to be called Tree and Tabular Combined Notation (TTCN) and was renamed to Testing and Test Control Notation in version 3.
  • Test management protocol. Protocol used to implement test coordination procedures by using test management protocol data units (TM-PDUs) in the coordination architecture.
  • Test objective. Description of what needs to be verified in order to ensure that a specific requirement is implemented correctly.
  • Test oracle. Can decide whether or not a test case has passed. An oracle provides a method to (i) generate expected results for the test inputs and (ii) compare the expected results with the actual results of the implementation under test.
  • Test predicate. Description of the conditions or combination of conditions relevant to the correct operation of a program.
  • Test prioritization. Ordering the execution of test cases according to certain criteria.
  • Test process. Certain manner of performing activities related to defect detection.
  • Test process improvement model (TPI model). Allows one to evaluate the maturity levels of test processes. The current status of a test process is evaluated from 20 viewpoints, known as key areas. The status of a test process with respect to a key area is represented in terms of one of four levels of maturity -- A, B, C, and D. Level A is the lowest level of maturity, and maturity level ascends from A to D.
  • Test purpose. Specific description of the objective of the corresponding test case.
  • Test selection. Carefully selecting a subset of the test suites on the basis of certain criteria. A chosen subset of the test suites is used to perform regression testing.
  • Test selection criterion. Property of a program, a specification, or a data domain.
  • Test suite. Group of test cases that can be executed as a package in a particular sequence. Test suites are usually related by the area of the system that they exercise, by their priority, or by content.
  • Test tool. Hardware or software product that replaces or enhances some aspect of human activity involved in testing.
  • Test vector. Also called test input vector, an instance of the input to a program.
  • Tilted boundary. Error that occurs if the actual boundary intersects with the intended boundary.
  • Top-down integration. A kind of integration testing technique in which testing starts at the topmost module of the program, often called the "main program," and works toward the outermost branches of the visibility tree, gradually adding modules as integration proceeds.
  • Total quality control (TQC). Management approach for an organization centered on quality and based on the participation of all its members and aiming at long-term success through customer satisfaction and benefits to all members of the organization and to the society. Total quality control was the key concept of Armand Feigenbaum's 1951 book, Quality Control: Principles, Practice, and Administration. Republished in 2004 as Total Quality Control, McGraw-Hill, New York.
  • Traceability matrix. Allows one to make a mapping between requirements and test cases both ways.
  • Transcendental view of quality. Quality that can be recognized through experience but not defined in some tractable form.
  • Transfer sequence. Minimum-length input sequence that brings an implementation from its initial state into a given state.
  • Transition tour. State transitions defined in an FSM specification are executed at least once by applying an input sequence to an implementation, starting from the initial state of the FSM. Such an input sequence is called a transition tour of the FSM.
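As an illustrative sketch (the FSM below is hypothetical), a transition tour can be checked by applying an input sequence from the initial state and verifying that every defined transition fires at least once:

```python
# Hypothetical FSM: state -> {input: (next_state, output)}.
fsm = {
    "s0": {"a": ("s1", "x"), "b": ("s0", "y")},
    "s1": {"a": ("s0", "y"), "b": ("s1", "x")},
}

def run(fsm, inputs, start="s0"):
    # Apply the input sequence and record which transitions fire.
    state, fired = start, []
    for i in inputs:
        nxt, _out = fsm[state][i]
        fired.append((state, i))
        state = nxt
    return fired

def is_transition_tour(fsm, inputs, start="s0"):
    # A transition tour exercises every defined transition at least once.
    all_transitions = {(s, i) for s, row in fsm.items() for i in row}
    return all_transitions <= set(run(fsm, inputs, start))

tour = ["b", "a", "b", "a"]  # s0-b->s0, s0-a->s1, s1-b->s1, s1-a->s0
```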
  • Transmission Control Protocol (TCP). Core protocol of the IP suite. Applications on networked hosts can create connections with one another using the TCP; data segments are transmitted over a TCP connection for higher reliability. The protocol guarantees reliable and in-order delivery of data segments. TCP supports many of the Internet's popular applications, including the World Wide Web, e-mail, and secure shell.
  • Transport layer security (TLS). Provides endpoint authentication and communication privacy over the Internet using cryptography.
  • Tunneled transport layer security (TTLS). Similar to the TLS protocol, but client authentication is extended after a secure transport connection has been established.
  • Undefinition of a variable. A variable is said to be undefined if the variable's memory location holds a value which is not meaningful anymore.
  • Unique input–output sequence (UIO sequence). Essentially an input sequence such that the corresponding output sequence uniquely identifies the state that an implementation was in before the UIO sequence was applied.
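A minimal sketch of verifying a UIO sequence (the FSM is hypothetical): an input sequence is a UIO for a state if no other state of the machine produces the same output sequence in response to it.

```python
# Hypothetical FSM: state -> {input: (next_state, output)}.
fsm = {
    "s0": {"a": ("s1", "0"), "b": ("s0", "1")},
    "s1": {"a": ("s0", "1"), "b": ("s1", "1")},
}

def output_seq(fsm, state, inputs):
    # Outputs produced when the input sequence is applied in `state`.
    outs = []
    for i in inputs:
        state, o = fsm[state][i]
        outs.append(o)
    return "".join(outs)

def is_uio(fsm, state, inputs):
    # `inputs` is a UIO sequence for `state` if every other state
    # yields a different output sequence for the same inputs.
    target = output_seq(fsm, state, inputs)
    return all(output_seq(fsm, s, inputs) != target
               for s in fsm if s != state)
```

Here the single input "a" distinguishes s0 (output "0") from s1 (output "1"), whereas "b" produces "1" from both states and so identifies neither.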
  • Unified modeling language (UML). Standardized specification language for object modeling. UML is a general-purpose modeling language that includes a graphical notation used to create an abstract model of a system.
  • Unit. Program unit or module that may be viewed as a piece of code implementing a "low"-level function.
  • Unit testing. Testing a program unit in isolation. Unit testing is performed by the programmer who wrote the program unit.
  • Unit under test. Program unit that is being tested in the context of an emulated environment.
  • Upgrade/downgrade test. Verifies that the system software build can be upgraded or downgraded.
  • Upper tester. Tester entity that controls and observes the upper service boundary of the IUT.
  • Usability test. Means of measuring how well people can use some human-made object, such as a web page, a computer interface, a document, or a device, for its intended purpose.
  • Usage profile. Software profile that characterizes operational use of a software system. Operational use is the intended use of the software in the intended environment.
  • User acceptance testing (UAT). Conducted by the customer to ensure that the system satisfies the contractual acceptance criteria.
  • User view of quality. Extent to which a product meets user needs and expectations.
  • Valid criterion. A test selection criterion is valid if and only if whenever a program under test contains a fault, the criterion selects a test that reveals the fault.
  • Validation. Process of ensuring that the software meets its customer's expectations.
  • Value-based view of quality. The central idea in the value-based view is how much a customer is willing to pay for a certain level of quality.
  • Verdict. A test verdict is a statement of pass, fail, or inconclusive that is associated with a test case. Pass means that the observed outcome satisfies the test purpose and is completely valid with respect to the requirement. Fail means that the observed outcome is invalid with respect to the requirement. An inconclusive verdict means that the observed outcome is valid with respect to the requirement but inconclusive with respect to the test purpose.
  • Verification. Process of ensuring the correspondence of an implementation phase of a software development process with its specification.
  • Virtual circuit (VC). Communication arrangement in which data from a source user may be passed to a destination user over more than one real communication circuit during a single period of communication; the switching is hidden from the users. A permanent virtual circuit (PVC) is a virtual circuit established for repeated use between the same data terminal equipment (DTE). In a PVC, the long-term association is identical to the data transfer phase of a virtual call. Permanent virtual circuits eliminate the need for repeated call setup and clearing. On the other hand, switched virtual circuits (SVCs) are generally set up on a per-call basis and are disconnected when calls are terminated.
  • Virus. Software component that is capable of spreading rapidly to a large number of computers but cannot do so all by itself. It has to spread using the assistance of another program.
  • Walkthrough. Review where a programmer leads a review team through a manual or simulated execution of the product using predefined scenarios.
  • Waterfall model. Sequential software development model in which development is seen as flowing steadily downward -- like a waterfall -- through the phases of requirements analysis, design, implementation, testing, integration, and maintenance. The origin of the term "waterfall" is often said to be a 1970 article by W. W. Royce, "Managing the Development of Large Software Systems: Concepts and Techniques" (Proceedings of WESCON, August 1970, pp. 1-9; reprinted in Proceedings of ICSE, Monterey, CA, 1987, pp. 328-338).
  • White-box testing. Testing methodology in which one primarily takes into account the internal mechanisms, such as code and program logic, of a system or component.
  • Worm. Software component that is capable of, under its own means, infecting a computer system.
  • W-set. Set of input sequences for an FSM. When the set of inputs is applied to an implementation of the FSM in an intended state, one expects to observe outputs which uniquely identify the state of the implementation.
  • Zero-day attack. Presents a new and particularly serious kind of threat. Developed specifically to exploit software vulnerabilities before patches are available, these attacks are not recognized by traditional security products: they enter a network undetected, giving absolutely no time to prepare for a defense.