Business Research Methods 12e by Cooper, Schindler


Business Research Methods 12e by Cooper, Schindler is the 12th edition of the textbook authored by Donald R. Cooper, Florida Atlantic University, and Pamela S. Schindler, Wittenberg University, and published in 2014 by McGraw-Hill/Irwin, a business unit of The McGraw-Hill Companies, Inc., New York, NY.

  • A priori contrasts. A special class of tests used in conjunction with the F test that is specifically designed to test the hypotheses of the experiment or study (in comparison to post hoc or unplanned tests).
  • Accuracy. The degree to which bias is absent from the sample -- the underestimators and the overestimators are balanced among members of the sample (i.e., no systematic variance).
  • Action research. A methodology with brainstorming followed by sequential trial-and-error to discover the most effective solution to a problem; succeeding solutions are tried until the desired results are achieved; used with complex problems about which little is known.
  • Active factors. Those independent variables (IV) the researcher can manipulate by causing the subject to receive one treatment level or another.
  • Activity analysis. See process analysis.
  • Administrative question. A measurement question that identifies the participant, interviewer, interview location, and conditions (nominal data).
  • Alternative hypothesis (HA). That a difference exists between the sample statistic and the population parameter to which it is compared; the logical opposite of the null hypothesis used in significance testing.
  • Ambiguities and paradoxes. A projective technique (imagination exercise) in which participants imagine a brand applied to a different product (e.g., a Tide dog food or Marlboro cereal), and then describe its attributes and position.
  • Analogy. A rhetorical device that compares two different things to highlight a point of similarity.
  • Analysis of variance (ANOVA). Tests the null hypothesis that the means of several independent populations are equal; test statistic is the F ratio; used when you need k-independent-samples tests; a worked sketch appears below.
  • Applied research. Research that addresses existing problems or opportunities.
  • Arbitrary scales. Universal practice of ad hoc scale development used by instrument designers to create scales that are highly specific to the practice or object being studied.
  • Area chart. A graphical presentation that displays total frequency, group frequency, and time series data; a.k.a. stratum chart or surface chart.
  • Area sampling. A cluster sampling technique applied to a population with well-defined political or natural boundaries; population is divided into homogeneous clusters from which a single-stage or multistage sample is drawn.
  • Argument. Statement that explains, interprets, defends, challenges, or explores meaning.
  • Artifact correlations. Occur when distinct subgroups in the data combine to give the impression of a single correlation.
  • Association. The process used to recognize and understand patterns in data and then used to understand and exploit natural patterns.
  • Asymmetrical relationship. A relationship in which we postulate that change in one variable (IV) is responsible for change in another variable (DV).
  • Attitude. A learned, stable predisposition to respond to oneself, other persons, objects, or issues in a consistently favorable or unfavorable way.
  • Audience. Characteristics and background of the people or groups for whom the secondary source was created; one of the five factors used to evaluate the value of a secondary source.
  • Audience analysis. An analysis of the attendees at a presentation through advance conversations or psychological profiling of age, size, education/knowledge level, experience, gender, diversity, company culture, decision-making roles, and individual attitudes, needs, and motivations.
  • Auditory learners. Audience members who learn through listening; represent about 20 to 30 percent of the audience; implies the need to include stories and examples in research presentations.
  • Authority. The level of data and the credibility of a source as indicated by the credentials of the author and publisher; one of five factors used to evaluate the value of a secondary source.
  • Authority figure. A projective technique (imagination exercise) in which participants are asked to imagine that the brand or product is an authority figure and to describe the attributes of the figure.
  • Automatic interaction detection (AID). A data partitioning procedure that searches up to 300 variables for the single best predictor of a dependent variable.
  • Average linkage method. Evaluates the distance between two clusters by first finding the geometric center of each cluster and then computing distances between the two centers.
  • Backward elimination. Sequentially removing the variable from a regression model that changes R2 the least; see also forward selection and stepwise selection.
  • Balanced rating scale. Has an equal number of categories above and below the midpoint or an equal number of favorable/unfavorable response choices.
  • Band. See prediction and confidence bands.
  • Bar chart. A graphical presentation technique that represents frequency data as horizontal or vertical bars; vertical bars are most often used for time series and quantitative classifications (histograms, stacked bar, and multiple-variable charts are specialized bar charts).
  • Bar code. Technology employing labels containing electronically read vertical bar data codes.
  • Basic research. See pure research.
  • Bayesian statistics. Uses subjective probability estimates based on general experience rather than on data collected. (See "Decision Theory Problem" at the Online Learning Center.)
  • Benefit chain. See laddering.
  • Beta weights. Standardized regression coefficients in which the size of the number reflects the level of influence X exerts on Y.
  • Bibliography (bibliographic database). A secondary source that helps locate a book, article, photograph, etc.
  • Bivariate correlation analysis. A statistical technique to assess the relationship of two continuous variables measured on an interval or ratio scale.
  • Bivariate normal distribution. Data are from a random sample in which two variables are normally distributed in a joint manner.
  • Blind. When participants do not know if they are being exposed to the experimental treatment.
  • Boxplot. An EDA technique; a visual image of the variable's distribution location, spread, shape, tail length, and outliers; a.k.a. box-and-whisker plot.
  • Branched question. A measurement question sequence determined by the participant's previous answer(s); the answer to one question assumes other questions have been asked or answered and directs the participant to answer specific questions that follow and skip other questions; branched questions determine question sequencing.
  • Brand mapping. A projective technique (type of semantic mapping) where participants are presented with different brands and asked to talk about their perceptions, usually in relation to several criteria. They may also be asked to spatially place each brand on one or more semantic maps.
  • Buffer question. A neutral measurement question designed chiefly to establish rapport with the participant (usually nominal data).
  • Business intelligence system (BIS). A system of ongoing information collection about events and trends in the technological, economic, political and legal, demographic, cultural, social, and competitive arenas.
  • Business research. A systematic inquiry that provides information to guide business decisions; the process of determining, acquiring, analyzing and synthesizing, and disseminating relevant business data, information, and insights to decision makers in ways that mobilize the organization to take appropriate actions that, in turn, maximize organizational performance.
  • Callback. Procedure involving repeated attempts to make contact with a targeted participant to ensure that the targeted participant is reached and motivated to participate in the study.
  • Cartoons or empty balloons. A projective technique where participants are asked to write the dialog for a cartoonlike picture.
  • Case. The entity or thing the hypothesis talks about.
  • Case study (case history). A methodology that combines individual and (sometimes) group interviews with record analysis and observation; used to understand events and their ramifications and processes; emphasizes the full contextual analysis of a few events or conditions and their interrelations for a single participant; a type of preexperimental design (one-shot case study).
  • Categorization. For this scale type, participants put themselves or property indicants in groups or categories; also, a process for grouping data for any variable into a limited number of categories.
  • Causal-explanatory study. A study that is designed to determine whether one or more variables explain the causes or effects of one or more outcome (dependent) variables.
  • Causal hypothesis. See explanatory hypothesis.
  • Causal-predictive study. A study that is designed to predict with regularity how one or more variables cause or affect one or more outcome (dependent) variables to occur.
  • Causal study. Research that attempts to reveal a causal relationship between variables. (A produces B or causes B to occur.)
  • Causation. Situation where one variable leads to a specified effect on the other variable.
  • Cell. In a cross-tabulation, a subgroup of the data created by the value intersection of two (or more) variables; each cell contains the count of cases as well as the percentage of the joint classification.
  • Census. A count of all the elements in a population.
  • Central limit theorem. The sample means of repeatedly drawn samples will be distributed around the population mean; for sufficiently large samples (i.e., n ≥ 30), approximates a normal distribution; illustrated in a sketch below.
  • Central tendency. A measure of location, most commonly the mean, median, and mode.
  • Central tendency (error of). An error that results because the participant is reluctant to give extreme judgments, usually due to lack of knowledge.
  • Centroid. A term used for the multivariate mean scores in MANOVA.
  • Checklist. A measurement question that poses numerous alternatives and encourages multiple unordered responses; see multiple-choice, multiple-response scale.
  • Chi-square-based measures. Tests to detect the strength of the relationship between the variables tested with a chi-square test: phi, Cramer's V, and contingency coefficient C.
  • Chi-square test (χ2 test). A test of significance used for nominal and ordinal measurements.
  • Children's panel. A series of focus group sessions in which the same child may participate in up to three groups in one year, with each experience several months apart.
  • Chronologic interviewing. See sequential interviewing.
  • Clarity. A design principle that advocates the use of visual techniques that allow the audience to perceive meaning from the location of elements.
  • Classical statistics. An objective view of probability in which the hypothesis is rejected, or not, based on the sample data collected.
  • Classification question. A measurement question that provides sociological-demographic variables for use in grouping participants' answers (nominal, ordinal, interval, or ratio data).
  • Closed question/response. A measurement question that presents the participant with a fixed set of choices (nominal, ordinal, or interval data).
  • Cluster analysis. Identifies homogeneous subgroups of study objects or participants and then studies the data by these subgroups.
  • Cluster sampling. A sampling plan that involves dividing the population into subgroups and then draws a sample from each subgroup, a single-stage or multistage design.
  • Clustering. A technique that assigns each data record to a group or segment automatically by clustering algorithms that identify the similar characteristics in the data set and then partition them into groups.
  • Clutter. Verbal behaviors in a presentation that distract the audience; includes repetition of fillers such as "ah," "um," "you know," or "like."
  • Code of ethics. An organization's codified set of norms or standards of behavior that guide moral choices about research behavior; effective codes are regulative, protect the public interest, are behavior-specific, and are enforceable.
  • Codebook. The coding rules for assigning numbers or other symbols to each variable; a.k.a. coding scheme.
  • Coding. Assigning numbers or other symbols to responses so that they can be tallied and grouped into a limited number of categories.
  • Coefficient of determination (r2). The amount of common variance in X and Y, two variables in regression; the ratio of the line of best fit's error over that incurred by using the mean value of Y; see the regression sketch below.
  • Collinearity. Occurs when two independent variables are highly correlated; causes estimated regression coefficients to fluctuate widely, making interpretation difficult.
  • Communality. In factor analysis, the estimate of the variance in each variable that is explained by the factors being studied.
  • Communication approach. A study approach involving questioning or surveying people (by personal interview, telephone, mail, computer, or some combination of these) and recording their responses for analysis.
  • Communication study. The researcher questions the participants and collects their responses by personal or impersonal means.
  • Comparative scale. A scale in which the participant evaluates an object against a standard using a numerical, graphical, or verbal scale.
  • Compatibility. A design principle that encourages matching visual techniques that comprise the form of the message to its content and the meaning of that content.
  • Component sorts. A projective technique in which participants are presented with flash cards containing component features and asked to create new combinations.
  • Computer-administered telephone survey. A telephone survey via voice-synthesized computer questions; data are tallied continuously.
  • Computer-assisted personal interview (CAPI). A personal, face-to-face interview (IDI) with computer-sequenced questions, employing visualization techniques; real-time data entry possible.
  • Computer-assisted self-interview (CASI). Computer-delivered survey that is self-administered by the participant.
  • Computer-assisted telephone interview (CATI). A telephone interview with computer-sequenced questions and real-time data entry; usually in a central location with interviewers in acoustically isolated interviewing carrels; data are tallied continuously.
  • Concealment. A technique in an observation study in which the observer is shielded from the participant to avoid error caused by observer's presence; this is accomplished by one-way mirrors, hidden cameras, hidden microphones, etc.
  • Concept. A bundle of meanings or characteristics associated with certain concrete, unambiguous events, objects, conditions, or situations.
  • Conceptual scheme. The interrelationships between concepts and constructs.
  • Concordant. When a participant that ranks higher on one ordinal variable also ranks higher on another variable, the pairs of variables are concordant.
  • Confidence interval. The combination of interval range and degree of confidence.
  • Confidence level. The probability that the results will be correct.
  • Confidentiality. A privacy guarantee to retain validity of the research, as well as to protect participants.
  • Confirmatory data analysis. An analytical process guided by classical statistical inference in its use of significance and confidence.
  • Confounding variables (CFV). Two or more variables are confounded when their effects on a response variable cannot be distinguished from each other.
  • Conjoint analysis. Measures complex decision making that requires multiattribute judgments; uses input from nonmetric independent variables to secure part-worths that represent the importance of each aspect of the participant's overall assessment; produces a scale value for each attribute or property.
  • Consensus scaling. Scale development by a panel of experts evaluating instrument items based on topical relevance and lack of ambiguity.
  • Constant-sum scale. The participant allocates points to more than one attribute or property indicant, such that they total to 100 or 10; a.k.a. fixed-sum scale.
  • Construct. A definition specifically invented to represent an abstract phenomenon for a given research project.
  • Construct validity. See validity, construct.
  • Content analysis. A flexible, widely applicable tool for measuring the semantic content of a communication -- including counts, categorizations, associations, interpretations, etc. (e.g., used to study the content of speeches, ads, newspaper and magazine editorials, focus group and IDI transcripts); contains four types of items: syntactical, referential, propositional, and thematic; initial process is done by computer.
  • Content validity. See validity, content.
  • Contingency coefficient C. A measure of association for nominal, nonparametric variables; used with any-size chi-square table, the upper limit varies with table sizes; does not provide direction of the association or reflect causation.
  • Contingency table. A cross-tabulation table constructed for statistical testing, with the test determining whether the classification variables are independent.
  • Contrast. A design principle that advocates using high-contrast techniques to quickly draw audience attention to the main point.
  • Control. The ability to replicate a scenario and dictate a particular outcome; the ability to exclude, isolate, or manipulate the influence of a variable in a study; a critical factor in inference from an experiment, implies that all factors, with the exception of the independent variable (IV), must be held constant and not confounded with another variable that is not part of the study.
  • Control dimension. In quota sampling, a descriptor used to define the sample's characteristics (e.g., age, education, religion).
  • Control group. A group of participants that is not exposed to the independent variable being studied but still generates a measure for the dependent variable.
  • Control variable. A variable introduced to help interpret the relationship between variables.
  • Controlled test market. Real-time test of a product through arbitrarily selected distribution partners.
  • Controlled vocabulary. Carefully defined subject hierarchies used to search some bibliographic databases.
  • Convenience sample. Nonprobability sample in which element selection is based on ease of accessibility.
  • Convenience sampling. Nonprobability sampling in which researchers use any readily available individuals as participants.
  • Convergent interviewing. An IDI technique for interviewing a limited number of experts as participants in a sequential series of IDIs; after each successive interview, the researcher refines the questions, hoping to converge on the central issues in a topic area; sometimes called convergent and divergent interviewing.
  • Correlation. The relationship by which two or more variables change together, such that systematic changes in one accompany systematic changes in the other.
  • Correlational hypothesis. A statement indicating that variables occur together in some specified manner without implying that one causes the other.
  • Cramer's V. A measure of association for nominal, nonparametric variables; used with larger than 2 × 2 chi-square tables; does not provide direction of the association or reflect causation; ranges from zero to +1.0; computed in the chi-square sketch below.
  • Creativity session. Qualitative technique in which an individual activity exercise is followed by a sharing/discussion session, in which participants build on one another's creative ideas; often used with children; may be conducted before or during IDIs or group interviews; usually consists of drawing, visual compilation, or writing exercises.
  • Criterion-related validity. See validity, criterion-related.
  • Criterion variable. See dependent variable.
  • Critical incident technique. An IDI technique involving sequentially asked questions to reveal, in narrative form, what led up to an incident being studied; exactly what the observed party did or did not do that was especially effective or ineffective; the outcome or result of this action; and why this action was effective or what more effective action might have been expected.
  • Critical path method (CPM). A scheduling tool for complex or large research proposals that cites milestones and time involved between milestones.
  • Critical value. The dividing point(s) between the region of acceptance and the region of rejection; these values can be computed in terms of the standardized random variable due to the normal distribution of sample means.
  • Cross-sectional study. The study is conducted only once and reveals a snapshot of one point in time.
  • Cross-tabulation. A technique for comparing data from two or more categorical variables.
  • Cultural interview. An IDI technique that asks a participant to relate his or her experiences with a culture or subculture, including the knowledge passed on by prior generations and the knowledge participants have or plan to pass on to future generations.
  • Cumulative scale. A scale development technique in which scale items are tested based on a scoring system, and agreement with one extreme scale item results also in endorsement of all other items that take a less extreme position.
  • Custom-designed measurement questions. Measurement questions formulated specifically for a particular research project.
  • Custom researcher. Crafts a research design unique to the decision maker's dilemma.
  • Data. Information (attitudes, behavior, motivations, attributes, etc.) collected from participants or observations (mechanical or direct) or from secondary sources.
  • Data analysis. The process of editing and reducing accumulated data to a manageable size, developing summaries, looking for patterns, and applying statistical techniques.
  • Data case. See record.
  • Data entry. The process of converting information gathered by secondary or primary methods to a medium for viewing and manipulation; usually done by keyboarding or optical scanning.
  • Data field. A single element of data from all participants in a study.
  • Data file. A set of data records (all responses from all participants in a study).
  • Data mart. Intermediate storage facility that compiles locally required information.
  • Data mining. Applying mathematical models to extract meaningful knowledge from volumes of data contained within internal data marts or data warehouses; purpose is to identify valid, novel, useful, and ultimately understandable patterns in data.
  • Data preparation. The processes that ensure the accuracy of data and their conversion from raw form into categories appropriate for analysis; includes editing, coding, and data entry.
  • Data visualization. The process of viewing aggregate data on multiple dimensions to gain a deeper, intuitive understanding of the data.
  • Data warehouse. Electronic storehouse where vast arrays of collected integrated data are stored by categories to facilitate retrieval, interpretation, and sorting by data-mining techniques.
  • Database. A collection of data organized for computerized retrieval; defines data fields, data records, and data files.
  • Debriefing. Explains the truth to participants and describes the major goals of the research study and the reasons for using deception.
  • Deception. Occurs when participants are told only part of the truth or the truth is fully compromised to prevent biasing participants or to protect sponsor confidentiality.
  • Decision rule. The criterion for judging the attractiveness of two or more alternatives when using a decision variable.
  • Decision support system (DSS). Numerous elements of data organized for retrieval and use in decision making.
  • Decision variable. A quantifiable characteristic, attribute, or outcome on which a choice decision will be made.
  • Deduction. A form of reasoning in which the conclusion must necessarily follow from the reasons given; a deduction is valid if it is impossible for the conclusion to be false if the premises are true.
  • Demonstration. Presentation support technique using a visual presentation aid to show how something works.
  • Dependency techniques. Those techniques in which criterion or dependent variables and predictor or independent variables are present (e.g., multiple regression, MANOVA, discriminant analysis).
  • Dependent variable (DV). The variable measured, predicted, or otherwise monitored by the researcher; expected to be affected by a manipulation of the independent variable; a.k.a. criterion variable.
  • Descriptive hypothesis. States the existence, size, form, or distribution of some variable.
  • Descriptive statistics. Display characteristics of the location, spread, and shape of a data array.
  • Descriptive study. Attempts to describe or define a subject, often by creating a profile of a group of problems, people, or events, through the collection of data and the tabulation of the frequencies on research variables or their interaction; the study reveals who, what, when, where, or how much; the study concerns a univariate question or hypothesis in which the research asks about or states something about the size, form, distribution, or existence of a variable.
  • Deviation scores. Displays distance of an observation from the mean.
  • Dichotomous question. A measurement question that offers two mutually exclusive and exhaustive alternatives (nominal data).
  • Dictionary. Secondary source that defines words, terms, or jargon unique to a discipline; may include information on people, events, or organizations that shape the discipline; an excellent source of acronyms.
  • Direct observation. Occurs when the observer is physically present and personally monitors and records the behavior of the participant.
  • Directory. A reference source used to identify contact information (e.g., name, address, phone); many are free, but the most comprehensive are proprietary.
  • Discordant. When a subject that ranks higher on one ordinal variable ranks lower on another variable, the pairs of variables are discordant; as discordant pairs increase over concordant pairs, the association becomes negative.
  • Discriminant analysis. A technique using two or more independent interval or ratio variables to classify the observations in the categories of a nominal dependent variable.
  • Discussion guide. The list of topics to be discussed in an unstructured interview (e.g., focus group); a.k.a. interview guide.
  • Disguised question. A measurement question designed to conceal the question's and study's true purpose.
  • Disk-by-mail survey (DBM survey). A type of computer-assisted self-interview, in which the survey and its management software, on computer disk, are delivered by mail to the participant.
  • Disproportionate sampling. See stratified sampling, disproportionate.
  • Distribution (of data). The array of value counts from lowest to highest value, resulting from the tabulation of incidence for each variable by value.
  • "don't know" response (DK response). A response given when a participant has insufficient knowledge, direction, or willingness to answer a question.
  • Double-barreled question. A measurement question that includes two or more questions in one that the participant might need to answer differently; a question that requests so much content that it would be better if separate questions were asked.
  • Double blind. Study design in which neither the researcher nor the participant knows when a subject is being exposed to the experimental treatment.
  • Double sampling. A procedure for selecting a subsample from a sample; a.k.a. sequential sampling or multiphase sampling.
  • "dummy" table. Displays data one expects to secure during data analysis; each dummy table is a cross-tabulation between two or more variables.
  • Dummy variable. Nominal variables converted for use in multivariate statistics; coded 0, 1, as all other variables must be interval or ratio measures.
  • Dyad (paired interview). A group interview done in pairs (e.g., best friends, spouses, superior-subordinate, strangers); used often with children.
  • EDA. See exploratory data analysis.
  • Editing. A process for detecting errors and data omissions and correcting them when possible; certifies that minimum data quality standards are met.
  • Eigenvalue. Proportion of total variance in all the variables that is accounted for by a factor.
  • Electronic test market. Test that combines store distribution, consumer scanner panel data, and household-level media delivery.
  • Empiricism. Observations and propositions based on sense experience and/or derived from such experience by methods of inductive logic, including mathematics and statistics.
  • Encyclopedia. A secondary source that provides background or historical information on a topic, including names or terms that can enhance your search results in other sources.
  • Enthymeme. A truncated syllogism where one or more minor premises are left unstated. The presenter gives the primary premise and assumes that the audience will supply the missing knowledge in order to reach the conclusion.
  • Environmental control. Holding constant the physical environment of the experiment.
  • Equal-appearing interval scale. An expensive, time-consuming type of consensus scaling that results in an interval rating scale for attitude measurement; a.k.a. Thurstone scale.
  • Equivalence. When an instrument secures consistent results with repeated measures by the same investigator or different samples.
  • Error. Discrepancy between the sample value and the true population value that occurs when the participant fails to answer fully and accurately -- either by choice or because of inaccurate or incomplete knowledge.
  • Error of central tendency. See central tendency (error of).
  • Error of leniency. See leniency (error of).
  • Error term. The deviations of the actual values of Y from the regression line (representing the mean value of Y for a particular value of X).
  • Ethics. Norms or standards of behavior that guide moral choices about research behavior.
  • Ethnographic research. See ethnography.
  • Ethnography. Interviewer and participant collaborate in a field-setting participant observation and unstructured interview; typically takes place where the behavior being observed occurs (e.g., participant's home).
  • Ethos. How well the audience believes that the presenter is qualified to speak on the particular subject; determined by the perception of a presenter's character, his or her past experience, or the credibility and experience of those the presenter evokes.
  • Event sampling. The process of selecting some elements or behavioral acts or conditions from a population of observable behaviors or conditions to represent the population as a whole.
  • Ex post facto design. After-the-fact report on what happened to the measured variable.
  • Example. A true or hypothetical instance used to clarify a complex idea.
  • Executive summary (final report). This document is written as the last element of a research report and either is a concise summary of the major findings, conclusions, and recommendations or is a report in miniature, covering all aspects in abbreviated form.
  • Executive summary (proposal). An informative abstract providing the essentials of the proposal without the details.
  • Experience survey (expert interview). Semistructured or unstructured interviews with experts on a topic or dimension of a topic; an exploratory technique in which knowledgeable experts share their ideas about important issues or aspects of the subject and relate what is important across the subject's range of experience; usually involves a personal or phone interview.
  • Experiment (experimental study). Study involving intervention (manipulation of one or more variables) by the researcher beyond that required for measurement to determine the effect on another variable.
  • Experimental treatment. The manipulated independent variable.
  • Expert group interview. Group interview consisting of individuals exceptionally knowledgeable about the issues or topics to be discussed.
  • Expert interview. A discussion with someone knowledgeable about the problem or its possible solutions.
  • Expert opinion (testimony). Opinions of recognized experts who possess credibility for your audience on a topic; used as support or proof.
  • Explanatory hypothesis (causal hypothesis). A statement that describes a relationship between two variables in which one variable leads to a specified effect on the other variable.
  • Explanatory study. Attempts to explain an event, act, or characteristic measured by research.
  • Explicit attitude. An expressed positive or negative evaluation.
  • Exploration. The process of collecting information to formulate or refine management, research, investigative, or measurement questions; loosely structured studies that discover future research tasks, including developing concepts, establishing priorities, developing operational definitions, and improving research design; a phase of a research project where the researcher expands understanding of the management dilemma, looks for ways others have addressed and/or solved problems similar to the management dilemma or management question, and gathers background information on the topic to refine the research question; a.k.a. exploratory study or exploratory research.
  • Exploratory data analysis (EDA). Patterns in the collected data guide the data analysis or suggest revisions to the preliminary data analysis plan.
  • Exploratory research. See exploration.
  • Exploratory study. See exploration.
  • Exposition. Statement that describes without attempting to explain.
  • Extemporaneous presentation. An audience-centered, preplanned speech made from minimal notes; generates a presentation that is natural, conversational, and flexible to audience interests.
  • External validity. Occurs when an observed causal relationship can be generalized across persons, settings, and times.
  • Extralinguistic behavior. The vocal, temporal, interactive, and verbal stylistic behaviors of human participants.
  • Extraneous variable (EV). Variable to assume (because it has little effect or its impact is randomized) or exclude from a research study.
  • Extranet. A private network that uses the Internet protocols and the public telecommunication system to share a business's information, data, or operations with external suppliers, vendors, or customers.
  • Eye contact. A meeting of the eyes between two people that expresses meaningful nonverbal communication revealing concern, warmth, and authenticity.
  • F ratio. F test statistic comparing measurements of k independent samples.
  • Fact. A piece of information about a situation that exists or an event known to have occurred; it takes the form of a statement about verifiable data that support the presenter's argument.
  • Factor. Denotes an independent variable (IV) in an experiment; factors are divided into treatment levels for the experiment.
  • Factor analysis. A technique for discovering patterns among the variables to determine if an underlying combination of the original variables (a factor) can summarize the original set.
  • Factor scales. Types of scales that deal with multidimensional content and underlying dimensions, such as scalogram, factor, and cluster analyses, and metric and nonmetric multidimensional scaling.
  • Factors. In factor analysis, the result of transforming a set of variables into a new set of composite variables; these factors are linear and not correlated with each other.
  • Field conditions. The actual environmental conditions in which the dependent variable occurs.
  • Field experiment. A study of the dependent variable in actual environmental conditions.
  • Filter question. See screen question.
  • Findings nondisclosure. A type of confidentiality; the sponsor restricts the researcher from discussing the findings of the research project.
  • Five-number summary. The median, the upper and lower quartiles, and the largest and smallest observations of a variable's distribution.
  • Fixed-sum scale. See constant-sum scale.
  • Flow aids. A design principle that provides a visual aid that reveals to the audience where the presenter is within the overall presentation.
  • Focus group. The simultaneous involvement of a small number of research participants (usually 8 to 10) who interact at the direction of a moderator in order to generate data on a particular issue or topic; widely used in exploratory studies; usually lasts 90 minutes to two hours; can be conducted in person or via phone or videoconference.
  • Forced-choice rating scale. Requires that participants select from available alternatives.
  • Forced ranking scale. A scale in which the participant orders several objects or properties of objects; faster than paired comparison to obtain a rank order.
  • Formal study. Research question–driven process involving precise procedures for data collection and interpretation; tests the hypothesis or answers the research questions posed.
  • Format. How the information is presented and how easy it is to find a specific piece of information within a secondary source; one of five factors used to evaluate the value of a secondary source.
  • Forward selection. In modeling and regression, sequentially adds the variables to a regression model that results in the largest R2 increase; see also backward elimination and stepwise selection.
  • Free-response question. A measurement question in which the participant chooses the words to frame the answer; a.k.a. open ended question (nominal, ordinal, or ratio data).
  • Frequency distribution. Ordered array of all values for a variable.
  • Frequency table. Arrays category codes from lowest value to highest value, with columns for count, percent, valid percent, and cumulative percent.
  • Full-service researchers. A firm with both quantitative and qualitative methodology expertise that conducts all phases of research from planning to insight development, often serving as both research firm and consultant.
  • Funnel approach. A type of question sequencing that moves the participant from general to more specific questions and is designed to learn the participant's frame of reference while extracting full disclosure of information on the topic (nominal, ordinal, interval, or ratio data).
  • Gamma (γ). Uses a preponderance of evidence of concordant pairs versus discordant pairs to predict association; the gamma value is the proportional reduction of error when prediction is done using preponderance of evidence (values from -1.0 to +1.0); a worked sketch appears below.
  • Geographic chart. Uses a map to show regional variations in data.
  • Gestures. A form of nonverbal communication made with a part of the body, used instead of or in combination with verbal communication; allows presenters to express a variety of feelings and thoughts, positive and negative.
  • Goodness of fit. A measure of how well the regression model is able to predict Y.
  • Graphic rating scale. A scale in which the participant places his or her response along a line or continuum; the score or measurement is its distance in millimeters from either endpoint.
  • Grounded theory. An IDI technique in which analysis of the data takes place simultaneously with its collection, with the purpose of developing general concepts or theories with which to analyze the data.
  • Group interview. A data collection method using a single interviewer who simultaneously interviews more than one research participant.
  • Halo effect. Error caused when prior observations influence perceptions of current observations.
  • Handbook. A secondary source used to identify key terms, people, or events relevant to the management dilemma or management question.
  • Heterogeneous group. Participant group consisting of individuals with a variety of opinions, backgrounds, and actions relative to a topic.
  • Histogram. A graphical bar chart that groups continuous data values into equal intervals with one bar for each interval; especially useful for revealing skewness, kurtosis, and modal pattern.
  • Holdout sample. The portion of the sample (usually 1/3 or 1/4) excluded for later validity testing when the estimating equation is first computed; the equation is then used on the holdout data to calculate R2 for comparison.
  • Homogeneous group. Participant group consisting of individuals with similar opinions, backgrounds, and actions relative to a topic.
  • Hypothesis. A proposition formulated for empirical testing; a tentative descriptive statement that describes the relationship between two or more variables.
  • Hypothetical construct. Construct inferred only from data; its presumption must be tested.
  • Ill-defined problem. One that addresses complex issues and cannot be expressed easily, concisely, or completely.
  • Imaginary universe. A projective technique (imagination exercise) in which participants are asked to assume that the brand and its users populate an entire universe; they then describe the features of this new world.
  • Imagination exercises. A projective technique in which participants are asked to relate the properties of one thing/person/ brand to another.
  • Implicit attitude. An attitude about one object that influences the attitude about other objects.
  • Impromptu speaking. A speech that does not involve preparation, and evolves spontaneously in response to some stimulus, such as a question.
  • Incidence. The number of elements in the population belonging to the category of interest, divided by the total number of elements in the population.
  • Independent variable (IV). The variable manipulated by the researcher, thereby causing an effect or change on the dependent variable.
  • Index. Secondary data source that helps identify and locate a single book, journal article, author, etc., from among a large set.
  • Indirect observation. Occurs when the recording of data is done by mechanical, photographic, or electronic means.
  • Individual depth interview (IDI). A type of interview that encourages the participant to talk extensively, sharing as much information as possible; usually lasts one or more hours; three types: structured, semistructured, and unstructured.
  • Induction (inductive reasoning). To draw a conclusion from one or more particular facts or pieces of evidence; the conclusion explains the facts.
  • Inferential statistics. Includes the estimation of population values and the testing of statistical hypotheses.
  • Informed consent. Participant gives full consent to participation after receiving full disclosure of the procedures of the proposed survey.
  • Interaction effect. The influence that one factor has on another factor.
  • Intercept ([[b0]]). One of two regression coefficients; the value for the linear function when it crosses the Y axis or the estimate of Y when X is zero.
  • Intercept interview. A face-to-face communication that targets participants in a centralized location.
  • Interdependency techniques. Techniques in which criterion or dependent variables and predictor or independent variables are not present (e.g., factor analysis, cluster analysis, multidimensional scaling).
  • Internal consistency. Characteristic of an instrument in which the items are homogeneous; measure of reliability.
  • Internal database. Collection of data stored by an organization.
  • Internal validity. The ability of a research instrument to measure what it is purported to measure; occurs when the conclusion(s) drawn about a demonstrated experimental relationship truly implies cause.
  • Interquartile range (IQR). Measures the distance between the first and third quartiles of a data distribution; a.k.a. midspread; the distance between the hinges in a boxplot.
  • Interval estimate. Range of values within which the true population parameter is expected to fall.
  • Interval scale. Scale with the properties of order and equal distance between points and with mutually exclusive and exhaustive categories; data that incorporate equality of interval (the distance between one measure and the next measure); e.g., temperature scale.
  • Intervening variable (IVV). A factor that affects the observed phenomenon but cannot be seen, measured, or manipulated; thus its effect must be inferred from the effects of the independent and moderating variables on the dependent variable.
  • Interview. Phone, in-person, or videoconference communication approach to collecting data.
  • Interview guide. See discussion guide.
  • Interview schedule. Question list used to guide a structured interview; a.k.a. questionnaire.
  • Interviewer error. Error that results from interviewer influence of the participant; includes problems with motivation, instructions, voice inflections, body language, question or response order, or cheating via falsification of one or more responses.
  • Intranet. A private network that is contained within an enterprise; access is restricted to authorized audiences; usually behind a security firewall.
  • Investigative questions. Questions the researcher must answer to satisfactorily answer the research question; what the manager feels he or she needs to know to arrive at a conclusion about the management dilemma.
  • Item analysis. Scale development in which instrument designers develop instrument items and test them with a group of participants to determine which highly discriminate between high and low raters.
  • Jargon. Language unique to a profession or discipline; when unknown by the audience can reduce the clarity of the message.
  • Judgment sampling. A purposive sampling in which the researcher arbitrarily selects sample units to conform to some criterion.
  • k-independent-samples tests. Significance tests in which measurements are taken from three or more samples (ANOVA for interval or ratio measures, Kruskal-Wallis for ordinal measures, chi-square for nominal measures).
  • k-related-samples tests. Compares measurements from more than two groups from the same sample or more than two measures from the same subject or participant (ANOVA for interval or ratio measures, Friedman for ordinal measures, Cochran Q for nominal measures).
  • Kinesics. The study of the use of body motion communication.
  • Kinesthetic learners. People who learn by doing, moving, and touching.
  • Kurtosis. Measure of a data distribution's peakedness or flatness (ku); a neutral distribution has a ku of 0, a flat distribution is negative, and a peaked distribution is positive.
  • Laboratory conditions. Studies that occur under conditions that do not simulate actual environmental conditions.
  • Laddering (benefit chain). A projective technique in which participants are asked to link functional features to their physical and psychological benefits, both real and ideal.
  • Lambda (λ). A measure of how well the frequencies of one nominal variable predict the frequencies of another variable; values, which vary between zero and 1.0, show the direction of the association.
  • Leading question. A measurement question whose wording suggests to the participant the desired answer (nominal, ordinal, interval, or ratio data).
  • Leniency (error of). A participant, within a series of evaluations, consistently expresses judgments at one end of a scale; an error that results when the participant is consistently an easy rater.
  • Letter of transmittal. The element of the final report that provides the purpose of, scope of, authorization for, and limitations of the study; not necessary for internal projects.
  • Level of significance. The probability of rejecting a true null hypothesis.
  • Life history. An IDI technique that extracts from a single participant memories and experiences from childhood to the present day regarding a product or service category, brand, or firm.
  • Likert scale. A variation of the summated rating scale, this scale asks a rater to agree or disagree with statements that express either favorable or unfavorable attitudes toward the object. The strength of attitude is reflected in the assigned score, and individual scores may be totaled for an overall attitude measure.
  • Limiters. Database search protocol for narrowing a search; commonly include date, publication type, and language.
  • Line graph. A statistical presentation technique used for time series and frequency distributions over time.
  • Linearity. An assumption of correlation analysis, that the collection of data can be described by a straight line passing through the data array.
  • Linguistic behavior. The human verbal behavior during conversation, presentation, or interaction.
  • Literature review. Recent or historically significant research studies, company data, or industry reports that act as the basis for the proposed study.
  • Literature search. A review of books, articles in journals or professional literature, research studies, and Web-published materials that relate to the management dilemma, management question, or research question.
  • Loadings. In principal components analysis, the correlation coefficients that estimate the strength of the variables that compose the factor.
  • Logos. The logical argument; requires supporting evidence and analytical techniques that reveal and uphold the researcher's findings and conclusions.
  • Longitudinal study. The study includes repeated measures over an extended period of time, tracking changes in variables over time; includes panels or cohort groups.
  • Mail survey. A relatively low-cost self-administered study both delivered and returned via mail.
  • Main effect. The average direct influence that a particular treatment of the IV has on the DV independent of other factors.
  • Management dilemma. The problem or opportunity that requires a decision; a symptom of a problem or an early indication of an opportunity.
  • Management question. The management dilemma restated in question format; categorized as "choice of objectives," "generation and evaluation of solutions," or "troubleshooting or control of a situation."
  • Management report. A report written for the nontechnically oriented manager or client.
  • Management–research question hierarchy. Process of sequential question formulation that leads a manager or researcher from management dilemma to measurement questions.
  • Manuscript reading. The verbatim reading of a fully written presentation.
  • Mapping rules. A scheme for assigning numbers to aspects of an empirical event.
  • Marginal(s). A term for the column and row totals in a cross-tabulation.
  • Matching. A process analogous to quota sampling for assigning participants to experimental and control groups by having participants match every descriptive characteristic used in the research; used when random assignment is not possible; an attempt to eliminate the effect of confounding variables that group participants so that the confounding variable is present proportionally in each group.
  • MDS. See multidimensional scaling.
  • Mean. The arithmetic average of a data distribution.
  • Mean square. The variance computed as an average or mean.
  • Measurement. Assigning numbers to empirical events in compliance with a mapping rule.
  • Measurement questions. The questions asked of the participants or the observations that must be recorded.
  • Measures of location. Term for measure of central tendency in a distribution of data; see also central tendency.
  • Measures of shape. Statistics that describe departures from the symmetry of a distribution; a.k.a. moments, skewness, and kurtosis.
  • Measures of spread. Statistics that describe how scores cluster or scatter in a distribution; a.k.a. dispersion or variability (variance, standard deviation, range, interquartile range, and quartile deviation).
  • Median. The midpoint of a data distribution where half the cases fall above and half the cases fall below.
  • Memorization. The act of committing to memory all details of a presentation.
  • Metaphor. A figure of speech in which an implicit comparison is made between two unlike things that actually have something important in common.
  • Metaphor elicitation technique. An individual depth interview that reveals participants' hidden or suppressed attitudes and perceptions by having them explain collected images and each image's relation to the topic being studied.
  • Method of least squares. A procedure for finding a regression line that keeps errors (deviations from actual value to the line value) to a minimum.
  • Metric measures. Statistical techniques using interval and ratio measures.
  • Mini-group. A group interview involving two to six people.
  • Missing data. Information that is missing about a participant or data record; should be discovered and rectified during the data preparation phase of analysis; e.g., miscoded data, out-of-range data, or extreme values.
  • Mode. The most frequently occurring value in a data distribution; data may have more than one mode.
  • Model. A representation of a system that is constructed to study some aspect of that system or the system as a whole.
  • Moderating variable (MV). A second independent variable, believed to have a significant contributory or contingent effect on the originally stated IV-DV relationship.
  • Moderator. A trained interviewer used for group interviews such as focus groups.
  • Monitoring. A classification of data collection that includes observation studies and data mining of organizational databases.
  • Motivated sequence. A presentation planning approach that involves the ordering of ideas to follow the normal processes of human thinking; motivates an audience to respond to the presenter's purpose.
  • Multicollinearity. Occurs when more than two independent variables are highly correlated.
  • Multidimensional scale. A scale that seeks to simultaneously measure more than one attribute of the participant or object.
  • Multidimensional scaling (MDS). A scaling technique to simultaneously measure more than one attribute of the participant or object; results are usually mapped; develops a geometric picture or map of the locations of some objects relative to others on various dimensions or properties; especially useful for difficult-to-measure constructs.
  • Multiphase sampling. See double sampling.
  • Multiple-choice, multiple-response scale. A scale that offers the participant multiple options and solicits one or more answers (nominal or ordinal data); a.k.a. checklist.
  • Multiple-choice question. A measurement question that offers more than two category responses but seeks a single answer.
  • Multiple-choice, single-response scale. A scale that poses more than two category responses but seeks a single answer, or one that seeks a single rating from a gradation of preference, interest, or agreement (nominal or ordinal data); a.k.a. multiple-choice question.
  • Multiple comparison tests. Compare group means following the finding of a statistically significant F test.
  • Multiple rating list scale. A single interval or ordinal numerical scale where raters respond to a series of objects; results facilitate visualization.
  • Multiple regression. A statistical tool used to develop a self-weighting estimating equation that predicts values for a dependent variable from the values of independent variables; controls confounding variables to better evaluate the contribution of other variables; tests and explains a causal theory.
  • Multivariate analysis. Statistical techniques that focus upon and bring out in bold relief the structure of simultaneous relationships among three or more phenomena.
  • Multivariate analysis of variance (MANOVA). Assesses the relationship between two or more dependent variables and classificatory variables or factors; frequently used to test differences among related samples.
  • Narrative. See oral history.
  • Narrative pattern. A presentation organizational pattern that involves the use of stories as the primary vehicle for communicating the presenter's message.
  • Negative leniency (error of). An error that results when the participant is consistently a hard or critical rater.
  • Nominal scale. Scale with mutually exclusive and collectively exhaustive categories, but without the properties of order, distance, or unique origin.
  • Noncontact rate. Ratio of noncontacts (no answer, busy, answering machine, and disconnects) to all potential contacts.
  • Nondisclosure. Various types of confidentiality involving research projects, including sponsor, findings, and purpose nondisclosures.
  • Nonexpert group. Participants in a group interview who have at least some desired information but at an unknown level.
  • Nonmetric measures. Statistical techniques using ordinal and nominal measures (nonparametric).
  • Nonparametric tests. Significance tests for data derived from nominal and ordinal scales.
  • Nonprobability sampling. An arbitrary and subjective procedure in which each population element does not have a known nonzero chance of being included; no attempt is made to generate a statistically representative sample.
  • Nonresistant statistics. A statistical measure that is susceptible to the effects of extreme values; e.g., mean, standard deviation.
  • Nonresponse error. Error that develops when an interviewer cannot locate the person with whom the study requires communication or when the targeted participant refuses to participate; especially troublesome in studies using probability sampling.
  • Nonverbal behavior. Human behaviors not related to conversation (e.g., body movement, facial expressions, exchanged glances, eyeblinks).
  • Nonverbal communication. Meaning conveyed through other than verbal means; encompasses clothing and bodily characteristics, physical environment (physical space and time), movement and body position (including kinesics, posture, gesture, touch), eye gaze, and paralanguage (nonverbal cues of the voice).
  • Nonverbal observation. Observation of human behavior without the use of conversation between observers and participants.
  • Normal distribution. A frequency distribution of many natural phenomena; graphically shaped like a symmetrical curve.
  • Normal probability plot. Compares the observed values with those expected from a normal distribution.
  • Null hypothesis (H0). Assumption that no difference exists between the sample parameter and the population statistic.
  • Numerical scale. A scale in which equal intervals separate the numeric scale points, while verbal anchors serve as labels for the extreme points.
  • Objects. Concepts defined by ordinary experience.
  • Observation. The full range of monitoring behavioral and nonbehavioral activities and conditions (including record analysis, physical condition analysis, physical process analysis, nonverbal analysis, linguistic analysis, extralinguistic analysis, and spatial analysis).
  • Observation checklist. A measurement instrument for recording data in an observation study; analogous to a questionnaire in a communication study.
  • Observation playgroup. An observation technique that involves observing children at play, often with targeted objects (toys or materials); observers are usually behind one-way mirrors.
  • Observation study. A monitoring approach to collecting data in which the researcher inspects the activities of a subject or the nature of some material without attempting to elicit responses from anyone; a.k.a. monitoring.
  • Observed significance level. The probability value compared to the significance level (e.g., .05) chosen for testing; on this basis the null hypothesis is either rejected or not rejected.
  • Observer drift. A source of error affecting categorization caused by decay in reliability or validity of recorded observations over time.
  • OCR. See optical character recognition.
  • Omnibus researcher. Fields research studies, often by survey, at regular, predetermined intervals.
  • Omnibus study. Combines the questions of several decision makers who need information from the same population.
  • One-sample tests. Tests that involve measures taken from a single sample compared to a specified population.
  • One-tailed test. A test of a null hypothesis that assumes the sample parameter is not the same as the population statistic, but that the difference is in only one direction.
  • Online focus group. A type of focus group in which participants use the technology of the Internet, including e-mail, websites, Usenet newsgroups, or an Internet chat room, to approximate the interaction of a face-to-face focus group.
  • Open-ended question. See free-response question.
  • Operational definition. A definition for a variable stated in terms of specific testing criteria or operations, specifying what must be counted, measured, or gathered through our senses.
  • Operationalized. The process of transforming concepts and constructs into measurable variables suitable for testing.
  • Optical character recognition (OCR). Software programs that transfer printed text into a computer file in order to edit and use the information without rekeying the data.
  • Optical mark recognition (OMR). Software that uses a spreadsheet-style interface to read and process data from user-created forms.
  • Optical scanning. A data entry process whereby answers are recorded on computer-readable forms and then scanned to form a data record; reduces data handling and the errors that accompany such data handling.
  • Oral history (narrative). An IDI technique that asks participants to relate their personal experiences and feelings related to historical events or past behavior.
  • Ordinal measures. Measures of association between variables generating ordinal data.
  • Ordinal scale. Scale with mutually exclusive and collectively exhaustive categories, as well as the property of order, but not distance or unique origin; data capable of determining greaterthan, equal-to, or less-than status of a property or an object.
  • Outliers. Data points that exceed ±1.5 times the interquartile range (IQR).
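As an illustration of the ±1.5 × IQR rule, a short Python sketch (the data values are hypothetical):

    import numpy as np

    data = np.array([5, 7, 8, 9, 10, 11, 12, 13, 40])   # 40 is a suspect value

    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1

    # Flag points more than 1.5 * IQR below Q1 or above Q3.
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    print(data[(data < lower) | (data > upper)])          # [40]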
  • p value. The probability of observing a sample value as extreme as, or more extreme than, the value actually observed, given that the null hypothesis is true.
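A worked sketch of computing a two-tailed p value for an observed Z statistic, using only Python's standard library; the test statistic is a made-up example:

    import math

    z_observed = 2.17   # hypothetical standardized test statistic

    # Probability of a value at least this extreme in either tail of the
    # standard normal distribution, assuming the null hypothesis is true.
    one_tail = 0.5 * math.erfc(abs(z_observed) / math.sqrt(2))
    p_value = 2 * one_tail
    print(round(p_value, 4))   # about 0.03, so reject H0 at the .05 level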
  • Pace. The rate at which the printed page presents information to the reader; it should be slower when the material is complex, faster when the material is straightforward.
  • Paired-comparison scale. The participant chooses a preferred object between several pairs of objects on some property; results in a rank ordering of objects.
  • Paired interview. See dyad.
  • Panel. A group of potential participants who have indicated a willingness to participate in research studies; often used for longitudinal communication studies; may be used for both qualitative and quantitative research.
  • Paralanguage. Nonverbal communication that includes such vocal elements as tone, pitch, rhythm, pause, timbre, loudness, and inflection.
  • Parametric tests. Significance tests for data from interval and ratio scales.
  • Pareto diagram. A graphical presentation that represents frequency data as a bar chart, ordered from most to least, overlaid with a line graph denoting the cumulative percentage at each variable level.
  • Participant. The subject, respondent, or sample element in a research study.
  • Participant-initiated response error. Error that occurs when the participant fails to answer fully and accurately -- either by choice or because of inaccurate or incomplete knowledge.
  • Participant observation. When the observer is physically involved in the research situation and interacts with the participant to influence some observation measures.
  • Participants' perceptual awareness. The subtle or major changes that occur in participants' responses when they perceive that a research study is being conducted.
  • Path analysis. Describes through regression an entire structure of linkages that have been advanced by a causal theory.
  • Path diagram. Presents predictive and associative relationships among constructs and indicators in a structural model.
  • Pathos. An appeal to an audience's sense of identity, self-interest, and emotions, which relies on an emotional connection between the presenter and his or her audience.
  • Pearson correlation coefficient. The r symbolizes the estimate of strength of linear association and its direction between interval and ratio variables; based on sampling data and varies over a range of +1 to −1; the prefix (+, −) indicates the direction of the relationship (positive or inverse), while the number represents the strength of the relationship (the closer to 1, the stronger the relationship; 0 = no relationship); ρ (rho) represents the population correlation.
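A quick sketch of estimating r from sample data with numpy; the variables and values are invented for illustration:

    import numpy as np

    hours_studied = np.array([1, 2, 3, 4, 5, 6])
    exam_score    = np.array([52, 58, 61, 70, 74, 81])

    # Pearson r: strength and direction of the linear association.
    r = np.corrcoef(hours_studied, exam_score)[0, 1]
    print(round(r, 3))   # close to +1, a strong positive relationship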
  • Performance anxiety (stage fright). A fear produced by the need to make a presentation in front of an audience or before a camera.
  • Permission surveying. The act of surveying prospects or customers who have given permission for such engagement, usually through panel membership.
  • Personification. A projective technique (imagination exercise) in which participants are asked to imagine inanimate objects with the traits, characteristics and features, and personalities of humans.
  • Phi (φ). A measure of association for nominal, nonparametric variables; ranges from zero to +1.0 and is used best with 2 × 2 chi-square tables; does not provide direction of the association or reflect causation.
  • Photographic framing. The practice of creating a focal point for all visuals used in presentations.
  • Physical condition analysis. The recording of observations of current conditions resulting from prior decisions; includes inventory, signs, obstacles or hazards, cleanliness, etc.
  • Physical trace. A type of observation that collects measures of wear data (erosion) and accretion data (deposit) rather than direct observation (e.g., a study of trash).
  • Pictograph. A bar chart using pictorial symbols rather than bars to represent frequency data; the symbol has an association with the subject of the statistical presentation and one symbol unit represents a specific count of that variable.
  • Pie chart. Uses sections of a circle (slices of a pie) to represent 100 percent of a frequency distribution of the subject being graphed; not appropriate for changes over time.
  • Pilot test. A trial collection of data to detect weaknesses in design and instrumentation and provide proxy data for selection of a probability sample; see also pretesting.
  • Point estimate. Sample mean; our best predictor of the unknown population mean.
  • Population. The elements about which we wish to make some inferences.
  • Population element. The individual participant or object on which the measurement is taken; a.k.a. population unit, sample element, sample unit.
  • Population parameter. A summary descriptor of a variable of interest in the population; e.g., incidence, mean, variance.
  • Population proportion of incidence. The number of category elements in the population, divided by the number of elements in the population.
  • Portal. A Web page that serves as a gateway to more remote Web publications; usually includes one or more directories, search engines, and other user features such as news and weather.
  • Posture and body orientation. Communication of nonverbal messages by the way you walk and stand.
  • Power of the test. 1 minus the probability of committing a Type II error (the probability that we will correctly reject a false null hypothesis).
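A hedged worked example of power for a one-tailed Z test of a mean; all figures (the means, sigma, and sample size) are assumed for illustration:

    import math

    def norm_cdf(x):
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    mu0, mu_true = 100.0, 104.0   # null and assumed true population means
    sigma, n = 10.0, 36           # known sigma and sample size

    se = sigma / math.sqrt(n)     # standard error of the mean
    z_crit = 1.645                # upper critical value for alpha = .05, one-tailed
    x_crit = mu0 + z_crit * se    # sample mean needed to reject H0

    beta = norm_cdf((x_crit - mu_true) / se)   # P(fail to reject | H0 is false)
    power = 1 - beta
    print(round(power, 3))        # about 0.78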
  • Practical significance. When a statistically significant difference has real importance to the decision maker.
  • Practicality. A characteristic of sound measurement concerned with a wide range of factors of economy, convenience, and interpretability.
  • PRE. See proportional reduction in error.
  • Precision. One of the considerations in determining sample validity: the degree to which estimates from the sample reflect the measure taken by a census; measured by the standard error of the estimate -- the smaller the error, the greater the precision of the estimate.
  • Precoding. Assigning codebook codes to variables in a study and recording them on the questionnaire; eliminates a separate coding sheet.
  • Predesigned measurement questions. Questions that have been formulated and tested by previous researchers, are recorded in the literature, and may be applied literally or be adapted for the project at hand.
  • Prediction and confidence bands. Bow-tie-shaped confidence intervals around a predictor; predictors farther from the mean have larger bandwidths in regression analysis.
  • Predictive study. A study used to determine whether a relationship exists between two or more variables; when causation is established, one variable can be used to predict the other; in business research, such studies are conducted to evaluate specific courses of action or to forecast current or future values.
  • Predictor variable. See independent variable.
  • Pretasking. A variety of creative and mental exercises to prepare participants for individual or group interviews, such as an IDI or focus group; intended to increase understanding of participants' own thought processes and bring their ideas, opinions, and attitudes to the surface.
  • Pretesting. The assessment of questions and instruments before the start of a study; an established practice for discovering errors in questions, question sequencing, instructions, skip directions, etc.; see also pilot test.
  • Primacy effect. Order bias in which the participant tends to choose the first alternative; a principle affecting presentation organization in which the first item in a list is initially distinguished as important and may be transferred to long-term memory; implies an important argument should be first in your presentation.
  • Primary data. Data the researcher collects to address the specific problem at hand -- the research question.
  • Primary sources. Original works of research or raw data without interpretation or pronouncements that represent an official opinion or position; include memos, letters, complete interviews or speeches, laws, regulations, court decisions, and most government data, including census, economic, and labor data; the most authoritative of all sources.
  • Principal components analysis. One method of factor analysis that transforms a set of variables into a new set of composite variables; these variables are linear and not correlated with each other; see also factor analysis.
  • Principle of appropriate knowledge. Only information compatible with your audience's knowledge level should be used in a presentation.
  • Principle of capacity limitations. The audience cannot process large amounts of information at one time.
  • Principle of discriminability. Two properties must differ by a large amount for the difference to be discerned by your audience.
  • Principle of informative changes. Your audience will expect anything you speak about, demonstrate, or show in your presentation to convey important information.
  • Principle of perceptual organization. Your audience will automatically group items together, even if you don't give them such groupings; this facilitates their absorbing and storing large amounts of information.
  • Principle of relevance. Only information critical to understanding should be presented.
  • Principle of salience. Your audience's attention is drawn to large perceptible differences.
  • Probability sampling. A controlled, randomized procedure that ensures that each population element is given a known nonzero chance of selection; used to draw participants that are representative of a target population; necessary for projecting findings from the sample to the target population.
  • Probing. Techniques for stimulating participants to answer more fully and relevantly to posed questions.
  • Process analysis (activity analysis). Observation by a time study of stages in a process, evaluated on both effectiveness and efficiency; includes traffic flow within distribution centers and retailers, paperwork flow, customer complaint resolution, etc.
  • Project management. The process of planning and managing a detailed project, through tables and charts with detailed responsibilities and deadlines; details the relationships between researchers, their assistants, sponsors, and suppliers; often results in a Gantt chart.
  • Projective techniques. Qualitative methods that encourage the participant to reveal hidden or suppressed attitudes, ideas, emotions, and motives; various techniques (e.g., sentence completion tests, cartoon or balloon tests, word association tests) used as part of an interview to disguise the study objective and allow the participant to transfer or project attitudes and behavior on sensitive subjects to third parties; the data collected via these techniques are often difficult to interpret (nominal, ordinal, or ratio data).
  • Properties. Characteristics of objects that are measured; a person's properties are his or her weight, height, posture, hair color, etc.
  • Proportion. Percentage of elements in the distribution that meet a criterion.
  • Proportional reduction in error (PRE). Measures of association used with contingency tables (a.k.a. cross-tabulations) to predict frequencies.
  • Proportionate sampling. See stratified sampling, proportionate.
  • Proposal. A work plan, prospectus, outline, statement of intent, or draft plan for a research project, including proposed budget.
  • Proposition. A statement about concepts that may be judged as true or false if it refers to observable phenomena.
  • Proprietary methodology. A research program or technique that is owned by a single firm; may be branded.
  • Proxemics. The study of the use of space; the study of how people organize the territory around them and the discrete distances they maintain between themselves and others.
  • Proximity. An index of perceived similarity or dissimilarity between objects.
  • Pure research (basic research). Designed to solve problems of a theoretical nature with little direct impact on strategic or tactical decisions.
  • Purpose. The explicit or hidden agenda of the authors of the secondary source; one of five factors in secondary source evaluation.
  • Purpose nondisclosure. A type of confidentiality; occurs when the sponsor camouflages the true research objective of the research project.
  • Purposive sampling. A nonprobability sampling process in which researchers choose participants arbitrarily for their unique characteristics or their experiences, attitudes, or perceptions.
  • Q-sort. Participant sorts a deck of cards (representing properties or objects) into piles that represent points along a continuum.
  • Qualitative research. Interpretive techniques that seek to describe, decode, translate, and otherwise come to terms with the meaning, not the frequency, of certain phenomena; a fundamental approach of exploration, including individual depth interviews, group interviews, participant observation, videotaping of participants, projective techniques and psychological testing, case studies, street ethnography, elite interviewing, document analysis, and proxemics and kinesics; see also content analysis.
  • Qualitative techniques. Nonquantitative data collection used to increase understanding of a topic.
  • Quantitative research. The precise count of some behavior, knowledge, opinion, or attitude.
  • Quartile deviation (Q). A measure of dispersion for ordinal data involving the median and quartiles; the median plus one quartile deviation on either side encompasses 50 percent of the observations, and eight quartile deviations cover the full range of the data.
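A small numpy sketch of the calculation on hypothetical ordinal-style data:

    import numpy as np

    ranks = np.array([2, 3, 3, 4, 5, 5, 6, 7, 8, 9])

    q1, median, q3 = np.percentile(ranks, [25, 50, 75])
    quartile_deviation = (q3 - q1) / 2

    # Roughly half of the observations fall within median +/- Q.
    print(median - quartile_deviation, median + quartile_deviation)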
  • Questionnaire. An instrument delivered to the participant via personal (intercept, phone) or nonpersonal (computer-delivered, mail-delivered) means that is completed by the participant.
  • Quota matrix. A means of visualizing the matching process.
  • Quota sampling. Purposive sampling in which relevant characteristics are used to stratify the sample.
  • Random assignment. A process that uses a randomized sample frame for assigning sample units to test groups in an attempt to ensure that the groups are as comparable as possible with respect to the DV; each subject must have an equal chance for exposure to each level of the independent variable.
  • Random dialing. A computerized process that chooses phone exchanges or exchange blocks and generates numbers within these blocks for telephone surveys.
  • Random error. Error that occurs erratically, without pattern; see also sampling error.
  • Randomization. Using random selection procedures to assign sample units to either the experimental or control group to achieve equivalence between groups.
  • Range. The difference between the largest and smallest scores in the data distribution; a very rough measure of spread of a dispersion.
  • Range tests. See multiple comparison tests.
  • Ranking question. A measurement question that asks the participant to compare and order two or more objects or properties using a numeric scale.
  • Ranking scale. A scale that scores an object or property by making a comparison and determining order among two or more objects or properties; uses a numeric scale and provides ordinal data; see also ranking question.
  • Rating question. A question that asks the participant to position each property or object on a verbal, numeric, or graphic continuum.
  • Rating scale. A scale that scores an object or property without making a direct comparison to another object or property; either verbal, numeric, or graphic; see also rating question.
  • Ratio scale. A scale with the properties of categorization, order, equal intervals, and unique origin; numbers used as measurements have numeric value; e.g., weight of an object.
  • Reactivity response. The phenomenon that occurs when participants alter their behavior due to the presence of the observer.
  • Readability index. Measures the difficulty level of written material; e.g., Flesch Reading Ease Score, Flesch Kincaid Grade Level, Gunning's Fog Index; most word processing programs calculate one or several of the indexes.
  • Recency effect. Order bias in which the participant tends to choose the last alternative; in presentations, people remember what they hear at the end of a list of arguments in a speech, recalling those items best; implies an important argument should be the last in your presentation.
  • Reciprocal relationship. Occurs when two variables mutually influence or reinforce each other.
  • Record. A set of data fields that are related, usually by subject or participant; represented by rows in a spreadsheet or statistical database; a.k.a. data case, data record.
  • Record analysis. The extraction of data from current or historical records, either private or in the public domain; a technique of data mining.
  • Recruitment screener. Semistructured or structured interview guide designed to assure the interviewer that the prospect will be a good participant for the planned research.
  • Refusal rate. Ratio of participants who decline the interview to all potential/eligible contacts.
  • Region of acceptance. Area between the two regions of rejection based on a chosen level of significance (two-tailed test) or the area above/below the region of rejection (one-tailed test).
  • Region of rejection. Area beyond the region of acceptance set by the level of significance.
  • Regression analysis. Uses simple and multiple predictions to predict Y from X values.
  • Regression coefficients. Intercept and slope coefficients; the two association measures between X and Y variables.
  • Relational hypothesis. Describes the relationship between two variables with respect to some case; relationships are correlational or explanatory.
  • Relationship. A design principle that encourages the use of visual techniques that allow the audience to perceive the relationships between elements and sense what information goes together.
  • Relevant population. Those elements in the population most likely to have the information specified in the investigative questions.
  • Reliability. A characteristic of measurement concerned with accuracy, precision, and consistency; a necessary but not sufficient condition for validity (if the measure is not reliable, it cannot be valid).
  • Reliability, equivalence. A characteristic of measurement in which instruments can secure consistent results by the same investigator or by different samples.
  • Reliability, internal consistency. A characteristic of an instrument in which the items are homogeneous.
  • Reliability, stability. A characteristic of measurement in which an instrument can secure consistent results with repeated measurements of the same person or object.
  • Replication. The process of repeating an experiment with different subject groups and conditions to determine the average effect of the IV across people, situations, and times.
  • Reporting study. Provides a summation of data, often recasting data to achieve a deeper understanding or to generate statistics for comparison.
  • Request for proposal (RFP). A formal bid request for research to be done by an outside supplier of research services.
  • Research briefing. Another term for the oral presentation; starts with a brief statement that sets the stage for the body of the findings and explains the nature of the project, how it came about, and what it attempted to do. This is followed by a discussion of the findings and the conclusions they support. Where appropriate, recommendations are stated in the third stage.
  • Research design. The blueprint for fulfilling research objectives and answering questions.
  • Research process. A sequence of clearly defined steps within a research study.
  • Research question(s). The hypothesis that best states the objective of the research; the answer to this question would provide the manager with the desired information necessary to make a decision with respect to the management dilemma.
  • Research report. The document that describes the research project, its findings, analysis of the findings, interpretations, conclusions, and, sometimes, recommendations.
  • Research variable. See variable.
  • Residual. The difference between the regression line value of Y and the real Y value; what remains after the regression line is fit.
  • Resistant statistics. Statistical measures relatively unaffected by outliers within a data set; e.g., median and quartiles.
  • Respondent. A participant in a study; a.k.a. participant or subject.
  • Response error. Occurs when the participant fails to give a correct or complete answer.
  • Return on investment (ROI). The calculation of the financial return for all organizational expenditures.
  • Right to privacy. The participant's right to refuse to be interviewed or to refuse to answer any questions in an interview.
  • Right to quality. The sponsor's right to an appropriate, value-laden research design and data handling and reporting techniques.
  • Right to safety. The right of interviewers, surveyors, experimenters, observers, and participants to be protected from any threat of physical or psychological harm.
  • Rotation. In principal components analysis, a technique used to provide a more simple and interpretable picture of the relationships between factors and variables.
  • Rule of thirds. The method by which photographers compose their shots in the viewfinder using real or imaginary crosshairs that divide the field of view into thirds, vertically and horizontally, to create a balanced visual.
  • Rule of three. A presentation organizing device that uses trios, triplets, or triads in organizing support for an argument.
  • Sample. A group of cases, participants, events, or records consisting of a portion of the target population, carefully selected to represent that population; see also pilot test, data mining.
  • Sample frame. List of elements in the population from which the sample is actually drawn.
  • Sample statistics. Descriptors of the relevant variables computed from sample data.
  • Sampling. The process of selecting some elements from a population to represent that population.
  • Sampling error. Error created by the sampling process; the error not accounted for by systematic variance.
  • Scaling. The assignment of numbers or symbols to an indicant of a property or objects to impart some of the characteristics of the numbers to the property; assigned according to value or magnitude.
  • Scalogram analysis. A procedure for determining whether a set of items forms a unidimensional scale; used to determine if an item is appropriate for scaling.
  • Scatterplot. A visual technique that depicts both the direction and the shape of a relationship between variables.
  • Scientific method. Systematic, empirically based procedures for generating replicable research; includes direct observation of phenomena; clearly defined variables, methods, and procedures; empirically testable hypotheses; the ability to rule out rival hypotheses; and statistical rather than linguistic justification of conclusions.
  • Scope. The breadth and depth of topic coverage of a secondary source (by time frame, geography, criteria for inclusion, etc.); one of the five factors for evaluating the quality of secondary sources.
  • Screen question. Question to qualify the participant's knowledge about the target questions of interest or experience necessary to participate.
  • Script. A written version of the introduction, arguments, conclusion, and recommendations used in preparing for a presentation.
  • Search query. The combination of keywords and connectors, operators, limiters, and truncation and phrase devices used to conduct electronic searches of secondary data sources; a.k.a. search statement.
  • Search statement. See search query.
  • Secondary data. Results of studies done by others and for different purposes than the one for which the data are being reviewed.
  • Secondary sources. Interpretations of primary data generally without new research.
  • Self-administered survey. An instrument delivered to the participant via personal (intercept) or nonpersonal (computer-delivered, mail-delivered) means that is completed by the participant without additional contact with an interviewer.
  • Semantic differential scale (SD scale). Measures the psychological meanings of an attitude object and produces interval data; uses bipolar nouns, noun phrases, adjectives, or nonverbal stimuli such as visual sketches.
  • Semantic mapping. A projective technique in which participants are presented with a four-quadrant map in which different variables anchor the two different axes; they then spatially place brands, product components, or organizations within the four quadrants.
  • Semistructured interview. An IDI that starts with a few specific questions and then follows the individual's tangents of thought with interviewer probes; questions generally use an open-ended response strategy.
  • Sensitive attitude. One that a holder feels uncomfortable sharing with others.
  • Sensory sorts. Participants are presented with scents, textures, and sounds, usually verbalized on cards, and asked to arrange them by one or more criteria as they relate to a brand, product, event, etc.
  • Sentence completion. A projective technique in which participants are asked to complete a sentence related to a particular brand, product, event, user group, etc.
  • Sentence outline. Report planning format; uses complete sentences rather than key words or phrases to draft each report section.
  • Sequential interviewing. An IDI technique in which the participant is asked questions formed around an anticipated series of activities that did or might have happened; used to stimulate recall within participants of both experiences and emotions; a.k.a. chronologic interviewing.
  • Sequential sampling. See double sampling.
  • Simple category scale. A scale with two mutually exclusive response choices; a.k.a. dichotomous scale.
  • Simple observation. Unstructured and exploratory observation of participants or objects.
  • Simple prediction. When we take the observed values of X to estimate or predict corresponding Y values; see also regression analysis.
  • Simple random sample. A probability sample in which each element has a known and equal chance of selection.
  • Simplicity. A design principle that emphasizes reducing clutter and advocates using only the information and visual techniques necessary to convey the data, idea, or conclusion.
  • Simulated test market (STM). Test of a product conducted in a laboratory setting designed to simulate a traditional shopping environment.
  • Simulation. A study in which the conditions of a system or process are replicated.
  • Skewness. A measure of a data distribution's deviation from symmetry; if fully symmetrical, the mean, median, and mode are in the same location.
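A short Python sketch of one common skewness measure (the moment coefficient); the data are invented, and other formulas exist:

    import numpy as np

    data = np.array([2, 3, 3, 4, 4, 4, 5, 12])   # a long right tail

    mean, median, std = data.mean(), np.median(data), data.std()

    # Moment coefficient of skewness: mean cubed deviation over std cubed.
    skew = np.mean((data - mean) ** 3) / std ** 3
    print(round(skew, 2))    # positive, so the distribution is skewed right
    print(mean, median)      # the mean is pulled above the median by the tail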
  • Skip interval. Interval between sample elements drawn from a sample frame in systematic sampling.
  • Skip pattern. Instructions designed to route or sequence the participant to another question based on the answer to a branched question.
  • Slope (b1). The change in Y for a 1-unit change in X; one of two regression coefficients.
  • Snowball sampling. A nonprobability sampling procedure in which subsequent participants are referred by current sample elements; referrals may have characteristics, experiences, or attitudes similar to or different from those of the original sample element; commonly used in qualitative methodologies.
  • Solicited proposal. Proposal developed in response to an RFP.
  • Somers's d. A measure of association for ordinal data that compensates for "tied" ranks and adjusts for direction of the independent variable.
  • Sorting. Participants sort cards (representing concepts or constructs) into piles using criteria established by the researcher.
  • Sound reasoning. The basis of sound research, based on finding correct premises, testing connections between facts and assumptions, and making claims based on adequate evidence.
  • Source evaluation. The five-factor process for evaluating the quality and value of data from a secondary source; see also purpose, scope, authority, audience, and format.
  • Spatial behavior. How humans physically relate to one another.
  • Spatial observation. The recording of how humans physically relate to each other; see also proxemics.
  • Spatial relationships study. An observation study that records how humans physically relate to each other (see also proxemics).
  • Speaker note cards. A brief version of a presentation, in outline or keyword form; may be written on index cards; used for reminding the presenter of the organization of the presentation.
  • Spearman's rho. Correlates ranks between two ordered variables; an ordinal measure of association.
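A sketch showing that Spearman's rho is simply the Pearson correlation computed on ranks; the rank helper is a simplified stand-in that does not average tied values:

    import numpy as np

    def simple_ranks(values):
        # Assign ranks 1..n; ties are not averaged in this sketch.
        order = np.argsort(values)
        r = np.empty(len(values), dtype=float)
        r[order] = np.arange(1, len(values) + 1)
        return r

    x = np.array([3, 1, 4, 2, 5])
    y = np.array([30, 15, 35, 20, 60])

    rho = np.corrcoef(simple_ranks(x), simple_ranks(y))[0, 1]
    print(rho)   # 1.0 here: the two orderings agree perfectly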
  • Specialty researcher. Establishes expertise in one or a few research methodologies; these specialties usually are based on methodology, process, industry, participant group, or geographic region; often assists other research firms to complete projects.
  • Specific instance. A critical incident selected to prove an overarching claim whereby specifics are translated into more general principles; not as detailed as stories; a form of inductive reasoning.
  • Specification error. An overestimation of the importance of the variables included in a structural model.
  • Sponsor nondisclosure. A type of confidentiality; when the sponsor of the research does not allow revealing of its sponsorship.
  • Spreadsheet. A data-entry software application that arranges data cases or records as rows, with a separate column for each variable in the study.
  • Stability. Characteristic of a measurement scale if it provides consistent results with repeated measures of the same person with the same instrument.
  • Standard deviation (s). A measure of spread; the positive square root of the variance; abbreviated std. dev.; affected by extreme scores.
  • Standard error of the mean. The standard deviation of the distribution of sample means.
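A minimal sketch of the usual estimate, the sample standard deviation divided by the square root of n, on made-up sample data:

    import math

    sample = [12, 15, 14, 10, 13, 16, 12, 14]
    n = len(sample)
    mean = sum(sample) / n

    # Sample standard deviation (n - 1 in the denominator).
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))

    standard_error = s / math.sqrt(n)
    print(round(standard_error, 3))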
  • Standard normal distribution. The statistical standard for describing normally distributed sample data; used with inferential statistics that assume normally distributed variables.
  • Standard score (Z score). Conveys how many standard deviation units a case is above or below the mean; designed to improve compatibility among variables that come from different scales yet require comparison; includes both linear manipulations and nonlinear transformations.
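A one-step numpy sketch of the linear standardization described above (the scores are illustrative):

    import numpy as np

    scores = np.array([48, 55, 60, 62, 70, 75])

    # How many standard deviations each case lies above or below the mean.
    z_scores = (scores - scores.mean()) / scores.std()
    print(np.round(z_scores, 2))   # the z scores have mean 0 and standard deviation 1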
  • Standard test market. Real-time test of a product through existing distribution channels.
  • Standardized coefficients. Regression coefficients in standardized form (mean = 0) used to determine the comparative impact of variables that come from different scales; the X values restated in terms of their standard deviations (a measure of the amount that Y varies with each unit change of the associated X variable).
  • Stapel scale. A numerical scale with up to 10 categories (5 positive, 5 negative) in which the central position is an attribute. The higher the positive number, the more accurately the attribute describes the object or its indicant.
  • Statistical significance. An index of how meaningful the results of a statistical comparison are; the magnitude of difference between a sample value and its population value; the difference is statistically significant if it is unlikely to have occurred by chance (represent random sampling fluctuations).
  • Statistical study. A study that attempts to capture a population's characteristics by making inferences from a sample's characteristics; involves hypothesis testing and is more comprehensive than a case study.
  • Statistics (in presentations). Numerical data used in the collection, analysis, and interpretation of data, but also found in data collection planning, measurement, and design; expected in research presentations.
  • Stem-and-leaf display. A tree-type frequency distribution for each data value, without equal interval grouping.
  • Stepwise selection. In modeling and regression, a method for sequentially adding or removing variables from a regression model to optimize R²; combines forward selection and backward elimination methods.
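A sketch of the forward-selection half of the procedure in Python/numpy; a full stepwise routine would also re-test already-entered variables for removal at each step. The stopping rule (a minimum gain in R²) is an assumption for illustration:

    import numpy as np

    def r_squared(X, y):
        # Fit OLS with an intercept and return R squared.
        X1 = np.column_stack([np.ones(len(y)), X])
        coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ coef
        ss_total = (y - y.mean()) @ (y - y.mean())
        return 1 - (resid @ resid) / ss_total

    def forward_select(X, y, min_gain=0.01):
        chosen, remaining, best_r2 = [], list(range(X.shape[1])), 0.0
        while remaining:
            r2, j = max((r_squared(X[:, chosen + [j]], y), j) for j in remaining)
            if r2 - best_r2 < min_gain:      # stop when the gain in R squared is small
                break
            chosen.append(j)
            remaining.remove(j)
            best_r2 = r2
        return chosen, best_r2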
  • Stories. A type of supporting material used in a presentation that tells the particulars of an act or occurrence or course of events; is most powerful when involving personal experience.
  • Strategy. The general approach an organization will follow to achieve its goals.
  • Stratified random sampling. Probability sampling that includes elements from each of the mutually exclusive strata within a population.
  • Stratified sampling, disproportionate. A probability sampling technique in which each stratum's size is not proportionate to the stratum's share of the population; allocation is usually based on variability of measures expected from the stratum, cost of sampling from a given stratum, and size of the various strata.
  • Stratified sampling, proportionate. A probability sampling technique in which each stratum's size is proportionate to the stratum's share of the population; higher statistical efficiency than a simple random sample.
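A tiny sketch of proportionate allocation, where each stratum's share of the sample mirrors its share of the population (the strata and counts are invented):

    # Population sizes by stratum and the desired overall sample size.
    population = {"north": 6000, "south": 3000, "west": 1000}
    total_n = 200

    N = sum(population.values())
    allocation = {s: round(total_n * size / N) for s, size in population.items()}
    print(allocation)   # {'north': 120, 'south': 60, 'west': 20}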
  • Stress index. An index used in multidimensional scaling that ranges from 1 (worst fit) to 0 (perfect fit).
  • Structural equation modeling (SEM). Uses analysis of covariance structures to explain causality among constructs.
  • Structured interview. An IDI that often uses a detailed interview guide similar to a questionnaire to guide the question order; questions generally use an open-ended response strategy.
  • Structured response. Participant's response is limited to specific alternatives provided; a.k.a. closed response.
  • Summated rating scale. Category of scales in which the participant agrees or disagrees with evaluative statements; the Likert scale is most known of this type of scale.
  • Supergroup. A group interview involving up to 20 people.
  • Survey. A measurement process using a highly structured interview; employs a measurement tool called a questionnaire, measurement instrument, or interview schedule.
  • Survey via personal interview. A two-way communication initiated by an interviewer to obtain information from a participant; face-to-face, phone, or Internet.
  • Symmetrical relationship. Occurs when two variables vary together but without causation.
  • Syndicated data provider. Tracks the change of one or more measures over time, usually in a given industry.
  • Synergy. The process at the foundation of group interviewing that encourages members to react to and build on the contributions of others in the group.
  • Systematic error. Error that results from a bias; see also systematic variance.
  • Systematic observation. Data collection through observation that employs standardized procedures, trained observers, schedules for recording, and other devices for the observer that mirror the scientific procedures of other primary data methods.
  • Systematic sampling. A probability sample drawn by applying a calculated skip interval to a sample frame; population (N) is divided by the desired sample (n) to obtain a skip interval (k). Using a random start between 1 and k, each kth element is chosen from the sample frame; usually treated as a simple random sample but statistically more efficient.
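A short Python sketch of drawing a systematic sample with a calculated skip interval; the frame here is just the integers 1 to 1,000:

    import random

    def systematic_sample(frame, n):
        # Skip interval k = N // n, with a random start within the first interval.
        k = len(frame) // n
        start = random.randint(0, k - 1)
        return frame[start::k][:n]

    frame = list(range(1, 1001))             # a sample frame of 1,000 elements
    print(systematic_sample(frame, 50))      # every 20th element from a random start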
  • Systematic variance. The variation that causes measurements to skew in one direction or another.
  • T distribution. A symmetrical distribution resembling the normal distribution but with more tail area than the Z (standard normal) distribution.
  • T-test. A parametric test to determine the statistical significance between a sample distribution mean and a population parameter; used when the population standard deviation is unknown and the sample standard deviation is used as a proxy.
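A minimal sketch of the one-sample t statistic using only the standard library; the sample values and hypothesized mean are invented:

    import math

    sample = [101.2, 99.5, 102.8, 100.7, 103.1, 98.9, 101.6, 102.3]
    mu0 = 100.0                     # hypothesized population mean

    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))   # sample sd as proxy

    t = (mean - mu0) / (s / math.sqrt(n))
    print(round(t, 2))   # compare with the t distribution, n - 1 = 7 degrees of freedom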
  • Tactics. Specific, timed activities that execute a strategy.
  • Target population. Those people, events, or records that contain the desired information for the study that determine whether a sample or a census should be selected.
  • Target question. A measurement question that addresses the core investigative questions of a specific study; these can be structured or unstructured questions.
  • Target question, structured. A measurement question that presents the participant with a fixed set of categories per variable.
  • Target question, unstructured. A measurement question that presents the participant with the context for participant-framed answers; a.k.a. open-ended question, free-response question (nominal, ordinal, or ratio data).
  • Tau (τ). A measure of association that uses table marginals to reduce prediction errors, with measures from 0 to 1.0 reflecting percentage of error estimates for prediction of one variable based on another variable.
  • Tau b (τb). A refinement of gamma for ordinal data that considers "tied" pairs, not only discordant and concordant pairs (values from −1.0 to +1.0); used best on square tables (one of the most widely used measures for ordinal data).
  • Tau c (τc). A refinement of gamma for ordinal data that considers "tied" pairs, not only discordant and concordant pairs (values from −1.0 to +1.0); useful for any-size table (one of the most widely used measures for ordinal data).
  • Technical report. A report written for an audience of researchers.
  • Telephone focus group. A type of focus group in which participants are connected to the moderator and each other by modern teleconferencing equipment; participants are often in separate teleconferencing facilities; may be remote-moderated or -monitored.
  • Telephone interview. A study conducted wholly by telephone contact between participant and interviewer.
  • Telephone survey. A structured interview conducted via telephone.
  • 10-minute rule. Varying a presentation's content at 10-minute intervals with videos, demonstrations, questions, and other means to allow the brain to avoid boredom/fatigue and seek new stimuli.
  • Tertiary sources. Aids to discover primary or secondary sources, such as indexes, bibliographies, and Internet search engines; also may be an interpretation of a secondary source.
  • Testimony (expert opinion). Opinions of recognized experts who possess credibility for your audience on a topic; used as support or proof.
  • Test market. A controlled experiment conducted in a carefully chosen marketplace (e.g., website, store, town, or other geographic location) to measure and predict sales or profitability of a product.
  • Test unit. An alternative term for a subject within an experiment (a person, an animal, a machine, a geographic entity, an object, etc.).
  • Thematic Apperception Test. A projective technique in which participants are confronted with a picture (usually a photograph or drawing) and asked to describe how the person in the picture feels and thinks.
  • Theoretical sampling. A nonprobability sampling process in which conceptual or theoretical categories of participants develop during the interviewing process; additional participants are sought who will challenge emerging patterns.
  • Theory. A set of systematically interrelated concepts, definitions, and propositions that are advanced to explain or predict phenomena (facts); the generalizations we make about variables and the relationships among variables.
  • 3-D graphic. A presentation technique that permits a graphical comparison of three or more variables; types: column, ribbon, wireframe, and surface line.
  • Three-point speech. Variations on the rule of three in speech organization that may include introduction–body–conclusion; introduction–three best supporting points–conclusion; three stories; or, other devices in threes.
  • Time sampling. The process of selecting certain time points or time intervals to observe and record elements, acts, or conditions from a population of observable behaviors or conditions to represent the population as a whole; three types include time-point samples, time-interval samples, and continuous real-time samples.
  • Topic outline. Report planning format; uses key words or phrases rather than complete sentences to draft each report section.
  • Treatment. The experimental factor to which participants are exposed.
  • Treatment levels. The arbitrary or natural groupings within the independent variable of an experiment.
  • Triad. A group interview involving three people.
  • Trials. Repeated measures taken from the same subject or participant.
  • Triangulation. Research design that combines several qualitative methods or qualitative with quantitative methods; most common are simultaneous QUAL/QUANT in single or multiple waves, sequential QUAL-QUANT or QUANT-QUAL, sequential QUAL-QUANT-QUAL.
  • Truncation. A search protocol that allows a symbol (usually "?" or "*") to replace one or more characters or letters in a word or at the end of a word root.
  • Two-independent-samples tests. Parametric and nonparametric tests used when the measurements are taken from two samples that are unrelated (Z test, t-test, chi-square, etc.).
  • Two-related-samples tests. Parametric and nonparametric tests used when the measurements are taken from closely matched samples or the phenomena are measured twice from the same sample (t-test, McNemar test, etc.).
  • Two-stage design. A design in which exploration as a distinct stage precedes a descriptive or causal design.
  • Two-tailed test. A nondirectional test to reject the hypothesis that the sample statistic is either greater than or less than the population parameter.
  • Type I error. Error that occurs when one rejects a true null hypothesis (there is no difference); the alpha (α) value, called the level of significance, is the probability of rejecting the true null hypothesis.
  • Type II error. Error that occurs when one fails to reject a false null hypothesis; the beta (β) value is the probability of failing to reject the false null hypothesis; the power of the test is 1 − β, the probability that we will correctly reject the false null hypothesis.
  • Unbalanced rating scale. Has an unequal number of favorable and unfavorable response choices.
  • Unforced-choice rating scale. Provides participants with an opportunity to express no opinion when they are unable to make a choice among the alternatives offered.
  • Unidimensional scale. Instrument scale that seeks to measure only one attribute of the participant or object.
  • Unobtrusive measures. A set of observational approaches that encourage creative and imaginative forms of indirect observation, archival searches, and variations on simple and contrived observation, including physical traces observation (erosion and accretion).
  • Unsolicited proposal. A suggestion by a contract researcher for research that might be done.
  • Unstructured interview. A customized IDI with no specific questions or order of topics to be discussed; usually starts with a participant narrative.
  • Unstructured response. Participant's response is limited only by space, layout, instructions, or time; usually free-response or fill-in response strategies.
  • Utility score. A score in conjoint analysis used to represent each aspect of a product or service in a participant's overall preference ratings.
  • Validity. A characteristic of measurement concerned with the extent that a test measures what the researcher actually wishes to measure; and that differences found with a measurement tool reflect true differences among participants drawn from a population.
  • Validity, construct. The degree to which a research instrument is able to provide evidence based on theory.
  • Validity, content. The extent to which measurement scales provide adequate coverage of the investigative questions.
  • Validity, criterion-related. The success of a measurement scale for prediction or estimation; types are predictive and concurrent.
  • Variability. Term for measures of spread or dispersion within a data set.
  • Variable (research variable). A characteristic, trait, or attribute that is measured; a symbol to which values are assigned; includes several different types: continuous, control, decision, dependent, dichotomous, discrete, dummy, extraneous, independent, intervening, and moderating variables.
  • Variance. A measure of score dispersion about the mean; calculated as the squared deviation scores from the data distribution's mean; the greater the dispersion of scores, the greater the variance in the data set.
  • Videoconferencing focus group. A type of focus group in which researchers use the videoconference facilities of a firm to connect participants with moderators and observers; unlike telephone focus groups, participants can see each other; can be remotely moderated, and in some facilities can be simultaneously monitored by client observers via Internet technology.
  • Virtual test market. A test of a product using a computer simulation of an interactive shopping experience.
  • Visibility. The design principle that visual support materials must be sized and placed in the presentation setting to facilitate the audience's ability to see and read the content.
  • Visitor from another planet. A projective technique (imagination exercise) in which participants are asked to assume that they are aliens and are confronting the product for the first time; they then describe their reactions, questions, and attitudes about purchase or retrial.
  • Visual aids. Presentation tools used to facilitate understanding of content (e.g., chalkboards, whiteboards, handouts, flip charts, overhead transparencies, slides, computer-drawn visuals, computer animation).
  • Visual learners. People who learn through seeing; about 40 percent of the audience; implies the need to include visual imagery, including graphs, photographs, models, etc., in research presentations.
  • Visual preparation. The design principle that a presenter should conceptualize the visual support materials on paper before composing the digital versions.
  • Visualization. The process of developing and organizing support materials that help the audience share in your understanding of the data.
  • Voice recognition. Computer systems programmed to record verbal answers to questions.
  • Web-based questionnaire. A measurement instrument both delivered and collected via the Internet; data processing is ongoing. Two options currently exist: proprietary solutions offered through research firms and off-the-shelf software for researchers who possess the necessary knowledge and skills; a.k.a. online survey, online questionnaire, Internet survey.
  • Web-delivered presentation. One that involves the use of a Web presentation platform, a presenter who remotely controls the delivery of the presentation, and an invited audience who participates via the Web from their office or a Web-equipped room.
  • Web-enabled test market. Test of a product using online distribution.
  • Whitespace. A design principle of leaving empty, uncluttered space surrounding important key visuals and text; permits audience to achieve a visual focus.
  • Word or picture association. A projective technique in which participants are asked to match images, experiences, emotions, products and services, and even people and places to whatever is being studied.
  • Z distribution. The normal distribution of measurements assumed for comparison.
  • Z score. See standard score.
  • Z test. A parametric test to determine the statistical significance between a sample distribution mean and a population parameter; employs the Z distribution.