Business Research Methods 8e by Zikmund, Babin, Carr, Griffin

From CNM Wiki

Business Research Methods 8e by Zikmund, Babin, Carr, Griffin is the 8th edition of the textbook authored by William G. Zikmund, Barry J. Babin, Jon C. Carr, and Mitch Griffin and published in 2009 by South-Western College Pub.

  • α (alpha). Level of significance or probability of a Type I error.
  • β (beta). Probability of a Type II error or slope of the regression line.
  • μ (mu). Population mean.
  • ρ (rho). Population Pearson correlation coefficient.
  • Σ (capital sigma). Take the sum of.
  • π (pi). Population proportion.
  • σ (sigma). Population standard deviation.
  • df. Number of degrees of freedom.
  • F. F-statistic.
  • n. Sample size.
  • p. Sample proportion.
  • Pr( ). Probability of the outcome in the parentheses.
  • r. Sample Pearson correlation coefficient.
  • r². Coefficient of determination (squared correlation coefficient).
  • R². Coefficient of determination (multiple regression).
  • s. Sample standard deviation (inferential statistics).
  • sx̄. Estimated standard error of the mean.
  • sp. Estimated standard error of the proportion.
  • s². Sample variance (inferential statistics).
  • t. t-statistic.
  • X. Variable or any unspecified observation.
  • X̄. Sample mean.
  • Y. Any unspecified observation on a second variable, usually the dependent variable.
  • Ŷ. Predicted dependent variable score.
  • Z. Standardized score (descriptive statistics) or Z-statistic.
  • Absolute causality. Means the cause is necessary and sufficient to bring about the effect.
  • Abstract level. In theory development, the level of knowledge expressing a concept that exists only as an idea or a quality apart from an object.
  • Acquiescence bias. A tendency for respondents to agree with all or most questions asked of them in a survey.
  • Administrative error. An error caused by the improper administration or execution of the research task.
  • Advocacy research. Research undertaken to support a specific claim in a legal action or represent some advocacy group.
  • Analysis of variance (ANOVA). Analysis involving the investigation of the effects of one treatment variable on an interval-scaled dependent variable -- a hypothesis-testing technique to determine whether statistically significant differences in means occur between two or more groups.
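The between-groups and within-groups logic behind a one-way ANOVA can be sketched in a few lines of plain Python; the group scores below are hypothetical illustration values, not data from the textbook.

```python
# One-way ANOVA by hand: does the mean differ across three groups?
# Group scores are hypothetical illustration values.
groups = {
    "A": [4, 5, 6, 5],
    "B": [7, 8, 9, 8],
    "C": [4, 4, 5, 3],
}

all_values = [v for g in groups.values() for v in g]
grand_mean = sum(all_values) / len(all_values)

# Between-groups sum of squares: squared deviation of each group mean
# from the grand mean, weighted by group size.
ss_between = sum(
    len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups.values()
)
# Within-groups sum of squares: deviation of each observation from its group mean.
ss_within = sum(
    (v - sum(g) / len(g)) ** 2 for g in groups.values() for v in g
)

df_between = len(groups) - 1               # k - 1
df_within = len(all_values) - len(groups)  # n - k
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(round(f_stat, 2))
```

The resulting F-statistic is then compared against a critical value for (k − 1, n − k) degrees of freedom to decide whether the group means differ significantly.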
  • Applied business research. Research conducted to address a specific business decision for a specific firm or organization.
  • Attitude. An enduring disposition to consistently respond in a given manner to various aspects of the world, composed of affective, cognitive, and behavioral components.
  • Attribute. A single characteristic or fundamental feature of an object, person, situation, or issue.
  • Back translation. Taking a questionnaire that has previously been translated into another language and having a second, independent translator translate it back to the original language.
  • Backward linkage. Implies that later steps influence earlier stages of the research process.
  • Balanced rating scale. A fixed-alternative rating scale with an equal number of positive and negative categories; a neutral point or point of indifference is at the center of the scale.
  • Basic business research. Research conducted without a specific decision in mind and that usually does not address the needs of a specific organization. It attempts to expand the limits of knowledge in general and is not aimed at solving a particular pragmatic problem.
  • Basic experimental design. An experimental design in which only one variable is manipulated.
  • Behavioral differential. A rating scale instrument similar to a semantic differential, developed to measure the behavioral intentions of subjects toward future actions.
  • Between-groups variance. The sum of differences between the group mean and the grand mean summed over all groups for a given set of observations.
  • Between-subjects design. Each subject in an experiment receives only one treatment combination.
  • Bivariate statistical analysis. Statistical test involving two variables.
  • Blocking variables. A categorical (less-than interval) variable that is not manipulated as is an experimental variable but is included in the statistical analysis of experiments.
  • Box and whisker plots. Graphic representations of central tendencies, percentiles, variabilities, and the shapes of frequency distributions.
  • Briefing session. A training session to ensure that each interviewer is provided with common information.
  • Business ethics. The application of morals to behavior related to the exchange environment.
  • Business intelligence. The subset of data and information that actually has some explanatory power enabling effective decisions to be made.
  • Business opportunity. A situation that makes some potential competitive advantage possible.
  • Business problem. A situation that makes some significant negative consequence more likely.
  • Business research. The application of the scientific method in searching for the truth about business phenomena. These activities include defining business opportunities and problems, generating and evaluating ideas, monitoring performance, and understanding the business process.
  • Callbacks. Attempts to recontact individuals selected for a sample who were not available initially.
  • Case study. The documented history of a particular person, group, organization, or event.
  • Categorical variable. A variable that indicates membership in some group.
  • Category scale. A rating scale that consists of several response categories, often providing respondents with alternatives to indicate positions on a continuum.
  • Causal inference. A conclusion that when one thing happens, another specific thing will follow.
  • Causal research. Allows causal inferences to be made; seeks to identify cause-and-effect relationships.
  • Cell. Refers to a specific treatment combination associated with an experimental group.
  • Census. An investigation of all the individual elements that make up a population.
  • Central location interviewing. Telephone interviews conducted from a central location, allowing firms to hire a staff of professional interviewers and to supervise and control the quality of interviewing more effectively.
  • Central-limit theorem. The theory that, as sample size increases, the distribution of sample means of size n, randomly selected, approaches a normal distribution.
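The central-limit theorem is easy to see by simulation: even when the population is skewed, the distribution of sample means tightens around the population mean as n grows. This sketch draws from an exponential population (a hypothetical choice for illustration) using only the Python standard library.

```python
import random
import statistics

# Central-limit theorem sketch: means of repeated samples from a skewed
# (exponential) population cluster near the population mean, and the
# spread of those means shrinks as the sample size n grows.
random.seed(42)

def sample_mean_spread(n, trials=2000):
    """Standard deviation of the distribution of sample means of size n."""
    means = [statistics.mean(random.expovariate(1.0) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

spread_small = sample_mean_spread(5)
spread_large = sample_mean_spread(50)
print(spread_small > spread_large)  # larger n -> tighter distribution of means
```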
  • Check boxes. In an Internet questionnaire, small graphic boxes, next to answers, that a respondent clicks on to choose an answer; typically, a check mark or an X appears in the box when the respondent clicks on it.
  • Checklist question. A fixed-alternative question that allows the respondent to provide multiple answers to a single question by checking off items.
  • Chi-square test. One of the basic tests for statistical significance that is particularly appropriate for testing hypotheses about frequencies arranged in a frequency or contingency table.
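A chi-square statistic for a contingency table is computed by comparing observed frequencies to the frequencies expected under independence (row total × column total ÷ n). A minimal sketch with hypothetical counts:

```python
# Chi-square statistic for a 2x2 contingency table.
# Observed counts are hypothetical illustration values.
observed = [[20, 30],
            [30, 20]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n  # expected under independence
        chi_square += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)  # (rows - 1) * (columns - 1)
print(chi_square, df)
```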
  • Choice. A measurement task that identifies preferences by requiring respondents to choose between two or more alternatives.
  • Classificatory variable. Another term for a categorical variable because it classifies units into categories.
  • Click-through rate. Proportion of people who are exposed to an Internet ad who actually click on its hyperlink to enter the Web site; click-through rates are generally very low.
  • Cluster analysis. A multivariate approach for grouping observations based on similarity among measured variables.
  • Cluster sampling. An economically efficient sampling technique in which the primary sampling unit is not the individual element in the population but a large cluster of elements; clusters are selected randomly.
  • Code book. A book that identifies each variable in a study and gives the variable's description, code name, and position in the data matrix.
  • Codes. Rules for interpreting, classifying, and recording data in the coding process; also, the actual numerical or other character symbols assigned to raw data.
  • Coding. The process of assigning a numerical score or other character symbol to previously edited data.
  • Coefficient alpha (α). The most commonly applied estimate of a multiple item scale's reliability. It represents the average of all possible split-half reliabilities for a construct.
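One common computational formula for coefficient alpha is α = (k / (k − 1)) × (1 − Σ item variances / variance of the total score). The respondent scores below are hypothetical:

```python
import statistics

# Coefficient (Cronbach's) alpha for a 3-item scale.
# Rows are respondents, columns are items; scores are hypothetical.
scores = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [1, 2, 2],
]

k = len(scores[0])                                          # number of items
item_variances = [statistics.variance(col) for col in zip(*scores)]
total_scores = [sum(row) for row in scores]                 # summed scale score
total_variance = statistics.variance(total_scores)

alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(round(alpha, 3))
```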
  • Coefficient of determination (R2). A measure obtained by squaring the correlation coefficient; the proportion of the total variance of a variable accounted for by another value of another variable.
  • Cohort effect. A change in the dependent variable that occurs because members of one experimental group experienced different historical situations than members of other experimental groups.
  • Communication process. The process by which one person or source sends a message to an audience or receiver and then receives feedback about the message.
  • Comparative rating scale. Any measure of attitudes that asks respondents to rate a concept in comparison with a benchmark explicitly used as a frame of reference.
  • Completely randomized design. An experimental design that uses a random process to assign subjects to treatment levels of an experimental variable.
  • Composite measures. Measurements that assign a value to an observation based on a mathematical derivation of multiple variables.
  • Composite scale. A way of representing a latent construct by summing or averaging respondents' reactions to multiple items, each assumed to indicate the latent construct.
  • Computer-assisted telephone interviewing (CATI). Technology that allows answers to telephone interviews to be entered directly into a computer for processing.
  • Concept. A generalized idea that represents something of meaning.
  • Concept (or construct). A generalized idea about a class of objects that has been given a name; an abstraction of reality that is the basic unit for theory development.
  • Conclusions and recommendations section. The part of the body of a report that provides opinions based on the results and suggestions for action.
  • Concomitant variation. One of three criteria for causality; occurs when two events "covary," meaning they vary systematically.
  • Conditional causality. Means that a cause is necessary but not sufficient to bring about an effect.
  • Confidence interval estimate. A specified range of numbers within which a population mean is expected to lie; an estimate of the population mean based on the knowledge that it will be equal to the sample mean plus or minus a small sampling error.
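The "sample mean plus or minus a small sampling error" arithmetic can be shown directly; the sample values are hypothetical, and 1.96 is the z-value associated with 95 percent confidence.

```python
import math
import statistics

# 95% confidence interval for a mean: mean +/- z * standard error.
# Sample values are hypothetical.
sample = [102, 98, 100, 105, 97, 103, 99, 101, 104, 96]

mean = statistics.mean(sample)
std_error = statistics.stdev(sample) / math.sqrt(len(sample))
lower = mean - 1.96 * std_error
upper = mean + 1.96 * std_error
print(round(lower, 2), round(upper, 2))
```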
  • Confidence level. The range of values for some estimate that accounts for a specified percentage of possibility.
  • Confidentiality. The information involved in a research study will not be shared with others.
  • Conflict of interest. A condition that occurs when one researcher works for two competing companies.
  • Confound. An alternative causal explanation, beyond the intended experimental variable, for any observed differences in the dependent variable.
  • Constancy of conditions. Subjects in all experimental groups are exposed to identical conditions except for the differing experimental treatments.
  • Constant. Something that does not change; constants are not useful in addressing research questions.
  • Constant-sum scale. A measure of attitudes in which respondents are asked to divide a constant sum to indicate the relative importance of attributes; respondents often sort cards, but the task may also be a rating task.
  • Construct. A term used to refer to concepts measured with multiple variables.
  • Construct validity. Construct validity exists when a measure reliably and truthfully represents a unique concept; consists of several components including face validity, content validity, criterion validity, convergent validity, and discriminant validity.
  • Consumer panel. A longitudinal survey of the same sample of individuals or households to record their attitudes, behavior, or purchasing habits over time.
  • Content analysis. The systematic observation and quantitative description of the content of communication.
  • Content providers. Parties that furnish information on the World Wide Web.
  • Content validity. The degree to which a measure covers the breadth of the domain of interest.
  • Contingency table. A data matrix that displays the frequency of some combination of possible responses to multiple variables; cross-tabulation results.
  • Continuous measures. Measures that reflect the intensity of a concept by assigning values that can take on any value along some scale range.
  • Continuous variable. A variable that can take on a range of values that correspond to some quantitative amount.
  • Contributory causality. Means that a cause need be neither necessary nor sufficient to bring about an effect.
  • Contrived observation. Observation in which the investigator creates an artificial environment in order to test a hypothesis.
  • Control group. A group of subjects to whom no experimental treatment is administered.
  • Convenience sampling. The sampling procedure of obtaining those people or units that are most conveniently available.
  • Convergent validity. Concepts that should be related to one another are in fact related; highly reliable scales contain convergent validity.
  • Conversations. An informal qualitative data-gathering approach in which the researcher engages a respondent in a discussion of the relevant subject matter.
  • Cookies. Small computer files that a content provider can save onto the computer of someone who visits its Web site.
  • Correlation coefficient. A standardized statistical measure of the covariation, or association, between two at-least interval variables.
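The correlation coefficient is the covariance standardized by the two standard deviations; squaring it gives the coefficient of determination. A sketch with hypothetical paired observations:

```python
import math

# Pearson correlation coefficient from covariance and standard deviations.
# x and y are hypothetical paired observations.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

covariance = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / (n - 1)
sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x) / (n - 1))
sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y) / (n - 1))

r = covariance / (sd_x * sd_y)
r_squared = r ** 2  # coefficient of determination
print(round(r, 3), round(r_squared, 3))
```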
  • Correlation matrix. The standard form for reporting correlation coefficients for more than two variables.
  • Correspondence rules. These indicate the way that a certain value on a scale corresponds to some true value of a concept.
  • Counterbalancing. Attempts to eliminate the confounding effects of order of presentation by requiring one-fourth of subjects to be exposed to treatment A first, one-fourth to treatment B first, one-fourth to treatment C first, and finally one-fourth to treatment D first.
  • Counterbiasing statement. An introductory statement or preamble to a potentially embarrassing question that reduces a respondent's reluctance to answer by suggesting that certain behavior is not unusual.
  • Covariance. Extent to which two variables are associated systematically with each other.
  • Cover letter. Letter that accompanies a questionnaire to induce the reader to complete and return the questionnaire.
  • Criterion validity. The ability of a measure to correlate with other standard measures of similar constructs or established criteria.
  • Critical values. The values that lie exactly on the boundary of the region of rejection.
  • Cross-checks. The comparison of data from one source with data from another source to determine the similarity of independent projects.
  • Cross-functional teams. Employee teams composed of individuals from various functional areas such as engineering, production, finance, and marketing who share a common purpose.
  • Cross-sectional study. A study in which various segments of a population are sampled and data are collected at a single moment in time.
  • Cross-tabulation. The appropriate technique for addressing research questions involving relationships among multiple less-than interval variables; results in a combined frequency table displaying one variable in rows and another in columns.
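Building the combined frequency table is straightforward with a counter keyed on pairs of categories; the responses below (region, preferred channel) are hypothetical.

```python
from collections import Counter

# Cross-tabulation: combined frequency table of two categorical variables.
# Each tuple is one hypothetical respondent: (region, preferred channel).
responses = [
    ("North", "Online"), ("North", "Store"), ("South", "Online"),
    ("South", "Online"), ("North", "Online"), ("South", "Store"),
]

crosstab = Counter(responses)
rows = sorted({r for r, _ in responses})
cols = sorted({c for _, c in responses})

# One printed row per category of the first variable,
# one column per category of the second.
for r in rows:
    print(r, [crosstab[(r, c)] for c in cols])
```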
  • Cross-validate. To verify that the empirical findings from one culture also exist and behave similarly in another culture.
  • Curb-stoning. A form of interviewer cheating in which an interviewer makes up the responses instead of conducting an actual interview.
  • Custom research. Research projects that are tailored specifically to a client's unique needs.
  • Customer discovery. Involves mining data to look for patterns identifying who is likely to be a valuable customer.
  • Customer relationship management (CRM). Part of the DSS that addresses exchanges between the firm and its customers.
  • Data. Facts or recorded measures of certain phenomena.
  • Data analysis. The application of reasoning to understand the data that have been gathered.
  • Data conversion. The process of changing the original form of the data to a format suitable to achieve the research objective; also called data transformation.
  • Data entry. The activity of transferring data from a research project to computers.
  • Data file. The way a data set is stored electronically in spreadsheet-like form in which the rows represent sampling units and the columns represent variables.
  • Data integrity. The notion that the data file actually contains the information that the researcher promised the decision maker he or she would obtain, meaning in part that the data have been edited and properly coded so that they are useful to the decision maker.
  • Data mining. The use of powerful computers to dig through volumes of data to discover patterns about an organization's customers and products; applies to many different forms of analysis.
  • Data quality. The degree to which data represent the true situation.
  • Data reduction technique. Multivariate statistical approaches that summarize the information from many variables into a reduced set of variates formed as linear combinations of measured variables.
  • Data transformation. Process of changing the data from their original form to a format suitable for performing a data analysis addressing research objectives.
  • Data warehouse. The multitiered computer storehouse of current and historical data.
  • Data warehousing. The process allowing important day-to-day operational data to be stored and organized for simplified access.
  • Data wholesalers. Companies that put together consortia of data sources into packages that are offered to municipal, corporate, and university libraries for a fee.
  • Database marketing. The use of customer databases to promote one-to-one relationships with customers and create precisely targeted promotions.
  • Database. A collection of raw data, arranged logically and organized in a form that can be stored and processed by a computer.
  • Data-processing error. A category of administrative error that occurs because of incorrect data entry, incorrect computer programming, or other procedural errors during data analysis.
  • Debriefing. Procedure in which research subjects are fully informed and provided with a chance to ask any questions they may have about the experiment.
  • Decision making. The process of developing and deciding among alternative ways of resolving a problem or choosing from among alternative opportunities.
  • Decision statement. A written expression of the key question(s) that the research user wishes to answer.
  • Decision support system (DSS). A computer-based system that helps decision makers confront problems through direct interaction with databases and analytical software programs.
  • Deductive reasoning. The logical process of deriving a conclusion about a specific instance based on a known general premise or something known to be true.
  • Degrees of freedom (df). The number of observations minus the number of constraints or assumptions needed to calculate a statistical term.
  • Deliverables. The term used often in consulting to describe research objectives to a research client.
  • Demand characteristic. Experimental design element or procedure that unintentionally provides subjects with hints about the research hypothesis.
  • Demand effect. The result that occurs when demand characteristics do indeed affect the dependent variable.
  • Dependence techniques. Multivariate statistical techniques that explain or predict one or more dependent variables.
  • Dependent variable. A process outcome or a variable that is predicted and/or explained by other variables.
  • Depth interview. A one-on-one interview between a professional researcher and a research respondent conducted about some relevant business or social topic.
  • Descriptive analysis. The elementary transformation of raw data in a way that describes the basic characteristics such as central tendency, distribution, and variability.
  • Descriptive research. A type of research that describes characteristics of objects, people, groups, organizations, or environments and tries to "paint a picture" of a given situation.
  • Descriptive statistics. Statistics which summarize and describe the data in a simple and understandable manner.
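Python's standard library covers the elementary descriptive measures (central tendency, variability) directly; the data below are hypothetical.

```python
import statistics

# Descriptive analysis of a small hypothetical sample:
# central tendency (mean, median) and variability (standard deviation).
data = [12, 15, 11, 14, 18, 15, 13]

mean_value = statistics.mean(data)
median_value = statistics.median(data)
std_dev = statistics.stdev(data)
print(mean_value, median_value, round(std_dev, 2))
```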
  • Determinant-choice question. A fixed-alternative question that requires the respondent to choose one response from among multiple alternatives.
  • Diagnostic analysis. A type of analysis that seeks to diagnose reasons for business outcomes and focuses specifically on the beliefs and feelings consumers have about and toward competing products.
  • Dialog boxes. Windows that open on a computer screen to prompt the user to enter information.
  • Direct observation. A straightforward attempt to observe and record what naturally occurs; the investigator does not create an artificial situation.
  • Discrete measures. Measures that take on only one of a finite number of values.
  • Discriminant analysis. A statistical technique for predicting the probability that an object will belong in one of two or more mutually exclusive categories (dependent variables), based on several independent variables.
  • Discriminant validity. A type of validity that represents how unique or distinct a measure is; a scale should not correlate too highly with a measure of a different construct.
  • Discussion guide. A focus group outline that includes written introductory comments informing the group about the focus group purpose and rules, and then outlines topics or questions to be addressed in the group session.
  • Disguised questions. Indirect questions that assume that the purpose of the study must be hidden from the respondent.
  • Disproportional stratified sample. A stratified sample in which the sample size for each stratum is allocated according to analytical considerations.
  • Do-not-call legislation. Legal action that restricts any telemarketing organization from calling consumers who either register with a no-call list or who request not to be called.
  • Door-in-the-face compliance technique. A two-step process for securing a high response rate. In step 1 an initial request, so large that nearly everyone refuses it, is made. Next, a second request is made for a smaller favor; respondents are expected to comply with this more reasonable request.
  • Door-to-door interviews. Personal interviews conducted at respondents' doorsteps in an effort to increase the participation rate in the survey.
  • Double-barreled question. A question that may induce bias because it covers two issues at once.
  • Drop-down box. In an Internet questionnaire, a space-saving device that reveals responses when they are needed but otherwise hides them from view.
  • Drop-off method. A survey method that requires the interviewer to travel to the respondent's location to drop off questionnaires that will be picked up later.
  • Dummy coding. Numeric "1" or "0" coding where each number represents an alternate response such as "female" or "male."
  • Dummy tables. Tables placed in research proposals that are exact representations of the actual tables that will show results in the final report with the exception that the results are hypothetical (fictitious).
  • Dummy variable. The way a dichotomous (two group) independent variable is represented in regression analysis by assigning a 0 to one group and a 1 to the other.
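Dummy coding a dichotomous variable is a one-line transformation; the group coded 0 becomes the reference category in a regression. A sketch with hypothetical responses:

```python
# Dummy coding a dichotomous variable for regression analysis:
# "female" -> 1, "male" -> 0 (the 0 group is the reference category).
# Responses are hypothetical.
gender = ["female", "male", "male", "female", "female"]
dummy = [1 if g == "female" else 0 for g in gender]
print(dummy)  # [1, 0, 0, 1, 1]
```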
  • Editing. The process of checking the completeness, consistency, and legibility of data and making the data ready for coding and transfer to storage.
  • Elaboration analysis. An analysis of the basic cross-tabulation for each level of a variable not previously considered, such as subgroups of the sample.
  • Electronic data interchange (EDI). A type of exchange that occurs when one company's computer system is integrated with another company's system.
  • E-mail surveys. Surveys distributed through electronic mail.
  • Empirical level. A level of knowledge that is verifiable by experience or observation.
  • Empirical testing. Examining a research hypothesis against reality using data.
  • Environmental scanning. A research method that entails all information gathering designed to detect changes in the external operating environment of the firm.
  • Error trapping. The use of software to control the flow of an Internet questionnaire -- for example, to prevent respondents from returning to previous questions or failing to answer a question.
  • Ethical dilemma. A situation in which one chooses from alternative courses of action, each with different ethical implications.
  • Ethnography. The study of cultures through methods that involve becoming highly active within that culture.
  • Evaluation research. The formal, objective measurement and appraisal of the extent to which a given activity, project, or program has achieved its objectives.
  • Experiment. A carefully controlled study in which the researcher manipulates a proposed cause and observes any corresponding change in the proposed effect.
  • Experimental condition. One of the possible levels of an experimental variable manipulation.
  • Experimental group. A group of subjects to whom an experimental treatment is administered.
  • Experimental treatment. The term referring to the way an experimental variable is manipulated.
  • Experimental variable. The proposed cause, controlled by the researcher who manipulates it.
  • Exploratory research. A type of research conducted to clarify ambiguous situations or discover ideas that may be potential business opportunities.
  • External data. Data created, recorded, or generated by an entity other than the researcher's organization.
  • External validity. The accuracy with which experimental results can be generalized beyond the experimental subjects.
  • Extraneous variables. Variables that naturally exist in the environment and that may have some systematic effect on the dependent variable.
  • Extremity bias. A category of response bias that results because some individuals tend to use extremes when responding to questions.
  • Eye-tracking monitor. A mechanical device used to observe eye movements; some eye monitors use infrared light beams to measure unconscious eye movements.
  • Face validity. A scale's content logically appears to reflect what was intended to be measured.
  • Factor analysis. A prototypical multivariate, interdependence technique that statistically identifies a reduced number of factors from a larger number of measured variables.
  • Factor loading. Indicates how strongly a measured variable is correlated with a factor.
  • Factor rotation. A mathematical way of simplifying factor analysis results to better identify which variables "load on" which factors; the most common procedure is varimax.
  • Factorial design. A design that allows for the testing of the effects of two or more treatments (experimental variables) at various levels.
  • Fax survey. A survey that uses fax machines as a way for respondents to receive and return questionnaires.
  • Field. A collection of characters that represents a single type of data -- usually a variable.
  • Field editing. Preliminary editing by a field supervisor on the same day as the interview to catch technical omissions, check legibility of handwriting, and clarify responses that are logically or conceptually inconsistent.
  • Field experiments. Research projects involving experimental manipulations that are implemented in a natural environment.
  • Field interviewing service. A research supplier that specializes in gathering data.
  • Field notes. The researcher's descriptions of what actually happens in the field; these notes then become the text from which meaning is extracted.
  • Fieldworker. An individual who is responsible for gathering data in the field.
  • Filter question. A question that screens out respondents who are not qualified to answer a second question.
  • Fixed-alternative questions. Questions in which respondents are given specific, limited-alternative responses and asked to choose the one closest to their own viewpoint.
  • Focus blog. A type of informal, "continuous" focus group established as an Internet blog for the purpose of collecting qualitative data from participant comments.
  • Focus group. A small group that discusses some research topic, led by a moderator who guides discussion among the participants.
  • Focus group interview. An unstructured, free-flowing interview with a small group of around six to ten people. Focus groups are led by a trained moderator who follows a flexible format encouraging dialogue among respondents.
  • Foot-in-the-door compliance technique. A technique for obtaining a high response rate; compliance with a large or difficult task is induced by first obtaining the respondent's compliance with a smaller request.
  • Forced answering software. Software that prevents respondents from continuing with an Internet questionnaire if they fail to answer a question.
  • Forced-choice rating scale. A fixed-alternative rating scale that requires respondents to choose one of the fixed alternatives.
  • Forecast analyst. Employee who provides technical assistance such as running computer programs and manipulating data to generate a sales forecast.
  • Forward linkage. A connection that implies that the earlier stages of the research process influence the later stages.
  • Free-association techniques. A technique that records respondents' first (top-of-mind) cognitive reactions to some stimulus.
  • Frequency distribution. A set of data organized by summarizing the number of times a particular value of a variable occurs.
  • Frequency table. A table displaying a frequency distribution.
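A frequency distribution and its frequency table amount to counting occurrences of each value; the survey ratings below are hypothetical.

```python
from collections import Counter

# Frequency distribution: number of times each value of a variable occurs,
# displayed as a simple frequency table. Ratings are hypothetical answers.
ratings = [5, 4, 5, 3, 4, 5, 2, 4, 5, 3]

freq = Counter(ratings)
for value in sorted(freq):
    print(value, freq[value])
```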
  • Frequency-determination question. A fixed-alternative question that asks for an answer about general frequency of occurrence.
  • F-test. A statistical test used to determine whether some outcome varies systematically with an independent variable(s).
  • F-test (regression). A statistical test aimed at determining whether or not a significant amount of variance in a dependent variable is explained by the independent variable(s).
  • Funded business research. A type of basic research that is usually performed by academic researchers and financially supported by some public or private institution, as in federal government grants.
  • Funnel technique. The technique of asking general questions before specific questions in order to obtain unbiased responses.
  • General linear model (GLM). A way of explaining and predicting a dependent variable based on fluctuations (variation) from its mean. The fluctuations are due to changes in independent variables.
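The simplest instance of the general linear model is bivariate regression, Ŷ = a + bX, with slope and intercept estimated by least squares; the observations below are hypothetical.

```python
# Bivariate regression, the simplest case of the general linear model:
# y_hat = a + b * x, with slope b and intercept a from least squares.
# x and y are hypothetical observations.
x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Slope: covariation of x and y divided by variation in x.
b = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
     / sum((xi - mean_x) ** 2 for xi in x))
a = mean_y - b * mean_x
print(a, b)  # predicted score: y_hat = a + b * x
```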
  • Global information system. An organized collection of computer hardware, software, data, and personnel designed to capture, store, update, manipulate, analyze, and immediately display information about worldwide business activity.
  • Goodness-of-fit (GOF). A general term representing how well some computed table or matrix of values matches some population or predetermined table or matrix of the same size.
  • Grand mean. The mean of a variable over all observations.
  • Graphic aids. Pictures or diagrams used to clarify complex points or emphasize a message.
  • Graphic rating scale. A measure of attitude that allows respondents to rate an object by choosing any point along a graphic continuum.
  • Grounded theory. An inductive investigation in which the researcher poses questions about information provided by respondents or taken from historical records; the researcher repeatedly questions the responses to derive deeper explanations.
  • Hawthorne effect. The experimental phenomenon whereby people will perform differently from normal when they know they are experimental subjects.
  • Hermeneutic unit. A text passage from a respondent's story that is linked with a key theme from within this story or provided by the researcher.
  • Hermeneutics. An approach to understanding phenomenology that relies on analysis of texts through which a person tells a story about him or herself.
  • Hidden observation. Observation in which the subject is unaware that observation is taking place.
  • Histogram. A graphical way of showing a frequency distribution in which the height of a bar corresponds to the observed frequency of the category.
  • History effect. An effect that occurs when some change other than the experimental treatment occurs during the course of an experiment that affects the dependent variable.
  • Host. The computer location where the content for a particular Web site physically resides and is accessed.
  • Human subjects review committee. An official group that carefully reviews proposed research design to try to make sure that no harm can come to any research participant.
  • Hypothesis. Formal statement of an unproven proposition that is empirically testable.
  • Hypothesis test of a proportion. A test that is conceptually similar to the one used when the mean is the characteristic of interest but that differs in the mathematical formulation of the standard error of the proportion.
  • Hypothetical constructs. Variables that are not directly observable but are measurable through indirect indicators, such as verbal expression or overt behavior.
  • Idealism. A term that reflects the degree to which one bases one's morality on moral standards.
  • Image profile. A graphic representation of semantic differential data for competing brands, products, or stores to highlight comparisons.
  • Importance-performance analysis. Another name for quadrant analysis.
  • Impute. To fill in a missing data point through the use of a statistical algorithm that provides a best guess for the missing response based on available information.
  • Independent samples t-test. A test for hypotheses stating the mean scores for some interval- or ratio-scaled variable differ based on some less-than interval classificatory variable.
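The computation behind an independent samples t-test can be sketched with only the standard library, using the pooled-variance form; the rating data below are hypothetical.

```python
import statistics

def independent_t(group1, group2):
    """Pooled-variance t statistic for two independent samples (a sketch;
    real analyses would use a statistics package that also returns a p-value)."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    # Pooled estimate of the variance, weighting each group by its degrees of freedom
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se = (pooled * (1 / n1 + 1 / n2)) ** 0.5  # standard error of the difference
    return (m1 - m2) / se

men = [4, 5, 6, 5, 4]    # hypothetical interval-scaled ratings by group
women = [6, 7, 6, 7, 8]
t = independent_t(men, women)
```

The resulting t would then be compared against a critical value for n1 + n2 − 2 degrees of freedom.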
  • Independent variable. A variable that is expected to influence the dependent variable in some way.
  • Index measure. An index assigns a value based on how much of the concept being measured is associated with an observation. Indexes often are formed by putting several variables together.
  • Index numbers. Scores or observations recalibrated to indicate how they relate to a base number.
  • Index of retail saturation. A calculation that describes the relationship between retail demand and supply.
  • Inductive reasoning. The logical process of establishing a general proposition on the basis of observation of particular facts.
  • Inferential statistics. The use of statistics to project characteristics from a sample to an entire population.
  • Information completeness. Having the right amount of information.
  • Information. Data formatted (structured) to support decision making or define the relationship between two facts.
  • Informed consent. Consent given by an individual who understands what the researcher wants him or her to do and who agrees to participate.
  • In-house editing. A rigorous editing job performed by a centralized office staff.
  • In-house interviewer. A fieldworker who is employed by the company conducting the research.
  • In-house research. Research performed by employees of the company that will benefit from the research.
  • Instrumentation effect. A nuisance that occurs when a change in the wording of questions, a change in interviewers, or a change in other procedures causes a change in the dependent variable.
  • Interaction effect. Differences in dependent variable means due to a specific combination of independent variables.
  • Interactive help desk. In an Internet questionnaire, a live, real-time support feature that solves problems or answers questions respondents may encounter in completing the questionnaire.
  • Interactive medium. A medium, such as the Internet, that a person can use to communicate with and interact with other users.
  • Interdependence techniques. Multivariate statistical techniques that give meaning to a set of variables or seek to group things together; no distinction is made between dependent and independent variables.
  • Internal and proprietary data. Secondary data that originate inside the organization.
  • Internal consistency. A measure's homogeneity or the extent to which each indicator of a concept converges on some common meaning.
  • Internal validity. A state that exists to the extent that an experimental variable is truly responsible for any variance in the dependent variable.
  • Internet. A worldwide network of computers that allows users access to information from distant sources.
  • Internet survey. A self-administered questionnaire posted on a Web site.
  • Interpretation. The process of drawing inferences from the analysis results.
  • Interquartile range. A measure of variability; the difference between the third and first quartiles, which encompasses the middle 50 percent of the observations.
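A minimal sketch of computing the interquartile range with Python's statistics module; the ages are hypothetical, and note that different quartile conventions yield slightly different cut points.

```python
import statistics

# Hypothetical data: ages of 8 respondents
ages = [21, 24, 27, 30, 33, 36, 39, 42]

# statistics.quantiles with n=4 returns the three quartile cut points
q1, q2, q3 = statistics.quantiles(ages, n=4)
iqr = q3 - q1  # spread of the middle 50% of the observations
```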
  • Interrogative techniques. Asking multiple what, where, who, when, why, and how questions.
  • Intersubjective certifiability. Different individuals following the same procedure will produce the same results or come to the same conclusion.
  • Interval scales. Scales that have both nominal and ordinal properties, but that also capture information about differences in quantities of a concept from one observation to the next.
  • Interviewer bias. A response bias that occurs because the presence of the interviewer influences respondents' answers.
  • Interviewer cheating. The practice by fieldworkers of filling in fake answers or falsifying interviews.
  • Interviewer error. Mistakes made by interviewers failing to record survey responses correctly.
  • Intranet. A company's private data network that uses Internet standards and technology.
  • Introduction section. The part of the body of a research report that discusses background information and the specific objectives of the research.
  • Inverse relationship (negative relationship). Covariation in which the association between variables is in the opposite direction. As one goes up, the other goes down.
  • Item nonresponse. Failure of a respondent to provide an answer to a survey question.
  • Judgment sampling (purposive sampling). A nonprobability sampling technique in which an experienced individual selects the sample based on personal judgment about some appropriate characteristic of the sample member.
  • Keyword search. A type of computerized search that takes place as the search engine searches through millions of Web pages for documents containing the keywords.
  • Knowledge management. The process of creating an inclusive, comprehensive, easily accessible organizational memory, often called the organization's intellectual capital.
  • Knowledge. A blend of previous experience, insight, and data that forms (organizational) memory.
  • Laboratory experiment. A type of research in which the researcher has more complete control over the research setting and extraneous variables.
  • Ladder of abstraction. The organization of concepts in sequence from the most concrete and individual to the most general.
  • Laddering. A particular approach to probing, asking respondents to compare differences between brands at different levels that produces distinctions at the attribute level, the benefit level, and the value or motivation level.
  • Latent construct. A concept that is not directly observable or measurable, but can be estimated through proxy measures.
  • Leading question. A question that suggests or implies certain answers.
  • Likert scale. A measure of attitudes designed to allow respondents to rate how strongly they agree or disagree with carefully constructed statements, ranging from very positive to very negative attitudes toward some object.
  • Literature review. A directed search of published works, including periodicals and books, that discusses theory and presents empirical results that are relevant to the topic at hand.
  • Loaded question. A question that suggests a socially desirable answer or that is emotionally charged.
  • Longitudinal study. A survey of respondents at different times, thus allowing analysis of response continuity and changes over time.
  • Mail survey. A self-administered questionnaire sent to respondents through the mail.
  • Main effect. The experimental difference in dependent variable means between the different levels of any single experimental variable.
  • Mall intercept interviews. Personal interviews conducted in a shopping mall.
  • Manager of decision support systems. Employee who supervises the collection and analysis of sales, inventory, and other periodic customer relationship management (CRM) data.
  • Managerial action standard. A specific performance criterion upon which a decision can be based.
  • Manipulation. Means that the researcher alters the level of the variable in specific increments.
  • Manipulation check. A validity test of an experimental manipulation to make sure that the manipulation does produce differences in the independent variable.
  • Marginals. Row and column totals in a contingency table, which are shown in its margins.
  • Market tracking. The observation and analysis of trends in industry volume and brand share over time.
  • Market-basket analysis. A form of data mining that analyzes anonymous point-of-sale transaction databases to identify coinciding purchases or relationships between products purchased and other retail shopping information.
  • Marketing-oriented. A term describing a firm in which all decisions are made with a conscious awareness of their effect on the customer.
  • Maturation effects. Effects that are a function of time and the naturally occurring events that coincide with growth and experience.
  • Mean. A measure of central tendency; the arithmetic average.
  • Measure of association. A general term that refers to a number of bivariate statistical techniques used to measure the strength of a relationship between two variables.
  • Measurement. The process of describing some property of a phenomenon of interest, usually by assigning numbers in a reliable and valid way.
  • Median. A measure of central tendency that is the midpoint; the value below which half the values in a distribution fall.
  • Median split. Dividing a data set into two categories by placing respondents below the median in one category and respondents above the median in another.
  • Mixed-mode survey. A study that employs any combination of survey methods.
  • Mode. A measure of central tendency; the value that occurs most often.
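The three measures of central tendency defined above (mean, median, and mode) can be computed directly with Python's statistics module; the purchase counts below are hypothetical.

```python
import statistics

# Hypothetical sample: weekly purchases reported by 7 respondents
purchases = [1, 2, 2, 3, 4, 4, 4]

mean = statistics.mean(purchases)      # arithmetic average
median = statistics.median(purchases)  # midpoint of the ordered values
mode = statistics.mode(purchases)      # most frequently occurring value
```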
  • Model building. The use of secondary data to help specify relationships between two or more variables; it can involve the development of descriptive or predictive equations.
  • Moderator variable. A third variable that changes the nature of a relationship between the original independent and dependent variables.
  • Moderator. A person who leads a focus group interview and ensures that everyone gets a chance to speak and contribute to the discussion.
  • Monadic rating scale. Any measure of attitudes that asks respondents about a single concept in isolation.
  • Moral standards. Principles that reflect beliefs about what is ethical and what is unethical.
  • Mortality effect (sample attrition). A situation that occurs when some subjects withdraw from the experiment before it is completed.
  • Multicollinearity. The extent to which independent variables in a multiple regression analysis are correlated with each other; high multicollinearity can make interpreting parameter estimates difficult or impossible.
  • Multidimensional scaling. A statistical technique that measures objects in multidimensional space on the basis of respondents' judgments of the similarity of objects.
  • Multiple regression analysis. An analysis of association in which the effects of two or more independent variables on a single, interval-scaled dependent variable are investigated simultaneously.
  • Multiple-grid question. Several similar questions arranged in a grid format.
  • Multistage area sampling. A type of sampling that involves using a combination of two or more probability sampling techniques.
  • Multivariate analysis of variance (MANOVA). A multivariate technique that predicts multiple continuous dependent variables with one or more categorical independent variables.
  • Multivariate statistical analysis. Statistical analysis involving three or more variables or sets of variables.
  • Mutually exclusive. A grouping in which no overlap exists among the fixed-alternative categories.
  • Neural networks. A form of artificial intelligence in which a computer is programmed to mimic the way that human brains process information.
  • No contacts. Members of the sampling frame who are not at home or who are otherwise inaccessible on the first and second contact attempts.
  • Nominal scales. Scales that represent the most elementary level of measurement, in which values are assigned to an object for identification or classification purposes only.
  • Nonparametric statistics. A type of statistics appropriate when the variables being analyzed do not conform to any known or continuous distribution.
  • Nonprobability sampling. A sampling technique in which units of the sample are selected on the basis of personal judgment or convenience; the probability of any particular member of the population being chosen is unknown.
  • Nonrespondent error. An error that the respondent is not responsible for creating, such as when the interviewer marks a response incorrectly.
  • Nonrespondents. People who are not contacted or who refuse to cooperate in the research.
  • Nonresponse error. The statistical differences between a survey that includes only those who responded and a perfect survey that would also include those who failed to respond.
  • Nonspurious association. One of three criteria for causality; any covariation between a cause and an effect is true and not simply due to some other variable.
  • Normal distribution. A symmetrical, bell-shaped distribution that describes the expected probability distribution of many chance occurrences.
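The familiar property that roughly 68 percent of a normal distribution falls within one standard deviation of the mean can be checked with Python's statistics.NormalDist.

```python
from statistics import NormalDist

# Standard normal distribution (mean 0, standard deviation 1)
z = NormalDist(mu=0, sigma=1)

# Probability that a chance outcome falls within one standard
# deviation of the mean -- about 0.68
within_one_sd = z.cdf(1) - z.cdf(-1)
```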
  • Nuisance variables. Items that may affect the dependent measure but are not of primary interest.
  • Numerical scale. An attitude rating scale similar to a semantic differential except that it uses numbers instead of verbal descriptions as response options to identify response positions.
  • Observation. The systematic process of recording the behavioral patterns of people, objects, and occurrences as they are witnessed.
  • Observer bias. A distortion of measurement resulting from the cognitive behavior or actions of a witnessing observer.
  • Online focus group. A qualitative research effort in which a group of individuals provides unstructured comments by entering their remarks into an electronic Internet display board of some type.
  • Open-ended boxes. In an Internet questionnaire, boxes where respondents can type in their own answers to open-ended questions.
  • Open-ended response questions. Questions that pose some problem and ask respondents to answer in their own words.
  • Operationalization. The process of identifying scales that correspond to variance in a concept to be involved in a research process.
  • Operationalizing. The process of identifying the actual measurement scales to assess the variables of interest.
  • Opt in. To give permission to receive selected e-mail, such as questionnaires, from a company with an Internet presence.
  • Optical scanning system. A data processing input device that reads material directly from mark-sensed questionnaires.
  • Oral presentation. A spoken summary of the major findings, conclusions, and recommendations, given to clients or line managers to provide them with the opportunity to clarify any ambiguous issues by asking questions.
  • Order bias. Bias caused by the influence of earlier questions in a questionnaire or by an answer's position in a set of answers.
  • Ordinal scales. Ranking scales allowing items to be arranged based on how much of some quality they possess.
  • Outlier. A value that lies outside the normal range of the data.
  • Outside agency. An independent research firm contracted by the company that actually will benefit from the research.
  • Paired comparison. A measurement technique that involves presenting the respondent with two objects and asking the respondent to pick the preferred object; more than two objects may be presented, but comparisons are made in pairs.
  • Paired-samples t-test. An appropriate test for comparing the scores of two interval variables drawn from related populations.
  • Parametric statistics. Statistics that involve numbers with known, continuous distributions; when the data are interval or ratio scaled and the sample size is large, parametric statistical procedures are appropriate.
  • Partial correlation. The correlation between two variables after taking into account the fact that they are correlated with other variables too.
  • Participant-observation. An ethnographic research approach where the researcher becomes immersed within the culture that he or she is studying and draws data from his or her observations.
  • Percentage distribution. A frequency distribution organized into a table (or graph) that summarizes percentage values associated with particular values of a variable.
  • Performance-monitoring research. Research that regularly, sometimes routinely, provides feedback for evaluation and control of business activity.
  • Personal interview. Face-to-face communication in which an interviewer asks a respondent to answer questions.
  • Phenomenology. A philosophical approach to studying human experiences based on the idea that human experience itself is inherently subjective and determined by the context in which people live.
  • Piggyback. A procedure in which one respondent stimulates thought among the others; as this process continues, increasingly creative insights are possible.
  • Pilot study. A small-scale research project that collects data from respondents similar to those to be used in the full study.
  • Pivot question. A filter question used to determine which version of a second question will be asked.
  • Placebo. An experimental tool used to create the perception that some substance or procedure has been administered.
  • Placebo effect. The effect in a dependent variable associated with the psychological impact that goes along with knowledge of some treatment being administered.
  • Plug value. An answer that an editor "plugs in" to replace blanks or missing values so as to permit data analysis; the choice of value is based on a predetermined decision rule.
  • Point estimate. An estimate of the population mean in the form of a single value, usually the sample mean.
  • Pooled estimate of the standard error. An estimate of the standard error for a t-test of independent means that assumes the variances of both groups are equal.
  • Population (universe). Any complete group of entities that share some common set of characteristics.
  • Population distribution. A frequency distribution of the elements of a population.
  • Population element. An individual member of a population.
  • Population parameters. Variables in a population or measured characteristics of the population.
  • Pop-up boxes. In an Internet questionnaire, boxes that appear at selected points and contain information or instructions for respondents.
  • Preliminary tabulation. A tabulation of the results of a pretest to help determine whether the questionnaire will meet the objectives of the research.
  • Pretest. A small-scale study in which the results are only preliminary and intended only to assist in design of a subsequent study.
  • Pretesting. A screening procedure that involves a trial run with a group of respondents to iron out fundamental problems in the survey design.
  • Primary sampling unit (PSU). A term used to designate a unit selected in the first stage of sampling.
  • Probability. The long-run relative frequency with which an event will occur.
  • Probability sampling. A sampling technique in which every member of the population has a known, nonzero probability of selection.
  • Probing. An interview technique that tries to draw deeper and more elaborate explanations from the discussion.
  • Problem. A situation that occurs when there is a difference between the current conditions and a more preferable set of conditions.
  • Problem definition. The process of defining and developing a decision statement and the steps involved in translating it into more precise research terminology, including a set of research objectives.
  • Product-oriented. A term used to describe a firm that prioritizes decision making in a way that emphasizes technical superiority in the product.
  • Production-oriented. A term used to describe a firm that prioritizes efficiency and effectiveness of the production processes in making decisions.
  • Projective technique. An indirect means of questioning enabling respondents to project beliefs and feelings onto a third party, an inanimate object, or a task situation.
  • Proportion. The percentage of elements that meet some criterion.
  • Proportional stratified sample. A stratified sample in which the number of sampling units drawn from each stratum is in proportion to the population size of that stratum.
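A proportional allocation can be sketched in a few lines of Python; the strata names and sizes below are hypothetical.

```python
# Hypothetical population strata and a desired total sample of 200
strata = {"managers": 1000, "clerical": 3000, "production": 6000}
total_pop = sum(strata.values())
sample_size = 200

# Each stratum's share of the sample mirrors its share of the population
allocation = {name: round(sample_size * size / total_pop)
              for name, size in strata.items()}
```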
  • Propositions. Statements explaining the logical linkage among certain concepts by asserting a universal connection between concepts.
  • Proprietary business research. The gathering of new data to investigate specific problems.
  • Pseudo-research. A study conducted not to gather information for marketing decisions but to bolster a point of view and satisfy other needs.
  • Psychogalvanometer. A device that measures galvanic skin response, a measure of involuntary changes in the electrical resistance of the skin.
  • Pull technology. A procedure by which consumers request information from a Web page and the browser then determines a response; the consumer is essentially asking for the data.
  • Pupilometer. A mechanical device used to observe and record changes in the diameter of a subject's pupils.
  • Push button. In a dialog box on an Internet questionnaire, a small outlined area, such as a rectangle or an arrow, that the respondent clicks on to select an option or perform a function, such as submit.
  • Push poll. Telemarketing under the guise of research.
  • Push technology. A program that sends data to a user's computer without a request being made; software is used to guess what information might be interesting to consumers based on the pattern of previous responses.
  • P-value. Probability value, or the observed or computed significance level; p-values are compared to significance levels to test hypotheses.
  • Quadrant analysis. An extension of cross-tabulation in which responses to two rating-scale questions are plotted in four quadrants of a two-dimensional table.
  • Qualitative business research. Research that addresses business objectives through techniques that allow the researcher to provide elaborate interpretations of phenomena without depending on numerical measurement; its focus is on discovering true inner meanings and new insights.
  • Qualitative data. Data that are not characterized by numbers, and instead are textual, visual, or oral; the focus is on stories, visual portrayals, meaningful characterizations, interpretations, and other expressive descriptions.
  • Quantitative business research. Business research that addresses research objectives through empirical assessments that involve numerical measurement and analysis.
  • Quantitative data. Data that represent phenomena by assigning numbers in an ordered and meaningful way.
  • Quasi-experimental designs. Experimental designs that do not involve random allocation of subjects to treatment combinations.
  • Quota sampling. A nonprobability sampling procedure that ensures that various subgroups of a population will be represented on pertinent characteristics to the exact extent that the investigator desires.
  • Radio button. In an Internet questionnaire, a circular icon resembling a button that activates one response choice and deactivates others when a respondent clicks on it.
  • Random digit dialing. The use of telephone exchanges and a table of random numbers to contact respondents with unlisted phone numbers.
  • Random sampling error. A statistical fluctuation that occurs because of chance variation in the elements selected for a sample.
  • Randomization. The random assignment of subjects and treatments to groups; it is one device for equally distributing the effects of extraneous variables to all conditions.
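Random assignment can be sketched with Python's random module; the subject labels below are hypothetical.

```python
import random

# Hypothetical pool of 8 subjects randomly assigned to two experimental groups
subjects = ["S1", "S2", "S3", "S4", "S5", "S6", "S7", "S8"]

random.shuffle(subjects)   # chance alone determines the ordering
treatment = subjects[:4]   # first half receives the experimental treatment
control = subjects[4:]     # remaining half serves as the control group
```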
  • Randomized-block design. A design that attempts to isolate the effects of a single extraneous variable by blocking out its effects on the dependent variable.
  • Ranking. A measurement task that requires respondents to rank order a small number of stores, brands, or objects on the basis of overall preference or some characteristic of the stimulus.
  • Rating. A measurement task that requires respondents to estimate the magnitude of a characteristic or quality that a brand, store, or object possesses.
  • Ratio scales. Ranking scales that represent the highest form of measurement in that they have all the properties of interval scales with the additional attribute of representing absolute quantities; characterized by a meaningful absolute zero.
  • Raw data. The unedited responses from a respondent exactly as indicated by that respondent.
  • Record. A collection of related fields that represents the responses from one sampling unit.
  • Refusals. People who are unwilling to participate in a research project.
  • Relativism. The rejection of moral standards in favor of the acceptability of some action. This way of thinking rejects absolute principles in favor of situation-based evaluations.
  • Relevance. The characteristics of data reflecting how pertinent these particular facts are to the situation at hand.
  • Reliability. An indicator of a measure's internal consistency.
  • Repeated measures. Experiments in which an individual subject is exposed to more than one level of an experimental treatment.
  • Replication. Repetition of research to determine whether the same interpretation will be drawn if the study is repeated by different researchers with different respondents following the same methods.
  • Report format. The makeup or arrangement of parts necessary to a good research report.
  • Research analyst. A person responsible for client contact, project design, preparation of proposals, selection of research suppliers, and supervision of data collection, analysis, and reporting activities.
  • Research assistants. Research employees who provide technical assistance with questionnaire design, data analyses, and similar activities.
  • Research design. A master plan that specifies the methods and procedures for collecting and analyzing the needed information.
  • Research follow-up. Recontacting decision makers and/or clients after they have had a chance to read over a research report in order to determine whether additional information or clarification is necessary.
  • Research generalist. A research employee who serves as a link between management and research specialists. The research generalist acts as a problem definer, an educator, a liaison, a communicator, and a friendly ear.
  • Research methodology section. The part of the body of a research report that describes the research design, including the data collection methods, sampling procedures, and analytical approach used in the project.
  • Research objectives. The goals to be achieved by conducting research.
  • Research program. Numerous related studies that come together to address multiple, related research objectives.
  • Research project. A single study that addresses one or a small number of research objectives.
  • Research proposal. A written statement of the research design.
  • Research questions. Questions that express the research objectives in terms of questions that can be addressed by research.
  • Research report. An oral presentation or written statement of research results, strategic recommendations, and/or other conclusions to a specific audience.
  • Research suppliers. Commercial providers of research services.
  • Researcher-dependent. Research in which the researcher must extract meaning from unstructured responses such as text from a recorded interview or a collage representing the meaning of some experience.
  • Respondent error. A category of sample bias resulting from some respondent action or inaction such as nonresponse or response bias.
  • Respondents. People who verbally answer an interviewer's questions or provide answers to written questions.
  • Response bias. A bias that occurs when respondents either consciously or unconsciously tend to answer questions with a certain slant that misrepresents the truth.
  • Response latency. The amount of time it takes to make a choice between two alternatives, used as a measure of the strength of preference.
  • Response rate. The number of questionnaires returned or completed divided by the number of eligible people who were asked to participate in the survey.
  • Results section. The part of the body of a report that presents the findings of the project. It includes tables, charts, and an organized narrative.
  • Reverse coding. Coding in which the value assigned for a response is treated oppositely from the other items.
  • Reverse directory. A directory similar to a telephone directory except that listings are by city and street address or by phone number rather than alphabetical by last name.
  • Reverse recoding. A method of making sure all the items forming a composite scale are scored in the same direction. Negative items can be recoded into the equivalent responses for a nonreverse coded item.
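Reverse recoding on a 1-to-5 scale can be sketched as follows; the item names and responses below are hypothetical.

```python
# Hypothetical 5-point Likert responses where items 2 and 4 are negatively worded
responses = {"item1": 4, "item2": 2, "item3": 5, "item4": 1}
reverse_items = {"item2", "item4"}
scale_max = 5

# Reverse recode: on a 1-to-5 scale, 1 becomes 5, 2 becomes 4, and so on,
# so that all items are scored in the same direction
recoded = {item: (scale_max + 1 - score) if item in reverse_items else score
           for item, score in responses.items()}
```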
  • Rule of parsimony. The rule of parsimony suggests an explanation involving fewer components is better than one involving more.
  • Sample. A subset, or some part, of a larger population.
  • Sample bias. A persistent tendency for the results of a sample to deviate in one direction from the true value of the population parameter.
  • Sample distribution. A frequency distribution of a sample.
  • Sample selection error. An administrative error caused by improper sample design or sampling procedure execution.
  • Sample statistics. Variables in a sample or measures computed from sample data.
  • Sample survey. A more formal term for a survey.
  • Sampling. Any procedure that draws conclusions based on measurements of a portion of the population.
  • Sampling distribution. A theoretical probability distribution of sample means for all possible samples of a certain size drawn from a particular population.
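A sampling distribution can be approximated by simulation; this sketch uses a hypothetical population of the integers 1 through 100.

```python
import random
import statistics

random.seed(42)  # for reproducibility of this illustration

# A hypothetical population and 1,000 repeated samples of size 30 drawn from it
population = list(range(1, 101))  # population mean is 50.5
sample_means = [statistics.mean(random.sample(population, 30))
                for _ in range(1000)]

# The distribution of these sample means approximates the
# sampling distribution of the mean for n = 30
center = statistics.mean(sample_means)  # close to the population mean
```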
  • Sampling frame. A list of elements from which a sample may be drawn, also called working population.
  • Sampling frame error. An error that occurs when certain sample elements are not listed or are not accurately represented in a sampling frame.
  • Sampling unit. A single element or group of elements subject to selection in the sample.
  • Scales. A device providing a range of values that correspond to different values in a concept being measured.
  • Scanner data. The accumulated records resulting from point of sale data recordings.
  • Scanner-based consumer panel. A type of consumer panel in which participants' purchasing habits are recorded with a laser scanner rather than with a purchase diary.
  • Scientific method. A set of prescribed procedures for establishing and connecting theoretical statements about events, for analyzing empirical evidence, and for predicting events yet unknown; techniques or procedures used to analyze empirical evidence in an attempt to confirm or disprove prior conceptions.
  • Search engine. A computerized directory that allows anyone to search the World Wide Web for information using a keyword search.
  • Secondary data. Data that have been previously collected for some purpose other than the one at hand.
  • Secondary sampling unit. A unit selected in the second stage of sampling.
  • Selection effect. Sample bias from differential selection of respondents for experimental groups.
  • Self-administered questionnaires. Surveys in which the respondent takes the responsibility for reading and answering the questions.
  • Self-selection bias. A bias that occurs because people who feel strongly about a subject are more likely to respond to survey questions than people who feel indifferent about it.
  • Semantic differential. A measure of attitudes that consists of a series of seven-point rating scales that use bipolar adjectives to anchor the beginning and end of each scale.
  • Sensitivity. A measurement instrument's ability to accurately measure variability in stimuli or responses.
  • Significance level. A critical probability associated with a statistical hypothesis test that indicates how likely an inference supporting a difference between an observed value and some statistical expectation is true; the acceptable level of Type I error.
  • Simple linear regression (bivariate linear regression). A measure of linear association that investigates straight-line relationships between a continuous dependent variable and an independent variable that is usually continuous but can be a categorical dummy variable.
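A minimal sketch of the bivariate least-squares fit, using made-up data (the slope is the covariance of X and Y divided by the variance of X, and the intercept passes the line through the means):

```python
# Minimal bivariate OLS: slope b = cov(x, y) / var(x); intercept a = ybar - b * xbar.
def simple_linear_regression(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    a = ybar - b * xbar
    return a, b

# Perfectly linear toy data: y = 2x + 1
a, b = simple_linear_regression([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)  # -> 1.0 2.0
```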
  • Simple random sampling. A sampling procedure that assures each element in the population of an equal chance of being included in the sample.
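As a hypothetical sketch, drawing a simple random sample from a numbered frame can be done with Python's standard library; `random.sample` gives each element an equal chance of selection without replacement:

```python
import random

population = list(range(1, 101))          # hypothetical frame of 100 element IDs
random.seed(42)                           # fixed seed so the draw is reproducible
sample = random.sample(population, k=10)  # each element equally likely, no repeats
print(sample)
```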
  • Simple-dichotomy question (dichotomous question). A fixed-alternative question that requires the respondent to choose one of two alternatives.
  • Single-source data. Diverse types of data offered by a single company, usually integrated on the basis of a common variable such as geographic area or store.
  • Site analysis techniques. Techniques that use secondary data to select the best location for retail or wholesale operations.
  • Situation analysis. The gathering of background information to familiarize researchers and managers with the decision-making environment.
  • Smart agent software. Software capable of learning an Internet user's preferences and automatically searching out information in selected Web sites and then distributing it.
  • Snowball sampling. A sampling procedure in which initial respondents are selected by probability methods and additional respondents are obtained from information provided by the initial respondents.
  • Social desirability bias. Bias in responses caused by respondents' desire, either conscious or unconscious, to gain prestige or appear in a different social role.
  • Sorting. A measurement task that presents a respondent with several objects or product concepts and requires the respondent to arrange the objects into piles or classify the product concepts.
  • Split-ballot technique. The practice of using two alternative phrasings of the same question for respective halves of a sample to elicit a more accurate total response than would a single phrasing.
  • Split-half method. A method for assessing internal consistency by checking the results of one-half of a set of scaled items against the results from the other half.
  • Spyware. Software placed on a computer without consent or knowledge of the user.
  • Standard deviation. A quantitative index of a distribution's spread, or variability; the square root of the variance for a distribution.
  • Standard error of the mean. The standard deviation of the sampling distribution.
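The two quantities above can be estimated from a single sample; a minimal sketch with hypothetical measurements (the standard error divides the sample standard deviation by the square root of the sample size):

```python
from math import sqrt
from statistics import stdev

sample = [12, 15, 11, 14, 13, 16, 12, 15]  # hypothetical measurements
s = stdev(sample)              # sample standard deviation (n - 1 denominator)
se = s / sqrt(len(sample))     # estimated standard error of the mean
print(round(s, 3), round(se, 3))
```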
  • Standardized normal distribution. A purely theoretical probability distribution that reflects a specific normal curve for the standardized value, Z.
  • Standardized regression coefficient. The estimated coefficient indicating the strength of relationship between an independent variable and dependent variable expressed on a standardized scale where higher absolute values indicate stronger relationships (range is from –1 to 1).
  • Standardized research service. A research company that develops a unique methodology for investigating a business specialty area.
  • Stapel scale. A measure of attitudes that consists of a single adjective in the center of an even number of numerical values.
  • Statistical base. The number of respondents or observations (in a row or column) used as a basis for computing percentages.
  • Status bar. In an Internet questionnaire, a visual indicator that tells the respondent what portion of the survey he or she has completed.
  • Stratified sampling. A probability sampling procedure in which simple random subsamples that are more or less equal on some characteristic are drawn from within each stratum of the population.
  • String characters. Computer terminology to represent formatting a variable using a series of alphabetic characters (nonnumeric characters) that may form a word.
  • Structured question. A question that imposes a limit on the number of allowable responses.
  • Subjective. A term describing research results that are researcher-dependent; different researchers may reach different conclusions based on the same interview.
  • Subjects. The sampling units for an experiment, usually human respondents who provide measures based on the experimental manipulation.
  • Summated scale. A scale created by simply summing (adding together) the response to each item making up the composite measure.
  • Survey. A research technique in which a sample is interviewed in some form or the behavior of respondents is observed and described in some way.
  • Symptoms. Observable cues that serve as a signal of a problem because they are caused by that problem.
  • Syndicated service. A research supplier that provides standardized information for many clients in return for a fee.
  • Systematic error. Error resulting from some imperfect aspect of the research design that causes respondent error or from a mistake in the execution of the research.
  • Systematic error or nonsampling error. A type of error that occurs if the sampling units in an experimental cell are somehow different than the units in another cell, and this difference affects the dependent variable.
  • Systematic sampling. A sampling procedure in which a starting point is selected by a random process and then every nth number on the list is selected.
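A sketch of the procedure with a hypothetical frame of 100 elements: the interval k is the frame size divided by the desired sample size, the starting point is chosen at random within the first interval, and every kth element follows:

```python
import random

def systematic_sample(frame, n):
    k = len(frame) // n               # sampling interval
    start = random.randrange(k)       # random start within the first interval
    return [frame[start + i * k] for i in range(n)]

random.seed(7)
frame = list(range(1, 101))           # hypothetical sampling frame of 100 IDs
print(systematic_sample(frame, 10))   # every 10th element from a random start
```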
  • Tabulation. The orderly arrangement of data in a table or other summary format showing the number of responses to each response category; tallying.
  • Tachistoscope. A device that controls the amount of time a subject is exposed to a visual image.
  • T-distribution. A symmetrical, bell-shaped distribution whose shape is contingent on sample size; it has a mean of 0, and its standard deviation approaches 1 as sample size increases.
  • Telephone interviews. Personal interviews conducted by telephone, the mainstay of commercial survey research.
  • Television monitoring. Computerized mechanical observation used to obtain television ratings.
  • Temporal sequence. One of three criteria for causality that deals with the time order of events: the cause must occur before the effect.
  • Tertiary sampling unit. A term used to designate a unit selected in the third stage of sampling.
  • Test of differences. An investigation of a hypothesis stating that two (or more) groups differ with respect to measures on a variable.
  • Test tabulation. Tallying of a small sample of the total number of replies to a particular question in order to construct coding categories.
  • Test units. The subjects or entities whose responses to the experimental treatment are measured or observed.
  • Testing effects. A nuisance effect occurring when the initial measurement or test alerts or primes subjects in a way that affects their response to the experimental treatments.
  • Test-market. An experiment that is conducted within actual market conditions.
  • Test-retest method. A reliability approach involving the administration of the same scale or measure to the same respondents at two separate points in time.
  • Thematic apperception test (TAT). A test that presents subjects with an ambiguous picture(s) in which consumers and products are the center of attention; the investigator asks the subject to tell what is happening in the picture(s) now and what might happen next.
  • Themes. Meaning identified by the frequency with which the same term (or a synonym) arises in the narrative description.
  • Theory. A formal, logical explanation of some events that includes predictions of how things relate to one another.
  • Thurstone scale. An attitude scale in which judges assign scale values to attitudinal statements and subjects are asked to respond to these statements.
  • Time series design. A research design used for an experiment investigating long-term structural changes.
  • Timeliness. A term indicating that the data are current enough to still be relevant.
  • Total quality management. A business philosophy that emphasizes market-driven quality as a top organizational priority.
  • Total variability. The sum of within-group variance and between-groups variance.
  • Totally exhaustive. A property of a set of fixed-alternative categories such that a category exists for every respondent.
  • Tracking study. A type of longitudinal study that uses successive samples to compare trends and identify changes in variables such as consumer satisfaction, brand image, or advertising awareness.
  • T-test. A hypothesis test that uses the t-distribution. A univariate t-test is appropriate when the variable being analyzed is interval or ratio.
  • Type I error. An error caused by rejecting the null hypothesis when it is true; it has a probability of alpha. Practically, a Type I error occurs when the researcher concludes that a relationship or difference exists in the population when in reality it does not exist.
  • Type II error. An error caused by failing to reject the null hypothesis when the alternative hypothesis is true; it has a probability of beta. Practically, a Type II error occurs when a researcher concludes that no relationship or difference exists when in fact one does exist.
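The meaning of alpha can be illustrated by simulation (a sketch with invented parameters, not an example from the text): when the null hypothesis is actually true, a two-tailed test at the .05 significance level should falsely reject it, i.e. commit a Type I error, in roughly 5 percent of repeated samples.

```python
import random

random.seed(1)
trials, n = 2000, 30
rejections = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]  # null is true: mu = 0
    mean = sum(sample) / n
    z = mean * n ** 0.5                              # sigma assumed known (= 1)
    if abs(z) > 1.96:                                # two-tailed test, alpha = .05
        rejections += 1                              # a Type I error
print(rejections / trials)  # close to .05
```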
  • Unbalanced rating scale. A fixed-alternative rating scale that has more response categories at one end than the other resulting in an unequal number of positive and negative categories.
  • Undisguised questions. Straightforward questions that assume the respondent is willing to answer.
  • Uniform resource locator (URL). A Web site address that Web browsers recognize.
  • Unit of analysis. What or who should provide the data and at what level of aggregation it should be analyzed (organizations, strategic business units, departments, families, individuals . . .).
  • Univariate statistical analysis. Tests of hypotheses involving only one variable.
  • Unobtrusive methods. Methods in which research respondents do not have to be disturbed for data to be gathered.
  • Unstructured question. A question that does not restrict the respondents' answers.
  • Validity. The accuracy of a measure or the extent to which a score truthfully represents a concept.
  • Value labels. Unique labels assigned to each possible numeric code for a response.
  • Variable piping software. Software that allows variables to be inserted into an Internet questionnaire as a respondent is completing it.
  • Variable. Anything that varies or changes from one instance to another; variables can exhibit differences in value, usually in magnitude or strength, or in direction.
  • Variance. A measure of variability or dispersion. Its square root is the standard deviation.
  • Variate. A mathematical way in which a set of variables can be represented with one equation.
  • Verification. Quality-control procedures in fieldwork intended to ensure that interviewers are following the sampling procedures and to determine whether interviewers are cheating.
  • Visible observation. Observation in which the observer's presence is known to the subject.
  • Voice-pitch analysis. A physiological measurement technique that records abnormal frequencies in the voice that are supposed to reflect emotional reactions to various stimuli.
  • Welcome screen. The first Web page in an Internet survey, which introduces the survey and requests that the respondent enter a password or PIN.
  • Within-group error or variance. The sum of the squared differences between observed values and the group mean for a given set of observations, also known as total error variance.
  • Within-subjects design. Involves repeated measures because with each treatment the same subject is measured.
  • World Wide Web (WWW). A portion of the internet that is a system of computer servers that organize information into documents called Web pages.
  • Z-test for differences of proportions. A technique used to test the hypothesis that proportions are significantly different for two independent samples or groups.
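A minimal sketch of the computation with hypothetical counts (60 successes out of 100 in one group versus 45 out of 100 in the other); under the null hypothesis of equal proportions, the standard error uses the pooled proportion:

```python
from math import sqrt

def z_two_proportions(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)   # pooled proportion under H0: p1 = p2
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = z_two_proportions(60, 100, 45, 100)  # hypothetical: 60% vs. 45%
print(round(z, 3))
```

A |z| above 1.96 would lead to rejecting the null hypothesis of equal proportions at the .05 level.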