Research Methods For Business 7e by Sekaran, Bougie

From CNM Wiki

Research Methods For Business 7e by Sekaran, Bougie is the seventh edition of the textbook Research Methods For Business: A Skill Building Approach, authored by Uma Sekaran and Roger Bougie and published in 2016 by John Wiley & Sons Ltd., Chichester, West Sussex, United Kingdom.

  • Action research. A research strategy aimed at initiating change processes, with an incremental focus, for narrowing the gap between the desired and actual states.
  • Alternate hypothesis. An educated conjecture that sets the parameters one expects to find. The alternate hypothesis is tested to see whether or not the null is to be rejected.
  • Ambiguous questions. Questions that are not clearly worded and are likely to be interpreted by respondents in different ways.
  • ANOVA. Stands for analysis of variance, which tests for significant mean differences in variables among multiple groups.
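The F statistic underlying a one-way ANOVA can be sketched in a few lines of pure Python; the three groups below are hypothetical illustration data, not from the book:

```python
# One-way ANOVA F statistic: between-group variance / within-group variance.
# Hypothetical data: three groups of observations.
groups = [[1, 2, 3], [2, 3, 4], [4, 5, 6]]

n_total = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / n_total

# Sum of squares between groups (weighted by group size).
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Sum of squares within groups.
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

df_between = len(groups) - 1       # 2
df_within = n_total - len(groups)  # 6
f_statistic = (ss_between / df_between) / (ss_within / df_within)
# f_statistic is ~7.0 for this data; a large F suggests the group means differ.
```

In practice the F value is compared against the F distribution with (2, 6) degrees of freedom to judge significance.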
  • Applied research. Research conducted in a particular setting with the specific objective of solving an existing problem in the situation.
  • Area sampling. Cluster sampling within a specified area or region; a probability sampling design.
  • Basic research. Research conducted to generate knowledge and understanding of phenomena (in the work setting) that adds to the existing body of knowledge (about organizations and management theory).
  • Bias. Any error that creeps into the data. Biases can be introduced by the researcher, the respondent, the measuring instrument, the sample, and so on.
  • Bibliography. A listing of books, articles, and other relevant materials, alphabetized according to the last name of the authors, referencing the titles of their works, and indicating where they can be located.
  • Big data. Term commonly used to describe the exponential growth and availability of data from digital sources inside and outside the organization.
  • Canonical correlation. A statistical technique that examines the relationship between two or more dependent variables and several independent variables.
  • Case study. Focuses on collecting information about a specific object, event or activity, such as a particular business unit or organization.
  • Categorization. The process of organizing, arranging, and classifying coding units (in qualitative data analysis).
  • Category (in qualitative data analysis). A group of coding units that share some commonality.
  • Category reliability. The extent to which judges are able to use category definitions to classify qualitative data.
  • Category scale. A scale that uses multiple items to seek a single response.
  • Causal study. A research study conducted to establish cause‐and‐effect relationships among variables.
  • Chi‐square test. A nonparametric test that establishes whether or not two nominal variables are independent of each other.
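As a minimal sketch of the idea, the Pearson chi-square statistic can be computed by hand for a contingency table of two nominal variables (the counts below are hypothetical):

```python
# Chi-square statistic for a 2x2 contingency table (hypothetical counts).
observed = [[10, 20],
            [20, 10]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
total = sum(row_totals)

chi_square = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        expected = row_totals[i] * col_totals[j] / total  # under independence
        chi_square += (o - expected) ** 2 / expected
# chi_square is ~6.667 here; compare against the chi-square distribution
# with (rows - 1) * (columns - 1) = 1 degree of freedom.
```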
  • Classification data. Personal information or demographic details of the respondents such as age, marital status, and educational level.
  • Closed questions. Questions with a clearly delineated set of alternatives that confine the respondents' choice to one of them.
  • Cluster sampling. A probability sampling design in which the sample comprises groups or chunks of elements with intragroup heterogeneity and intergroup homogeneity.
  • Coding. The analytic process through which the qualitative data that you have gathered are reduced, rearranged, and integrated to form theory (compare Data coding).
  • Coding scheme. Contains predetermined categories for recording what is observed. Such schemes come in many forms and shapes.
  • Comparative scale. A scale that provides a benchmark or point of reference to assess attitudes, opinions, and the like.
  • Complex probability sampling. Several probability sampling designs (such as systematic and stratified random), which offer an alternative to the cumbersome, simple random sampling design.
  • Computer‐assisted telephone interviews (CATI). Interviews in which questions are prompted onto a PC monitor that is networked into the telephone system, to which respondents provide their answers.
  • Concealed observation. Members of a social group under study are not told that they are being observed.
  • Concealment of observation. Relates to whether the members of the social group under study are told that they are being observed.
  • Conceptual analysis. Establishes the existence and frequency of concepts (such as words, themes, or characters) in a text.
  • Concurrent validity. Relates to criterion‐related validity, which is established at the same time the test is administered.
  • Confidence. The probability estimate of how much reliance can be placed on the findings; the usual accepted level of confidence in social science research is 95%.
  • Conjoint analysis. A multivariate statistical technique used to determine the relative importance respondents attach to attributes and the utilities they attach to specific levels of attributes.
  • Consensus scale. A scale developed through consensus or the unanimous agreement of a panel of judges as to the items that measure a concept.
  • Constant sum rating scale. A scale where the respondents distribute a fixed number of points across several items.
  • Construct validity. Testifies to how well the results obtained from the use of the measure fit the theories around which the test was designed.
  • Constructionism. An approach to research that is based on the idea that the world as we know it is fundamentally mental or mentally constructed. Constructionists aim to understand the rules people use to make sense of the world by investigating what happens in people's minds.
  • Content analysis. An observational research method that is used to systematically evaluate the symbolic contents of all forms of recorded communication.
  • Content validity. Establishes the representative sampling of a whole set of items that measures a concept, and reflects how well the dimensions and elements thereof are delineated.
  • Contextual factors. Factors relating to the organization under study such as the background and environment of the organization, including its origin and purpose, size, resources, financial standing, and the like.
  • Contrived setting. An artificially created or "lab" environment in which research is conducted.
  • Control group. The group that is not exposed to any treatment in an experiment.
  • Controlled observation. Controlled observation occurs when observational research is carried out under carefully arranged conditions.
  • Convenience sampling. A nonprobability sampling design in which information or data for the research are gathered from members of the population conveniently accessible to the researcher.
  • Convergent validity. That which is established when the scores obtained by two different instruments measuring the same concept, or by measuring the concept by two different methods, are highly correlated.
  • Correlation matrix. A correlation matrix is used to examine relationships between interval and/or ratio variables.
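Each cell of such a matrix holds a Pearson correlation coefficient; a minimal pure-Python sketch for one pair of variables (hypothetical data) looks like this:

```python
import math

# Pearson correlation between two interval-scaled variables (hypothetical
# data); a correlation matrix repeats this computation for every pair.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sd_x = math.sqrt(sum((xi - mean_x) ** 2 for xi in x))
sd_y = math.sqrt(sum((yi - mean_y) ** 2 for yi in y))
r = cov / (sd_x * sd_y)  # r is ~0.775 here, i.e. a strong positive correlation
```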
  • Correlational study. A research study conducted to identify the important factors associated with the variables of interest.
  • Criterion‐related validity. That which is established when the measure differentiates individuals on a criterion that it is expected to predict.
  • Criterion variable. The variable of primary interest to the study, also known as the dependent variable.
  • Critical literature review. A step‐by‐step process that involves the identification of published and unpublished work from secondary data sources on the topic of interest, the evaluation of this work in relation to the problem, and the documentation of this work.
  • Critical realism. A school of thought combining the belief in an external reality (an objective truth) with the rejection of the claim that this external reality can be objectively measured. The critical realist is critical of our ability to understand the world with certainty.
  • Cross‐sectional study. A research study for which data are gathered just once (stretched though it may be over a period of days, weeks, or months) to answer the research question.
  • Data coding. In quantitative research data coding involves assigning a number to the participants' responses so they can be entered into a database.
  • Data display. Taking the reduced qualitative data and displaying them in an organized, condensed manner.
  • Data editing. Data editing deals with detecting and correcting illogical, inconsistent, or illegal data and omissions in the information returned by the participants of the study.
  • Data mining. Helps to trace patterns and relationships in the data stored in the data warehouse.
  • Data reduction. Breaking down data into manageable pieces.
  • Data transformation. The process of changing the original numerical representation of a quantitative value to another value.
  • Data warehouse. A central repository of all information gathered by the company.
  • Deductive reasoning. The application of a general theory to a specific case.
  • Delphi technique. A forecasting method that uses a cautiously selected panel of experts in a systematic, interactive manner.
  • Dependent variable. See Criterion variable.
  • Descriptive statistics. Statistics such as frequencies, the mean, and the standard deviation, which provide descriptive information about a set of data.
  • Descriptive study. A research study that describes the variables in a situation of interest to the researcher.
  • Dichotomous scale. Scale used to elicit a Yes/No response, or an answer to two different aspects of a concept.
  • Directional hypothesis. An educated conjecture as to the direction of the relationship, or differences among variables, which could be positive or negative, or more or less, respectively.
  • Discriminant analysis. A statistical technique that helps to identify the independent variables that discriminate a nominally scaled dependent variable of interest.
  • Discriminant validity. That which is established when two variables are theorized to be uncorrelated, and the scores obtained by measuring them are indeed empirically found to be so.
  • Disproportionate stratified random sampling. A probability sampling design that involves a procedure in which the number of sample subjects chosen from various strata is not directly proportionate to the total number of elements in the respective strata.
  • Double‐barreled question. Refers to the improper framing of a question that should be posed as two or more separate questions, so that the respondent can give clear and unambiguous answers.
  • Double‐blind study. A study where neither the experimenter nor the subjects are aware as to who is given the real treatment and who the placebo.
  • Double sampling. A probability sampling design that involves the process of collecting information from a set of subjects twice – such as using a sample to collect preliminary information, and later using a subsample of the primary sample for more information.
  • Dummy variable. A variable that has two or more distinct levels, each of which is coded 0 or 1.
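A short illustration of dummy coding in pure Python (the category labels are hypothetical):

```python
# Recoding a nominal variable into 0/1 dummy variables, as is commonly
# done before regression analysis. Hypothetical category labels.
colors = ["red", "green", "blue", "green", "red"]
levels = sorted(set(colors))  # ['blue', 'green', 'red']

dummies = {level: [1 if c == level else 0 for c in colors] for level in levels}
# dummies["red"] -> [1, 0, 0, 0, 1]
# In a regression model, one level is usually dropped as the reference category.
```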
  • Efficiency in sampling. Attained when the sampling design chosen either results in a cost reduction to the researcher or offers a greater degree of accuracy in terms of the sample size.
  • Electronic questionnaire. Online questionnaire administered when a microcomputer is hooked up to computer networks.
  • Element. A single member of the population.
  • Epistemology. Theory about the nature of knowledge or how we come to know.
  • Ethics. Code of conduct or expected societal norms of behavior.
  • Ethnography. A research process in which the anthropologist closely observes, records, and engages in the daily life of another culture and then writes accounts of this culture, emphasizing descriptive detail.
  • Exogenous variable. A variable that exerts an influence on the cause‐and‐effect relationship between two variables in some way, and needs to be controlled.
  • Experimental design. A study design in which the researcher might create an artificial setting, control some variables, and manipulate the independent variable to establish cause‐and‐effect relationships.
  • Experimental group. The group exposed to a treatment in an experimental design.
  • Expert panel. A group of people specifically convened by the researcher to elicit expert knowledge and opinion about a certain issue.
  • Exploratory research. A research study where very little knowledge or information is available on the subject under investigation.
  • Ex post facto experimental design. Studying subjects who have already been exposed to a stimulus and comparing them to those not so exposed, so as to establish cause‐and‐effect relationships (in contrast to establishing cause‐and‐effect relationships by manipulating an independent variable in a lab or a field setting).
  • External consultants. Research experts outside the organization who are hired to study specific problems to find solutions.
  • External validity. The extent of generalizability of the results of a causal study to other field settings.
  • Faces scale. A particular representation of the graphic scale, depicting faces with expressions that range from smiling to sad.
  • Face‐to‐face interview. Information gathering when both the interviewer and interviewee meet in person.
  • Face validity. An aspect of validity examining whether the item on the scale, on the face of it, reads as if it indeed measures what it is supposed to measure.
  • Factorial validity. That which indicates, through the use of factor analytic techniques, whether a test is a pure measure of some specific factor or dimension.
  • Field experiment. An experiment done to detect cause‐and‐effect relationships in the natural environment in which events normally occur.
  • Field study. A study conducted in the natural setting with a minimal amount of researcher interference in the flow of events in the situation.
  • Fixed rating scale. See Constant sum rating scale.
  • Focus group. A group consisting of eight to ten members randomly chosen, who discuss a product or any given topic for about two hours with a moderator present, so that their opinions can serve as the basis for further research.
  • Forced choice. Elicits the ranking of objects relative to one another.
  • Formative scale. Used when a construct is viewed as an explanatory combination of its indicators.
  • Frequencies. The number of times various subcategories of a phenomenon occur, from which the percentage and cumulative percentage of any occurrence can be calculated.
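The percentage and cumulative percentage mentioned in this definition follow directly from the counts; a minimal sketch with hypothetical survey responses:

```python
from collections import Counter

# Frequencies, percentages, and cumulative percentages for the
# subcategories of a nominal variable (hypothetical responses).
responses = ["agree", "disagree", "agree", "neutral", "agree"]
counts = Counter(responses)

cumulative = 0.0
table = []
for category, freq in counts.most_common():
    pct = 100 * freq / len(responses)
    cumulative += pct
    table.append((category, freq, pct, cumulative))
# First row: ("agree", 3, 60.0, 60.0); the last cumulative value is 100.0.
```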
  • Fundamental research. See Basic research.
  • Funneling technique. The questioning technique that consists of initially asking general and broad questions, and gradually narrowing the focus thereafter to more specific themes.
  • Generalizability. The applicability of research findings in one setting to others.
  • Going native. The researcher/observer becomes so involved with the group under study that all objectivity and research interest are eventually lost.
  • Goodness of measures. Attests to the reliability and validity of measures.
  • Graphic rating scale. A scale that graphically illustrates the responses that can be provided, rather than specifying any discrete response categories.
  • Grounded theory. A systematic set of procedures to develop an inductively derived theory from the data.
  • History effects. A threat to the internal validity of the experimental results, when events unexpectedly occur while the experiment is in progress and contaminate the cause‐and‐effect relationship.
  • Hypothesis. A tentative, yet testable, statement that predicts what you expect to find in your empirical data.
  • Hypothetico‐deductive method. A seven‐step research process of identifying a broad problem area, defining the problem statement, developing hypotheses, determining measures, collecting data, analyzing data, and interpreting the results.
  • Independent samples t‐test. Test that is done to see if there are significant differences in the means for two groups in the variable of interest.
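A pooled-variance version of this test can be sketched in pure Python; the two groups of scores below are hypothetical:

```python
import math

# Pooled-variance t statistic for two independent samples
# (hypothetical scores for two groups).
group_a = [1, 2, 3, 4]
group_b = [3, 4, 5, 6]

def sample_mean_var(data):
    m = sum(data) / len(data)
    v = sum((x - m) ** 2 for x in data) / (len(data) - 1)  # sample variance
    return m, v

mean_a, var_a = sample_mean_var(group_a)
mean_b, var_b = sample_mean_var(group_b)
n_a, n_b = len(group_a), len(group_b)

pooled_var = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
t = (mean_a - mean_b) / math.sqrt(pooled_var * (1 / n_a + 1 / n_b))
# t is ~-2.19 here; compare against the t distribution with n_a + n_b - 2 df.
```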
  • Independent variable. A variable that influences the dependent or criterion variable and accounts for (or explains) its variance.
  • Inductive reasoning. A process where we observe specific phenomena and on this basis arrive at general conclusions.
  • Inferential statistics. Statistics that help to establish relationships among variables and draw conclusions therefrom.
  • Instrumentation effects. The threat to internal validity in experimental designs caused by changes in the measuring instrument between the pretest and the posttest.
  • Interitem consistency reliability. A test of the consistency of responses to all the items in a measure to establish that they hang together as a set.
  • Interjudge reliability. The degree of consistency between coders processing the same (qualitative) data.
  • Internal consistency. Homogeneity of the items in the measure that tap a construct.
  • Internal consultants. Research experts within the organization who investigate and find solutions to problems.
  • Internal validity of experiments. Attests to the confidence that can be placed in the cause‐and‐effect relationship found in experimental designs.
  • Interval scale. A multipoint scale that taps the differences, the order, and the equality of the magnitude of the differences in the responses.
  • Intervening variable. A variable that surfaces as a function of the independent variable, and helps in conceptualizing and explaining the influence of the independent variable on the dependent variable.
  • Interview. A data collection method in which the researcher asks for information verbally from the respondents.
  • Itemized rating scale. A scale that offers several categories of response, out of which the respondent picks the one most relevant for answering the question.
  • Judgment sampling. A purposive, nonprobability sampling design in which the sample subject is chosen on the basis of the individual's ability to provide the type of special information needed by the researcher.
  • Lab experiment. An experimental design set up in an artificially contrived setting where controls and manipulations are introduced to establish cause‐and‐effect relationships among variables of interest to the researcher.
  • Leading questions. Questions phrased in such a manner as to lead the respondent to give the answers that the researcher would like to obtain.
  • Likert scale. An interval scale that specifically uses the five anchors of Strongly Disagree, Disagree, Neither Agree nor Disagree, Agree, and Strongly Agree.
  • Literature review. A step‐by‐step process that involves the identification of published and unpublished work from secondary data sources on the topic of interest, the evaluation of this work in relation to the problem, and the documentation of this work.
  • Loaded questions. Questions that elicit highly biased emotional responses from subjects.
  • Logistic regression. A specific form of regression analysis in which the dependent variable is a nonmetric, dichotomous variable.
  • Longitudinal study. A research study for which data are gathered at several points in time to answer a research question.
  • Manipulation. How the researcher exposes the subjects to the independent variable to determine cause‐and‐effect relationships in experimental designs.
  • MANOVA. A statistical technique that is similar to ANOVA, with the difference that ANOVA tests the mean differences of more than two groups on one dependent variable, whereas MANOVA tests mean differences among groups across several dependent variables simultaneously, by using sums of squares and cross‐product matrices.
  • Matched groups. A method of controlling known contaminating factors in experimental studies, by deliberately spreading them equally across the experimental and control groups, so as not to confound the cause‐and‐effect relationship.
  • Maturation effects. A threat to internal validity that is a function of the biological, psychological, and other processes taking place in the respondents as a result of the passage of time.
  • McNemar's test. A nonparametric method used on nominal data. It assesses the significance of the difference between two dependent samples when the variable of interest is dichotomous.
  • Mean. The average of a set of figures.
  • Measure of central tendency. Descriptive statistics of a data set such as the mean, median, or mode.
  • Measure of dispersion. The variability in a set of observations, represented by the range, variance, standard deviation, and the interquartile range.
  • Measurement. The assignment of numbers or other symbols to characteristics (or attributes) of objects according to a prespecified set of rules.
  • Median. The central item in a group of observations arranged in an ascending or descending order.
  • Mediating variable. A variable that surfaces as a function of the independent variable, and helps in conceptualizing and explaining the influence of the independent variable on the dependent variable.
  • Mode. The most frequently occurring number in a data set.
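The measures of central tendency and dispersion defined above are available directly in Python's standard library; the data set below is hypothetical:

```python
import statistics

# Measures of central tendency and dispersion for a hypothetical data set.
data = [1, 2, 2, 3, 7]

mean = statistics.mean(data)      # 3
median = statistics.median(data)  # 2 (central item of the ordered data)
mode = statistics.mode(data)      # 2 (most frequently occurring value)
spread = statistics.stdev(data)   # sample standard deviation, ~2.345
```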
  • Moderating variable. A variable on which the relationship between two other variables is contingent. That is, if the moderating variable is present, the theorized relationship between the two variables will hold good, but not otherwise.
  • Mortality. The loss of research subjects during the course of the experiment, which confounds the cause‐and‐effect relationship.
  • Multicollinearity. A statistical phenomenon in which two or more independent variables in a multiple regression model are highly correlated.
  • Multiple regression analysis. A statistical technique to predict the variance in the dependent variable by regressing the independent variables against it.
  • Multistage cluster sampling. A probability sampling design that is a stratified sampling of clusters.
  • Narrative analysis. A qualitative approach that aims to elicit and scrutinize the stories we tell about ourselves and their implications for our lives.
  • Nominal scale. A scale that categorizes individuals or objects into mutually exclusive and collectively exhaustive groups, and offers basic, categorical information on the variable of interest.
  • Noncontrived setting. Research conducted in the natural environment where activities take place in the normal manner (i.e., the field setting).
  • Nondirectional hypothesis. An educated conjecture of a relationship between two variables, the directionality of which cannot be guessed.
  • Nonparametric test. A hypothesis test that does not require certain assumptions about the population's distribution, such as that the population follows a normal distribution.
  • Nonparticipant observation. The researcher is never directly involved in the actions of the actors, but observes them from outside the actors' visual horizon, for instance via a one‐way mirror or a camera.
  • Nonprobability sampling. A sampling design in which the elements in the population do not have a known or predetermined chance of being selected as sample subjects.
  • Nonresponse error. Exists to the extent that those who did respond to your survey differ from those who did not on (one of) the characteristics of interest in your study. Two important sources of nonresponse are not‐at‐homes and refusals.
  • Nuisance variable. A variable that contaminates the cause‐and‐effect relationship.
  • Null hypothesis. The conjecture that postulates no differences or no relationship between or among variables.
  • Numerical scale. A scale with bipolar attributes with five points or seven points indicated on the scale.
  • Objectivity. Interpretation of the results on the basis of the results of data analysis, as opposed to subjective or emotional interpretations.
  • Observation. The planned watching, recording, analysis, and interpretation of behavior, actions, or events.
  • One sample t‐test. A test that is used to test the hypothesis that the mean of the population from which a sample is drawn is equal to a comparison standard.
  • One‐shot study. See Cross‐sectional study.
  • Ontology. The philosophical study of what can be said to exist.
  • Open‐ended questions. Questions that the respondent can answer in a free‐flowing format without restricting the range of choices to a set of specific alternatives suggested by the researcher.
  • Operationalizing. Reduction of abstract concepts to render them measurable in a tangible way.
  • Operations research. A quantitative approach taken to analyze and solve problems of complexity.
  • Ordinal scale. A scale that not only categorizes the qualitative differences in the variable of interest, but also allows for the rank‐ordering of these categories in a meaningful way.
  • Outlier. An observation that is substantially different from the other observations.
  • Paired comparisons. Respondents choose between two objects at a time, with the process repeated with a small number of objects.
  • Paired samples t‐test. Test that examines the differences in the same group before and after a treatment.
  • Parallel‐form reliability. That form of reliability which is established when responses to two comparable sets of measures tapping the same construct are highly correlated.
  • Parametric test. A hypothesis test that assumes that your data follow a specific distribution.
  • Parsimony. Efficient explanation of the variance in the dependent variable of interest through the use of a smaller, rather than a larger number of independent variables.
  • Participant observation. In participant observation the researcher gathers data by participating in the daily life of the group or organization under study.
  • Population. The entire group of people, events, or things that the researcher desires to investigate.
  • Positivism. A school of thought employing deductive laws and quantitative methods to get at the truth. For a positivist, the world operates by laws of cause and effect that one can discern if one uses a scientific approach to research.
  • Posttest. A test given to the subjects to measure the dependent variable after exposing them to a treatment.
  • Pragmatism. A viewpoint on research that does not take on a particular position on what makes good research. Pragmatists feel that research on both objective, observable phenomena and subjective meanings can produce useful knowledge, depending on the research questions of the study.
  • Precision. The degree of closeness of the estimated sample characteristics to the population parameters, determined by the extent of the variability of the sampling distribution of the sample mean.
  • Predictive validity. The ability of the measure to differentiate among individuals as to a criterion predicted for the future.
  • Predictor variable. See Independent variable.
  • Pretest. A test given to subjects to measure the dependent variable before exposing them to a treatment.
  • Pretesting survey questions. Test of the understandability and appropriateness of the questions planned to be included in a regular survey, using a small number of respondents.
  • Primary data. Data collected first‐hand for subsequent analysis to find solutions to the problem researched.
  • Probability sampling. The sampling design in which the elements of the population have some known chance or probability of being selected as sample subjects.
  • Problem definition. A definition of the difference between the actual and desired situation.
  • Problem statement. A problem statement includes both a statement of the research objective(s) and the research question(s).
  • Proportionate stratified random sampling. A probability sampling design in which the number of sample subjects drawn from each stratum is proportionate to the total number of elements in the respective strata.
  • Pure observation. Seeks to remove the researcher from the observed actions and behavior; the researcher is never directly involved in the actions and behavior of the group under study.
  • Pure participation. The researcher becomes so involved with the group under study that all objectivity and research interest are eventually lost.
  • Pure research. See Basic research.
  • Purposiveness in research. The situation in which research is focused on solving a well‐identified and defined problem, rather than aimlessly looking for answers to vague questions.
  • Purposive sampling. A nonprobability sampling design in which the required information is gathered from special or specific targets or groups of people on some rational basis.
  • Qualitative data. Data that are not immediately quantifiable unless they are coded and categorized in some way.
  • Questionnaire. A preformulated written set of questions to which the respondent records the answers, usually within rather closely delineated alternatives.
  • Quota sampling. A form of purposive sampling in which a predetermined proportion of people from different subgroups is sampled.
  • Randomization. The process of controlling the nuisance variables by randomly assigning members among the various experimental and control groups, so that the confounding variables are randomly distributed across all groups.
  • Range. The spread in a set of numbers indicated by the difference in the two extreme values in the observations.
  • Ranking scale. Scale used to tap preferences between two objects, or among more than two objects or items.
  • Rapport. A trusting relationship with the social group under study, established by showing respect, being truthful, and showing commitment to the well-being of the group or its individual members, so that they feel secure in sharing (sensitive) information with the researcher.
  • Rating scale. Scale with several response categories that evaluate an object on a scale.
  • Ratio scale. A scale that has an absolute zero origin, and hence indicates not only the magnitude, but also the proportion, of the differences.
  • Reactivity. The extent to which the observer affects the situation under observation.
  • Recall‐dependent question. Question that elicits from the respondents information that involves recall of experiences from the past that may be hazy in their memory.
  • Reference list. A list that includes details of all the citations used in the literature review and elsewhere in the paper or report.
  • Reflective scale. Each item in a reflective scale is assumed to share a common basis (the underlying construct of interest).
  • Regression analysis. Used in a situation where one or more metric independent variable(s) is (are) hypothesized to affect a metric dependent variable.
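For the simplest case of one metric predictor, the least-squares estimates can be computed directly; the data below are hypothetical:

```python
# Least-squares estimates for a simple (one-predictor) regression
# y = b0 + b1*x, using hypothetical data.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

# Slope: covariance of x and y divided by the variance of x.
b1 = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
      / sum((xi - mean_x) ** 2 for xi in x))
b0 = mean_y - b1 * mean_x
# b1 = 0.6 and b0 = 2.2 for this data, i.e. the fitted line y = 2.2 + 0.6x.
```

Multiple regression extends the same principle to several independent variables by solving the normal equations in matrix form.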
  • Relational analysis. Builds on conceptual analysis by examining the relationships among concepts in a text.
  • Reliability. Attests to the consistency and stability of the measuring instrument.
  • Replicability. The extent to which a re‐study is made possible by the provision of the design details of the study in the research report.
  • Representativeness of the sample. The extent to which the sample that is selected possesses the same characteristics as the population from which it is drawn.
  • Research. An organized, systematic, critical, scientific inquiry or investigation into a specific problem, undertaken with the objective of finding answers or solutions thereto.
  • Research design. A blueprint or plan for the collection, measurement, and analysis of data, created to answer your research questions.
  • Research objective. The purpose or objective of the study explains why the study is being done. Providing a solution to a problem encountered in the work setting is the purpose of the study in most applied research.
  • Research proposal. A document that sets out the purpose of the study and the research design details of the investigation to be carried out by the researcher.
  • Research question(s). Specify what you want to learn about the topic. They guide and structure the process of collecting and analyzing information to help you to attain the purpose of your study. In other words, research questions are the translation of the problem of the organization into a specific need for information.
  • Researcher interference. The extent to which the person conducting the research interferes with the normal course of work at the study site.
  • Restricted probability designs. See Complex probability sampling.
  • Rigor. The theoretical and methodological precision adhered to in conducting research.
  • Sample. A subset or subgroup of the population.
  • Sample size. The actual number of subjects chosen as a sample to represent the population characteristics.
  • Sampling. The process of selecting items from the population so that the sample characteristics can be generalized to the population. Sampling involves both design choice and sample size decisions.
  • Sampling frame. A (physical) representation of all the elements in the population from which the sample is drawn.
  • Sampling unit. The element or set of elements that is available for selection in some stage of the sampling process.
  • Scale. A tool or mechanism by which individuals, events, or objects are distinguished on the variables of interest in some meaningful way.
  • Scientific investigation. A step‐by‐step, logical, organized, and rigorous effort to solve problems.
  • Secondary data. Data that already exist and do not have to be collected by the researcher.
  • Selection effects. The threat to internal validity that is a function of improper or unmatched selection of subjects for the experimental and control groups.
  • Semantic differential scale. Usually a seven‐point scale with bipolar attributes indicated at its extremes.
  • Sequence record. A sequence record allows the researcher conducting an observational study to record events in the order in which they occur, rather than merely how often they occur.
  • Simple checklist. Checklist used in structured observation that provides information about how often a certain event has occurred.
  • Simple random sampling. A probability sampling design in which every single element in the population has a known and equal chance of being selected as a subject.
  • Simulation. A model‐building technique for assessing the possible effects of changes that might be introduced in a system.
  • Social desirability. The respondents' need to give socially or culturally acceptable responses to the questions posed by the researcher even if they are not true.
  • Solomon four‐group design. The experimental design that sets up two experimental groups and two control groups, subjecting one experimental group and one control group to both the pretest and the posttest, and the other experimental group and control group to only the posttest.
  • Split‐half reliability. The correlation coefficient between one half of the items measuring a concept and the other half.
  • Stability of a measure. The ability of the measure to repeat the same results over time with low vulnerability to changes in the situation.
  • Standard deviation. A measure of dispersion for parametric data; the square root of the variance.
  • Standardized regression coefficients (or beta coefficients). The estimates resulting from a multiple regression analysis performed on variables that have been standardized (a process whereby the variables are transformed into variables with a mean of 0 and a standard deviation of 1).
  • Stapel scale. A scale that measures both the direction and intensity of the attributes of a concept.
  • Statistical power (1 – β). The probability of correctly rejecting the null hypothesis when it is actually false.
  • Statistical regression. The threat to internal validity that results when various groups in the study have been selected on the basis of their extreme (very high or very low) scores on some important variables.
  • Stratified random sampling. A probability sampling design that first divides the population into meaningful, nonoverlapping subsets, and then randomly chooses the subjects from each subset.
  • Structured interviews. Interviews conducted by the researcher with a predetermined list of questions to be asked of the interviewee.
  • Structured observation. Form of observation where the observer has a predetermined set of categories of activities or phenomena planned to be studied.
  • Subject. A single member of the sample.
  • Survey. A system for collecting information from or about people to describe, compare, or explain their knowledge, attitudes, and behavior.
  • Systematic sampling. A probability sampling design that involves choosing every nth element in the population for the sample.
  • Telephone interview. The information‐gathering method by which the interviewer asks the interviewee over the telephone, rather than face to face, for information needed for the research.
  • Test–retest reliability. A way of establishing the stability of the measuring instrument by correlating the scores obtained through its administration to the same set of respondents at two different points in time.
  • Testability. The ability to subject the data collected to appropriate statistical tests, in order to substantiate or reject the hypotheses developed for the research study.
  • Testing effects. The distorting effects on the experimental results (the posttest scores) caused by the prior sensitization of the respondents to the instrument through the pretest.
  • Theoretical framework. A logically developed, described, and explained network of associations among variables of interest to the research study.
  • Treatment. The manipulation of the independent variable in experimental designs so as to determine its effects on a dependent variable of interest to the researcher.
  • Two‐way ANOVA. A statistical technique that can be used to examine the effect of two nonmetric independent variables on a single metric dependent variable.
  • Type I error (α). The probability of rejecting the null hypothesis when it is actually true.
  • Type II error (β). The probability of failing to reject the null hypothesis given that the alternative hypothesis is actually true.
  • Unbalanced rating scale. An even‐numbered scale that has no neutral point.
  • Unbiased questions. Questions posed in accordance with the principles of wording and measurement, and the right questioning technique, so as to elicit the least biased responses.
  • Unconcealed observation. Members of a social group under study are told that they are being observed.
  • Uncontrolled observation. An observational technique that makes no attempt to control, manipulate, or influence the situation.
  • Unit of analysis. The level of aggregation of the data collected during data analysis.
  • Unobtrusive methods. Methods that do not require the researcher to interact with the people he or she is studying.
  • Unrestricted probability sampling. See Simple random sampling.
  • Unstructured interviews. Interviews conducted with the primary purpose of identifying some important issues relevant to the problem situation, without prior preparation of a planned or predetermined sequence of questions.
  • Unstructured observation. Form of observation that is used when the observer has no definite ideas of the particular aspects that need focus.
  • Validity. Evidence that the instrument, technique, or process used to measure a concept does indeed measure the intended concept.
  • Variable. Anything that can take on differing or varying values.
  • Variance. Indicates the dispersion of a variable in the data set, and is obtained by subtracting the mean from each of the observations, squaring the results, summing them, and dividing the total by the number of observations.
  • Wilcoxon signed‐rank test. A nonparametric test used to examine differences between two related samples or repeated measurements on a single sample. It is used as an alternative to a paired samples t‐test when the population cannot be assumed to be normally distributed.
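Several of the statistical and sampling terms above can be made concrete with short sketches. First, the "Regression analysis" entry: a minimal Python sketch of the simplest case, one metric independent variable affecting a metric dependent variable, fitted by ordinary least squares on invented data (all values below are made up for illustration):

```python
def simple_ols(x, y):
    """Return (intercept, slope) from ordinary least squares."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # slope = covariation of x and y divided by variation of x
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# invented data that lie exactly on the line y = 2x + 1
intercept, slope = simple_ols([1, 2, 3, 4], [3, 5, 7, 9])
print(intercept, slope)  # → 1.0 2.0
```

Multiple regression, with several independent variables, generalizes the same idea.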
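The "Simple random sampling" entry, sketched with a hypothetical population of 100 element IDs; every element has an equal chance of being drawn:

```python
import random

population = list(range(1, 101))          # hypothetical population of 100 elements
random.seed(42)                           # fixed seed only to make the draw reproducible
sample = random.sample(population, k=10)  # draw 10 subjects without replacement
print(sample)
```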
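The "Split-half reliability" entry, sketched with invented scores of four respondents on six items measuring one concept; the odd-numbered items form one half and the even-numbered items the other, and the two halves are correlated:

```python
def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# rows = respondents, columns = six items (invented ratings)
scores = [
    [5, 4, 5, 4, 5, 4],
    [3, 3, 2, 3, 3, 2],
    [4, 4, 4, 5, 4, 4],
    [2, 1, 2, 2, 1, 2],
]
half1 = [sum(row[0::2]) for row in scores]  # items 1, 3, 5
half2 = [sum(row[1::2]) for row in scores]  # items 2, 4, 6
print(round(pearson_r(half1, half2), 3))    # → 0.923
```

A high correlation between the halves suggests the items are internally consistent.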
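The "Standardized regression coefficients (or beta coefficients)" entry: a sketch on invented data that first transforms each variable to mean 0 and standard deviation 1, then regresses standardized y on standardized x (with a single predictor, the beta coefficient equals the Pearson correlation):

```python
def standardize(v):
    """Transform values to mean 0 and standard deviation 1."""
    n = len(v)
    mean = sum(v) / n
    sd = (sum((x - mean) ** 2 for x in v) / n) ** 0.5
    return [(x - mean) / sd for x in v]

x = [1, 2, 3, 4, 5]   # invented predictor values
y = [2, 4, 5, 4, 5]   # invented outcome values
zx, zy = standardize(x), standardize(y)
# slope of the regression of standardized y on standardized x
beta = sum(a * b for a, b in zip(zx, zy)) / sum(a * a for a in zx)
print(round(beta, 3))  # → 0.775
```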
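The "Stratified random sampling" entry, sketched with three hypothetical, nonoverlapping job strata and proportionate allocation (10% of each stratum):

```python
import random

random.seed(1)  # fixed seed only to make the draw reproducible
strata = {
    "managers":  [f"M{i}" for i in range(20)],   # hypothetical member IDs
    "clerks":    [f"C{i}" for i in range(60)],
    "engineers": [f"E{i}" for i in range(20)],
}
# draw randomly from each stratum in proportion to its size
sample = []
for name, members in strata.items():
    sample.extend(random.sample(members, k=len(members) // 10))
print(len(sample))  # → 10  (2 managers + 6 clerks + 2 engineers)
```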
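The "Systematic sampling" entry, sketched with a hypothetical population of 50 elements and a sampling interval of 10: after a random start within the first interval, every nth element is chosen:

```python
import random

population = list(range(1, 51))  # hypothetical population of 50 elements
n = 10                           # sampling interval
random.seed(7)                   # fixed seed only to make the draw reproducible
start = random.randrange(n)      # random start within the first interval
sample = population[start::n]    # then every nth element
print(sample)
```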
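The "Test–retest reliability" entry, sketched by correlating the invented scores of six respondents on the same measure at two points in time:

```python
def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

time1 = [4, 3, 5, 2, 4, 3]  # invented scores at the first administration
time2 = [4, 3, 4, 2, 5, 3]  # invented scores at the second administration
print(round(pearson_r(time1, time2), 3))  # → 0.818
```

The higher this correlation, the more stable the measure is judged to be over time.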
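The "Type I error (α)", "Type II error (β)", and "Statistical power (1 – β)" entries, illustrated by a rough Monte Carlo sketch with made-up parameters (a two-sided one-sample z-test, α = 0.05, known σ = 1, n = 25, and a true effect of 0.5 under the alternative):

```python
import random
import statistics

random.seed(0)                  # fixed seed only for reproducibility
Z_CRIT, N, TRIALS = 1.96, 25, 2000

def reject(mu_true):
    """Draw one sample of size N and test H0: mu = 0 at alpha = 0.05."""
    xs = [random.gauss(mu_true, 1) for _ in range(N)]
    z = statistics.mean(xs) / (1 / N ** 0.5)
    return abs(z) > Z_CRIT

# Rejecting when H0 is true is a Type I error; the rate approximates alpha.
type_i = sum(reject(0.0) for _ in range(TRIALS)) / TRIALS
# Rejecting when H0 is false is a correct decision; the rate approximates
# power (1 - beta). Failing to reject here would be a Type II error.
power = sum(reject(0.5) for _ in range(TRIALS)) / TRIALS
print(type_i, power)  # roughly 0.05 and 0.70 for these made-up parameters
```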
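The "Variance" and "Standard deviation" entries, computed exactly as defined above (subtract the mean, square, sum, and divide by the number of observations; the standard deviation is the square root of the result) on an invented data set:

```python
def variance(obs):
    """Dispersion of a variable, per the glossary definition (divide by n)."""
    mean = sum(obs) / len(obs)
    return sum((x - mean) ** 2 for x in obs) / len(obs)

data = [2, 4, 4, 4, 5, 5, 7, 9]   # invented observations, mean = 5
print(variance(data))             # → 4.0
print(variance(data) ** 0.5)      # standard deviation → 2.0
```

Note that some software instead divides by n − 1 (the sample variance); the form above matches the definition given here.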
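Finally, the "Wilcoxon signed-rank test" entry: a sketch of the test statistic W for two related samples, using invented before/after scores for five respondents. Zero differences are dropped and tied absolute differences share an average rank; in a real analysis a statistics library would also supply the p-value.

```python
def wilcoxon_w(before, after):
    """Return W = min(sum of positive ranks, sum of negative ranks)."""
    diffs = [b - a for a, b in zip(before, after) if b != a]
    ordered = sorted(diffs, key=abs)
    ranks = {}                    # abs(difference) -> average rank
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and abs(ordered[j]) == abs(ordered[i]):
            j += 1
        ranks[abs(ordered[i])] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    w_plus = sum(ranks[abs(d)] for d in diffs if d > 0)
    w_minus = sum(ranks[abs(d)] for d in diffs if d < 0)
    return min(w_plus, w_minus)

# invented before/after scores for the same five respondents
print(wilcoxon_w([5, 6, 4, 7, 8], [6, 9, 4, 10, 7]))  # → 1.5
```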