The Systematic Design of Instruction 8e by Dick, Carey, Carey

The Systematic Design of Instruction 8e by Dick, Carey, Carey is the eighth edition of the textbook authored by Walter Dick (Florida State University, Emeritus), Lou Carey (University of South Florida, Emeritus), and James O. Carey (University of South Florida, Emeritus), and published by Pearson Education, Inc. in 2015.

The copyright belongs to Walter Dick and Lou Carey.

  • Alternative assessment. Describes evaluation instruments and procedures other than objective-style tests; includes the evaluation of live performances, products, and attitudes; format includes directions for the learner and a scoring rubric.
  • ARCS model. Keller's theory of motivation: attention, relevance, confidence, and satisfaction.
  • Assessment-centered criteria. Test or item criteria used to judge item writing qualities such as grammar, spelling, punctuation, clarity, parsimony, and the use of recommended item formatting rules.
  • Assessment instruments. Materials developed and used to assess learners' status and progress in both achievement and attitudes. For achievement, objective tests, product development activities, and live performances are included. For attitudes, both observation and self-report techniques are included.
  • Attitude. An internal state that influences an individual's choices or decisions to act under certain circumstances. Attitudes represent a tendency to respond in a particular way.
  • Authentic assessment. Assessment in meaningful real-life contexts (or simulations thereof) in which newly acquired skills will ultimately be applied.
  • Behavior. An action that is an overt, observable, measurable performance.
  • Behavioral objective. See Objective.
  • Blended learning. At its most basic, it is a combination of any two or more learning environments. In common practice, it is typically a combination of web-based and classroom instruction in the same course or training program.
  • Candidate media. Those media that can present the desired information, without regard to which may be most effective; distinguished from noncandidate media, which cannot. A book, for example, cannot present sound and thus would be an inappropriate choice for delivering instruction for certain objectives.
  • Chunk of instruction. All the instruction required to teach one objective or a combination of two or more objectives.
  • Cluster analysis. A technique used with goals in the verbal information domain to identify the specific information needed to achieve the goal and the ways that information can best be organized or grouped.
  • Cognitive flexibility. The ability to adapt and change one's mental organization of knowledge and mental management of solution strategies for solving new, unexpected problems.
  • Cognitive load. The amount of information that a person can manage in working (short-term) memory while participating in a learning activity. Cognitive load theory predicts detrimental effects on learning and retention when instructional content is too voluminous or too complex.
  • Cognitive maps. The graphical representation of how conceptual knowledge is structured (e.g., flowcharts, hierarchies, circles, spider webs).
  • Cognitive strategy. Metaprocesses used by an individual to manage how he or she thinks about things in order to ensure personal learning.
  • Cognitive task analysis. Method of identifying the information and skills required to perform complex tasks. Uses rigorous observation and interview protocols to access the required information from an expert.
  • Cognitivism. A learning theory in which learning is viewed as active mental processing to store new knowledge in memory and retrieve knowledge from memory. Cognitivism emphasizes the structure of knowledge and external conditions that support internal mental processes.
  • Complex goal. A goal that involves more than one domain of learning.
  • Concept. A set of objects, events, symbols, situations, and so on, that can be grouped together on the basis of one or more shared characteristics and given a common identifying label or symbol. Concept learning refers to the capacity to identify members of the concept category.
  • Conditions. A main component of a performance objective that specifies the circumstances and materials required in the assessment of the learners' mastery of the objective.
  • Congruence analysis. Analyzing the congruence among (1) an organization's stated needs and goals and those addressed in candidate instruction; (2) an organization's target learners' entry skills and characteristics and those for which candidate materials are intended; and (3) an organization's resources and those required for obtaining and implementing candidate instruction. It is conducted during the expert judgment phase of summative evaluation.
  • Constructivism. A learning theory in which learning is viewed as an internal process of constructing meaning by combining existing knowledge with new knowledge gained through experiences in the social, cultural, and physical world. Constructivism emphasizes the processes and social interactions in which a student engages for learning.
  • Constructivist learning environment (CLE). Learners in collaborative groups with peers and teachers consulting resources to solve problems. Collaboration can be face to face or managed at a distance by media. Collaboration can be real or simulated in virtual learning spaces.
  • Content stability. The degree to which information to be learned is likely to remain current.
  • Context-centered criteria. Test or item criteria used to judge the congruence between the situations used in the assessments and the learning and performance contexts. Authenticity of examples and simulations is the main focus.
  • Criterion. A standard against which a performance or product is measured.
  • Criterion-referenced test items. Items designed to measure performance on an explicit set of objectives; also known as objective-referenced test items.
  • Delivery system. The means by which instruction will be provided to learners. Includes instructor-led instruction, distance education, computer-based instruction, and self-instructional materials.
  • Design evaluation chart. A method for organizing design information to facilitate its evaluation. The chart relates skills, objectives, and associated test items, allowing easy comparison among the components of the instructional design.
  • Discrepancy analysis. Investigations of the gap between an organization's current status and its desired status on defined goals.
  • Discrimination. Distinguishing one stimulus from another and responding differently to the various stimuli.
  • Domain of learning. A major type of learning outcome that can be distinguished from other domains by the type of learned performance required, the type of mental processing required, and the relevant conditions of learning.
  • Electronic performance support system (EPSS). An application embedded in a software system that can be accessed as needed to support job performance. The application could supply algorithms, expert systems, tutorials, hyperlinked information, and so forth.
  • Embedded attitude question. Question asked of learners about the instruction at the time they first encounter it.
  • Entry skills. Specific competencies or skills a learner must have mastered before entering a given instructional activity. Also known as prerequisite skills.
  • Entry-skill test item. A criterion-referenced test item designed to measure a skill identified as a necessary prerequisite to beginning a specific course of instruction. Such items are typically included in a pretest.
  • Evaluation. An investigation conducted to obtain specific answers to specific questions at specific times and in specific places; involves judgments of quality levels.
  • Expert judgment evaluation. Judgments of the quality of instructional materials made by content experts, learner specialists, or design specialists. The first phase of summative evaluation.
  • Feedback. Information provided to learners about the correctness of their responses to practice questions in the instruction.
  • Field trial. The third stage in formative evaluation, referring to the evaluation of the program or product in the setting in which it is intended to be used. Also, the second phase of summative evaluation.
  • Formative evaluation. Evaluation designed to collect data and information that is used to improve a program or product; conducted while the program is still being developed.
  • Front-end analysis. A process used for evaluating instructional needs and identifying alternative approaches to meeting those needs. It includes a variety of activities including, but not limited to, performance analysis, needs assessment, job analysis, training delivery options, and feasibility analysis.
  • General learner characteristics. The general, relatively stable (not influenced by instruction) traits describing the learners in a given target population.
  • Goal. A broad, general statement of an instructional intent, expressed in terms of what learners will be able to do.
  • Goal analysis. The technique used to analyze a goal to identify the sequence of operations and decisions required to achieve it.
  • Goal-centered criteria. Test or item criteria used to judge the congruence between the instructional goal, performance objectives, and test items of any format that is used to monitor learning.
  • Group-based instruction. The use of learning activities and materials designed to be used in a collective fashion with a group of learners; interactive group-paced instruction.
  • Hierarchical analysis. A technique used with goals in the intellectual skills domain to identify the critical subordinate skills needed to achieve the goal and their interrelationships. For each subordinate skill in the analysis, this involves asking, "What must the student know how to do in order to learn the specific subskills being considered?"
  • Human performance technology. Setting instructional goals in response to problems or opportunities within an organization.
  • Impact analysis. The influence of given training or instruction on the organization requesting the instruction. It questions whether information, skills, and attitudes covered in the learning environment transferred to the jobsite and whether, as a result, identified problems were solved and defined needs met.
  • Impact evaluation stage. Focuses on the jobsite and examines whether (1) an organization's needs were met following use of the instruction, (2) employees are able to transfer new information and skills to the job, and (3) an improvement in job performance or productivity is realized.
  • Individualized instruction. The use by students of systematically designed learning activities and materials specifically chosen to suit their individual interests, abilities, and experience. Such instruction is usually self-paced.
  • Instruction. A set of events or activities presented in a structured or planned manner, through one or more media, with the goal of having learners achieve prespecified behaviors.
  • Instructional analysis. The procedures applied to an instructional goal to identify the relevant skills and their subordinate skills and information required for a student to achieve the goal.
  • Instructional materials. Print or other mediated instruction used by a student to achieve an instructional goal.
  • Instructional strategy. An overall plan of activities to achieve an instructional goal. The strategy includes the sequence of intermediate objectives and the learning activities leading to the instructional goal as well as specification of student groupings, media, and the delivery system. The instructional activities typically include preinstructional activities, content presentation, learner participation, assessment, and follow-through activities.
  • Instructor's manual. The collection of written materials given to instructors to facilitate their use of the instructional materials. The manual should include an overview of the materials, tests with answers, and any supplementary information thought to be useful to the instructors.
  • Intellectual skill. A skill that requires some unique cognitive activity; involves manipulating cognitive symbols, as opposed to simply retrieving previously learned information.
  • Item analysis table. A means of presenting evaluation data that show the percentage of learners who answered each item correctly on a test; a minimal worked sketch follows this glossary.
  • Item difficulty value. The percentage of learners who answer a test item or perform a task correctly.
  • Job aid. A device, often in paper or computer form, used to relieve the learner's reliance on memory during the performance of a complex task.
  • Job analysis. The process of gathering, analyzing, and synthesizing descriptions of what people do, or should do, on their jobs.
  • Learner analysis. The determination of pertinent characteristics of members of the target population. Often includes prior knowledge and attitudes toward the content to be taught, as well as attitudes toward the organization and work environment.
  • Learner-centered criteria. Criteria used to judge the congruence between the achievement levels, language, contexts, and experiences of target learners and those presented in the instructional materials.
  • Learner performance data. Information about the degree to which learners achieved the objectives following a unit of instruction.
  • Learner specialist. A person knowledgeable about a particular population of learners.
  • Learning context. The actual physical location (or locations) in which the instruction under development will be used.
  • Mastery level. A prespecified level of task performance, with no gradations below it, that defines satisfactory achievement of an objective.
  • Media. The physical means of conveying instructional content (e.g., drawings, slides, audio, computer, person, models).
  • Mindful reflection. In constructivist learning, the internal mental process by which learners consider their own past and present processes of learning in order to confirm or adjust those processes for future learning encounters.
  • Model. A simplified representation of a system, often in picture or flowchart form, showing selected features of the system.
  • Module. An instructional package with a single integrated theme that provides the information needed to develop mastery of specified knowledge and skills and serves as one component of a total course or curriculum.
  • Need. A discrepancy between what should be and the actual current status of a situation.
  • Needs assessment. The formal process of identifying discrepancies between current outcomes and desired outcomes for an organization.
  • Noninstructional solution. Means of reducing performance discrepancies other than the imparting of knowledge; includes motivational, environmental, and equipment factors.
  • Objective. A statement of what the learners will be expected to do when they have completed a specified course of instruction, stated in terms of observable performances; also known as performance objective, behavioral objective, and instructional objective.
  • One-to-one evaluation. The first stage in formative evaluation, referring to direct interaction between the designer and an individual tryout student.
  • Outcomes analysis. See Impact analysis.
  • Performance analysis. An analytical process used to locate, analyze, and correct job or product performance problems.
  • Performance-based instruction. The use of job performance measures or estimates as inputs for designing training and assessing learning.
  • Performance context. The setting in which learners are expected to successfully use the skills they are learning; includes both the physical and social aspects of the setting.
  • Performance objective. See Objective.
  • Performance support tool (PST). A small-scale, usually stand-alone electronic performance support system designed to support a limited range of job tasks.
  • Performance technology. Application of relevant theories of human learning and behavior to improve human performance in the workplace; synonymous with human performance technology. A performance technologist practices performance technology.
  • Personas. Fictional persons who represent predominant characteristics of target learners. Profiles are typically developed using samples of large groups of intended learners and summarizing biographical information such as education levels, career interests, and motivational factors.
  • Portfolio assessment. The process of meta-evaluating a collection of work samples or assessments to determine observable changes over time in skill level and/or attitudes. All test formats can be used, including objective tests, products, and live performances.
  • Posttest. A criterion-referenced test designed to measure performance on objectives taught during a unit of instruction; given after the instruction. Typically does not include items on entry skills.
  • Practice test. A criterion-referenced assessment, typically at the skill or lesson level, used to provide the learner with active participation and rehearsal opportunities and the designer with opportunities to monitor learner progress.
  • Preinstructional activities. Techniques used to provide the following three events prior to delivering instructional content: (1) get the learners' attention; (2) advise them of the prerequisite skills for the unit; and (3) tell them what they will be able to do after the instruction.
  • Prerequisite skills. Also known as entry skills.
  • Pretest. A criterion-referenced test designed to measure performance on objectives to be taught during a unit of instruction and/or performance on entry skills; given before instruction begins.
  • Problem, ill-structured. Situation in which neither the exact rules to be applied nor the exact nature of the solution is identified in the problem statement. Multiple solutions may be acceptable.
  • Problem, well-structured. Situation in which the nature of the solution is well understood, and there is a generally preferred set of rules to follow to determine the solution.
  • Procedural approach (for goal analysis). The process of listing chronologically, in a step-by-step manner, all the substeps required to perform an instructional goal.
  • Psychomotor skill. Execution of a sequence of major or subtle physical actions to achieve a specified result. All skills use some type of physical action; the physical action in a psychomotor skill is the focus of the new learning, and is not merely the vehicle for expressing an intellectual skill.
  • Rapid prototyping. In software development, also called rapid application development (RAD); the process of using prototype approximations of a software design to test whether the application meets the design specifications.
  • Reliability. The consistency or dependability of a measure.
  • Research. An investigation conducted to identify knowledge that is generalizable.
  • Return on investment (ROI). In training and development, a comparison between the costs incurred for training and the benefits realized from it; a simple hypothetical computation follows this glossary.
  • Revision. The process of producing an amended, improved, or up-to-date version of a set of instructional materials.
  • Rough draft materials. Instructional materials developed in quick and inexpensive media formats for formative tryout.
  • Scaffolding. Teacher, peer, or mediated guidance for students' learning provided when support is needed for progress and withdrawn as students develop proficiency.
  • Sharable Content Object Reference Model (SCORM). A series of e-learning standards for ensuring interchangeability of course objects within SCORM-compliant course management systems.
  • Situated learning. The concept that learning occurs best through engagement in a process or activity situated in a context that is relevant to the learner and the knowledge to be gained.
  • Skill. An ability to perform an action or group of actions; involves overt performance.
  • Small-group evaluation. The second stage of formative evaluation, referring to the use of a small number of tryout students who study an instructional program without intervention from the designer and are tested to assess the effectiveness of the instruction.
  • Step. One skill identified in the analysis of an instructional goal. Describes a complete task, behavior, or decision that must be completed when someone performs the instructional goal. Most goals include five or more steps. See also Substep.
  • Strategic planning. A planning process used to determine and describe future organizational directions, how to achieve the prescribed directions, and how to measure whether the directions are achieved; encompasses a variety of models and processes.
  • Subject-matter expert (SME). A person knowledgeable about a particular content area. Also known as a content specialist; see also Subject-matter specialist.
  • Subject-matter specialist. A person knowledgeable about a particular content area. Also known as a content specialist or a subject-matter expert (SME).
  • Subordinate objective. An objective that must be attained in order to accomplish a terminal objective. Also known as an enabling objective or an intermediate objective.
  • Subordinate skill. A skill that must be achieved in order to learn a higher-level skill. Also known as a subskill or an enabling skill.
  • Substep. One component of a major step in a goal. There must be two or more substeps to justify a substep analysis. Performing each of the substeps in sequence is equivalent to performing the step from which they were derived.
  • Summative evaluation. Evaluation designed and used after an instructional program has been implemented. The purpose is to make decisions concerning whether the instruction actually works as intended in the performance context, and whether progress is being made in ameliorating the performance problems that prompted the instructional design and development effort. It includes two phases: expert judgment and impact. The expert judgment phase includes congruence, content, design, and transfer feasibility analyses; the impact phase includes analyses of instructional effectiveness on learners, job, and organization.
  • Superordinate skill. Higher-level competency composed of and achieved by learning subordinate skills.
  • System. A set of interrelated parts working together toward a defined goal.
  • Systems approach. Procedure used by instructional designers to create instruction. Each step requires input from prior steps and provides input for the next step. Evaluation provides feedback used to revise instruction until it meets the original need or specification.
  • Systems approach and models for instruction. A logical and iterative process of identifying all the variables that can affect the quality of instruction, including delivery, and then integrating information about each variable in the design, development, evaluation, and revision of the instruction.
  • Table of test specifications. Prescriptions for a test that include information such as level of learning, the task, performance objective, test item format, and the number of items to present for each task.
  • Target population. The total collection of possible users of a given instructional program.
  • Terminal objective. An objective the learners will be expected to accomplish when they have completed a course of instruction, made up of subordinate objectives; often, a more specific statement of the instructional goal.
  • Training. A prespecified and planned experience that enables a person to do something that he or she could not do before.
  • Transfer of learning. The process whereby the learner applies skills learned in one context to another, similar context. Also referred to as transfer of training.
  • Tryout students. A representative sample of the target population; may be used to test an instructional program prior to final implementation.
  • Validity. The degree to which a measuring instrument actually measures what it is intended to measure.
  • Verbal information. Requirement to provide a specific response to relatively specific stimuli; involves recall of information.
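
Two of the entries above are simple enough to illustrate with a worked example. The sketch below (in Python; the response data and function name are hypothetical illustrations, not taken from the book) computes the item difficulty value for each item of a four-item test and prints a minimal item analysis table.

```python
# Minimal sketch of an item analysis table, using hypothetical data.
# Each row is one learner; each column is one test item.
# 1 = answered correctly, 0 = answered incorrectly.
responses = [
    [1, 1, 0, 1],  # learner 1
    [1, 0, 0, 1],  # learner 2
    [1, 1, 1, 1],  # learner 3
    [0, 1, 0, 1],  # learner 4
]

def item_difficulty_values(responses):
    """Percentage of learners who answered each item correctly."""
    n_learners = len(responses)
    n_items = len(responses[0])
    return [
        100 * sum(row[i] for row in responses) / n_learners
        for i in range(n_items)
    ]

print("Item  % correct")
for i, pct in enumerate(item_difficulty_values(responses), start=1):
    print(f"{i:>4}  {pct:>8.0f}%")
```

With this sample data, items 1, 2, and 4 yield difficulty values of 75%, 75%, and 100%, while item 3's 25% would flag it for closer review during formative evaluation.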
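The return on investment entry can be illustrated the same way. The figures below are hypothetical, and the formula shown (net benefit expressed as a percentage of cost) is one common formulation rather than the book's prescribed method.

```python
def training_roi(benefits, costs):
    """ROI as a percentage: net benefit relative to cost."""
    return 100 * (benefits - costs) / costs

# Hypothetical figures: a program costing $40,000 that yields
# $52,000 in measured benefits returns 30% on the investment.
print(f"ROI: {training_roi(52_000, 40_000):.0f}%")
```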