Multimedia-based Instructional Design by Lee, Owens

Multimedia-based Instructional Design by Lee, Owens is the book titled Multimedia-based Instructional Design: Computer-based Training, Web-based Training, Distance Broadcast, authored by William W. Lee and Diana L. Owens and copyrighted and published in 2004 by Pfeiffer, an imprint of John Wiley & Sons, Inc., San Francisco, CA. Definitions of evaluation and measurement terms drawn from the book include the following:

  • Concurrent validity. Measure of the ability of test items to discriminate between master and novice students. Establishes the superiority of a course in instructional delivery if the course is to be used for certification of competence.
  • Content validity. Measures that use subject-matter experts to review materials to qualitatively validate that there is congruence among the objectives, content, and test items.
  • Correlation. Establishes the strength of the relationship among variables, that is, the extent to which one variable depends on the others. The result is a number between -1.0 and +1.0; the closer the correlation is to +1.0, the stronger the positive relationship between the variables (a computational sketch follows this list).
  • Criterion-referenced (CR). Measure of performance against a predetermined standard that allows comparison of individuals against that standard.
  • Difficulty index. A rating score by subject-matter experts that identifies how high a degree of expertise is necessary for a student to correctly answer a particular test item.
  • Distractor analysis. Analysis of the possible answers (distractors) on a test to determine whether students consistently answer certain questions incorrectly and whether one of the incorrect distractors is chosen more frequently than any other (a tallying sketch follows this list).
  • Face validity. Qualitative measures requiring SME validation that course content approximates that of any other course on the subject. It is the minimum validity required to establish that a course teaches what it intends to teach or test. It cannot be the only form of validity used if reliability measures are also required.
  • Formative evaluation. All activities occurring from the time a customer begins contract negotiations until the final product is delivered, ensuring the instructional soundness, quality, and suitability of a training program.
  • Instrument validity. Developing and evaluating test instruments to ensure that the informational data collected is unbiased and replicable during subsequent administrations of the instrument.
  • Item analysis. A test that compares two independent variables to determine whether individual test items are valid. Item analysis requires that various statistical tests be applied, depending on the information required (a discrimination-index sketch follows this list).
  • Mastery curve. A distribution curve with a mean near the upper or lower end of the distribution, indicating that the majority of the group represented on the curve either has or does not have the attribute being measured. Also known as a leptokurtic curve.
  • Normal curve. A distribution curve that graphically shows the results from a group in which each is compared on the same variable; the average (mean) score is near the middle of the distribution with equal intervals (standard deviations) both above and below the mean.
  • Normal distribution. A distribution where the mean is near the middle of the distribution with equal intervals (standard deviations) both above and below the mean.
  • Norm-referenced (NR). Measures of knowledge or performance against a level that is derived from the average of all scores from a large sample of student performance on the test. This measure allows comparison of students who should possess the same characteristic.
  • Predictive validity. Measures of the ability of a test to predict future success in a skill area based on success on the test. Establishes the superiority of a course in instructional delivery if the course is used for certification of competence.
  • Qualitative. Subjective measure of instructional soundness. May be open to a variety of interpretations.
  • Quantitative. Measure of instructional soundness that employs data and the results of statistical analyses.
  • Reliability. A quantifiable value that describes the degree to which a training program produces consistent results in what it teaches (one common estimate is sketched after this list).
  • Simulations. CBT-generated scenarios that contain a high degree of realism. High-level simulations duplicate complex situations in which the student actually experiences and reacts to the scenario; mid-level and low-level simulations demonstrate a situation but have students input answers to questions after the scenario.
  • Skewed distribution. A distribution on a curve where scores are clustered around the top or bottom end of the curve with unequal intervals (standard deviations) on either side of the mean.
  • Standard deviation. A measure of the extent to which individual scores differ from the mean (a computational sketch follows this list).
  • Standardized. Repeated administrations of a test to refine it to the point where it is both valid and reliable (students score consistently on the test), resulting in a normal distribution curve from any group of people who possess or should possess a certain characteristic.
  • Summative evaluation. Testing the effectiveness of the training program along predetermined criteria.
  • Test-item validity. Statistical analysis of test items to ensure they measure the skills learned to a sufficiently high degree to discriminate between high-achieving and low-achieving students.
  • Validation. Procedures employed to ensure the instructional effectiveness of a training program.
  • Validity. A quantifiable value that describes the degree to which a training program teaches what it claims to teach.
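
The Correlation entry above can be made concrete with a short computation. The Python sketch below is not taken from the book; the score lists are invented, and it simply computes a Pearson correlation coefficient, which falls between -1.0 and +1.0 as the definition states.

    import math

    def pearson_r(xs, ys):
        # Pearson correlation coefficient between two equal-length score lists.
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
        sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (sd_x * sd_y)

    # Illustrative (invented) scores for ten students.
    pretest = [55, 60, 62, 70, 71, 75, 80, 82, 88, 90]
    posttest = [58, 65, 60, 72, 75, 78, 85, 80, 92, 95]
    print(round(pearson_r(pretest, posttest), 3))

A result near +1.0 indicates a strong positive relationship between the two sets of scores; a result near -1.0 would indicate an equally strong negative one.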
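
For Distractor analysis, the minimal sketch below tallies how often each answer choice was selected for each item and flags items where an incorrect choice outdraws the keyed answer, which is the pattern the definition asks the analyst to look for. The response data and answer key are invented.

    from collections import Counter

    # Invented responses: each row is one student's answers to three items.
    responses = [
        {"Q1": "B", "Q2": "C", "Q3": "A"},
        {"Q1": "B", "Q2": "D", "Q3": "A"},
        {"Q1": "C", "Q2": "D", "Q3": "A"},
        {"Q1": "C", "Q2": "D", "Q3": "B"},
        {"Q1": "C", "Q2": "D", "Q3": "A"},
    ]
    answer_key = {"Q1": "B", "Q2": "C", "Q3": "A"}

    for item, key in answer_key.items():
        counts = Counter(student[item] for student in responses)
        most_chosen = counts.most_common(1)[0][0]
        print(item, dict(counts), "keyed answer:", key)
        if most_chosen != key:
            # An incorrect distractor draws more responses than the keyed answer.
            print("  -> review this item: distractor", most_chosen, "outdraws the key")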
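
The Item analysis and Test-item validity entries refer to statistical tests on individual items without naming them. One common technique, assumed here rather than prescribed by the book, is an upper/lower-group discrimination index: compare how often the highest-scoring and lowest-scoring students answer an item correctly.

    def discrimination_index(item_scores, total_scores, group_fraction=0.27):
        # Difference in an item's proportion correct between high and low scorers.
        # item_scores: 1/0 per student for one item; total_scores: total test scores.
        n = len(total_scores)
        k = max(1, int(n * group_fraction))
        order = sorted(range(n), key=lambda i: total_scores[i])
        low, high = order[:k], order[-k:]
        p_high = sum(item_scores[i] for i in high) / k
        p_low = sum(item_scores[i] for i in low) / k
        return p_high - p_low

    # Invented data: one item's results for ten students and their total scores.
    item = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0]
    totals = [88, 52, 90, 75, 48, 80, 95, 70, 55, 60]
    print(discrimination_index(item, totals))

A value near +1.0 means the item discriminates well between high-achieving and low-achieving students; a value near zero or below suggests the item should be reviewed.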
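
The Reliability entry defines a quantifiable value without fixing a formula. The sketch below uses Cronbach's alpha, a widely used internal-consistency estimate that is assumed here rather than taken from the book; the item-score matrix is invented.

    from statistics import pvariance

    def cronbach_alpha(item_matrix):
        # Cronbach's alpha; rows are students, columns are numerically scored items.
        k = len(item_matrix[0])
        item_vars = [pvariance([row[j] for row in item_matrix]) for j in range(k)]
        total_var = pvariance([sum(row) for row in item_matrix])
        return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

    # Invented 1/0 item scores for six students on four items.
    scores = [
        [1, 1, 1, 1],
        [1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0],
    ]
    print(round(cronbach_alpha(scores), 2))

Values closer to 1.0 indicate that the test items produce more consistent results.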
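
Finally, the Standard deviation and Normal distribution entries can be illustrated with a small computation over an invented score list.

    from statistics import mean, pstdev

    # Invented test scores for a class of ten students.
    scores = [62, 68, 70, 73, 75, 75, 77, 80, 82, 88]

    m = mean(scores)
    sd = pstdev(scores)  # population standard deviation
    within_one_sd = [s for s in scores if m - sd <= s <= m + sd]

    print("mean:", m, "standard deviation:", round(sd, 2))
    # In a roughly normal distribution, about two thirds of the scores
    # fall within one standard deviation of the mean.
    print(len(within_one_sd), "of", len(scores), "scores within one SD of the mean")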