About the Book
This book is well suited to responding to NCLB mandates and NCATE requirements for preparing K-12 teachers, school leaders, professionals, and researchers in assessment design and in the use of data to inform day-to-day practice.
It presents material in practitioner-friendly, straightforward, and readable language, without watering down concepts and with minimal use of statistics. Instructors will find that the book provides a balanced treatment of traditional and performance assessment methods. The design, validation, and use of assessment tools for a variety of purposes are addressed through a common Process Model and six User Paths. To improve testing practices, the 1999 AERA, APA, and NCME standards are cited throughout. Numerous case studies, examples, demonstrations, tables, and a web-based computer module support the book.
Table of Contents:
I. FOUNDATIONAL CONCEPTS.
1. Measuring Educational Constructs: Basic Concepts.
What Is Educational Assessment? Philosophical Premises.
Elements in a Useful Assessment Procedure.
Operational Definitions of Constructs.
Constructs and Variables.
Assessment, Measurement, and Evaluation.
Role of Assessment in Education.
Organization of the Remaining Chapters.
2. Purposes for Educational Assessment.
The Need for Clear Assessment Purposes.
A Typology of Assessment Uses in Education.
User Path 1: Assessment for Teaching and Learning.
User Path 2: Assessment for Program Planning, Evaluation, and Policy-making.
User Path 3: Assessment for Screening and Diagnosis of Exceptionalities.
User Path 4: Assessment for Guidance and Counseling.
User Path 5: Assessment for Admissions, Licensure, Scholarships, and Awards.
User Path 6: Assessment in Educational Research and Development.
Cross-Over Across User Groups.
Responsibilities for Appropriate Assessment Use.
3. Quality of Assessment Results: Validity, Reliability, and Utility.
Validity.
Validation and Types of Validity Evidence.
Validation: When Should We Do It and How Far Should We Go?
Reliability.
Utility.
Prioritizing Among Validity, Reliability, and Utility.
4. Types of Assessment Tools.
“Traditional,” “Alternative,” “Authentic,” and “Performance” Assessments.
Other Ways of Classifying Assessments.
Types of Assessments Based on Mode of Response.
Advantages and Disadvantages of Different Assessment Methods.
5. A Process Model for Designing, Selecting, and Validating Assessment Tools.
Need for a “General” Process Model.
Components of a Process Model for Assessment Design/Selection and Validation.
A Case Study in Using the Process Model.
The Importance of Following a Systematic Process.
II. APPLYING THE PROCESS MODEL.
6. Specifying the Construct Domain.
Specifying the Domain for Constructs in User Path 1.
Taxonomies of Learning Outcomes.
Specifying the Domain for Constructs in User Paths 2-6.
7. Designing or Selecting Written Structured-Response Assessment Tools in User Path 1.
Why Use Written Structured-Response (W-SR) Assessments?
The Process Model Applied to W-SR Assessment Design or Selection.
Developing Assessment Specifications for W-SR Tools.
Guidelines for Item Construction for Different Types of W-SR Items.
Complex Interpretive W-SR Exercises.
The Last Word on Clues.
Choosing the Best W-SR Item Format.
W-SR Test Assembly.
Content Validation.
Bias During W-SR Assessment Design.
8. Designing or Selecting Performance Assessment Tools in User Path 1.
Why Use Performance Assessments?
Justifying Performance Assessment Methods We Choose.
Applying the Process Model with Performance Assessments.
Designing Different Types of Performance Assessments.
An Interdisciplinary Assessment.
Scoring Rubrics and How to Develop Them.
Specifications for Performance Assessments.
Assembling and Content-Validating Performance Assessments.
Using Rubrics: Sources of Random Error.
Systematic Biases.
9. Designing or Selecting Affective, Social-Emotional, Personality, and Behavioral Assessments in User Paths 2-6.
Applying the Process Model in User Paths 2-6.
Nature of Constructs in Paths 2-6.
Assessment Methods for Measuring Affective, Social-Emotional, Behavioral, or Personality Variables.
Designing Self-Report Instruments.
Designing Structured Observation Forms.
Naturalistic and Anecdotal Observations.
Selecting and Content-Validating Tools in Paths 2-6 Using Specifications.
Classical Examples of Instrument Design in Paths 2-6.
III. EMPIRICAL APPLICATIONS.
10. Analyzing Data from Assessments.
Scales of Measurement.
Continuous and Discontinuous Variables.
Organizing Data.
Measures of Central Tendency.
Measures of Variability.
Graphical Displays of Distributions.
Normal Distribution and Its Applications.
Skewness and Kurtosis.
Measures of Relative Position.
Correlation Coefficients and Their Applications.
11. Decision-Making Applications in User Paths 1-6.
Setting Standards.
Report Card Marking.
Domain-Referenced Mastery Analysis.
Mapping Long-Term Trends on Measured Constructs.
Using Assessment Results for Planning, Determining Needs, or Evaluating Programs/Services.
12. Quantitative Item Analysis.
Purposes for Item Analysis.
Item Analysis Indices.
Differences Between NRT and CRT Item Analysis.
Application of Item Analysis for NRTs.
Application of Item Analysis for CRTs.
Item Descriptive Statistics.
Limitations of Item Analysis Studies.
13. Quantitative Evaluation of Validity and Reliability.
Empirical Validation Methods.
Estimation of Reliability.
Reliability in Criterion-Referenced Measurements.
14. Selecting and Using Standardized Assessment Tools.
Distinguishing Characteristics of Standardized Tests.
Standardized Tests in Use in Education.
Norms, Norm-Referenced Scores, and Score Profiles.
Finding and Evaluating Published Assessment Tools.
Bibliography.
Glossary.
Appendix.
About the Author:
MADHABI CHATTERJI
Madhabi Chatterji (previously Madhabi Banerji) received her Ph.D. in measurement and evaluation from the University of South Florida in 1990 and is now Associate Professor of Measurement, Evaluation, and Education at Teachers College, Columbia University. With over 10 years of experience in education, public health, and corporate applications, Professor Chatterji currently teaches introductory measurement to practicing professionals, as well as the core graduate courses in evaluation methods and theory and in instrument design and validation at Teachers College. Her research interests are broad and include designing classroom- and school-based assessment systems, developing and validating construct measures with classical and Rasch measurement methods, and evaluating standards-based educational reforms and small- and large-scale interventions with systemic models.
A firm believer in the integration of theory with practice and policy, Madhabi Chatterji has published a book and computer module, Designing and Using Tools for Educational Assessment (Allyn & Bacon, 2003); several evaluation studies and syntheses in Teachers College Record (in press), Review of Educational Research (2002), American Journal of Public Health (2000), Journal of Learning Disabilities (1998), and Journal of Experimental Education (1998); and research on instrument design and validation in Educational and Psychological Measurement (1998, 1999, 2002), Journal of Applied Measurement (2000, 2002), Journal of Psychoeducational Assessment (1992), and Journal of Outcome Measurement (1997). The last of these papers received the Distinguished Paper Award from the Florida Educational Research Association in 1993.
Chatterji has provided numerous seminars and workshops on assessment topics to teachers, practitioners, and diverse professionals in public and private settings, and she authored a series of guides published by the Florida Department of Education, titled “A Guide to Teaching and Assessing with Florida's Goal Three Standards” (1997), as part of the state's Goals 2000 effort. Her work with a large number of schools and school districts in Florida and New York has focused largely on capacity-building in assessment, evaluation, and the use of data. Research papers now in progress include one that addresses the use of mixed-method designs to generate research evidence on education programs, which received the 2004 AERA Division H Award for Advances in Research Methodology. A book on the same topic is also under consideration.
Author Contact: mb1434@columbia.edu