Thirteen Strategies to Measure College Teaching

Regular Price: $69.95

Special Price: $62.95

Availability: In stock


Product Description

* Student evaluations of college teachers: perhaps the most contentious issue on campus
* This book offers a more balanced approach
* Evaluation affects pay, promotion and tenure, so of intense interest to all faculty
* Major academic marketing and publicity
* Combines original research with Berk's signature wacky humor

To many college professors, the words "student evaluations" trigger mental images of the shower scene from Psycho, with those bloodcurdling screams. They're thinking: "Why not just whack me now, rather than wait to see those ratings again?"

This book takes off from the premise that student ratings are a necessary, but not sufficient, source of evidence for measuring teaching effectiveness. It is a fun-filled--but solidly evidence-based--romp through more than a dozen other methods that include measurement by self, peers, outside experts, alumni, administrators, employers, and even aliens.

As the major stakeholders in this process, both faculty AND administrators, plus clinicians who teach in schools of medicine, nursing, and the allied health fields, need to be involved in writing, adapting, evaluating, or buying items to create the various scales that measure teaching performance. This is the first basic introduction in the faculty evaluation literature to take you step by step through the process of developing these tools, interpreting their scores, and making decisions about teaching improvement, annual contract renewal/dismissal, merit pay, promotion, and tenure. It explains how to create appropriate, high-quality items and how to detect items that can introduce bias and unfairness into the results.
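
As an illustration of that kind of item screening (a sketch invented for this page, not code from the book), the following Python flags hypothetical rating-scale items whose responses show almost no spread, and therefore distinguish nothing, or extreme polarization worth a second look at the wording. The items, response data, and thresholds are all assumptions.

from statistics import mean, stdev

# Illustrative item screen: flag items whose response pattern suggests they
# add little information (everyone answers the same way) or may be
# polarizing. Items, responses, and thresholds are hypothetical.
responses = {
    "Explains concepts clearly":     [5, 4, 5, 4, 5, 4, 4, 5],
    "Is never unprepared for class": [5, 5, 5, 5, 5, 5, 5, 5],
    "Grades assignments fairly":     [1, 5, 3, 1, 5, 2, 4, 3],
}

for item, scores in responses.items():
    m, s = mean(scores), stdev(scores)
    if s < 0.5:              # near-zero spread: tells reviewers nothing
        status = "FLAG (no spread)"
    elif s > 1.5:            # extreme spread: check the wording for bias
        status = "FLAG (polarizing)"
    else:
        status = "ok"
    print(f"{status:18} {item!r:35} mean={m:.2f} sd={s:.2f}")

A real item analysis, which the book walks through, would go further (item-total correlations, factor analysis), but the flagging logic above is the basic idea.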

Ron Berk also stresses the need for "triangulation"--the use of multiple, complementary methods--to provide the properly balanced, comprehensive, and fair assessment of teaching that is the benchmark of employment decision making.
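
To make triangulation concrete, here is a minimal illustrative sketch (not from the book): several rating sources are combined into one weighted composite so that no single source decides the outcome. The source names, the weights, and the 1-5 scale are assumptions invented for this example.

# Illustrative triangulation: combine complementary rating sources into one
# composite score. Sources, weights, and the 1-5 scale are hypothetical.

ratings = {               # mean rating each source gave, on a 1-5 scale
    "student_ratings": 4.2,
    "peer_ratings": 3.8,
    "self_rating": 4.5,
    "administrator_rating": 4.0,
}

weights = {               # relative emphasis; must sum to 1.0
    "student_ratings": 0.40,
    "peer_ratings": 0.30,
    "self_rating": 0.10,
    "administrator_rating": 0.20,
}

def composite(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average across sources; no single source decides alone."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(r * weights[src] for src, r in ratings.items())

print(f"Composite teaching score: {composite(ratings, weights):.2f}")  # 4.07

Which sources get weight, and how much, is exactly the kind of policy decision the book argues stakeholders should settle together before ratings are used for pay, promotion, or tenure.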

This is a must-read to empower faculty, administrators, and clinicians to use appropriate evidence to make decisions accurately, reliably, and fairly. Don't trample each other in your stampede to snag a copy of this book!

Additional Information

Condition: New
ISBN: 9781579221928
Author: Ronald A. Berk, Michael Theall
Publication Date: Jan 5, 2006
Number of Pages: 288
Publisher: Stylus Publishing
Cover Type: Paperback
Table of Contents:
ACKNOWLEDGMENTS
A FOREWORD (IN BERKIAN STYLE) BY MIKE THEALL
INTRODUCTION
1 TOP 13 SOURCES OF EVIDENCE OF TEACHING EFFECTIVENESS: A Few Ground Rules; Teaching Effectiveness: Defining the Construct; National Standards; Beyond Student Ratings; A Unified Conceptualization; Thirteen Sources of Evidence; Student Ratings; Peer Ratings; External Expert Ratings; Self-Ratings; Videos; Student Interviews; Exit and Alumni Ratings; Employer Ratings; Administrator Ratings; Teaching Scholarship; Teaching Awards; Learning Outcome Measures; Teaching Portfolio; BONUS: 360 Multisource Assessment; Berk's Top Picks; Formative Decisions; Summative Decisions; Program Decisions; Decision Time
2 CREATING THE RATING SCALE STRUCTURE: Overview of the Scale Construction Process; Specifying the Purpose of the Scale; Delimiting What Is to Be Measured; Focus Groups; Interviews; Research Evidence; Determining How to Measure the "What"; Existing Scales; Item Banks; Commercially Published Student Rating Scales; Universe of Items; Structure of Rating Scale Items; Structured Items; Unstructured Items
3 GENERATING THE STATEMENTS: Preliminary Decisions; Domain Specifications; Number of Statements; Rules for Writing Statements: 1. The statement should be clear and direct; 2. The statement should be brief and concise; 3. The statement should contain only one complete behavior, thought, or concept; 4. The statement should be a simple sentence; 5. The statement should be at the appropriate reading level; 6. The statement should be grammatically correct; 7. The statement should be worded strongly; 8. The statement should be congruent with the behavior it is intended to measure; 9. The statement should accurately measure a positive or negative behavior; 10. The statement should be applicable to all respondents; 11. The respondents should be in the best position to respond to the statement; 12. The statement should be interpretable in only one way; 13. The statement should NOT contain a double negative; 14. The statement should NOT contain universal or absolute terms; 15. The statement should NOT contain nonabsolute, warm-and-fuzzy terms; 16. The statement should NOT contain value-laden or inflammatory words; 17. The statement should NOT contain words, phrases, or abbreviations that would be unfamiliar to all respondents; 18. The statement should NOT tap a behavior appearing in any other statement; 19. The statement should NOT be factual or capable of being interpreted as factual; 20. The statement should NOT be endorsed or given one answer by almost all respondents or by almost none
4 SELECTING THE ANCHORS: Types of Anchors; Intensity Anchors; Evaluation Anchors; Frequency Anchors; Quantity Anchors; Comparison Anchors; Rules for Selecting Anchors: 1. The anchors should be consistent with the purpose of the rating scale; 2. The anchors should match the statements, phrases, or word topics; 3. The anchors should be logically appropriate with each statement; 4. The anchors should be grammatically consistent with each question; 5. The anchors should provide the most accurate and concrete responses possible; 6. The anchors should elicit a range of responses; 7. The anchors on bipolar scales should be balanced, not biased; 8. The anchors on unipolar scales should be graduated appropriately
5 REFINING THE ITEM STRUCTURE: Preparing for Structural Changes; Issues in Scale Construction: 1. What rating scale format is best?; 2. How many anchor points should be on the scale?; 3. Should there be a designated midpoint position, such as "Neutral," "Uncertain," or "Undecided," on the scale?; 4. How many anchors should be specified on the scale?; 5. Should numbers be placed on the anchor scale?; 6. Should a "Not Applicable" (NA) or "Not Observed" (NO) option be provided?; 7. How can response set biases be minimized?
6 ASSEMBLING THE SCALE FOR ADMINISTRATION: Assembling the Scale; Identification Information; Purpose; Directions; Structured Items; Unstructured Items; Scale Administration; Paper-Based Administration; Online Administration; Comparability of Paper-Based and Online Ratings; Conclusions
7 FIELD TESTING AND ITEM ANALYSES: Preparing the Draft Scale for a Test Spin; Field Test Procedures; Mini-Field Test; Monster-Field Test; Item Analyses; Stage 1: Item Descriptive Statistics; Stage 2: Interitem and Item-Scale Correlations; Stage 3: Factor Analysis
8 COLLECTING EVIDENCE OF VALIDITY AND RELIABILITY: Validity Evidence; Evidence Based on Job Content Domain; Evidence Based on Response Processes; Evidence Based on Internal Scale Structure; Evidence Related to Other Measures of Teaching Effectiveness; Evidence Based on the Consequences of Ratings; Reliability Evidence; Classical Reliability Theory; Summated Rating Scale Theory; Methods for Estimating Reliability
9 REPORTING AND INTERPRETING SCALE RESULTS: Generic Levels of Score Reporting; Item Anchor; Item; Subscale; Total Scale; Department/Program Norms; Subject Matter/Program-Level State, Regional, and National Norms; Criterion-Referenced versus Norm-Referenced Score Interpretations; Score Range; Criterion-Referenced Interpretations; Norm-Referenced Interpretations; Formative, Summative, and Program Decisions; Formative Decisions; Summative Decisions; Program Decisions; Conclusions
References
Appendices: A. Sample "Home-Grown" Rating Scales; B. Sample 360 Assessment Rating Scales; C. Sample Reporting Formats; D. Commercially Published Student Rating Scale Systems
Index
