
Comparing The Reliability Of Two Peer Evaluation Instruments


Conference

2000 Annual Conference

Location

St. Louis, Missouri

Publication Date

June 18, 2000

Start Date

June 18, 2000

End Date

June 21, 2000

ISSN

2153-5965

Page Count

6

Page Numbers

5.152.1 - 5.152.6

DOI

10.18260/1-2--8216

Permanent URL

https://strategy.asee.org/8216

Download Count

653


Paper Authors


Richard Layton


Abstract
NOTE: The first page of text has been automatically extracted and included below in lieu of an abstract

Session 3530

Comparing the Reliability of Two Peer Evaluation Instruments

Matthew W. Ohland, Richard A. Layton
University of Florida / North Carolina A&T State University

Abstract

This paper presents an analysis of student peer evaluations in project teams to compare the reliability of two different evaluation procedures. The project teams consist of junior-level students in a mechanical engineering design course taught by Layton for five semesters in 1997, 1998, and 1999.

The peer-evaluation instruments were used by students to evaluate their teammates’ contributions to the team’s deliverables: oral and written presentations of their solution to a technical design problem. The first instrument is an adaptation of the one advocated by Brown, in which students rate each teammate using a prescribed list of terms such as “excellent,” “very good,” “satisfactory,” and so forth. The second instrument, by Layton, asks students to assign a numerical rating (from 0 to 5) to each of 10 aspects of a teammate’s contribution to the team.
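For illustration only, the sketch below shows one way ratings from the two instruments might be placed on a common 0-to-1 scale before comparison. The numeric values assigned to the verbal anchors, the extra anchor terms, and the rescaling are assumptions made for this example; the paper does not specify the normalization actually used.

```python
# Hypothetical normalization of the two instruments to a common 0-1 scale.
# The anchor terms beyond those named in the text, and all numeric values,
# are illustrative assumptions, not taken from the paper.

BROWN_SCALE = {          # Brown-style verbal anchors -> assumed numeric values
    "excellent": 5,
    "very good": 4,
    "satisfactory": 3,
    "ordinary": 2,       # assumed additional anchor
    "marginal": 1,       # assumed additional anchor
    "no show": 0,        # assumed additional anchor
}

def normalize_brown(term: str) -> float:
    """Map a verbal rating to the interval [0, 1]."""
    return BROWN_SCALE[term.lower()] / max(BROWN_SCALE.values())

def normalize_layton(aspect_scores: list[float]) -> float:
    """Average ten 0-5 aspect scores and rescale to [0, 1]."""
    return sum(aspect_scores) / (5 * len(aspect_scores))

print(normalize_brown("very good"))                        # 0.8
print(normalize_layton([4, 5, 3, 4, 4, 5, 3, 4, 4, 4]))    # 0.8
```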

Analysis of variance was used to study the reliability of each instrument, using a form of analysis described by Crocker and Algina for studying inter-rater reliability. The similarity of the reliability coefficients of the two instruments (ρ_i = 0.34 for Brown’s instrument and ρ_i = 0.41 for Layton’s instrument) strengthens the assumption made in the first study: that data from the two instruments are similar enough to be normalized for comparison. At the same time, the higher reliability of Layton’s instrument lends credence to Layton and Ohland’s conclusion that focusing on identified behavioral characteristics of good teamwork, as Layton’s instrument does, can improve peer evaluation. Layton’s instrument accomplishes this to an extent, yielding a modest improvement in reliability. More focused attempts to define teamwork success behaviorally, such as the modification of Brown’s instrument by Kaufman et al., may yield further improvements in reliability. The overall reliability of the two instruments validates such instruments as repeated measures of a consistent trait.
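As a concrete illustration of an ANOVA-based inter-rater reliability estimate, the sketch below computes a one-way intraclass correlation, ICC(1,1), from a matrix of ratings (one row per student being rated, one column per rater). This is a standard formulation and may differ in detail from the exact form given by Crocker and Algina; the ratings matrix shown is made up.

```python
import numpy as np

def interrater_reliability(ratings: np.ndarray) -> float:
    """One-way ANOVA intraclass correlation, ICC(1,1).

    ratings: (n_ratees, k_raters) matrix of scores.
    Returns the estimated reliability of a single rater's score.
    """
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)

    # Between-ratee and within-ratee mean squares from the one-way ANOVA.
    ms_between = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))

    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Made-up example: 5 students, each rated by 3 teammates on a 0-5 scale.
example = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 4],
    [2, 3, 3],
    [4, 4, 5],
])
print(round(interrater_reliability(example), 2))
```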

I. Introduction

To satisfy ABET EC 2000’s charge for outcomes assessment, evaluation techniques that are largely new in engineering academe are coming into use. One such technique is the peer evaluation instrument. Recent papers by Brown [1], Kaufman et al. [2], and Layton and Ohland [3] have described peer evaluation instruments and their use in measuring students’ ability to function in teams. In this context, it is important to assess the reliability of peer evaluation instruments.

In a mechanical engineering design course at North Carolina A&T State University, students were assigned to groups to complete term projects in design. The project teams

Ohland, M., & Layton, R. (2000, June), Comparing The Reliability Of Two Peer Evaluation Instruments Paper presented at 2000 Annual Conference, St. Louis, Missouri. 10.18260/1-2--8216

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2000 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015