Teaching Software Testing with Automated Feedback

Conference: 2018 ASEE Annual Conference & Exposition
Location: Salt Lake City, Utah
Publication Date: June 23, 2018
Start Date: June 23, 2018
End Date: July 27, 2018
Conference Session: Software Engineering Division Technical Session 2
Tagged Division: Software Engineering Division
Tagged Topic: Diversity
Page Count: 14
DOI: 10.18260/1-2--31062
Permanent URL: https://strategy.asee.org/31062

Paper Authors

James Perretta

Andrew DeOrio, University of Michigan (orcid.org/0000-0001-5653-5109)

Andrew DeOrio is a lecturer at the University of Michigan and a consultant for web, machine learning, and hardware projects. His research interests are in ensuring the correctness of computer systems, including medical devices, Internet of Things (IoT) devices, and digital hardware. In addition to teaching software and hardware courses, he teaches Creative Process and works with students on technology-driven creative projects.

Abstract

Computer science and software engineering courses commonly use automated grading systems to evaluate student programming assignments. These systems provide various types of feedback, such as whether student code passes instructor test cases. The literature contains little data on the association between feedback policies and student learning. This work analyzes the association between different types of feedback and student learning, specifically on the topic of software testing.

Our study examines a second-semester computer programming course with a total of 1,556 students over two semesters. The course contained five programming projects in which students wrote code to a specification, along with test cases for their code. Students submitted their code and test cases to an automated grading system, which evaluated the test cases by running them against intentionally buggy instructor solutions. Students from the first semester formed the control group, and students from the second semester formed the experiment group. The two groups received different kinds of feedback on their test cases: the control group was shown whether their tests were free of false positives, while the experiment group was additionally shown how many intentionally buggy instructor solutions their tests exposed.
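
To make the setup concrete, the sketch below is a minimal illustration of this kind of test-suite evaluation; it is not taken from the paper, and all function names and the example buggy solutions are hypothetical. A student suite is first run against a correct reference solution to check that it is free of false positives, then against a set of intentionally buggy solutions to count how many of them it exposes.

"""Illustrative sketch (not from the paper) of scoring a student test suite
against a correct solution and a set of intentionally buggy solutions."""

from typing import Callable, Dict, List

SolutionFn = Callable[[int], int]
TestFn = Callable[[SolutionFn], None]

def correct_solution(x: int) -> int:
    return abs(x)

def buggy_solution_drops_sign(x: int) -> int:
    return x            # wrong for negative inputs

def buggy_solution_negates(x: int) -> int:
    return -abs(x)      # wrong for positive inputs

BUGGY_SOLUTIONS: List[SolutionFn] = [buggy_solution_drops_sign, buggy_solution_negates]

def suite_passes(tests: List[TestFn], solution: SolutionFn) -> bool:
    """True if every test in the suite passes when run against `solution`."""
    for test in tests:
        try:
            test(solution)
        except AssertionError:
            return False
    return True

def evaluate(tests: List[TestFn]) -> Dict[str, object]:
    # Tests are "false-positive free" if they all pass on the correct solution.
    false_positive_free = suite_passes(tests, correct_solution)
    # A buggy solution is "exposed" if at least one test fails on it.
    bugs_exposed = sum(not suite_passes(tests, bug) for bug in BUGGY_SOLUTIONS)
    return {"false_positive_free": false_positive_free,
            "buggy_solutions_exposed": bugs_exposed}

# Example student test suite for an absolute-value function.
def test_positive(f): assert f(3) == 3
def test_negative(f): assert f(-4) == 4

print(evaluate([test_positive, test_negative]))
# {'false_positive_free': True, 'buggy_solutions_exposed': 2}

In this sketch, control-group feedback would report only false_positive_free, while experiment-group feedback would also report buggy_solutions_exposed.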

We measured the quality of student test cases for the control and experiment groups. After completing two projects with the additional feedback on their test cases, students in the experiment group completed a final project without it. Even without the additional feedback, their test cases were of higher quality, exposing on average 5% more buggy solutions than those of students in the control group. We found this difference to be statistically significant after controlling for GPA and whether students worked alone or with a partner.

Perretta, J., & Deorio, A. (2018, June), Teaching Software Testing with Automated Feedback. Paper presented at 2018 ASEE Annual Conference & Exposition, Salt Lake City, Utah. 10.18260/1-2--31062

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2018 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.