ASEE PEER - Board 268: Enhancing Zero-Shot Learning of Large Language Models for Early Forecasting of STEM Performance

Conference: 2024 ASEE Annual Conference & Exposition
Location: Portland, Oregon
Publication Date: June 23, 2024
Start Date: June 23, 2024
End Date: July 12, 2024
Conference Session: NSF Grantees Poster Session
Tagged Topics: Diversity and NSF Grantees Poster Session
Permanent URL: https://strategy.asee.org/46841


Paper Authors

Ahatsham Hayat, University of Nebraska, Lincoln
Sharif Wayne Akil, University of Nebraska, Lincoln
Helen Martinez, University of Nebraska, Lincoln
Bilal Khan, Lehigh University
Mohammad Rashedul Hasan, University of Nebraska, Lincoln (ORCID: orcid.org/0000-0002-9818-9600)


Abstract

This paper introduces an innovative application of conversational Large Language Models (LLMs), such as OpenAI's ChatGPT and Google's Bard, for the early prediction of student performance in STEM education, circumventing the need for extensive data collection or specialized model training. Utilizing the intrinsic capabilities of these pre-trained LLMs, we develop a cost-efficient, training-free strategy for forecasting end-of-semester outcomes based on initial academic indicators. Our research investigates the efficacy of these LLMs in zero-shot learning scenarios, focusing on their ability to forecast academic outcomes from minimal input. By incorporating diverse data elements, including students' background, cognitive, and non-cognitive factors, we aim to enhance the models' zero-shot forecasting accuracy. Our empirical studies on data from first-year college students in an introductory programming course reveal the potential of conversational LLMs to offer early warnings about students at risk, thereby facilitating timely interventions. The findings suggest that while fine-tuning could further improve performance, our training-free approach presents a valuable tool for educators and institutions facing resource constraints. The inclusion of broader feature dimensions and the strategic design of cognitive assessments emerge as key factors in maximizing the zero-shot efficacy of LLMs for educational forecasting. Our work underscores the significant opportunities for leveraging conversational LLMs in educational settings and sets the stage for future advancements in personalized, data-driven student support.
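The approach the abstract describes can be illustrated with a minimal sketch: early-semester indicators are serialized into a natural-language prompt for a conversational LLM, and the model's free-form reply is mapped back to a risk label. The feature names, prompt wording, and helper functions below are illustrative assumptions for exposition, not the authors' actual protocol or prompts.

```python
def build_forecast_prompt(student: dict) -> str:
    """Serialize early academic indicators into a zero-shot prompt."""
    lines = [f"- {name}: {value}" for name, value in student.items()]
    return (
        "You are an academic advisor. Based on the early-semester indicators "
        "below, predict whether this student will PASS or FAIL the course. "
        "Answer with a single word.\n" + "\n".join(lines)
    )

def parse_forecast(reply: str) -> str:
    """Map a free-form LLM reply to a binary risk label."""
    return "at-risk" if "FAIL" in reply.upper() else "on-track"

# Hypothetical week-3 indicators for one student in an introductory
# programming course (values are invented for illustration):
student = {
    "quiz 1 score": "54/100",
    "assignment 1 submitted": "late",
    "lecture attendance (weeks 1-3)": "60%",
    "prior programming experience": "none",
}
prompt = build_forecast_prompt(student)
# In practice `prompt` would be sent to ChatGPT or Bard through their chat
# interfaces or APIs; this sketch only shows the prompt/response plumbing.
print(parse_forecast("The student will likely fail."))  # → at-risk
```

Because no model training is involved, the entire pipeline reduces to prompt construction and answer parsing, which is what makes the approach cost-efficient for resource-constrained institutions.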

Hayat, A., Akil, S. W., Martinez, H., Khan, B., & Hasan, M. R. (2024, June). Board 268: Enhancing Zero-Shot Learning of Large Language Models for Early Forecasting of STEM Performance. Paper presented at 2024 ASEE Annual Conference & Exposition, Portland, Oregon. https://strategy.asee.org/46841

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2024 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015