It’s Not About the Bot: Can AI be Used in Education for Good?



Published: Mar 13, 2023

Professor (Educational Leadership and Policy), The University of Texas


Image created by Nikki Muller with the assistance of DALL·E 2



AI in academia can deepen inequity and raise privacy concerns, or it can help correct past biases and remove barriers to educational access: it all depends on how it is used.

Faculty and administrators in higher education are understandably concerned about the impact of ChatGPT and the imminent release of GPT-4. ChatGPT, a powerful chatbot created by OpenAI, along with rival chatbots such as Microsoft's Bing Chat and Google's Bard, can perform a range of tasks in response to a prompt, from writing essays to generating Python code.

Given its impressive capabilities, it’s not surprising that the new technology is disrupting higher education, forcing administrators and professors to think about how to promote academic integrity in the evolving AI era or how to leverage the tool to enhance learning.

Vanderbilt University administrators apologized for using AI to generate emails to students following the recent shootings at Michigan State University. But there is more than insensitivity at stake. 

Americans are mixed in their views about AI, depending on the use case. A recent poll found that Americans are most enthusiastic about robots that can perform surgery but least enthusiastic about AI being used to write news articles or offer mental health support.

Although ChatGPT may be the shiniest and newest tool in all arenas—including education—it is not the most troubling use of AI in colleges and universities. The discourse about ChatGPT instead can serve as a catalyst for broader discussions about the use of AI in higher education to address ongoing concerns related to privacy and algorithmic bias in predictions.

According to a 2019 study, colleges and universities sometimes use AI technologies to inform decisions about which students to recruit or admit.

Once students enroll, administrators use AI tools to intervene by nudging them to meet with advisors or offering them resources, such as financial aid, to increase their chances of success. AI is also used to make recommendations about majors or courses in which students are most likely to be successful. 

While the uses of these technologies in higher education may appear innocuous, they often rely on student surveillance to collect large amounts of data on their activities. What’s more, these technologies may reproduce social inequities due to biased predictions.

Increasingly, colleges and universities are collecting data from student ID swipes, raising concerns about the invasion of students’ privacy. For instance, in order to predict student outcomes, many colleges and universities collect data capturing the purchases students make using their ID, how long they are logged on to the campus internet network, how many times they go to the gym, and who they socialize with in dining halls.

Some universities even track how prospective students interact with their websites to help them make decisions about whom to admit. Oftentimes, students don’t know they are being tracked this way.

Beyond privacy concerns, AI has the potential to perpetuate social inequities. By using historical data that captures social injustices, including racial wealth gaps, inequities in school funding, and housing discrimination, algorithms may yield less favorable predictions for students belonging to historically underserved groups.

For example, an algorithm that decides where a college should recruit prospective students, based on students’ probability of accepting an admission offer, can be discriminatory: its output can lead colleges to avoid recruiting in neighborhoods where students have a lower probability of going to college. These decisions would unfairly exclude students who might otherwise be interested in attending the institution.

Predictive technologies could also encourage practices that steer historically underserved students away from courses or majors viewed as more challenging, simply because previous students with similar demographic characteristics had a lower likelihood of success.

A 2021 investigation by The Markup revealed that a Texas university was categorizing Black and other minority students as “high risk,” raising concerns that those students would be funneled into majors viewed as less challenging. The same investigation showed that numerous universities use race as a predictor of whether a student will succeed.

In new research with my collaborator, Hadis Anahideh, we explore how algorithms used to predict college student success could lead to racially biased predictions. We find that a model that includes common predictors of college student success yields predictions that are racially biased.

Specifically, the algorithm is more likely to predict failure for students who actually succeeded when those students are labeled as Black or Hispanic than when they are labeled as white or Asian. Conversely, the model is more likely to predict success for students who actually failed when those students are labeled as white or Asian.
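
To make this kind of bias concrete, the sketch below shows one common way to check for it: comparing false negative rates (students predicted to fail who actually succeeded) and false positive rates (students predicted to succeed who actually failed) across demographic groups. This is an illustrative example with made-up data, not the model or dataset from the study.

```python
# Illustrative sketch only: hypothetical predictions, not real student data.
# It computes, for each group, the false negative rate (predicted failure
# among students who actually succeeded) and the false positive rate
# (predicted success among students who actually failed).

from collections import defaultdict

# Each record: (group label, actual outcome, model prediction); 1 = success, 0 = failure
records = [
    ("Black",    1, 0), ("Black",    1, 1), ("Black",    0, 0), ("Black",    1, 0),
    ("Hispanic", 1, 0), ("Hispanic", 1, 1), ("Hispanic", 0, 0), ("Hispanic", 0, 1),
    ("white",    1, 1), ("white",    1, 1), ("white",    0, 1), ("white",    1, 0),
    ("Asian",    1, 1), ("Asian",    0, 1), ("Asian",    1, 1), ("Asian",    0, 0),
]

counts = defaultdict(lambda: {"fn": 0, "succeeded": 0, "fp": 0, "failed": 0})

for group, actual, predicted in records:
    stats = counts[group]
    if actual == 1:                 # student actually succeeded
        stats["succeeded"] += 1
        if predicted == 0:          # but the model predicted failure
            stats["fn"] += 1
    else:                           # student actually failed
        stats["failed"] += 1
        if predicted == 1:          # but the model predicted success
            stats["fp"] += 1

for group, s in counts.items():
    fnr = s["fn"] / s["succeeded"] if s["succeeded"] else float("nan")
    fpr = s["fp"] / s["failed"] if s["failed"] else float("nan")
    print(f"{group:9s} false negative rate: {fnr:.2f}   false positive rate: {fpr:.2f}")
```

Large gaps in these rates across groups, even when overall accuracy looks similar, are one signal that a predictive model is treating students from different groups unequally.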

To be sure, AI could be leveraged for good. Georgia State University, for instance, increased its graduation rates and reduced racial gaps in student success after it started using predictive analytics. Importantly, it also hired many more advisors and implemented numerous supports to help students finish their degrees.

Indeed, colleges and universities could use AI to improve efficiency, respond more effectively to students’ needs, and even make fairer decisions by automating processes that are subject to human bias. The key is to ensure that social biases are kept out of the predictive models.

Recently, The University of Texas at Austin convened a ChatGPT working group. As higher education administrators meet to discuss ChatGPT, they also need to weigh the broader ethical questions raised by AI in higher education.

Higher education administrators, stakeholders, policymakers, and researchers need to question how AI is used in higher education, who is designing and deploying the predictive algorithms, what goes into these algorithms, and how the predictions from these algorithms are interpreted to inform decisions.

Students and other higher education stakeholders have a right to be protected from AI systems that harm them by limiting their freedom and opportunities. Everyone needs to be safe from discrimination facilitated by AI systems and to have a say in whether and how their activities are tracked, and to what end.

A starting point for these conversations is the White House’s Blueprint for an AI Bill of Rights, released in October 2022. Its release has prompted at least five federal agencies to adopt guidelines for using AI responsibly. Earlier this year, the National Institute of Standards and Technology, part of the U.S. Department of Commerce, released the AI Risk Management Framework.

Given the crucial role higher education plays in society, it is necessary to consider how institutions’ use of AI can promote or hinder opportunity, and to develop guidelines around its uses.


