Teaching the Purpose of College in the Era of Artificial Intelligence


One consequence of the extreme cost of higher education is its perception as a transactional means to an end, rather than an intellectually enriching venture.


Published: Jul 7, 2023

Professor of Political Science at DePaul University



Artificial Intelligence (AI) news appears to be as ubiquitous as the phenomenon itself. There are countless reports of how AI is making life better—from reducing the time healthcare professionals spend on paperwork to helping MLB scouts with recruitment—and almost as many about the potential dangers of this burgeoning branch of computer science.

Most of us are active users of AI in our daily lives: when facial recognition unlocks our smartphones, when our cars are prevented from drifting into the neighboring lane, when we are alerted to the potentially fraudulent use of our debit cards, or when our typos are automatically corrected.

As a professor of political science at DePaul University, I have been grateful for the ways in which AI has allowed students to turn in papers with fewer grammatical errors, freeing me to focus on their evidence, evaluate the structure of their arguments, and engage with their ideas instead of addressing remedial skills. Yet the rise of ChatGPT and its counterparts has upended college work, and most faculty, myself included, are struggling to catch up.

For the uninitiated, as my colleagues and I were just months ago, ChatGPT, developed by OpenAI, is an artificial intelligence platform promoted as capable of following “complex instructions in natural language” to “solve difficult problems with accuracy.” When I asked my 21-year-old daughter to explain it, she opened her ChatGPT app and typed, “give me a one paragraph summary and review of Daisy Jones and the Six.” Within seconds we read a sophisticated, fully developed, grammatically perfect paragraph that exceeded anything I could have produced in that time frame.

I turned to the Gen Z members of my household for guidance on ChatGPT, as I do with most technological issues, but their proficiency, as well as the quality of the responses we received, troubled me. It was around this time that we had begun conversations at work about how to detect the use of AI in students’ work. Before I knew it, I came face to face with this challenge.

One of the courses I teach is the introductory American politics class. Students take it either because they plan to major in political science or because it fulfills a general education requirement. The final exam was accessible online for 24 hours via Desire2Learn (D2L), our online teaching platform, but students had only two hours to complete it.

Prior to the final, students received a review sheet of key terms, as well as copies of all my PowerPoint presentations. The exam was open note and open book, but without proper preparation it would be impossible to complete in two hours.

As I was grading the short answers, which asked students to define a term and explain its relevance to the study of American politics and government, I was struck by how some answers seemed exceptionally well-crafted, often including aspects we had not covered in class. Some essays were similarly suspicious, incorporating Supreme Court cases that were covered neither in lectures nor in the textbook. One essay on Matthew Desmond’s Evicted relied on a framework more common to sociology than to political science.

With only 15 students in my class, I proceeded to copy and paste all the short answers and essays into the various platforms designed to detect AI-generated prose, including one designed by OpenAI. At the end of this time-consuming process, I was confident that two students had used AI to complete the exam.

In my six years serving on the Academic Integrity Board at DePaul, I saw numerous instances of good students making bad decisions. So, instead of immediately reporting the two students, I emailed the entire class that some final exams showed signs of having been completed with artificial intelligence software. I offered students the opportunity to contact me about their use of AI, in which case they would complete an alternative assignment with no penalty.

I shared my belief in second chances and my desire to honor my role as an educator at a teaching institution. I also emphasized that integrity (including but not limited to academic integrity) is of the utmost importance, which meant that if a suspected student did not contact me, they would receive a zero for the final exam and I would file an Academic Integrity violation with the university. 

Within hours, I received five emails from students (only one of them among my suspects) admitting to using AI and apologizing, most profusely. Notably, the final question on the exam had asked students to answer true or false to a statement affirming that their work was their own, completed without assistance, human or artificial. Every single student had selected true.

After several days, I followed up with the student who had not contacted me, informing her that she would fail the exam and that I would report the violation. She emailed back and apologized. Now, six of my 15 students had admitted to using AI.

Some might question why I would give an exam online in the first place, but this format benefits both students and instructors. Students can take the exam at a time most convenient to them and in a place they prefer—more like “real-world” independent work.

One of my students, for example, was grateful to be able to go home to Canada rather than stay in Chicago to take the exam. I appreciate the online grading of the closed-ended questions, as well as the data I receive about which questions students found most difficult. In addition, student handwriting has become increasingly illegible, so I am thankful for typed essays.

The preparation period, when students are working on their review sheets and studying for the exam, still achieves one of the goals of a final, which is to force students to review the materials and reflect on their learning. The essays, of course, provide them with the opportunity to integrate the lessons and critically evaluate the topic—in this case, key aspects of American democracy.

Final exams are designed to do several things, only one of which is to assess content knowledge. Indeed, in my view, content knowledge is less important than the ability to synthesize ideas, to craft arguments, to weave together key elements. Cognitive psychology has provided evidence of how the act of writing helps with the development of critical thinking skills—and critical thinking, of course, is a key requirement for citizens in a democracy.

To be sure, AI also holds many benefits for students and teachers. And even if it didn’t, it is clearly here to stay. Moreover, as was true with plagiarism, as quickly as teachers become versed in identifying and responding to AI, students will become more adept at evading detection.  

The bigger issue here is the purpose and value of higher education. When tuition alone averages $35,852 for private, non-profit colleges and $9,375 for public universities, it is no wonder that students treat their time on campus as a transactional venture between producer and consumer, not a shared educational journey. 

To address the improper use of AI in the classroom, we need to move beyond detection systems and engage students in discussions about the purpose of their education, share the rationale behind general education requirements, and collectively work to create a populace that is capable of, to paraphrase Ben Franklin, keeping our republic.


