My generation has the unique experience of attending school at the advent of artificial intelligence (AI). A few years ago, when I began my undergraduate studies, AI was barely on anyone's radar. Now it is a global phenomenon. I think the popularity of AI among college students, specifically of large language models (LLMs), skyrocketed last year with the increasing accessibility and rapid development of OpenAI's GPT-4 models. Currently, there is unprecedented public and private investment in AI development, which makes it likely that the technology is here to stay. This naturally forces schools to wrestle with new and important questions about how best to regulate students' use of AI in the classroom.
I recently took a course on probability and statistics where, in the very first lecture, the professor openly advocated for the use of AI in completing homework and generating practice problems. He claimed that we would be putting ourselves at a disadvantage if we didn't utilize resources like ChatGPT to better understand the course material. Part of his rationale was that the latest LLMs effectively average the responses of many professors and experts in the field, which may offer deeper insights than hearing from a single professor like himself. I soon watched my classmates follow his advice, consulting Google's Gemini 3 Pro on their devices during his lectures.
It makes sense that classic undergraduate courses, such as probability and statistics, are well suited for AI tutoring: the content is well known and established, so there is plenty of high-quality data to train the models, and they can reliably produce accurate responses in those subjects. Conversely, LLMs are less useful for studying niche, cutting-edge topics, where training data is inherently scarce.
I am convinced that the latest AI models have mastered the fundamental undergraduate curriculum, and I suspect other students share the same dreadful realization of what that implies: companies can save money by replacing entry-level white-collar workers with AI. Unfortunately, industry has ruthlessly capitalized on this opportunity, as reflected in the abysmal entry-level job market. Unless the cost of AI subscriptions rises significantly, this problem will persist and we will continue to suffer.
Personally, the most dangerous aspect of AI in my studies is its seductive power to draw me away from office hours. Meeting people in person involves stress, along with constraints of time and place, but these discomforts disappear when interacting with LLMs. I can ask whenever, wherever, and however I want, without any friction or fear of judgement. But removing that social friction likely promotes passive learning and erodes the habits built through effortful, face-to-face discussion. I'm also worried that professors' teaching abilities will gradually decline as student engagement falls.
I hope that the concerns raised by AI's introduction into the undergraduate experience can be addressed by students remaining disciplined and practicing self-control. AI is a major force among the modern technologies that have serious potential to disrupt education but are also capable of greatly enhancing it. The challenge of maximizing AI's utility without becoming subservient to it is good practice for the real world, where such double-edged tools are always available. Therefore, schools should focus their efforts on guiding students to use AI to complement, not replace, critical thinking.