For this blog post, we were asked to read an article by Ronald Purser about AI use in higher education, and then to address a few questions:
Do you agree with Professor Purser’s concerns? Why or why not? Please give examples from the essay.
What surprised you the most about the information on AI and the CSU?
From your perspective as a student, what do you believe the role of AI should be at Chico State, the CSU and in higher education in general?
How could this class be modified to address concerns about learning and AI?
Purpose of “higher education”?
I want to start with Purser’s final point in the article, as it resonates strongly with me.
I didn’t go to college “in order to” get a job. I went to explore, to be challenged, to figure out what mattered.
I’ve always been interested in learning programming, ever since the end of elementary school. To me, the idea that I can start from a blank canvas and create interactive experiences with technology is awesome! I was excited to finally be in community college, where I had classes specifically for learning programming concepts from experts eager to teach them. But I quickly became disappointed to find some professors who relied so heavily on their textbooks that they didn’t seem able to teach the content themselves.
It wasn’t that the content was difficult to understand. As someone passionate about the subject, I’ve been experimenting and playing around on my own ever since my interest was sparked back in elementary school, so these classes were fairly easy for me. But I could see my peers unable to grasp the concepts and the professors unable to help, so I often had to step in and help my peers myself. It led me to a disappointing realization: getting a degree doesn’t mean you’re learning anything; it just serves as a certificate indicating to companies that you’re at least slightly knowledgeable in the field.
Purser argues that ChatGPT’s aggressive push, especially into higher education, is causing serious harm to our learning ecosystem: students use it to cheat on their assignments with ease, educators have begun using it to create their content, and schools are changing their standards to allow more AI usage, all while laying off professors and sinking more money into the very AI firms that are reducing the quality of learning.
The math is brutal and the juxtaposition stark: millions for OpenAI while pink slips go out to longtime lecturers. The CSU isn’t investing in education—it’s outsourcing it, paying premium prices for a chatbot many students were already using for free.
This shift, where students are not learning any meaningful skills but instead learning to use AI to think on their behalf, is frightening. Eventually we will reach the point where students graduate with no knowledge or skill attributable to what’s written on their degree, only the skill of delegating tasks to an AI. That in turn devalues what a degree means, what education is for, and what purpose and role professors serve for their students.
Anthropologist David Graeber wrote about the rise of “bullshit jobs”—work sustained not by necessity or meaning but by institutional inertia. Universities now risk creating their academic twin: bullshit degrees. AI threatens to professionalize the art of meaningless activity, widening the gap between education’s public mission and its hollow routines. In Graeber’s words, such systems inflict “profound psychological violence,” the dissonance of knowing one’s labor serves no purpose.
Universities are already caught in this loop: students going through motions they know are empty, faculty grading work they suspect wasn’t written by students, administrators celebrating “innovations” everyone else understands are destroying education. The difference from the corporate world’s “bullshit jobs” is that students must pay for the privilege of this theatre of make-believe learning.
Ideally, I wish the purpose of higher education were to learn, to explore and experiment, to build new skills. Even without AI, I know that an ecosystem of apathetic and uninspired professors reduces college and university to a mere vehicle for getting a job. But continuing down this route, I agree that the whole idea of higher education would become a pointless endeavor, just a vehicle to siphon money into a seemingly parasitic industry.
Is AI a tool or technology?
Purser poses another question: is AI a tool or a technology? He describes the two as follows:
Tools help us accomplish tasks; technologies reshape the very environments in which we think, work, and relate. As philosopher Peter Hershock observes, we don’t merely use technologies; we participate in them. With tools, we retain agency—we can choose when and how to use them. With technologies, the choice is subtler: they remake the conditions of choice itself. A pen extends communication without redefining it; social media transformed what we mean by privacy, friendship, even truth.
Purser argues that AI, as a technology, increases the user’s dependency on it and begins to erode their agency, citing a study from MIT:
When participants used ChatGPT to draft essays, brain scans revealed a 47 percent drop in neural connectivity across regions associated with memory, language, and critical reasoning. Their brains worked less, but they felt just as engaged—a kind of metacognitive mirage.
I’d like to argue that AI is a tool, but what makes it such a huge problem is how easy it is to misuse. In my personal experience, I’ve come to realize AI is an excellent tool for finding, parsing, and rephrasing complicated and niche information. The catch is recognizing that the raw output of AI is not sufficient as-is.
As an example, I wanted to learn how to create interactive windows on different desktop platforms from scratch. I struggled to grasp the documentation for Microsoft’s Win32 and macOS’s Cocoa APIs. The AI proved excellent at answering questions about the documentation and providing examples. However, I wouldn’t say I had learned the content until I was able to read and understand the documentation myself and redo everything on my own, unassisted. Here AI serves as a tool, kind of like training wheels. But at some point you must take those training wheels off to realistically say you understand the concepts.
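To give a sense of what I was up against, here’s a minimal sketch of what opening a bare window with Win32 looks like in C, roughly the kind of boilerplate I was asking the AI to explain. This is my own illustrative example, not from Purser’s article; the class name and window title are arbitrary placeholders, and it assumes a Windows C toolchain (e.g. MSVC with user32 linked).

```c
#include <windows.h>

// Window procedure: Windows calls this for every message (event)
// sent to our window.
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_DESTROY:
        PostQuitMessage(0);  // tell the message loop to exit
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);  // default handling
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrev, LPSTR lpCmd, int nCmdShow)
{
    // Register a "window class" describing how our windows behave.
    WNDCLASS wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInstance;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.lpszClassName = TEXT("DemoWindowClass");  // arbitrary placeholder name
    RegisterClass(&wc);

    // Create and show an actual window of that class.
    HWND hwnd = CreateWindowEx(
        0, TEXT("DemoWindowClass"), TEXT("Hello, Win32"),
        WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
        NULL, NULL, hInstance, NULL);
    ShowWindow(hwnd, nCmdShow);

    // Message loop: pump events to WndProc until WM_QUIT arrives.
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return (int)msg.wParam;
}
```

Cocoa needs its own, completely different boilerplate (NSApplication, NSWindow, delegates), which is exactly why I couldn’t call these concepts learned until I could rebuild something like this from the docs without the AI.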
The problem is that when a student has little interest in the subject, they feel little need to learn it, so for them the training wheels never come off. What I agree with more, however, is that trusting students to learn how to use AI responsibly becomes an afterthought when the companies pushing this technology (like OpenAI) benefit the most from getting as many adopters of their tool as possible. They don’t care how you use it, just that you use it and that they make the most money.
The audacity was breathtaking. Tell an 18-year-old whose financial aid, scholarship or visa depends on GPA to develop “personal AI ethics” while you profit from the very technology designed to undermine their learning. It’s classic neoliberal jiu-jitsu: reframe the erosion of institutional norms as a character-building opportunity. Yeah, like a drug dealer lecturing about personal responsibility while handing out free samples.
The other issue with not learning how to use AI responsibly is that many tend to forget the little disclaimer “ChatGPT can make mistakes.” This is especially important in higher education, where it can be harder to determine whether an AI is making mistakes when you don’t already know the content.
What’s surprising about AI and the CSU?
I’m so used to seeing anti-AI policies in just about every class I’ve taken that it’s surprising to hear the CSU becoming more lax about this campus-wide. With no deterrent to sway students toward using AI responsibly, and no effort made to teach or encourage responsible use, this really will turn some degrees into expensive, pointless pieces of paper. As someone who likes to learn how to do things properly so I can do them on my own, it’s frightening to know that it’ll be okay for students to succeed without any real work or effort.
How should higher education handle AI?
It’s hard to think through and tackle every single ethical and moral issue regarding AI. At the rate the industry is moving, it is getting less and less possible to avoid AI entirely, so at this point it may be too archaic to ban it throughout higher education. At the same time, however, allowing unrestricted use of it would create all the problems discussed above. Instead, we should start to see instruction on how to use AI responsibly, though this can be tricky to implement properly.
Generally, AI should only be used as a supplement to, not a replacement for, what a person is writing or doing. Students could use it to adjust the phrasing of their text, to search for sources (then cite them directly and verify them manually), or to help understand concepts they struggle to grasp. Similarly, professors could use it to suggest new ways to explain a topic. If a professor uses it for grading, they should give the results some manual review as well; anything a professor takes from AI should be something they can validate and explain if the need arises.
Finally, in my opinion, we shouldn’t have one single company promoting its product when it has such a high financial interest in doing so. As “open” as OpenAI would like to be perceived, I struggle to find many cases where its products were made for the betterment of the community; the company is generally closed-source, for-profit, and extremely aggressive toward competition. Why are other AIs not seeing as much promotion (at least as described in this article)? Monopolies are generally bad, so we shouldn’t help a particular company create one by supporting only them.
Modify this class to address concerns?
Personally, hearing about AI everywhere gets fatiguing. As it’s relatively new and a hot topic, the discussion comes up often, though our class doesn’t linger on any topic long enough to do a deep dive into a particular technology like AI. Still, a project or class session demonstrating useful cases for AI, how to use it properly, and the consequences of misusing it might be beneficial for understanding the technology rather than simply fearing it.