As artificial intelligence continues to reshape industries across the globe, educators at UNCA grapple with how to incorporate it into their teaching while ensuring students develop critical thinking and ethical reasoning skills.
“We can’t just bury our heads in the sand and say it’s not there,” said Stephanie O’Brien, lecturer of mass communication and faculty fellow at the Center for Teaching and Learning. “Students are using it and it’s not going away.”
Having served on a UNCA task force that examined AI’s impact on teaching, staff operations and student services, O’Brien is at the forefront of AI integration. Now, through her two-year faculty fellowship, she leads efforts to build a faculty learning community focused on AI literacy and pedagogical strategies.
“The goal of the group is to help professors understand how AI tools can be used effectively in their fields while ensuring students develop critical thinking skills and ethical awareness in their usage,” O’Brien said.
With no university-wide policy in place, O’Brien said, AI’s role in university education varies by professor, subject and course level. In foundational courses such as her “Basic Video Production,” she strictly limits AI use. In contrast, she encourages students in her advanced theory course, “Media Industries and Artificial Intelligence,” to use AI for brainstorming and research while requiring them to disclose its use in all assignments.
“The point of the senior seminar last spring was to try and delve into what AI is about, where it is being used in our careers and how it is best used. So in that class, students used it all the time. I mean, that was the point of it, but they have to provide a link to their ChatGPT session, so I can see exactly how they used it,” O’Brien said.
Sara Van Wert, assistant professor of computer science, discourages AI use in lower-level courses, saying students first need to build their own conceptual model of how a computer works.
“At the 100 level, we want people to understand things just the old fashioned way. We want them to have an understanding of what a variable does and have this basic infrastructure about reasoning, about systems and about software,” Van Wert said.
In upper-level classes, Van Wert said AI tools have many valid uses. She highlights how asking specific questions of tools like ChatGPT can save time and enhance learning.
“One of the things that computer science engineers have to do is learn many different languages and applications have their own rules and online documentation,” Van Wert said. “For things like that, I definitely encourage people to use ChatGPT to ask questions like, ‘I notice how you do it in Java. What does this look like in Python?’ ChatGPT can show them the mapping of the syntax, and then for the student it’s, ‘Oh, OK, I see.’”
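The mapping Van Wert describes might look something like the sketch below. The snippet and names are hypothetical illustrations, not output from an actual ChatGPT session or material from her course.

```python
# Java:  for (int i = 0; i < names.length; i++) {
#            System.out.println(names[i]);
#        }
#
# The equivalent loop in Python iterates over the items directly:

names = ["Ada", "Grace", "Alan"]
for name in names:   # no index variable, type declarations or semicolons
    print(name)
```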
Even with the learning enhancements AI tools provide, O’Brien said some professors remain resistant to AI, wanting to keep it out of the classroom entirely because they consider its use a form of cheating.
“There’s some people who will say, ‘Never in my class. I don’t want to talk about it. I don’t want to do it. We’re not going to use it,’ and that is that professor’s prerogative,” O’Brien said.
In contrast, professors like Drake Thomas, lecturer of business, fully embrace AI’s role in the classroom – though with a slight disclaimer.
“So in my syllabus, I have a funny little phrase that says, ‘Within this class, you are welcome to use any AI foundation models unrestricted for any purpose and at no penalty, except for taking quizzes or the final exam. Do not expect this in any other class, and do not become an evangelist for AI tools in other courses when those professors may not want to leverage them in the same way or to the same extent,’” Thomas said.
With a diverse background in technology management, Thomas makes a concerted effort to engage with AI tools daily, both for personal productivity and classroom integration.
“I promised myself I’d use AI for two hours every day just to see what it could do,” Thomas said. “It never makes me feel good to use it for writing, though. Writing is an art, and I don’t want to lose that connection.”
Thomas focuses on AI as a tool for efficiency. He uses Otter.ai to transcribe and summarize his lectures; within 15 minutes, the summaries can be distributed to students as comprehensive study tools.
He said his current favorite is Google’s NotebookLM. He “feeds” the multimodal tool specific information, such as academic papers, lectures and videos, to quickly create study materials, including podcasts. By carefully controlling its inputs, Thomas minimizes AI hallucinations – instances where AI generates false or misleading output that appears credible – and produces reliable study materials students can even interact with. For podcasts, NotebookLM allows students to “call in” with their own questions and engage in conversations with the AI-generated “experts.”
“What’s really crazy about it, it’ll generate podcasts from the stuff that I feed it and these people have a conversation. They are not human, but they will have a podcast discussion that you can’t tell is not human about the topics that you told it to research,” Thomas said. “So I’ve even used this in classes where I’ve had podcasts that my students listen to where the AI podcasters are discussing how to study for my quizzes.”
For O’Brien and other professors, the impact of AI extends beyond education into broader ethical and environmental concerns. According to O’Brien, large language models are incredibly resource-intensive, a concern she hopes to explore in the faculty learning group.
Van Wert also considers the broader ethical and economic implications of these tools, noting that training AI models requires massive server farms processing enormous amounts of data, and that only a few companies have the resources to train models at that scale.
“These companies have these huge server farms where they process this data and try to train these models. It’s a lot of data. It’s proprietary. There’s not a lot of transparency and only certain companies are in the position to build frontier models. Everyone else can have access to the results, but it’s very impractical for a smaller firm to participate in this space,” Van Wert said.
Van Wert questions those who claim AI creates a level playing field. In her computer science class, “AI, Ethics and Society,” she and her students examine how increasing centralization in tech stifles innovation.
“We talk about the political economy and challenges that AI and tech are mediating,” Van Wert said. “There is a very techno-libertarian sort of ethos in tech where everything is about the market. The early internet was talked about as this democratizing force where everything would be more distributed and less centralized, meanwhile everything is flowing to a few big players.”
UNCA professors across multiple departments deal with the realities of preparing students for careers that increasingly rely on these tools. O’Brien highlights AI’s increasing presence in digital media and journalism.
“The field of journalism has been gutted over the past decade and AI tools can help streamline reporting processes,” O’Brien said.
O’Brien emphasized her point by citing a journalist from The Wall Street Journal, whom she invited to speak to the class.
“He was working on a financial story that would normally require weeks of work and research assistants to go through records. Now that can all be done pretty quickly,” O’Brien said.
While acknowledging AI’s role in engineering, Van Wert stresses there are no AI shortcuts to getting hired as a computer engineer.
“AI is a powerful tool, but it doesn’t replace the ability to think strategically,” Van Wert said. “Employers want graduates who can analyze problems and present solutions, not just rely on automation.”
Van Wert describes “whiteboarding,” an exercise where candidates solve problems and think aloud without using external tools, as the currency of the realm for landing engineering jobs.
“Whiteboarding is mostly a screening tool. Employers definitely want to know that their employees know the core algorithms and the core data structure. It’s about being able to think through problems,” Van Wert said. “In my software engineering class we do a lot of whiteboarding.”
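A classic whiteboard prompt asks candidates to reverse a singly linked list while explaining each step. The sketch below is a hypothetical illustration of that kind of exercise, not an example drawn from Van Wert’s class.

```python
# Hypothetical whiteboard exercise: reverse a singly linked list in place.
# The comments mirror the kind of reasoning a candidate narrates at the board.

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    prev = None            # nothing reversed yet
    curr = head
    while curr:            # one pass: O(n) time, O(1) extra space
        nxt = curr.next    # save the rest of the list before breaking the link
        curr.next = prev   # flip this node's pointer backward
        prev = curr        # advance prev to this node
        curr = nxt         # step forward along the saved remainder
    return prev            # prev is the old tail, now the new head

# Build 1 -> 2 -> 3, reverse it, and print 3 2 1.
node = reverse(Node(1, Node(2, Node(3))))
while node:
    print(node.value)
    node = node.next
```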
As AI tools become more integrated into education and the workplace, Thomas emphasizes that while AI can streamline problem-solving, true success depends on developing interpersonal and critical-thinking skills – qualities that technology cannot replace.
“I think with my students, solving a problem is no longer the hard part. It’s being able to present oneself in front of a classroom, to hold oneself in a confident manner, to speak articulately about a product from memory,” Thomas said. “I think those are things that will make our graduates more successful – a human being standing up in front of other human beings.”