Students, teachers and employees navigate the unstoppable yet misunderstood power of ChatGPT and other AI tools.
“I think every single student needs to know AI now for their jobs,” said Stephanie O’Brien, a mass communication lecturer at UNCA. “It’s going to be in every job aspect that I can think of.”
O’Brien said this is her first semester teaching her new media studies seminar on the relationship between artificial intelligence and the media industries. The course, she said, examines how to use AI, and how to use it ethically, as students move into their careers.
“Just last year about this time I was scared to death of it,” O’Brien said. “But then I decided that I needed to learn more about it.”
While creating and teaching the course, O’Brien said, she has learned how quickly AI moves and advances.
“It’s changing every day,” O’Brien said. “I put together a basic schedule in December, and by January I was having to change it.”
O’Brien said AI’s rapid advancement keeps her on her toes and keeps the class from ever being stagnant, but it’s also hard to keep up with.
According to Nieman Lab, five of this year’s Pulitzer Prize finalists used AI in some way. The outlet reports this is the first year entrants were required to disclose AI usage.
“Every time some kind of new media or technology comes in we’re worried about what it’s going to do to the next generation, and we always find ways to adapt,” O’Brien said. “It always works out somehow.”
When it comes to students using AI to cheat on assignments, O’Brien said she does not use AI detectors. If she suspects a student of using AI, she said, it is better to have a conversation with them than to accuse them.
“My fear is we’ve gotten way ahead of ourselves and somehow the brakes need to be pumped,” said Brent Hetland, a software developer of 24 years.
Hetland uses the generative AI tool Bing Copilot in his daily workflow. Copilot works similarly to ChatGPT, producing text responses to text prompts. He said he uses it for problem-solving and coming up with solutions.
“Well, I first incorporated it into my workflow reluctantly,” Hetland said.
Hetland said he is the only member of his team of six developers who uses an AI tool in his workflow. When his teammates come to him with questions, he said, he often uses Copilot to get an answer and sends a screenshot back to them, demonstrating and promoting the AI’s power.
“I only do that when it really answers their question, which is most of the time,” Hetland said. “I would say I’m the only one that uses AI on my team, but I’m trying to get others to do it.”
Despite its power, Hetland said AI cannot do his whole job for him, only make him quicker.
“You can use it for little parts of what we do, but it can’t do the whole thing. It can’t do the whole application,” Hetland said.
He said AI is unlikely to take web developers’ jobs anytime soon. It is a great resource for troubleshooting problems and offering suggestions, he said, but it still doesn’t know what you’re doing or what application you’re creating.
“I think AI will not put a whole lot of people out of jobs right away in the development world, but what I see it doing is giving developers more time to slack off,” Hetland said. “I found that I probably have more time to slack off because I’ve gotten an answer from Copilot.”
While AI is making developers’ lives easier, Hetland said he doesn’t think employers will reduce pay or increase workloads anytime soon, because developers are slow to adopt AI.
“If you use AI for anything, be careful and be smart about it. If you let AI use what you create, then be prepared to lose what you have because it is going to try to replace you,” said UNCA student Robert Watkins.
Watkins said there is little hope for the government to regulate the use and development of AI.
“Restrictions on it are basically impossible to carry out. I think if the government wanted to put restrictions on it, they would have to place restrictions on the companies themselves,” Watkins said.
Watkins said he thinks students should not use AI to create work for them. When it comes to using it for brainstorming, his opinion is more complicated.
“I think it’s a good use of it, but I disagree with it,” Watkins said. “A big thing that I’ve noticed recently is that fewer and fewer people, especially younger Gen Z and Gen Alpha, are just completely lacking critical thinking skills.”
Watkins said he worries that future generations will have no clue how to handle tasks requiring critical thinking when AI is unavailable.
“It’s going to mean in 30 years they’re going to be a whole bunch of people in the workforce that are going to be faced with an idea and they’re not going to have any idea how to fix it,” Watkins said.
He said education could be an important way to prevent such a future.
“If you educate people, if you tell them this is what AI can do and you show them this is what AI can do and they have experience with it, they’re going to know that they need to be careful that it can easily do things that they don’t want it to do,” Watkins said.
There are several ways for people who are wary of this future to get involved.
“We can always fight in whatever way we’re comfortable with. If for some people that’s going online and sharing with as many people as they can the dangers of AI, then that’s it. If it’s going out into the streets and protesting, that’s it. If it’s writing to their representatives, then they should do it,” Watkins said. “But there’s always something that we can do.”