When ChatGPT launched in late 2022, university faculty worried about the powerful artificial intelligence tool’s impact on learning.

Among other things, ChatGPT can answer questions, write papers, and create images based on natural language prompts. How would students gain knowledge and critical thinking skills, they wondered, with the ever-present temptation to claim AI-generated work as their own?

“It was very much a hair-on-fire moment,” recalls Andrew Perkins, a professor of marketing. “A lot of us faculty—and not just at Washington State University but across the nation and the world—were saying, ‘We’ve got to figure this out quick.’”

Andrew Perkins (Courtesy Carson College of Business)

Perkins is part of a WSU Carson College of Business task force convened to discuss ChatGPT and other AI tools. Faculty members recognized that banning ChatGPT’s use for assignments and policing violations wasn’t realistic. Detection programs couldn’t keep up with the rapidly evolving technology.

“Instead, we’ve chosen to incorporate ChatGPT into our teaching, with the idea it could enhance student learning and productivity,” Perkins says. “We think of it as a competitive advantage for our students compared to students from universities with more restrictive ChatGPT policies.”

At the Edward R. Murrow College of Communication, Nanu Iyer arrived at similar conclusions. WSU policy lets individual faculty members decide if and how students in their classes can use AI for assignments.

“As a teacher I feel it’s my responsibility to address this and prepare my students for a world that will use AI, deliberately or unknowingly, in every single task before too long,” says Iyer, an associate professor at WSU Vancouver.

Nanu Iyer (Courtesy WSU Vancouver)

By the end of 2023, ChatGPT had more than 180 million users, according to its developer, OpenAI. Not teaching students how to use the tool effectively, while recognizing its shortcomings, felt like turning them loose without an instruction manual, Iyer says.

Bringing AI into the classroom required changes in how they teach, both Perkins and Iyer say.

Perkins reviewed the homework for his classes, scrapping projects that students could easily complete with ChatGPT. Instead, he focused on teaching students to use it as a time-saving tool while critiquing its output.

Last fall, he built a lesson around the United Auto Workers strike. Before class, the marketing management students watched a YouTube video where a used car salesman discussed the strike’s impact on retail lots.

“The salesman is hilarious, and he’s got insightful views on the inner workings of the used car market,” Perkins says. “But he’s talking extemporaneously in the 12-minute video, so there is lots of repetition and ums and ahs.”

Students used ChatGPT to create a transcript of the video and summarize the key points. After a class discussion, they worked in teams to create marketing messages around the strike. Assigned the role of an automobile manufacturer, they used ChatGPT to craft messages tailored to dealers, customers, and other audiences.
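For readers curious about the mechanics, the transcribe-and-summarize step might look something like the sketch below, written against OpenAI’s Python SDK. The students worked in the ChatGPT web interface, so the file name, model choices, and prompt wording here are illustrative assumptions, not details from the course.

```python
# A sketch of the transcribe-and-summarize step using the OpenAI Python SDK.
# Assumptions: the audio has been extracted from the video to a local file,
# and OPENAI_API_KEY is set in the environment. File name, models, and
# prompts are hypothetical.
from openai import OpenAI

client = OpenAI()

# Step 1: transcribe the salesman's 12-minute video (hypothetical file name).
with open("uaw_strike_interview.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: summarize the transcript, stripping repetition and filler.
summary = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You condense interview transcripts into key points."},
        {"role": "user",
         "content": "Summarize the key points of this transcript, removing "
                    "repetition and filler words:\n\n" + transcript.text},
    ],
)
print(summary.choices[0].message.content)
```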

Evaluating the effectiveness of the AI-generated messages was part of the in-class assignment. Students also turned in their ChatGPT prompts and were held responsible for the accuracy of any AI-generated content.

“If you don’t have expertise in the subject matter, it’s really difficult to know if what AI is giving you is accurate and correct,” Perkins says.

That’s a point Iyer drives home with his students. ChatGPT is designed to predict the next word in a sequence, not to verify facts. That design can produce “AI hallucinations,” spouting nonsense with authority.
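A toy example makes the point concrete. The sketch below is not how ChatGPT works internally, just the same basic idea at miniature scale, with a made-up probability table: the program chains together statistically likely words with no notion of whether the result is true.

```python
# A toy next-word predictor: the same basic idea as a language model,
# shrunk to a made-up three-entry probability table. Nothing in the code
# knows or cares whether the generated sentence is true.
import random

next_word_probs = {
    "the": {"study": 0.4, "researcher": 0.35, "evidence": 0.25},
    "study": {"shows": 0.6, "found": 0.4},
    "shows": {"that": 0.9, "the": 0.1},
}

def generate(word, length=5):
    """Chain statistically likely next words, starting from `word`."""
    words = [word]
    for _ in range(length):
        options = next_word_probs.get(words[-1])
        if not options:
            break  # no learned continuation; stop
        words.append(random.choices(list(options),
                                    weights=list(options.values()))[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the study shows that"
```

Scale that idea up to billions of parameters trained on much of the internet and the output becomes fluent and often useful, but the underlying objective, picking a plausible next word, never changes.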

Last year, Iyer’s students wrote a research paper on their own and turned in a paper written by ChatGPT on the same topic. “The AI version gave them ideas for improving their writing and helped highlight common mistakes they were making,” Iyer says.

He also assigned a literature review created by ChatGPT. To their dismay, students learned that ChatGPT cited researchers who didn’t exist and referred to fabricated information in journal articles.

“They realized that AI doesn’t know how to say no, even when it doesn’t have the answer. It will cook up AI hallucinations and pass them off as real,” Iyer says. “You can’t use AI-generated work without proofing, checking, and verifying.”

Ultimately, Iyer says he’s preparing students to use AI ethically and responsibly in their future workplace. It can help them speed through mundane tasks, freeing up time for creativity and critical thinking. Besides checking AI-generated responses for accuracy, students also must know how to write effective prompts to get the most out of AI software.
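What makes a prompt effective? As a hypothetical illustration, not an example from Iyer’s course, compare a vague request with one that specifies role, audience, task, tone, and constraints:

```python
# Hypothetical contrast between a vague prompt and a structured one,
# in the spirit of the UAW-strike exercise described above.
vague_prompt = "Write about the auto strike."

effective_prompt = """You are a marketing manager at an automobile manufacturer.
Audience: franchise dealers worried about inventory during the UAW strike.
Task: draft a three-sentence reassurance message for an email newsletter.
Tone: candid but optimistic.
Constraints: no pricing promises, no legal commitments."""
```

The structured version gives the model context to work with and gives the student concrete criteria for judging whether the output hit the mark.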

“This technology will become integral to how we work,” Iyer says. “AI may not take over our jobs, but people with AI skills definitely will.”