California college professors test out AI in the classroom, even as cheating debate continues


This spring, as debates were raging on college campuses about the proper role of generative AI in higher education, Diablo Valley College adjunct professor Frako Loden created an assignment to see how students in her American Cinema class interacted with ChatGPT.

For their final opinion piece of the semester, they were to pick a discussion question about the 1951 film “A Place in the Sun,” insert it into ChatGPT as a prompt, and then grade the response themselves. The AI got key details of the plot wrong in some cases, Loden said.

In the film, for example, protagonist George takes his girlfriend to a lake and she falls in and accidentally drowns, but ChatGPT says that he purposely killed her there. “That may be a subtle point, but it really does figure at the end when you evaluate his character,” said Loden. “ChatGPT kind of runs rough over that and suggests that he was planning it from the start and that he’s an evil dude.”

Loden’s assignment illustrates not only the limitations of ChatGPT — Loden said she found in her own research that many details of movie plots it gives are not only false, but “ideologically loaded” and “maybe even racist” — but also how professors are increasingly experimenting with its use in the classroom. California’s public higher education systems have not yet created a formal policy regarding the use of generative AI, which can create images and text that are nearly indistinguishable from those made by humans. That leaves professors in the role of watchdog, preventing breaches of academic integrity. While some focus on cracking down on cheaters, a growing number have decided that the technology is here to stay, and are assigning work that seeks to convey to students the benefits of AI as a research tool while acknowledging its limitations and propensity for error.

“Faculty have to come to a decision, whether it’s in California or nationwide. And the decision is, do you want to adopt?” said Tony Kashani, a professor of education at Antioch University who is writing a book about the use of AI in the classroom. “On campus there’s a lot of contention about this.”

When it comes to AI, technology has moved more quickly than ethics and policy, said Kashani. He said bots like ChatGPT show great promise as a “writing consultant” for students. “It’s not often that students have a chance to sit down with a professor and have long discussions about how to go about this paper, that paper, how to approach research on this topic and that topic. But ChatGPT can do that for them, provided…they know how to use the right ethics, to use it as a tool and not a replacement for their work.”

That’s the approach taken by Stanford sociology professor David Grusky, whose syllabus for a recent public policy class allowed the use of AI-generated text in assignments, with the stipulation that it be cited in the same way a conversation with a human would be.

“It’s a conversation that can be evoked at will. But it’s not different in the content,” said Grusky. “You still have to evaluate what someone says and whether or not it’s sensible.”

He believes that AI can help teach students to evaluate the quality of sources, serving academia well in the long term. “I believe our job typically in kind of the world of undergraduate instruction is to try to help people become more thoughtful, more rigorous, more analytic.”

Stanford, after a push from professors, created a baseline policy forbidding the use of AI to aid in the completion of assignments unless otherwise allowed in a class syllabus. And some California college professors remain skeptical.

“I see it more of a problem than a benefit,” said Santa Rosa Junior College history and political science instructor Johannes Van Gorp.

The advent of generative AI has increased the workload of instructors who seek to stop cheating, he said, especially since software that checks for AI-generated content is imperfect.

Van Gorp has adopted a policy forbidding the use of artificial intelligence in his classes, and runs nearly every submitted assignment through three different AI detectors to build confidence in the results.

“At first I was reporting (AI use) through the system, but it was so ubiquitous that I just started, as bad as it sounds, giving zeros on the assignments with a note: ‘This is AI generated.’”

Still, Van Gorp said he has to acknowledge that “the world is shifting.”

“Things like (the grammar-checking tool) Grammarly or whatnot, those are AI programs as well. And so where do you draw the line? And I’m not quite sure I’ve figured that one out. And certainly the institutions haven’t.”

California State University’s Academic Senate, which represents faculty, passed a resolution in March calling for a working group on artificial intelligence in higher education, to be formed by the end of August. The working group would examine AI’s limitations, opportunities for professional development of faculty, and how to ensure academic integrity, coordinating the university’s response across campuses.

To make their point, faculty used ChatGPT to draft part of the resolution itself. “What level of academic dishonesty would this constitute on a CSU campus?” the writers asked, adding, “This resolution calls upon the CSU to consider how best to leverage this technology, understanding that AI will inevitably change the nature of education independent of any action the system takes.”

Generative AI is out there and will be here in the future, said Academic Senate Chair Beth Steffel in an interview. “If we ignore it or try to ban it, it is probably to everyone’s detriment.”

Faculty at the California Community Colleges have also pledged to develop a framework that colleges can use to create policies on AI by spring 2024. The University of California has had an AI working group since 2020, which has in the past recommended the technology’s use in counseling, student retention, admissions and even test proctoring, as well as calling for individual UC campuses to set up councils to oversee their use of AI.

A March survey by the college-ranking website BestColleges found that 43% of college students say they have experience using AI tools such as ChatGPT, with 22% saying they’ve used them to complete exams or assignments.

“I imagine that number is going to grow,” said Camille Crittenden, executive director at UC Berkeley’s Center for Information Technology Research in the Interest of Society and a member of the UC workgroup. “So the teachers might as well be involved in helping them to use it responsibly, figuring out how to actually double check citations and make sure that they’re real.”

As universities grapple with setting policy, professors are flocking to social media to vent and ask questions. Many of the conversations show a split between professors who want to integrate the use of AI and those who fear allowing it into the classroom.

“I just caught a student using ChatGPT to answer questions on online quizzes,” one professor posted to Pandemic Pedagogy, a Facebook group made to assist faculty in navigating online teaching. “On my syllabus, I say that students’ work must be their own and plagiarism will result in a failing grade, but I don’t mention using these kinds of platforms…What should I do?”

(The Facebook group is invitation-only, but some posters gave CalMatters permission to cite their comments.)

Some wrote about the seeming futility of trying to catch cheaters, given the unreliability of software designed to flag AI-generated content.

“We should avoid assignments that try to ‘harness’ ChatGPT or other AIs,” another commenter argued, adding that the services might not remain free of charge and could start returning answers that are shaped to benefit advertisers.

Illustration generated via artificial intelligence program Midjourney, and finalized with Adobe Photoshop (Beta)

Elizabeth Blakey, an associate professor of journalism at Cal State Northridge, allowed master’s students in her mass communications class to use ChatGPT to help draft research proposals. “It’ll give you information, it’ll give you names, maybe some ideas or vocabulary words that you didn’t think of,” she said in an interview. “And then you can take it from there and use your own creativity and your own further research to build on that.”

She believes it helped reduce her students’ anxiety about the tool and taught them a new skill they can take into the workforce.

Beatrice Barros, one of Blakey’s students, said ChatGPT came in handy when she changed her project topic halfway through the semester but was nervous about not having enough time to complete it. Using the AI, she said, “helped me with the head start, like a motivation.”

But she learned how to navigate what the AI gave her with skepticism. “Sometimes it was very, very wrong,” she said. “It made me more aware that ChatGPT can sometimes trick you, maybe get you in trouble if you don’t read content.”

Her overall takeaway? “Sometimes it’s better to do your homework.”

Blakey’s colleague David Blumenkrantz gave students in his visual communications class a choice about whether to use AI to design a magazine. They could write their magazine’s proposal and premise, or have ChatGPT write it for them. AI-generated images could grace the magazine’s cover, with students adding in the typeface and titles over it. The only stipulation: that students explain which parts were AI-generated and why.

About a third of the class chose to use AI for the assignment, he said.

Blumenkrantz said he is currently partnering with a university in Nairobi, Kenya, to build up its photojournalism program and that his 63-page curriculum was mostly compiled from AI-generated content. He gave ChatGPT prompts, changed the responses to go more in depth into each topic, and fact-checked them, he said. He spent weeks making the curriculum, he said, when it would have taken months without the AI-generated research, a result he called “astonishing.”

Jenae Cohn, the executive director of the UC Berkeley Center for Teaching & Learning, which helps professors design effective instruction, said she and her staff often hear from faculty like Blumenkrantz, who “want to understand better how to use AI in creative ways in their teaching.”

“On the other end of the spectrum, we have a lot of questions about how students are using AI to cheat. There’s a lot of concerns about academic integrity.”

As for her own take, she said, “I don’t think that AI is going to necessarily destroy education. I don’t think it’s going to revolutionize education, either. I think it’s just going to sort of expand the toolbox of what’s possible in our classrooms.”

Walker is a fellow with the CalMatters College Journalism Network, a collaboration between CalMatters and student journalists from across California. This story and other higher education coverage are supported by the College Futures Foundation.