
GUEST ROOM | In AI Research, Academic Integrity Falls by the Wayside


Generative artificial intelligence has saturated higher education, and there’s no shortage of scholars and pedagogical pundits to interpret it for us. The Chronicle of Higher Education published an issue dedicated to AI. Inside Higher Ed has an AI section. Schmidt Sciences, a philanthropic organization, recently awarded almost two dozen teams of scholars $11 million for AI-bolstered humanities research.

In the rush to understand and adapt to AI, and in the din of heady declarations of gold standards for working with the technology, it is vital for us to stay grounded. If we don’t, we risk being carried away in tides of what may be fittingly called “slop”: the flimsy, shifting research of those who seek to ride the wave. There may be one such researcher in Ithaca.

I first encountered Jan Burzlaff’s work in the pages of The Sun. The Jewish Studies Program postdoctoral associate published a column in which he delineated four principles for “using AI at Cornell”: Curiosity, Transparency, Interpretation and Dialogue. That was Nov. 3, 2025. Two days later, he published another article in University World News that featured three guiding principles for institutions: Teach uncertainty as a method, not a flaw; Mentor writing as craft, not output; and Build institutional patience. Just two months before that, in September, Burzlaff published an academic paper in Rethinking History, also featured in the Cornell Chronicle, that promoted five principles for “writing history in the age of AI”: Interpretation over description; Create, do not reproduce; Use Corpora, do not Be one; Refuse algorithmic ethics; and Write as a person, not a prompt.

In the Nov. 3 article, Burzlaff wrote, “we need to teach not resistance, but discernment” when working with AI. On Nov. 5, he reversed course without acknowledging the contradiction or any evolution in his thinking. He wrote, “universities are investing heavily in ‘AI literacy,’ but few teach what we might call AI resistance.”

I asked Burzlaff in an email how and why his thinking changed between writing these articles, which were published two days apart. He responded, “My thinking evolved as AI became more embedded in everyday academic life.”

The core claim underpinning Burzlaff’s principles is that ChatGPT cannot successfully render the complexity of Holocaust survivor testimony. However, in his Rethinking History publication on the subject, he neither cites ChatGPT nor quotes it at length. This is a notable omission, as it is the AI’s responses that would prove his thesis: that the Holocaust is “a limit case for generative AI.”

Burzlaff stated in an email that not citing or extensively quoting ChatGPT “was a deliberate methodological decision. The article is not an empirical study of specific prompts but a conceptual argument about AI and historical reasoning.”

One might ask why the scholar did not explain this deliberate decision in his paper. One might also recall the rigor of papers like “Does ChatGPT Have a Poetic Style?”, whose authors describe their prompts, quantitatively code the machine’s output and provide excerpts of its writing. In either case, the absence in Burzlaff’s articles of transcripts for other scholars to interpret raises the question of how anyone can assess the validity of his conclusions. Without the ability to see his conversations with the large language model, readers must simply take him at his word.

And indeed, it appears that they are. Burzlaff’s research on the topic has been featured in the Cornell Chronicle, The Cornell Daily Sun and a Cornell Center for Teaching Innovation event in October, where the scholar shared his pedagogy with other instructors looking to learn “what works” for fostering engaged learning among students. Burzlaff’s writings about his work have also appeared in Inside Higher Ed, one of two major sources of higher education news in the U.S.

The popularity of Burzlaff’s research despite its slipperiness tells a bleak story about the state of higher education. It indicates a system more concerned with — or simply more easily convinced by — sweeping declarations than intentional inquiry, a system frantically trying to make sense of the tattered remains of its raison d’être.

Indeed, falsified research on AI has rocked the Massachusetts Institute of Technology, where a student published a paper that appears to have been entirely fabricated, but enjoyed widespread prestige nonetheless. According to The Wall Street Journal, the now-former student “focused on AI, a field where peer-reviewed research is still in its infancy and the hunger for data is insatiable.”

According to research featured in University of California, Berkeley’s Haas News, “Even as AI tools help researchers write more papers faster, many of these studies are of marginal scientific merit. The resulting flood of polished but potentially superficial work” complicates the process of separating “worthy papers from unimportant and potentially misleading work.”

The solution to this informational crisis is not to rush toward striking conclusions or to amplify unproven ideas, but to take measured, conscious steps toward understanding how this technology can and should work in our classrooms, and to platform scholars who take that responsibility seriously.

Burzlaff is slated to give a talk on March 19 at the Center for Teaching Innovation’s Bovay Seminar Series in generative AI. His talk is titled “Teaching With AI: What the Classroom Reveals When Machines Join Us.” One wonders which revelations, and which quicksilver principles for engaging with AI, Burzlaff will debut at this event. It will likely be attended by dozens of lecturers and professors — some curious, some confused — all seeking to adapt to new technologies in ways that foster their students’ growth.

It is not that Burzlaff’s principles are necessarily wrong. It is that they are unaccountable and untraceable. Still, the shifting tenets have appeared in academic journals, popular outlets and official Cornell publications and events.

When artificial intelligence is a ubiquitous topic of conversation and scholarship, we must all remain vigilant about the claims made at the wild frontiers of this novel technology. In the age of AI, we must continue to demand rigorous research and academic integrity from our scholars, just as we demand it from our students.

Finley Williams '25 is a former Milstein Scholar and Arts & Culture columnist with The Cornell Daily Sun.

The Sun is interested in publishing a broad and diverse set of content from the Cornell and greater Ithaca community. We want to hear what you have to say about this topic or any of our pieces. Here are some guidelines on how to submit. And here’s our email: opinion-editor@cornellsun.com.
