You’re starting a new class, and while reading through the syllabus you notice a line at the bottom: “The use of AI is strictly prohibited.” In the lecture that follows, the policy sounds more casual. The professor says AI can be used for brainstorming, not drafting. By the time the first assignment is due, the rule feels like a moving target. The challenge has become writing in accordance with unfinished rules.
Across universities, the rules around AI are anything but consistent. One professor allows the use of AI, another bans it entirely, while a third encourages experimentation with no real limit. Departments enforce guidelines that contradict one another, and university-wide policies fall behind what is already happening in the classroom. Terms like “assistance” and “collaboration” appear frequently in syllabi, but their definitions are never the same. Universities have not clearly defined the boundaries of acceptable AI use, leaving students to interpret them on their own.
Expectations around plagiarism feel stable; expectations around AI do not. Copying someone else’s work results in some form of penalty, while citing the source is acceptable. Traditionally, learning meant investing time and effort in research, demonstrated through the production of academic work. With AI, however, the writing process has become shaped by tools and prompts, and the boundary between independent work and outside help has blurred. AI has reshaped the writing process itself, exposing how poorly defined the purpose of academic work already is. If an AI tool helps generate ideas for an essay, is it any different from brainstorming with a friend? Where does thinking end and assistance begin?
Proof of Learning or Proof of Compliance?
Academic work is evaluated through what can be graded, with essays and exams offering visible proof that learning has taken place. However, learning itself is messy. Drafts are abandoned, ideas change mid-sentence, and full comprehension of a topic happens slowly. When AI becomes part of this process, whether it is through suggestions or corrections, distinguishing a student’s thinking from a tool’s assistance becomes harder. What is then being assessed is whether the final product appears acceptable. Students are left to decide whether the goal is to maintain critical thinking or to merely meet expectations.
In theory, assignments are meant to build skills such as critical thinking, argumentation, and independent analysis. In practice, they function as proof that learning occurred in the ‘approved’ way. The absence of clear educational goals has led many students to develop their own informal rules regarding AI, adjusting their choices in response to the current expectations.
The Absence of Clarity
I spoke with students around Lund University about their thoughts on AI. Many refuse to let AI touch their final draft out of fear of how its use might be judged. Some have even started adjusting the writing style they developed before AI, worried that certain stylistic choices, such as the em-dash ( — ) or overly polished transitions, might be misinterpreted as artificial. Some admit they now hesitate to use Grammarly, unsure whether automated grammar corrections might be seen as unauthorized assistance. A tool once considered standard has started to feel suspicious.
The gap between intention and reality matters when AI enters the mix. Restricting certain tools is meant to protect integrity, but it does so without defining what students are supposed to learn or how that learning should be demonstrated. In doing so, it steers attention toward following the rules around AI rather than learning from the work itself.
Norms around AI in education will continue to shift, and the definitions of what is and is not allowed will evolve. This turbulence is shaping how assignments are completed and how learning itself is understood. Students are asked to anticipate judgment and make ethical decisions individually. The result is an educational experience defined less by certainty than by constant recalibration.
Until the boundaries are clearer, students will continue learning in motion, as adaptability has now become part of the curriculum. When education focuses on controlling tools rather than defining what thinking looks like, the purpose of academic work becomes harder to defend. The real issue is not with AI; it is with what academic work is meant to measure.