
Using AI to Teach Writing

Writing is one of the most important things students do at university, not because they need to produce documents, but because writing is how they develop and test their own thinking. A student who can construct a clear argument, anticipate objections, and revise their ideas in response to evidence is a student who has genuinely learned something. The concern with AI is not that it produces text — it is that it can produce text while completely bypassing that thinking process.

This creates both a problem and an opportunity. The problem is familiar: students may use AI to generate submissions without engaging with the material at all. The opportunity is less often discussed: when AI is incorporated deliberately into writing instruction, it becomes one of the most powerful tools available for teaching the thinking behind the writing. The key is designing tasks where the human intellectual contribution is the object of assessment, not the final document.

This article offers practical approaches for any lecturer whose course involves writing, not just those who teach writing explicitly.

Rethinking What Writing Instruction Is For

Before incorporating AI into writing tasks, it helps to be clear about what writing instruction is actually trying to achieve. If the goal is a polished, correct document, then AI poses a serious threat: it can produce such documents reliably and at scale. But if the goal is to develop a student’s capacity to think, argue, structure, and communicate — then the document is evidence of learning, not the learning itself.

This distinction changes how you design and assess writing tasks. The question to ask of every writing assignment is: what thinking does this task require, and how will I know whether the student has done it? When you can answer that question clearly, you are in a position to incorporate AI as a tool while keeping the intellectual work where it belongs — with the student.

It also means that the most AI-resistant writing tasks are not necessarily the most restrictive ones. Tasks that require students to make genuine judgements, draw on their own experience or analysis, defend positions under questioning, or demonstrate a trajectory of developing thought are all far more resistant to wholesale AI substitution than generic essay prompts — regardless of whether AI use is permitted.

Techniques for the Classroom

AI as First Draft

One of the most effective techniques is to make the AI-generated first draft the starting point of the assignment rather than the prohibited shortcut. Ask students to use an AI tool to generate a draft on the essay question, then substantially revise it based on their own knowledge, research, and critical judgement.

The submission is not the AI draft — it is the revised version, along with an annotated changelog: a document where students identify each significant change they made and explain why they made it. Did they correct a factual error? Add evidence the AI omitted? Restructure an argument that was logically weak? Replace a vague generalisation with a claim they can actually defend?

The annotation becomes the primary evidence of learning. A barely-changed AI draft with a sparse changelog reveals a student’s level of engagement just as clearly as a thoroughly reworked text with a detailed and perceptive one. The technique also teaches students to see revision as intellectual work, not just tidying — which is one of the hardest lessons in writing education.

To calibrate the task, be explicit in the brief about what level of transformation you expect. You might specify: “At least half the sentences in your final submission should be your own, and every major claim should be supported by a source you have verified.” This sets a floor without eliminating the productive friction of working with and against the AI draft.

AI as Revision Partner

Rather than giving feedback only after students submit, design a workflow where AI feedback is a required intermediate step. Ask students to share a draft with an AI tool and paste the AI’s feedback into their submission alongside their final essay.

The value of this is not that the AI’s feedback replaces yours — it often will not be as precise or disciplinary as what an experienced teacher provides — but that it creates a recorded dialogue. Students must respond to the AI’s suggestions, either by acting on them or explaining why they have not. This response is where the learning happens: a student who writes “the AI suggested I define my terms earlier, and I disagree because the argument only makes sense after the reader understands the context” is doing exactly the kind of reflective, reasoned thinking that writing instruction aims to cultivate.

This approach also redistributes your attention productively. You spend less time pointing out problems the AI has already flagged and more time engaging with the choices students have made in response.

Generating and Defeating Counterarguments

Argumentation is one of the hardest writing skills to teach, partly because students often cannot see their own arguments from the outside. AI is an effective tool for producing counterarguments that students must then respond to.

Ask students to write their thesis and the main lines of their argument, then prompt an AI to argue against them as forcefully as possible. Their task is to revise their essay so that it addresses the strongest of those objections. This can be done iteratively: write, generate objections, revise, generate objections again.

The technique surfaces weaknesses in student arguments that neither the student nor the teacher might have noticed, and it does so in a form that is immediately actionable. It also teaches students that a strong argument is one that has anticipated and addressed opposition — which is a disciplinary standard in most academic writing, but one that is rarely made explicit.

For more adversarial disciplines like law, philosophy, or policy studies, you can extend this by having the AI play the role of an opposing counsel, a philosophical interlocutor, or a policy sceptic, and asking students to sustain their argument across several rounds of challenge. This is a written version of the simulation technique and produces similar benefits in terms of deep engagement with the material.

Comparative Analysis of AI and Human Writing

Give students two texts on the same topic: one AI-generated and one written by a human expert — or, for more advanced classes, two AI-generated texts produced with different prompts. Ask students to compare them analytically, using the evaluative framework from your discipline.

What makes one more persuasive than the other? Which handles evidence better? Where does one simplify something the other engages with carefully? What is present in the human-authored text that the AI version lacks, and vice versa? This comparative exercise builds students’ sense of quality in your discipline — their understanding of what a genuinely good piece of writing in this field looks like — which is foundational for their own writing development.

For writing teachers specifically, this can be a revealing exercise in voice, specificity, and structure. AI-generated academic prose tends to be correct but generic; it rarely takes risks, makes unexpected connections, or reveals a particular mind at work. Helping students see and name that difference gives them something concrete to aim for in their own work.

Structured Brainstorming

AI is a useful collaborator in the early, generative phase of writing, when students are working out what they actually think about a topic before they begin to commit to an argument. Ask students to use an AI tool to brainstorm: to explore different angles on a question, generate potential lines of argument, or identify the main debates in the field.

The task is then to make a deliberate choice: which of these directions do they actually find most interesting or defensible, and why? Requiring students to document their brainstorming session and reflect on the choices they made surfaces the intellectual judgement that is often invisible in a polished final submission.

This is particularly useful for students who freeze at the blank page, and for courses where the disciplinary landscape is unfamiliar. The AI brainstorm does not produce the argument — it maps the territory, and the student decides where to go.

Targeted Language Feedback for Multilingual Students

For students writing in a second or third language, the burden of surface-level language errors can obscure their actual understanding of the material. AI tools can provide detailed, patient feedback on grammar, syntax, and clarity in ways that would be impractical for a teacher to do at scale — and without the time pressure or potential embarrassment of a face-to-face correction.

This frees your feedback for higher-order concerns: argument structure, use of evidence, disciplinary conventions. Ask multilingual students to run their draft through an AI tool and address any surface-level suggestions before submission. You can then focus your response on what they are actually thinking, which is generally the part most worth developing.

It is worth being explicit with students about this division of labour, and about the fact that surface fluency matters in professional contexts — but that it is separate from the intellectual quality of their work, which is what your assessment primarily concerns.

Designing Writing Assignments That Work With AI

The techniques above work best when the underlying assignment is designed with AI in mind from the start. A few principles:

Specificity defeats genericism. AI performs best on generic prompts. An assignment that asks students to “discuss climate change policy” is much more susceptible to AI substitution than one that asks them to evaluate a specific piece of legislation using a framework introduced in week four of your course, drawing on at least two sources from the course reading list. The more your prompt is tied to the specific intellectual journey of your course, the less useful a generic AI response will be.

Make the process part of the submission. Requiring students to submit drafts, brainstorming notes, AI interaction logs, or revision changelogs alongside their final essay shifts the evidence of learning from the product to the process. It also makes the process visible to you, which is where you can offer the most useful feedback.

Ask for genuine personal judgement. Prompts that require students to take and defend a position that a reasonable person could contest force a kind of commitment that AI-generated text typically avoids. “Evaluate the argument made by [specific scholar] in [specific text]” is much harder to outsource than “discuss the strengths and weaknesses of social media.”

Use reflection as a lever. Ending any writing assignment with a short reflective question — “What was the hardest part of this argument to make, and why?” or “What do you still find unresolved about this topic?” — creates evidence of genuine engagement that is almost impossible to fake without having actually done the thinking. Reflection prompts also develop metacognitive habits that improve students’ writing over time.

Integrating AI Writing Tools Into Your Course Policy

If you plan to use any of these techniques, it is worth being explicit with students about your course’s approach to AI from the outset, rather than leaving them to guess. Clarity about when and how AI may be used — and what counts as the student’s own contribution — reduces anxiety, reduces unintentional misuse, and creates a better learning environment than ambiguity does.

You might consider distinguishing between different phases of the writing process in your policy: AI may be used for brainstorming and generating counterarguments but not for producing submitted prose; or AI may be used to give feedback on drafts but students must disclose which suggestions they acted on. These distinctions communicate that you understand how these tools work and have thought carefully about their role, which tends to be received better than a blanket prohibition that students know cannot be enforced.

Whatever your policy, the most important thing is to pair it with explicit instruction in how to evaluate what AI produces. Students who understand AI’s limitations — its tendency to hallucinate sources, flatten complexity, and produce confident-sounding errors — are better placed to use it responsibly and to recognise when its outputs are not good enough to build on.

Example Prompts

The two prompts below illustrate different roles for AI in the writing process. Both can be shared directly with students as part of an assignment brief.

For revision feedback

I am writing an essay for a university course. The essay question is: [paste your essay question here].

Below is my current draft. Please read it carefully and give me critical feedback on the following:

  1. Is my central argument clear? Restate what you think I am arguing in one sentence.
  2. Are there significant counterarguments I have not addressed?
  3. Where is my reasoning weakest — where might a reader find it unconvincing?
  4. Are there places where I make claims without adequate evidence?

Do not rewrite any part of my essay. Give me feedback only, so that I can revise it myself.

[Paste draft here]

For generating counterarguments

I am writing an essay arguing the following position: [state your thesis in 2–3 sentences].

My main arguments are:

  1. [First argument]
  2. [Second argument]
  3. [Third argument]

Please argue against my position as forcefully and specifically as possible. Identify the weakest points in my reasoning and the most significant objections a critical reader would raise. Do not suggest how I might fix these weaknesses — only identify them clearly so that I can decide how to respond.

In both cases, remind students that the AI’s feedback is a prompt for their own thinking, not a set of instructions to follow mechanically. Part of the intellectual work is deciding which of the AI’s observations are worth acting on, which are not, and why — a judgement that only the student, with their own understanding of the material, is in a position to make.
