When can language models be used?

Generative AI tools, such as language models, can be powerful aids in education, but their use requires thoughtful consideration. Before delegating a task to an AI, ask yourself these four critical questions.

What happens if the output is wrong?

If your task requires absolute accuracy—such as diagnosing an issue, grading exams, or publishing official materials—AI may not be the safest choice. Language models can produce plausible-sounding but incorrect or misleading answers, often referred to as hallucinations (see Limitations).

When accuracy is critical, use AI as part of an augmented decision-making process rather than as the sole source of information. For example, have the AI generate a draft that you then refine and verify yourself.

How will you check the accuracy of the answer?

AI is your assistant, not a replacement for your own expertise. Only use AI for tasks you could solve independently, given enough time. If you’re unfamiliar with the topic, the AI’s errors may go unnoticed and mislead your students.

What data are you entering?

Most generative AI models are hosted by third-party operators, so any data you enter into a prompt is transmitted to their servers. Avoid entering sensitive data such as personal information, student grades, or proprietary materials (unless you have obtained informed consent from those concerned).

As a rule of thumb, we recommend that you put nothing into a prompt that you would not post publicly on your personal social media.

If privacy is critical, explore running AI models locally on your computer. Openly available models such as LLaMA keep your data on your own machine, but they require basic programming knowledge and adequate hardware; a minimal example is sketched below.
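To make this concrete, here is a minimal sketch of local text generation using the Hugging Face transformers library in Python. The model named here is only an example of a small, openly licensed model; substitute any model that fits your hardware and its licence terms. After the one-time download of the weights, prompts are processed entirely on your machine.

```python
# Minimal sketch: local text generation with Hugging Face transformers.
# Assumes `pip install transformers torch` and enough RAM for the model;
# the model named below is just an example of a small open model.
from transformers import pipeline

# Downloads the model weights once; subsequent runs work offline,
# so your prompt never leaves your computer.
generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example; pick your own
)

prompt = "Suggest three discussion questions about photosynthesis."
output = generator(prompt, max_new_tokens=150)

# The pipeline returns a list with one dict per generated sequence.
print(output[0]["generated_text"])
```

Larger models generally give better answers but need correspondingly more memory, and often a dedicated GPU, to run at a usable speed.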

Do you require ownership of the answer?

Generative AI outputs can usually be claimed as your own, provided you comply with the model’s terms of use and applicable laws. However, there are nuances to consider, particularly if the output is for commercial purposes or resembles existing copyrighted material.

  • Non-unique responses may not qualify as intellectual property, because the same text could also be generated for other users.
  • When an output is provided directly to users (i.e., without a human in the loop), it must be clear to users that the answers do not originate from a human being.
  • Language models must not be used to break the law (e.g., through copyright infringement). Asking an AI to create something “in the style of [Author/Artist X]” could lead to outputs that resemble copyrighted work. In that case, it may be legally unclear who owns the result: the user, the artist, or the model developer.