When can language models be used?
Generative AI, such as language models, can be powerful tools in education, but their use requires thoughtful consideration. Before delegating a task to an AI, ask yourself these four critical questions.
What happens if the output is incorrect?
Before using AI, consider the potential consequences of an incorrect output. If you are using AI to generate an email to a colleague to request a meeting and the AI sounds too formal or informal given your relationship, that is a minor inconvenience, but unlikely to have any meaningful consequences. However, AI is not a safe choice for high-stakes decisions such as evaluating (and possibly failing) students. Language models can produce plausible-sounding but incorrect or misleading outputs, often referred to as hallucinations (see Limitations).
Use AI as part of an augmented decision-making process to amplify your creativity and productivity, but not as the sole source of information.
In practice, augmented decision-making means the model can help you generate options, structure your thinking, or summarize information, but a human remains responsible for the final decision and can justify it without "because the AI said so".
Examples of safer augmentation in education:
- Brainstorming alternative lesson ideas or explanations
- Drafting rubrics, then revising them to match your learning goals
- Suggesting feedback phrasing, while you verify it against the student's work
Examples that are typically not appropriate (or require very strict safeguards and institutional guidance):
- Automatically grading, ranking, or passing/failing students
- Using a model's output as evidence of misconduct or plagiarism
- Making disciplinary, admission, or support decisions based on AI-generated "risk" predictions
If a tool is used anywhere in a decision pipeline, ensure there is clear accountability (who decides), transparency (what was used and how), and a way to contest the outcome.
How will you check the accuracy of the output?
AI is your assistant, not a replacement. Only use AI for tasks you could confidently solve independently, given enough time. For example, AI can quickly generate complex legal documents that seem polished and thorough. But unless you're trained in law, you might not be able to spot errors or gaps in those documents. Thus, if you're unfamiliar with a topic, the AI's errors may go unnoticed, potentially misleading your students. That's why it's important to use AI within the limits of your own knowledge.
We have written more extensively about how to fact-check AI-generated outputs in the article on Limitations.
What data are you sharing with the AI?
Generative AI models are typically hosted by third-party operators, meaning any data you provide is sent to their servers and may be processed outside your control. Avoid inputting sensitive data such as personal information, student grades, or proprietary materials (unless you have obtained informed consent).
Treat prompts as semi-public. As a rule of thumb, we recommend that you put nothing into a prompt that you would not post on your personal social media. While some platforms offer "incognito" modes or data privacy settings, you are still potentially sending confidential information to be processed by a third party.
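One practical safeguard is to redact obvious identifiers locally before a prompt ever leaves your machine. The sketch below illustrates the idea in Python; the patterns, placeholder labels, and assumed student-ID format are illustrative assumptions, not a complete anonymization solution.

```python
import re

# Illustrative patterns only; real anonymization requires far more care
# (names, addresses, indirect identifiers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "STUDENT_ID": re.compile(r"\b[A-Z]{2}\d{6}\b"),  # assumed ID format
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before sending a prompt."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Please draft feedback for jane.doe@uni.edu (ID AB123456)."))
# → Please draft feedback for [EMAIL] (ID [STUDENT_ID]).
```

Even with such a filter in place, review each prompt manually: pattern matching cannot catch every way a student could be identified.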
If privacy is critical, explore running AI models locally on your computer. Open-source models such as LLaMA can offer privacy, but they require basic programming knowledge and adequate hardware.
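As one illustration (assuming you have installed Ollama, a popular open-source model runner, and that your hardware can hold the model), a local model can be downloaded and queried entirely on your own machine:

```shell
# Download an open-weight model to local disk (one-time setup).
ollama pull llama3.2

# Run a prompt locally; the prompt and output never leave your computer.
ollama run llama3.2 "Suggest three ways to explain fractions to 10-year-olds."
```

The model name is an example; smaller models run on modest laptops, while larger ones need a capable GPU and plenty of RAM.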
Do you require ownership of the output?
Generative AI outputs can usually be claimed as your own,[1] provided you comply with the model's terms of use and applicable laws. However, there are nuances to consider, particularly if the output is for commercial purposes or resembles existing copyrighted material.
- When the creative outcome is driven by predominant human intellectual activity, where a person makes free and creative choices during conception, execution, or revision, the work is generally eligible for copyright protection.[2] The Court of Justice of the European Union supports this position, emphasizing that existing IP laws apply when AI is used as an assistive tool.[3]
- On the other hand, if the AI operates autonomously with little to no human contribution to the creative output, IP protection may not apply. Minor modifications to AI-generated outputs, such as light editing, are usually insufficient to establish legal ownership.[4]
- Language models must not be used for legal infringements (e.g., copyright infringement). Asking AI to create something "in the style of [Author/Artist X]" could lead to outputs that resemble copyrighted work. In that case, it may be legally unclear who owns the result: the user, the artist, or the model developer.[5][6]
- Importantly, when an output is provided directly to users (i.e., without a human in the loop), it must be clear to them that the answers do not originate from a human being.
References & Footnotes
1. Eshraghian, J. K. (2020). Human ownership of artificial creativity. Nature Machine Intelligence, 2(3), 157–160. https://doi.org/10.1038/s42256-020-0161-x
2. Hugenholtz, P. B., & Quintais, J. P. (2021). Copyright and artificial creation: Does EU copyright law protect AI-assisted output? International Review of Intellectual Property and Competition Law, 52(9), 1190–1216. https://doi.org/10.1007/s40319-021-01115-0
3. Court of Justice of the European Union (CJEU), 1 December 2011, Case C-145/10, Painer, ECLI:EU:C:2011:798
4. Novelli, C., Casolari, F., Hacker, P., Spedicato, G., & Floridi, L. (2024). Generative AI in EU law: Liability, privacy, intellectual property, and cybersecurity (Version 4). arXiv. https://doi.org/10.48550/ARXIV.2401.07348
5. Guadamuz, A. (2021). Do androids dream of electric copyright? Comparative analysis of originality in artificial intelligence generated works. In J.-A. Lee, R. Hilty, & K.-C. Liu (Eds.), Artificial Intelligence and Intellectual Property (1st ed., pp. 147–176). Oxford University Press. https://doi.org/10.1093/oso/9780198870944.003.0008
6. Hilty, R. M., Hoffmann, J., & Scheuerer, S. (2021). Intellectual property justification for artificial intelligence. In J.-A. Lee, R. Hilty, & K.-C. Liu (Eds.), Artificial Intelligence and Intellectual Property (1st ed., pp. 50–72). Oxford University Press. https://doi.org/10.1093/oso/9780198870944.003.0004