
When can language models be used?

Generative AI tools, such as language models, can be powerful aids in education, but their use requires thoughtful consideration. Before delegating a task to an AI, ask yourself these four critical questions.

What happens if the output is incorrect?

Before using AI, consider the potential consequences of an incorrect output. If you use AI to draft an email requesting a meeting with a colleague and the tone is too formal or too informal for your relationship, that is a minor inconvenience, unlikely to have any meaningful consequences. However, AI is not a safe choice for high-stakes decisions such as evaluating (and possibly failing) students. Language models can produce plausible-sounding but incorrect or misleading outputs, often referred to as hallucinations (see Limitations).

Use AI as part of an augmented decision-making process to amplify your creativity and productivity, but not as the sole source of information.

How will you check the accuracy of the output?

AI is your assistant, not a replacement. Only use AI for tasks you could confidently solve independently, given enough time. For example, AI can quickly generate complex legal documents that seem polished and thorough. But unless you’re trained in law, you might not be able to spot errors or gaps in those documents. Thus, if you’re unfamiliar with a topic, the AI’s errors may go unnoticed, potentially misleading your students. That’s why it’s important to use AI within the limits of your own knowledge.

We have written more extensively about how to fact-check AI-generated outputs in the article on Limitations.

What data are you sharing with the AI?

Generative AI models are typically hosted by external operators, meaning any data you provide is sent to their servers and may be processed outside your control. Avoid inputting sensitive data such as personal information, student grades, or proprietary materials (unless you have obtained informed consent).

Treat prompts as semi-public. As a rule of thumb, put nothing into a prompt that you would not post on your personal social media. While some platforms offer “incognito” modes or data-privacy settings, even then you are sending potentially confidential information to a third party for processing.

If privacy is critical, explore running AI models locally on your own computer. Open-weight models such as LLaMA can offer privacy because your data never leaves your machine, but they require basic programming knowledge and adequate hardware.
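For illustration, here is a minimal sketch of fully local text generation using Python and the Hugging Face transformers library; this is one possible setup, not the only option, and the model name below is just an example of a small open-weight chat model.

```python
# Minimal sketch: fully local text generation with Hugging Face transformers.
# Assumptions: the `transformers` and `torch` packages are installed, and the
# example model (TinyLlama, a small open-weight chat model) fits your hardware.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example model (~2.2 GB)
)

# After the one-time model download, inference runs entirely on your machine:
# the prompt below is never sent to an external server.
prompt = "Explain in two sentences why local AI inference protects privacy."
output = generator(prompt, max_new_tokens=100, do_sample=False)
print(output[0]["generated_text"])
```

The same privacy property holds for other local runners (e.g., llama.cpp or Ollama): the key point is that prompts and outputs stay on hardware you control.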

Do you require ownership of the output?

Generative AI outputs can usually be claimed as your own [1], provided you comply with the model’s terms of use and applicable laws. However, there are nuances to consider, particularly if the output is intended for commercial purposes or resembles existing copyrighted material.

  • When the creative outcome is driven by predominant human intellectual activity, i.e., where a person makes free and creative choices during conception, execution, or revision, the work is generally eligible for copyright protection [2]. The Court of Justice of the European Union supports this position, emphasizing that existing IP laws apply when AI is used as an assistive tool [3].
  • On the other hand, if the AI operates autonomously with little to no human contribution to the creative output, IP protection may not apply. Minor modifications to AI-generated outputs, such as light editing, are usually insufficient to establish legal ownership [4].
  • Language models must not be used to commit legal infringements (e.g., copyright violations). Asking AI to create something “in the style of [Author/Artist X]” could lead to outputs that resemble copyrighted work. In such cases, it can be legally unclear who owns the result: the user, the artist, or the model developer [5][6].
  • Importantly, when an output is provided directly to users (i.e., without a human in the loop), it must be made clear that the answers do not originate from a human being.

References & Footnotes


  1. Eshraghian, J. K. (2020). Human ownership of artificial creativity. Nature Machine Intelligence, 2(3), 157–160. https://doi.org/10.1038/s42256-020-0161-x

  2. Hugenholtz, P. B., & Quintais, J. P. (2021). Copyright and artificial creation: Does EU copyright law protect AI-assisted output? International Review of Intellectual Property and Competition Law, 52(9), 1190–1216. https://doi.org/10.1007/s40319-021-01115-0

  3. Court of Justice of the European Union (CJEU), judgment of 1 December 2011, case C-145/10, Painer, ECLI:EU:C:2011:798.

  4. Novelli, C., Casolari, F., Hacker, P., Spedicato, G., & Floridi, L. (2024). Generative AI in EU law: Liability, privacy, intellectual property, and cybersecurity (Version 4). arXiv. https://doi.org/10.48550/ARXIV.2401.07348

  5. Guadamuz, A. (2021). Do androids dream of electric copyright? Comparative analysis of originality in artificial intelligence generated works. In J.-A. Lee, R. Hilty, & K.-C. Liu (Eds.), Artificial Intelligence and Intellectual Property (1st ed., pp. 147–176). Oxford University Press. https://doi.org/10.1093/oso/9780198870944.003.0008

  6. Hilty, R. M., Hoffmann, J., & Scheuerer, S. (2021). Intellectual property justification for artificial intelligence. In J.-A. Lee, R. Hilty, & K.-C. Liu (Eds.), Artificial Intelligence and Intellectual Property (1st ed., pp. 50–72). Oxford University Press. https://doi.org/10.1093/oso/9780198870944.003.0004
