The video, What are Large Language Models, uses words like understand and conversation when referring to what LLMs can do. To be clear, while LLMs use sophisticated algorithms to process language, they are not capable of human understanding or genuine conversation. Much like Generative Adversarial Networks (GANs), they learn through trial and error until they can generate outputs that mimic meaningful language. Those outputs, however, may not be true or accurate. LLMs are therefore prone to "hallucinations": they can produce confident, authoritative-sounding responses that are nonetheless inaccurate. This is because LLMs cannot grasp truth or the meaning of words; they can only generate responses based on patterns learned from their training data.
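The idea that a model can produce fluent, confident output without any notion of truth can be illustrated with a toy sketch. The following is not a real LLM; it is a minimal bigram model (an assumption chosen for illustration, with an invented three-sentence "training corpus") that predicts each next word purely from patterns seen during training:

```python
from collections import defaultdict

# Hypothetical training text: the false claim appears more often
# than the true one, so the majority pattern is wrong.
corpus = ("the moon is made of rock . "
          "the moon is made of cheese . "
          "the moon is made of cheese .").split()

# Record which words were observed to follow each word.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length=6):
    """Emit the statistically most common continuation at each step."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        # Always pick the most frequent next word -- fluent and
        # "confident", but with no check against reality.
        words.append(max(set(options), key=options.count))
    return " ".join(words)

print(generate("the"))  # → "the moon is made of cheese ."
```

The model reproduces the majority pattern in its training data ("cheese") rather than the true statement ("rock"), which is, in miniature, why pattern-based generation can hallucinate.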
If Large Language Models are prone to "hallucinations", what does this mean for you, the end user?
The use of generative AI in university raises significant ethical dilemmas. A few major concerns are:
While these are significant challenges, with proper oversight, GenAI can be used in an ethical and responsible way. Complete the activities below to test your understanding of (i) the ethical dilemmas GenAI poses for learning in university and (ii) some of the strategies you can adopt to use it ethically and responsibly.
Activity 1 - Drag and Drop
Students can adopt several strategies to address the ethical dilemmas that arise from using GenAI in a university setting. Being well-informed, understanding its limitations, seeking guidance from professors and instructors, and engaging in open discussions are just a few. In this activity, select at least three strategies that best address the ethical dilemmas associated with using generative AI to complete your coursework.
Activity 2 - Select Strategies
The University of Saskatchewan's main campus is situated on Treaty 6 Territory and the Homeland of the Métis.
© University of Saskatchewan