The video, What are Large Language Models, uses words like "understand" and "conversation" when referring to what LLMs can do. However, it is important to note that while LLMs use sophisticated statistical techniques to process language, they do not possess human-like understanding or conversational ability. Much like Generative Adversarial Networks (GANs), LLMs learn through trial and error until they can generate outputs that mimic meaningful language. However, those outputs may not be true or accurate. LLMs are prone to "hallucinations": they can produce confident, authoritative-sounding responses that are nonetheless inaccurate, because they do not comprehend truth or meaning; they simply generate text based on patterns learned from their training data.
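To make the idea of "generating text from learned patterns" concrete, here is a deliberately simplified sketch. It is a toy word-frequency model, not how a real LLM is built (real LLMs use neural networks trained on billions of tokens), and the training text and generate function are illustrative assumptions only. What it demonstrates is that a purely pattern-driven generator will fluently repeat whatever its training data says most often, whether or not it is true.

```python
from collections import defaultdict, Counter

# Toy "training data": the false statement appears more often than the true one,
# so a pattern-based generator will prefer it.
training_text = (
    "the moon orbits the earth . "
    "the moon is made of cheese . "
    "the moon is made of cheese ."
)

# Learn which word tends to follow which -- a "learned pattern" in miniature.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def generate(prompt, max_words=10):
    """Continue the prompt with the most frequent next word at each step."""
    out = prompt.split()
    for _ in range(max_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        nxt = candidates.most_common(1)[0][0]
        out.append(nxt)
        if nxt == ".":
            break
    return " ".join(out)

print(generate("the moon"))  # prints: the moon is made of cheese .
```

The output is fluent and delivered with complete "confidence," yet it is false; truth never enters the calculation, only the frequency of patterns in the training text. Real LLMs are vastly more capable, but the underlying limitation is the same.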
If Large Language Models are prone to hallucinations, what does this mean for you, the end user?
The use of generative AI in university raises significant ethical dilemmas. A few major concerns are:
While these are significant challenges, with proper oversight, GenAI can be used in an ethical and responsible way. Complete the activities below to test your understanding of (i) the ethical dilemmas GenAI poses for learning in university and (ii) some of the strategies you can adopt to use it ethically and responsibly.
To learn more about these and other ethical considerations, see the section on Using Generative AI in the Library's GenAI Guide.
Activity 1 - Drag and Drop
Activity 2 - Select at least three strategies that best address the ethical dilemmas associated with the use of generative AI to complete your coursework.