In the video Large Language Models Explained Briefly, we learn that LLMs don't "look up" answers the way Google does. Instead, they generate text by predicting the most likely next word based on patterns learned from vast amounts of training data.
While this can produce fluent responses, LLMs don't understand context unless it is clearly provided, and they may confidently generate false or misleading information, known as "hallucinations." Even a well-structured prompt cannot guarantee accuracy, so it's essential to fact-check outputs against credible sources.
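To make the prediction idea concrete, here is a minimal Python sketch, assuming a toy bigram model (simple counts of which word tends to follow which). This is an illustrative stand-in, not how the video describes LLMs working internally: real LLMs use neural networks trained on vast text collections, but the loop of generating one likely word at a time is the same basic idea.

```python
# A toy "next word" predictor built from bigram counts. This is a
# hypothetical illustration: real LLMs use neural networks, not simple
# counts, but the loop below mirrors the same idea of generating one
# likely word at a time.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the toy corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Start from a word and repeatedly append the most likely continuation.
word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # prints: the cat sat on the cat
```

Notice that the output sounds fluent but is not grounded in any facts: the model only echoes patterns in its data. The same limitation, at a much larger scale, is why full-size LLMs can hallucinate.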
As a student using GenAI for learning and research, it's important to know that these tools can produce confident, fluent answers that are nonetheless wrong.
By combining thoughtful prompts with fact-checking, you can use AI more effectively and ethically in your academic work.
At the end of this module, you should be able to