
Generative AI and Learning

This self-paced series of learning modules is designed to help you build AI literacy.

Why Learn about Prompting?

In the video Large Language Models Explained Briefly, we learn that LLMs don't "look up" answers the way Google does. Instead, they generate text by predicting the most likely next word based on patterns learned from vast amounts of training data.
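The next-word idea can be illustrated with a deliberately tiny sketch (this is not how a real LLM works internally; it is a toy bigram counter, and the corpus, function name, and behavior here are purely illustrative). It shows the core point: the model picks a statistically likely continuation rather than looking up a fact.

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): predict the next word by counting
# which word most often follows the current one in a tiny "training"
# corpus. Real models use neural networks trained on vastly more text,
# but the core idea is the same: generate a likely continuation,
# not a looked-up answer.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the most frequent follower of `word`, or None if unseen.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" most often here
```

Note that `predict_next("zebra")` returns nothing useful, because the word never appeared in training: a small-scale analogue of why LLMs struggle when a prompt lacks the context they need.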

While this can produce fluent responses, LLMs don't understand context unless it is clearly provided, and they may confidently generate false or misleading information, known as "hallucinations." Even a well-structured prompt cannot guarantee accuracy, so it is essential to fact-check outputs against credible sources.

As a student using GenAI for learning and research, it's important to know that:

  • Vague prompts can lead to inaccurate or biased responses
  • Clear, specific prompts improve the quality and relevance of results
  • Strong prompting and critical evaluation reduce the risk of misinformation

By combining thoughtful prompts with fact-checking, you can use AI more effectively and ethically in your academic work.


Learning Outcomes

At the end of this module, you should be able to:

  1. Distinguish between an AI prompt and a traditional keyword or Google search
  2. Identify the key components of an effective AI prompt
  3. Apply prompting techniques and strategies to improve AI-generated responses
  4. Describe the limitations of generative AI as an information source
  5. Use AI outputs ethically and transparently in academic settings