Generative AI and Learning

This self-paced series of learning modules is designed to help you build AI literacy.

Why Fact-Checking Matters

By the end of this section, you should be able to explain why Large Language Models (LLMs) aren't reliable for finding trustworthy information and describe how to clearly disclose your use of AI in academic work.

Even with well-crafted prompts, AI tools can still produce responses that are inaccurate, biased, or entirely fabricated. LLMs do not inherently “look up” answers online; they generate text based on patterns learned from training data. Unless connected to a live search or database, they cannot access real-time information. While newer models have improved accuracy and guardrails, they may still generate content that sounds convincing but is incorrect or incomplete. In practice, this means:

  • They may lack current or specific details
  • They cannot independently verify facts
  • They can sometimes present made-up content with confidence

Search engines, though more up-to-date, have their own limitations. Personalized filters, algorithmic bias, and sponsored content can all influence what appears first, and not always in the most accurate or impartial way.

That’s why fact-checking matters. Whether using AI or a search engine, it's important to critically evaluate online information and verify it with credible, authoritative sources.

Generative AI and other digital tools can support your learning in meaningful ways, but they should never be your sole method of finding reliable, up-to-date, and verifiable information. If you choose to use them for academic purposes, it’s important to understand the risks and adopt responsible practices, such as fact-checking, proper attribution, and transparent disclosure.

Strategies for Verifying AI-Generated Content

By now, you understand that AI generates content based on patterns, not verified facts, and that search engines can also present biased or sponsored results. To ensure your academic work is accurate, trustworthy, and well-supported, it’s essential to go beyond the surface and verify the information you find, whether it comes from an AI tool or a search engine.

Here are some practical ways to do that:

  • Use Google Scholar, library databases, and other authoritative sources
  • Verify both AI-generated and search engine content against trusted sources to confirm credibility
  • Use strategies like SIFT and lateral reading to strengthen your evaluation and fact-checking skills

The SIFT method, developed by Mike Caulfield, is a simple, reliable approach for evaluating online information. Originally designed for news and social media, it can easily be adapted to help you critically assess AI-generated content as well.

Diagram illustrating the SIFT method: Stop; Investigate the source; Find better coverage; Trace claims, quotes, and media to the original context.
Adapted from SIFT - The Four Moves by Mike Caulfield on Hapgood, under a CC BY-NC-SA 4.0 license.

Citing and Disclosing Your Use of AI

Whether you're brainstorming, organizing ideas, or summarizing content, it's important to clearly distinguish your own work from anything generated by AI. If you use AI tools to support your learning, be transparent: there should be no doubt about what you created and what was generated by a tool.

⚠️ Never use AI to complete graded assignments without explicit permission from your instructor or supervisor. If you do use AI and attribution is required, make sure it is clearly cited and disclosed.

Why AI Should Not Be Listed as an Author

AI-generated content may sound authoritative, but it is not created by accountable human authors. In other words, AI cannot be held responsible for the accuracy or ethical implications of its output. For this reason, major citation styles, including APA, MLA, and Chicago, recommend not listing AI tools as authors. Instead, verify the information with credible human sources and cite those reliable sources directly. Think of AI as a helpful tool, similar to a calculator or spellchecker—useful for support, but not something you credit as an original source.


If you must include AI-generated content in your work:

  • Treat it as an algorithmic output, not as a credible source of information
  • Explain how it contributed to your process (e.g., to generate ideas or create an outline)
  • Ensure all claims are supported by reputable, published sources

📝 Not sure how to cite AI in an assignment? Check with your instructor and refer to the Library’s AI Citation Guide.

Citation vs. Disclosure: What’s the Difference?

Though related, citation and disclosure serve different purposes:

Citation

Citations show your reader when words, ideas, or visuals in your work come from another source, including content generated by AI tools. Just like citing a book or an article, citing AI-generated content is essential to give proper credit and avoid plagiarism.

Use a citation when:

  • You directly quote, paraphrase, or summarize content generated by an AI tool
  • You include AI-generated material in your work (e.g., a paragraph, definition, or image created by tools like ChatGPT, Copilot, or DALL·E)

Example (APA 7):

Microsoft. (2025). Copilot (GPT-4) [Large language model]. https://copilot.microsoft.com/


Disclosure

A disclosure statement explains how you used AI in your academic work, even if you didn’t include any AI-generated content directly. Disclosure promotes transparency and helps your instructor understand your research and writing process.

Provide a disclosure statement when:

  • You used AI tools to support your work (e.g., for brainstorming, organizing ideas, checking grammar), but the final product is your own
  • The AI output influenced your thinking, even if the content wasn't directly included in your assignment

As a rule of thumb, an AI disclosure statement should clearly communicate which AI tool you used, how you used it, and what role it played in completing the work. A list of sample prompts and a statement of your responsibility for the final product are also recommended.

Example:

I used Microsoft Copilot (April 2025) to brainstorm and outline the initial structure of this essay. The types of prompts I used include: “Summarize the key arguments in this article for a first-year university student” and “Suggest a clear structure for organizing this content into three sections.” I used the output from these prompts as a starting point for drafting my paper. I reviewed, fact-checked, and revised the content to ensure accuracy. This assignment reflects my own analysis and interpretation, and I take full responsibility for the final submission.

Test Your Understanding