
Generative Artificial Intelligence: Evaluating GenAI Outputs

USask Library Guide: Using GenAI
The ACCURATE-LE Checklist

To ensure the reliability of information generated by GenAI tools, it is essential to evaluate their outputs for accuracy and credibility. Doing so not only enhances the trustworthiness of the information but also sharpens critical thinking skills, upholds academic integrity, and contributes to responsible AI use.

Ask yourself the following questions before, during, and after using generative artificial intelligence (GenAI) tools.
Accuracy
GenAI is good at sounding like it is providing the right answer, but it can easily make things up (known as hallucinations) or misinterpret your question. Always verify the output against reliable sources, such as those available through the library, to confirm that the information is true and correct. After getting content from GenAI, ask yourself the following questions:
  • What information was used to train the model?
  • Does the output have links?
    • Have you checked that the links work?
    • Does the information at that link match the citation you were given?
    • Who authored the linked resource, and are they a reputable source?
    • Do they provide a balanced viewpoint?
  • Where can you find supporting evidence to back up GenAI’s claims? Verify that GenAI is correct by answering the following:
    • When you compare your supporting evidence to GenAI’s answer, was the response accurate?
    • Did GenAI miss an important detail?
    • Would you add anything?
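One practical way to work through the link-related questions above is to confirm that each link in a GenAI response actually resolves before reading further. The Python sketch below (the URLs are hypothetical examples) flags malformed addresses and requests each remaining link. Note that an "ok" result only means the page exists; you still need to read it, check its author, and compare it against the claim it supposedly supports.

```python
from urllib.parse import urlparse
from urllib.request import Request, urlopen

def looks_like_url(text: str) -> bool:
    """Cheap sanity check before making any network request."""
    parts = urlparse(text)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def check_link(url: str, timeout: float = 5.0) -> str:
    """Return 'invalid', 'ok', or 'broken' for a single URL.

    'ok' only means the server answered with a 2xx status; it says
    nothing about whether the page supports the GenAI claim.
    """
    if not looks_like_url(url):
        return "invalid"
    try:
        # Some servers reject HEAD requests; that raises an error and
        # gets reported as 'broken', which still warrants a manual look.
        req = Request(url, method="HEAD", headers={"User-Agent": "link-check"})
        with urlopen(req, timeout=timeout) as resp:
            return "ok" if 200 <= resp.status < 300 else "broken"
    except (OSError, ValueError):
        return "broken"

# Hypothetical links copied out of a GenAI response:
for url in ["https://example.com", "not-a-real-link"]:
    print(url, "->", check_link(url))
```

This catches dead or fabricated links quickly, but it cannot detect a real page cited for the wrong claim; that part of the check is still manual.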
Context
If you have not given the GenAI tool the context or background information you are working with, it makes a best guess at the answer and can sometimes provide irrelevant information. After getting output from GenAI, ask yourself:
  • Is the answer missing details that you could provide? Try revising your prompt with more specifics.
  • Would it produce a better answer if you included the who, what, when, where, why, and how in your prompt?
Clarity
Some content produced by GenAI may use awkward wording or organize a paragraph illogically. It’s important to read through the output and adjust as needed. Review the GenAI output and ask yourself:
  • Are there words that you would not use that you can replace?
  • Is some content organized in a way that makes it hard to read or understand the point?
  • Is the content repetitive?
Up to Date
Depending on the GenAI model, it may not have been trained on the most current information, and it may provide links to older material. To confirm the timeliness of the information:
  • Ask the GenAI, “When was your last information update?”
  • Do an extra search to verify that you are not missing more current information (see Accuracy) and edit your output as needed.
Relevance
Sometimes GenAI will misinterpret a question or provide unnecessary details that make the answer irrelevant. To determine whether GenAI’s response is relevant, ask yourself:
  • Did you use the best tool designed for your task?
  • Does it answer your question?
  • Does it overexplain, or include unrelated topics or unnecessary information?
  • Do you need to alter the prompt?
  • Do you need to edit the results yourself?
  • Is the output formatted as expected (e.g., bullet points instead of paragraphs)?
  • If you asked for code, does the output generated work? Have you tested it?
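For the last question above, the quickest safeguard is to run generated code against inputs whose answers you already know, including edge cases the tool may not have considered. A minimal Python sketch, using a hypothetical GenAI-produced `median` helper:

```python
# Suppose a GenAI tool produced this helper (hypothetical example):
def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:  # odd count: middle element
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2  # even count: mean of middle pair

# Never assume generated code works: exercise it with known answers,
# including edge cases such as a single-element list.
assert median([3, 1, 2]) == 2
assert median([1, 2, 3, 4]) == 2.5
assert median([5]) == 5
print("all checks passed")
```

If any assertion fails, that is concrete evidence the output needs to be revised or the prompt reworked before the code is used.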
Attribution
The Committee on Publication Ethics (2023) has stated that “AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.” Different journals and agencies have different transparency and citation requirements for using GenAI that should be reviewed before you begin. In addition, ask yourself:
  • How much of your content was created by artificial intelligence?
  • Did you clearly indicate your use of GenAI?
  • Did you provide enough detail about use as required by your guidelines?
  • Did you include in-text citations for anything produced by GenAI?
  • Did you cite the use of GenAI correctly based on your selected citation style? See our citing generative AI help guide for more information.
  • Do you need to disclose your collaboration with GenAI by using labels? A suggested resource is Martine Peters’ icons for the transparent use of artificial intelligence.
  • Do you need a disclosure statement about how you used artificial intelligence in your project overall? Here is an example of a disclosure statement from the Western Canadian Deans of Graduate Studies Working Group (2023): “This [work] was created through a synergy between human skills and AI algorithms. Specifically, [Generative AI tool name] was used to find relevant material and suggest high-level categories for analysis. The final document was comprehensively reviewed and edited by our team. Each element was written by our team, with copy-editing and phrasing help through [Generative AI tool name]. The use of AI in this manner is consistent with the guidelines and recommendations provided to us by our instructor.”
Thoroughness
GenAI outputs may be superficial or miss key information. If you are working on a project that allows you to use GenAI, review the requirements, guidelines, or rubric, cross-reference them with the GenAI output, and ask yourself:
  • Are any required aspects missing?
Equity
The content used to train GenAI models is often English-language and Western-centric, which means the model often has biases in its language use. Ask yourself:
  • Is the output reinforcing any stereotypes?
  • Is the output skewed towards portraying a group of people a certain way?
  • Is the output trying to persuade you of a certain viewpoint?
  • Is it presenting an opinion or anecdote as fact?
  • For images: Why did that specific image come up?
    • What ideas does the image reinforce?
Legal
Adhering to legal frameworks helps to protect user data, creator rights such as copyright, and Indigenous data sovereignty, and to maintain industry standards. Failure to comply with laws can lead to legal consequences, financial penalties, and reputational damage. Before using these tools, ask yourself:
  • Are you following the policies or guidelines for artificial intelligence use set out by your institution or organization, including privacy and copyright guidelines?
  • If you’re putting information into a GenAI system, did you write it yourself?
    • Is the information potentially confidential or private such as transcripts, emails or student work?
    • Is the information proprietary or does it belong to someone else?
      • If so, have you secured formal permission (such as a written contract) to use it?
  • Are you removing any personal or identifying information (such as names, addresses, birthdates etc.) before putting that content into a GenAI system?
  • If you’re using an AI tool in research, did you disclose it in the ethics application?
    • Did you disclose it to participants?
    • Did you disclose it to co-authors or collaborators?
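One way to act on the question about removing personal or identifying information is to scrub obvious patterns before pasting text into a GenAI system. The Python sketch below uses simple regular expressions for email addresses, North American phone numbers, and ISO dates; these patterns are illustrative assumptions only and will miss many identifiers (names, street addresses, student IDs), so a manual review is still required.

```python
import re

# Rough patterns for obvious identifiers. This is a sketch only:
# regular expressions cannot reliably catch every kind of personal
# information, so always review the text manually as well.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "[DATE]":  re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder labels."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(label, text)
    return text

# Hypothetical example text before sending it to a GenAI tool:
print(redact("Contact Jo at jo@example.com or 306-555-0199, born 1990-01-02."))
```

A redaction pass like this reduces what a GenAI provider ever receives, which matters because many tools may retain or train on whatever you submit.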
Ethics
When considering the use of GenAI systems, make sure to prioritize fairness, sustainability, transparency, accountability, and respect for individual rights. Ask yourself:
  • Are you allowed to use generative artificial intelligence? For what purposes?
  • Do you need to ask for approval to use generative artificial intelligence?
  • Do you need to ask for consent from others to use GenAI (e.g., for use in a study, for transcribing audio recordings of oral stories, or on behalf of other group members)?
  • Do you need to credit the use of GenAI? (see Attribution)
  • Could you face legal or reputational damage if someone found out you used GenAI?
  • Are you able and willing to take responsibility for any misinformation or inaccurate information provided by GenAI on your behalf?