At the end of this section, you should be able to distinguish between an AI prompt and a traditional keyword or Google search to choose the most effective tool for academic tasks.
At first glance, prompting an AI tool and typing a query into Google might feel similar. But they operate in fundamentally different ways.
An AI prompt is an instruction that tells an AI system how to generate a desired response. The quality of the prompt generally determines the quality of the output, and there are several strategies you can adopt to help you create or refine a prompt. Unlike a Google search, which retrieves information from indexed web pages based on keywords, an AI prompt engages with a language model to generate new content or analyze information using patterns learned from its training data.
Here are some key differences to consider:
| Search Engines (e.g., Google) | Generative AI (e.g., ChatGPT) |
| --- | --- |
| Retrieve web information in real time | Generates responses from training data |
| Provide links to existing, traceable content | May generate outdated, inaccurate, or biased content |
| Deliver results based on keyword matching | Produces outputs based on prompt clarity and structure |
| Use relatively low energy per query | Consumes significantly more energy per query |
| Best for fact-checking and locating credible sources | Best for brainstorming, summarizing content, and drafting explanations |
Choosing the right tool for your academic task can significantly impact the quality, reliability, and credibility of your work. Search engines like Google or Google Scholar are best suited for finding accurate, up-to-date, and verifiable sources, especially when you need to cite evidence or support claims. Generative AI tools, on the other hand, are helpful for brainstorming, summarizing complex ideas, or exploring unfamiliar topics. However, their responses can sometimes be inaccurate, outdated, or difficult to verify, so fact-checking is essential.
By understanding the strengths and limitations of each tool, you can make informed decisions, work more efficiently, and meet academic expectations for both accuracy and integrity.
While earlier AI models (like ChatGPT-3) generated responses solely from pre-existing training data, newer tools, like Perplexity and Copilot, use a method called retrieval-augmented generation (RAG) to do both: retrieve and generate. These hybrid tools combine elements of generative models (which produce responses based on training data) with retrieval-based models (which access real-time, external information). This approach can improve accuracy, reduce hallucinations, and provide more up-to-date information.
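For readers curious about what "retrieve and generate" means in practice, here is a minimal sketch of the RAG idea in Python. The function names (`search_web`, `language_model`) are hypothetical placeholders, not the API of any specific tool; real systems add ranking, citation tracking, and many other steps.

```python
def retrieval_augmented_generation(question, search_web, language_model):
    """Sketch of the two-step RAG pipeline: retrieve, then generate."""
    # Step 1 (retrieval): fetch current, external documents related to the question.
    documents = search_web(question, max_results=3)
    # Step 2 (generation): build a prompt that includes the retrieved text,
    # so the model can ground its answer in up-to-date sources.
    context = "\n".join(doc["text"] for doc in documents)
    prompt = f"Using these sources:\n{context}\n\nAnswer this question: {question}"
    return language_model(prompt)
```

Even in this simplified form, the key point is visible: the model's answer depends on what was retrieved, which is why the quality of the sources still matters.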
However, even if a tool uses RAG, its outputs still require critical evaluation. The quality of retrieved sources can vary, and AI may misinterpret or misrepresent the information it finds. As with any academic tool, your judgment is essential. Always verify claims, cross-check with original sources, and consider whether the information meets the requirements of your assignment or the standards of your discipline.
Some tools (like ChatGPT-3 or ChatGPT-4 without web access) were trained on older data and cannot search the internet. If you need up-to-date or verifiable information, choose a tool that includes real-time web search or use a traditional search engine instead.
Tools like Copilot or Perplexity often provide source links or citations. Basic LLMs do not. If your work requires evidence-based support, make sure you’re using a tool that provides traceable information.
Make sure you're using an AI tool that complies with university policies. For example, Copilot is supported at USask, meaning your data is protected and not used to train future models.
Check the platform’s data policy. Some tools store your prompts and use them to improve the model, while others (like Copilot) do not use your data for training. This matters if you’re entering sensitive or original academic work.
While many generative AI tools are available online, not all of them prioritize your privacy or align with university policies. When using AI tools for academic purposes, especially for learning, research, or group projects, it’s important to choose a tool that is vetted and safe.
⚠️ Avoid using public AI tools for academic tasks unless you fully understand their privacy policies and terms of use. Public tools may collect your data or use your input to train future models, which can raise serious privacy and intellectual property concerns.
The Campus-Supported Version of MS Copilot is the Safest Choice

USask's version of Copilot has been specifically configured for use by students, staff, and faculty. It offers similar capabilities for brainstorming, summarizing, or exploring different perspectives, while also protecting your privacy and data security. Here’s why Copilot is the recommended tool for students:
Accessing Copilot

Follow these instructions in PAWS to access the USask version of MS Copilot. Look for the green shield once you have logged in.
🛡️ Tip: To know if an AI tool is RAG enabled or searching the web in real-time, look for signs like citations or numbered references at the end of a response, source links you can click to view the original website, or mentions of very recent events (e.g., news from the last few days). These clues suggest the tool is retrieving up-to-date information from the web. However, even when using a USask-approved tool like Copilot, it’s still your responsibility to think critically and verify the information.
Test Your Understanding
Assuming you have permission to use AI in your course, it's important to use it thoughtfully and appropriately. Now that you’ve explored how prompting differs from traditional keyword searching, and seen that some tools can both search and generate, think carefully about which option (an AI prompt or a Google search) would be most appropriate for each academic task in the activity below.
The University of Saskatchewan's main campus is situated on Treaty 6 Territory and the Homeland of the Métis.
© University of Saskatchewan