
Student Guide to ChatGPT

Ask a Librarian

Getting help has never been easier. Your MJC librarians are here to help.

Live Chat

Email: ask@mjc.libanswers.com

Drop-In research help

Meet with a librarian

Phone:

  • 209-575-6230 (East Campus) or
  • 209-575-6949 (West Campus)

Text: (209) 710-5270

Ask Us a Question (online form)

Fact Checking is Always Needed

AI "hallucination"

The official term in the field of AI is "hallucination": the chatbot sometimes simply makes things up. This happens because these systems are probabilistic, not deterministic. When you enter a prompt, the model draws on all the data it has been trained on and looks for patterns. It then produces the answer that is statistically most likely to fit your prompt and its training data, not an answer it has verified to be true.
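To make the "probabilistic, not deterministic" idea concrete, here is a minimal, purely illustrative sketch. It is not how any real model is implemented; the words and probabilities are made up for demonstration.

```python
# Toy illustration: a language model picks a likely next word,
# it does not look facts up in a database.
import random

def next_word(candidates):
    """Sample one word according to its probability weight."""
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Made-up distribution for the prompt "The capital of Australia is ..."
candidates = {"Canberra": 0.6, "Sydney": 0.3, "Melbourne": 0.1}

# Repeated runs can give different answers, and a wrong answer
# can still look fluent and confident.
for _ in range(3):
    print(next_word(candidates))
```

Because the answer is a weighted random draw over plausible continuations, a response that sounds confident can still be wrong, which is why fact checking is always needed.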

Which models are less prone to this?

GPT-4 (the more capable model behind ChatGPT Plus and Microsoft Copilot) has improved and is less prone to hallucination. According to OpenAI, it is "40% more likely to produce factual responses than GPT-3.5 on our internal evaluations." But it is still not perfect, so you still need to verify its output.

ChatGPT often makes up fictional sources

One area where ChatGPT often gives fictional answers is when it is asked to create a list of sources: it can invent citations that look real but do not exist. See this CNBC story for an explanation: AI Chatbots Can 'Hallucinate' and Make Things Up--Why it Happens and How to Spot it.


There is progress in making these models more truthful

Generative AI tools can create answers that are plausible but incorrect. Fortunately, there is progress in making these systems more truthful by grounding them in external sources of knowledge: some chatbots now link their answers to the sources the information came from. Microsoft Copilot and Perplexity AI, for example, use internet search results to ground their answers. Those internet sources can themselves contain misinformation or disinformation, but at least Copilot and Perplexity link to the sources they used, which gives you a starting point for verification.
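The general pattern behind this kind of grounding can be sketched very simply. The example below is only an illustration of the idea; search_web() and ask_model() are hypothetical placeholder functions, not the real APIs used by Copilot or Perplexity.

```python
# Illustrative sketch of "grounding": retrieve sources first, then ask the
# model to answer using (and citing) only those sources.

def search_web(question):
    """Placeholder search: return a few (url, snippet) pairs for the question."""
    return [
        ("https://example.org/a", "Snippet of text from source A ..."),
        ("https://example.org/b", "Snippet of text from source B ..."),
    ]

def ask_model(prompt):
    """Placeholder model call: return an answer that cites the numbered sources."""
    return "Answer based on the snippets above, citing [1] and [2]."

def grounded_answer(question):
    sources = search_web(question)
    # Put the retrieved snippets into the prompt and ask the model to cite them,
    # so every claim can be traced back to a link the reader can check.
    numbered = "\n".join(
        f"[{i + 1}] {url}: {text}" for i, (url, text) in enumerate(sources)
    )
    prompt = (
        f"Using only these sources, answer the question and cite them:\n"
        f"{numbered}\n\nQuestion: {question}"
    )
    return ask_model(prompt), [url for url, _ in sources]

answer, links = grounded_answer("When was the first Moon landing?")
print(answer)
print("Verify at:", links)
```

The key point for fact checking is the last step: because the links are returned alongside the answer, you can follow them and judge the original sources yourself.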

Scholarly sources as grounding

There are also systems that combine language models with scholarly sources, so their answers are grounded in published research rather than the open web.

Tips for Fact Checking

Remember that ChatGPT is not meant to be used as a search engine for finding information. If you try to use it that way, you'll find that it gives you seemingly complete, reliable-looking information, often without any references. This unsourced output makes it difficult to check the veracity of the information provided. For now, it's best to use Library databases, the Library Catalog, or Google Scholar for fact-finding and research. This may change in the future as more specialized search tools based on LLMs appear. If you do use AI this way, however, here are some tips for fact checking.

Try to find the same information elsewhere

  • When you need to know if a claim is true or false, go outside of your AI-generated material and scan multiple sources to see what the expert consensus seems to be.
  • Don't just believe something because AI told you. You want to know what the general understanding of the topic is, not just what one tool provided.
  • You don't need to agree with the consensus, but knowing the context and history of the information will help you better evaluate it and form a starting point for further investigation.

Always evaluate information

Don't just accept information at face value. You need to delve deeper by asking yourself a few simple questions:

  • Where did this information originate?
  • What data was this AI trained on?
  • Can I find similar information using trusted sources?

To learn more, see...

Attribution

This guide is based on "Student Guide to ChatGPT" by the University of Arizona Libraries, licensed under CC BY 4.0.