
Faculty Guide to AI Literacy in the Age of ChatGPT

Faculty can use this guide to explore issues surrounding teaching and generative artificial intelligence

Fact Checking is Always Needed

[Infographic: 6 things to know about AI]

AI "hallucination"

The official term in the field of AI is "hallucination," which refers to the fact that a generative AI tool sometimes "makes stuff up." This happens because these systems are probabilistic, not deterministic. When you enter a prompt, the system draws on all the data it has been fed and looks for patterns, then produces the answer it calculates is most likely to be correct given your prompt and its training data.
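To make the idea of a probabilistic answer concrete, here is a minimal, purely illustrative Python sketch. The words and probabilities are invented, and real language models work over vastly larger vocabularies; the point is only that a model weighs possible continuations and samples one, so a low-probability (and possibly wrong) continuation can occasionally come out.

import random

# Toy illustration, not a real model: a language model assigns probabilities
# to possible next words based on patterns in its training data, then samples one.
# The words and probabilities below are invented for demonstration.
next_word_probabilities = {
    "Paris": 0.90,   # the most likely continuation of "The capital of France is"
    "Lyon": 0.06,
    "Berlin": 0.04,  # unlikely but still possible; a "hallucination" if chosen
}

words = list(next_word_probabilities.keys())
weights = list(next_word_probabilities.values())

# Sampling is random, so once in a while a low-probability (and wrong) word comes out.
print("The capital of France is", random.choices(words, weights=weights)[0])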


ChatGPT and fictional sources

One area where ChatGPT often gives fictional answers is when it is asked to create a list of sources. See this CNBC story for an explanation: AI Chatbots Can 'Hallucinate' and Make Things Up--Why it Happens and How to Spot it.


There is progress in making these models more truthful

Generative AI tools can create answers that are plausible but incorrect. Fortunately, there is progress in making these systems more truthful by grounding them in external sources of knowledge: some chatbots now link their answers to the sources from which they drew the information. Examples include Microsoft Copilot and Perplexity AI, which use internet search results to ground their answers. The internet sources used could themselves contain misinformation or disinformation, but at least with Copilot and Perplexity you can follow the links to those sources and begin verifying the answer.
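As a rough sketch of what "grounding" means, the following illustrative Python snippet retrieves supporting text before answering and attaches the source links so a reader can verify the claim. The documents, URLs, and matching logic are invented and far simpler than what Copilot or Perplexity actually do.

# Toy illustration of grounding: look up supporting documents first,
# then attach their links to the answer so the reader can verify it.
# The documents and URLs below are invented for demonstration.
documents = [
    {"url": "https://example.org/article-1", "text": "Water boils at 100 C at sea level."},
    {"url": "https://example.org/article-2", "text": "The Eiffel Tower is in Paris."},
]

def answer_with_sources(question: str) -> str:
    # Very crude "retrieval": keep documents that share a word with the question.
    matches = [d for d in documents
               if any(word.lower() in d["text"].lower() for word in question.split())]
    if not matches:
        return "No supporting source found; verify elsewhere."
    # A real grounded chatbot would pass the matches to the language model;
    # here we simply quote them and cite their URLs.
    return "\n".join(f'{d["text"]} (source: {d["url"]})' for d in matches)

print(answer_with_sources("Where is the Eiffel Tower?"))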

Some models use scholarly sources

There are also systems that combine language models with scholarly sources. For example: