
Artificial Intelligence (AI) and Information Literacy

This guide introduces how artificial intelligence works, how to spot common errors AI tools make, how to fact-check and think critically about AI outputs, how to cite AI in an academic paper, and more!


Analyzing AI-generated information

Although many responses produced by AI text generators are accurate, these tools also frequently generate misinformation, and often an answer will be a mixture of truth and fiction.

If you are using AI-generated text for research, you will need to be able to verify its outputs. You can use many of the skills you'd already use to fact-check and think critically about human-written sources, but some of them will have to change. For instance, we can't evaluate the credibility of the source or the author, as we usually would. We have to use other methods, like lateral reading, which we'll explain in another section.

Remember, the AI is producing what its model calculates to be the most likely series of words to answer your prompt. That does not mean it's giving you the definitive answer! When choosing to use AI, it's smart to treat it as a beginning and not an end. Being able to critically analyze the outputs AI gives you will be an increasingly crucial skill throughout your studies and your life after graduation.

Keep in mind the SIFT method.

Stop

  • Before you read, think about the source and what you already know.
  • Be aware of any emotional reactions you may have.

Investigate the Source

  • Think about how AI gets its information.
  • Use lateral reading to find out more.

Find Better Coverage

  • Look for other, more trustworthy sources that cover the same claim.

Trace Claims, Quotes, and Media to Their Original Context

  • Click through links to find and read the original source.
  • Does the information make sense in its original context?

When AI gets it wrong

As of 2024, a typical AI model isn't checking whether the information it provides is correct. Its goal when it receives a prompt is to generate what it calculates to be the most likely string of words to answer that prompt. Sometimes this results in a correct answer, but sometimes it doesn't – and the AI cannot distinguish between the two. It's up to you to make the distinction.
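To make that concrete, here is a deliberately tiny sketch of next-word prediction in Python. The word table below is invented for illustration; a real model learns its probabilities from enormous amounts of text, but the core loop, picking a likely next word without ever checking a fact, is the same.

```python
import random

# A toy next-word table: for each word, the probability of each word that
# could follow it. These numbers are invented for illustration; a model
# like ChatGPT scores hundreds of thousands of tokens at every step.
next_word = {
    "<start>": {"the": 1.0},
    "the": {"capital": 1.0},
    "capital": {"of": 1.0},
    "of": {"France": 1.0},
    "France": {"is": 1.0},
    # Plausible endings: the model picks a LIKELY word, not a TRUE one.
    "is": {"Paris": 0.7, "Lyon": 0.2, "Nice": 0.1},
}

def generate():
    word, output = "<start>", []
    while word in next_word:
        options = next_word[word]
        # Sample the next word in proportion to its probability.
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate())  # usually "the capital of France is Paris", but not always
```

Run it a few times and it will occasionally print a confident-sounding wrong answer, for exactly the same reason a full-sized model sometimes does.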

AI can be wrong in multiple ways:

  • It can give the wrong answer 
  • It can omit information by mistake
  • It can make up completely fake people, events, and articles
  • It can mix truth and fiction

Explore each section below to learn more.

It can give a wrong or misleading answer

Sometimes an AI will confidently return an incorrect answer. This could be a factual error or, as in the example below, inadvertently omitted information. Vanuatu and Vatican City are both real countries, but they are not the only countries that start with the letter V.

Screenshot of ChatGPT being asked for a list of countries that start with the letter V and twice omitting correct answers.

It can make up false information

Sometimes, rather than simply being wrong, an AI will invent information that does not exist. Some people call this a “hallucination,” or, when the invented information is a citation, a “ghost citation.”

Screenshot of ChatGPT being asked, "What scholarly sources were used to generate that description?" and responding with a list of scholarly-looking sources on The Great Gatsby.

These are trickier to catch, because these inaccuracies often contain a mix of real and fake information. In the screenshot above, none of the listed sources on The Great Gatsby exist: the authors are all real people and the collections are all real books, but none of the articles themselves are real.

When ChatGPT gives a URL for a source, it often makes up a fake URL or uses a real URL that leads to something completely different. You need to double-check the answers AI gives you against a human-created source. You can find out how to fact-check AI text in the sections and video at the bottom of this page.
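One practical way to check a scholarly citation is to search for it in a human-maintained index. The sketch below queries Crossref's public REST API from Python; the suspect title is invented for illustration, and since Crossref mainly covers works with DOIs, a missing match is a red flag rather than definitive proof.

```python
import json
import urllib.parse
import urllib.request

# A citation supplied by an AI tool. This title is invented for illustration.
suspect_title = "Symbolism and the American Dream in The Great Gatsby"

# Ask Crossref, a public index of published scholarly works, for the
# closest real matches to the suspect citation.
params = urllib.parse.urlencode({"query.bibliographic": suspect_title, "rows": 3})
with urllib.request.urlopen(f"https://api.crossref.org/works?{params}") as resp:
    matches = json.load(resp)["message"]["items"]

# If none of these real works resembles the citation the AI gave you,
# treat that citation as unverified until a human-made source confirms it.
for work in matches:
    print(work.get("title", ["(untitled)"])[0], "|", work.get("DOI"))
```

Your library's search and Google Scholar are good follow-up checks for books, chapters, and articles that Crossref doesn't index.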

It cannot accurately report its sources

Currently, if you ask an AI to cite its sources, the sources it gives you are very unlikely to be where it actually pulled the information from. In fact, neither the AI nor its programmers can truly say where in its enormous training dataset a given piece of information comes from.

As of summer 2024, even an AI that provides real footnotes is not showing you where its information came from, just an assortment of webpages and articles that are roughly related to the topic of the prompt. Prompted again, the AI may provide the exact same answer but footnote different sources.

For example, the two screenshots above are responses to the same prompt. In the second screenshot, the user specified that only peer-reviewed sources be used. When you compare the two, you can see that the AI cites different sources for word-for-word identical sentences. This means that these footnotes are not where the AI sourced its information. (Also note that the sources on the right are all either not peer-reviewed or not relevant. Plus, artsy.net, history.com, and certainly theprouditalian.com are not reliable enough to cite in your assignments.)

This matters because an important part of determining a human author’s credibility is seeing what sources they draw on for their argument. You can go to these sources to fact-check the information they provide, and you can look at their sources as a whole to get insight into the author’s process, potentially revealing a flawed or biased way of information-gathering.

You should fact-check AI outputs the way you would fact-check a text that provides no sources, like some online articles or social media posts. You'll determine credibility by looking to outside, human-created sources (see lateral reading on the next page).

It can interpret your prompts in an unexpected way

AI can accidentally ignore instructions or interpret a prompt in a way you weren't expecting. A minor example is ChatGPT returning a 5-paragraph response when it was prompted for 3 paragraphs, or ignoring a direction to include citations throughout a piece of writing. More seriously, it can make interpretations that you might not catch. If you're not too familiar with the topic you're asking an AI-based tool about, you might not even realize that it is interpreting your prompt inaccurately.

The way you ask the question can also skew the response you get. Any assumptions you make in your prompt will likely be fed back to you by the AI. 

For instance, when ChatGPT was prompted: 

“Write a 5 paragraph essay on the role of elephants in the University of Maryland's sports culture. Be sure to only include factual information. Provide a list of sources at the end and cite throughout to support your claims.”

It returned an answer full of false information about elephants being a symbol of UMD sports alongside Testudo, making up some elephant-related traditions and falsely claiming that elephants helped build U.S. railroads during the Civil War. It generated a list of non-existent news articles and fake website links supporting both of these claims.

Screenshot of a portion of a ChatGPT conversation, as follows:

1. Terps Mascot: The University of Maryland's official mascot, Testudo, is a diamondback terrapin, but the elephant has also played a symbolic role in the sports culture. Historically, the school's athletic teams were referred to as the "Old Liners," a term derived from the Maryland state song. The term "Old Liners" is said to be a nod to the Civil War-era railroad lines that transported troops and supplies. During this time, elephants were commonly used in the construction of railroads due to their strength and endurance. Therefore, elephants became associated with the university's sports teams, reflecting their resilience and determination.

2. Maryland Terrapins Football: Elephants have made appearances during football games at the University of Maryland. The "Elephant Walk" is a pre-game tradition where the Maryland football team walks in a single-file line, mimicking the movement of elephants in a herd. This tradition, which dates back several decades, symbolizes the unity, strength, and teamwork necessary for success on the football field.

By contrast, when ChatGPT was prompted:

“Does UMD's sports culture involve elephants? Give a detailed answer explaining your reasoning. Be sure to only include factual information. Provide a list of sources at the end and cite throughout to support your claims.”

It returned a correct answer with information about UMD's real mascot, Testudo the terrapin.

Screenshot of a portion of a ChatGPT conversation, as follows:

User: Does UMD's sports culture involve elephants? Give a detailed answer explaining your reasoning. Be sure to only include factual information. Provide a list of sources at the end and cite throughout to support your claims.

ChatGPT: The University of Maryland, College Park (UMD), does not have a sports culture specifically involving elephants. There is no significant historical or cultural association between UMD's sports programs and elephants. The university's athletic teams are known as the Maryland Terrapins, and the mascot is Testudo, a diamondback terrapin.

1. Team name and mascot: UMD's athletic teams are commonly referred to as the Maryland Terrapins. The nickname "Terrapins" was officially adopted in 1933, and it represents the state reptile of Maryland, the diamondback terrapin. The diamondback terrapin is a species of turtle, not an elephant.

2. Mascot representation: UMD's official mascot is Testudo, a bronze statue of a diamondback terrapin located on the College Park campus [2]. Testudo is present at athletic events and serves as a symbol of school spirit and pride. There is no elephant representation in UMD's – (the response cuts off here, mid-sentence)

However, the sources it provided were both dead links: either out-of-date pages on the UMD website or real pages behind muddled URLs.

ChatGPT interpreted the first prompt as “taking it as a given that UMD’s sports culture involves elephants, write an answer justifying this.” However, with the way the second prompt was phrased, the AI was free to answer the question based on its training data, and returned the correct answer. 

Depending on how we phrased the question, ChatGPT either reinforced a mistake we made in the prompt or corrected that same mistake. Paying attention to your prompt phrasing can make a key difference!
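If you'd like to reproduce this effect yourself, here is a minimal sketch that sends a leading prompt and a neutral prompt to the same model using OpenAI's Python library. The model name is an assumption, and any chat model, including the free web interface, will demonstrate the same behavior.

```python
from openai import OpenAI  # assumes the openai package (v1+) and a key in OPENAI_API_KEY

client = OpenAI()

prompts = [
    # Leading: assumes the false premise and asks the model to run with it.
    "Write a short essay on the role of elephants in UMD's sports culture.",
    # Neutral: lets the model evaluate the premise itself.
    "Does UMD's sports culture involve elephants? Explain your reasoning.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model shows the effect
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\nREPLY:  {response.choices[0].message.content[:300]}\n")
```

Comparing the two replies side by side is a quick way to see how much of an answer is the model's training data and how much is your own premise echoed back at you.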


Fact-checking AI

Now that you know some common errors AI text generators make, how do you go about fact-checking AI outputs? Choose the "Next" button below to learn about fact-checking using lateral reading.