Deciding whether an AI response is true or false isn't the only question. We also need to think about bias and viewpoint. We already do this with human authors, and we need to do it with AI as well.
All writing and content you see contains a point of view. Everyone is influenced by what they believe, who they are, and how they live in society. When we critically think about news articles, books, or social media posts out in the wild, we think about the author’s viewpoint and how that might affect the content we’re reading.
These texts that all of us produce every day are the basis of generative AI’s training data. AI text generators don’t have their own opinions or points of view. However, they are trained on datasets full of human opinions and points of view. Those viewpoints often surface in AI responses.
AI can be explicitly prompted to support a particular point of view (for instance, “give a 6-sentence paragraph on ramen from the perspective of someone obsessed with noodles”). But even when not prompted in any particular way, AI is not delivering a “neutral” response.
For many questions, there is no single "objective" answer. For an AI tool to generate an answer, it must choose which viewpoints to represent in its response. It's also worth remembering that we can't know exactly how the AI decides what is worth including in its response and what is not.
AI also often replicates biases and bigotry found in its training data (see Using AI carefully and thoughtfully). It is very difficult to get an AI tool to acknowledge that people in positions of authority, like doctors or professors, can be women, unless a human explicitly prompts it to. AI image editing tools have edited users to be white when prompted to make their headshot look "professional," and can sexualize or undress women, particularly women of color, when editing pictures of them for any purpose.
AI also replicates biases by leaving out people or cultures. When asked for a short history of 16th-century art, ChatGPT and Bing AI invariably only include European art.
This is the case even if you ask in other languages, like Chinese and Arabic, so the AI tool is not basing this response on the user's presumed region. China and the Arabic-speaking world were certainly producing art during the 16th century. Based on its training data, the AI has decided that when users ask for "art history," they mean "European art history," and that users only want information about the rest of the world if they specifically say so.
These are more obvious examples, but they also reveal the decision-making processes that the AI is using to answer more complex or subtle questions. The associations that an AI has learned from its training data are the basis of its “worldview,” and we can’t fully know all the connections AI has made and why it has made those connections. Sometimes these connections lead it to decisions that reinforce bigotry or give us otherwise undesirable responses. When this happens in ways we can see, it prompts the question: how is this showing up in ways that aren’t as obvious?
Now let’s try lateral reading for a second time, with a focus on the response’s perspective:
Again, the key is remembering that the AI is not delivering you the one definitive answer to your question.