Can you trust that the information you receive from AI-based tools is correct?
Although many responses produced by AI text generators are accurate, AI also often generates misinformation, and its answers are frequently a mixture of truth and fiction. If you are using AI-generated text for research, you need to be able to verify its outputs. Remember, the AI is producing what it calculates to be the most likely sequence of words to answer your prompt. That does not mean it’s giving you the definitive answer! When choosing to use AI, it’s smart to use it as a beginning and not an end.

AI can be wrong in multiple ways:
- It can give the wrong answer - Sometimes an AI will confidently return an incorrect answer. This could be a factual error or inadvertently omitted information.
- It can make up completely fake people, events, and articles - Sometimes, rather than simply being wrong, an AI will invent information that does not exist. Some people call this a “hallucination,” or, when the invented information is a citation, a “ghost citation.”
- It can’t tell you where its information comes from - If you ask an AI to cite its sources, the results it gives you are very unlikely to be where it is actually pulling this information. In fact, neither the AI nor its programmers can truly say where in its enormous training dataset a given piece of information comes from.
- It can misinterpret your prompt - AI can accidentally ignore instructions or interpret a prompt in a way you weren’t expecting. The way you ask the question can also skew the response you get, and any assumptions you make in your prompt will likely be fed back to you by the AI. If you’re not familiar with the topic you’re asking about, you might not even realize that the AI is interpreting your prompt inaccurately.
Again, the key is to remember that the AI is not delivering the one definitive answer to your question.