Fort Hays State University recently hosted a panel discussion focused on the growing challenge of identifying misinformation and online media manipulation, particularly with the rise of artificial intelligence. Experts from journalism, academia, and libraries gathered to provide practical strategies for navigating the complex digital information landscape.
Key Takeaways
- Identifying fake information online is already difficult, and AI is making it harder.
- Slowing down when consuming news and using methods like SIFT can help verify content.
- Checking multiple sources (consensual validation) is crucial for confirming facts.
- Traditional editorial oversight is largely absent on online platforms, placing more responsibility on users.
- AI tools can generate convincing but false information, especially citations, and should be used cautiously.
- Peer accountability is vital to combat the spread of incorrect information online.
The Rising Challenge of Digital Deception
The event, held as part of the Hays Public Library's How-To Series, addressed a critical question: how can individuals determine the truth in an era where artificial intelligence blurs the lines between fact and fiction? Panelists highlighted the increasing difficulty of this task.
Andy Tincknell, Learning Commons Coordinator at Fort Hays State University (FHSU), referenced a study from the Communications of the Association for Computing Machinery. He stated,
"It's a 50/50 coin toss whether we can really identify whether something is fake or real. It's getting tough, and it's going to get tougher."
Tincknell explained that current technology designed to detect false information is struggling to keep pace with the rapid advancements in AI. These advancements make it easier to create and spread convincing misinformation. This technological gap poses a significant challenge for public information literacy.
Fact: AI and Misinformation
Studies suggest that the average person has a 50% chance of correctly identifying fake online information. This percentage is expected to decrease as AI technology improves its ability to create deceptive content.
Strategies for Verifying Online Information
Panelists offered several practical methods to help the public combat misinformation. Robyn Hartman, FHSU Information Literacy Librarian, advised individuals to slow down when consuming news and information online. She specifically recommended the SIFT method.
The SIFT method is an acronym for four key steps:
- Stop: Pause and consider the information before sharing or accepting it.
- Investigate the Source: Look into who created the content and their potential biases or agenda.
- Find Better Coverage: Search for other reliable sources reporting on the same topic.
- Trace Claims to the Original Context: Follow claims, quotes, and media back to where they first appeared to see how they have been presented or altered.
This approach applies not only to written articles but also to visual content. The panel suggested using tools like Google's reverse image search to trace the origin of videos and images. This can help determine if visual content has been manipulated or taken out of context.
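To make that concrete, a reverse image search can be started by handing an image's web address to a search engine. The short Python sketch below builds such a lookup link; note that the Google Lens uploadbyurl endpoint and the example image address are assumptions for illustration, not something the panel prescribed.

```python
import urllib.parse
import webbrowser

# NOTE: the Google Lens "uploadbyurl" endpoint is an assumption based on
# its current web interface; Google may change or retire it at any time.
LENS_ENDPOINT = "https://lens.google.com/uploadbyurl?url="

def reverse_image_search_url(image_url: str) -> str:
    """Build a reverse-image-search link for a publicly hosted image."""
    return LENS_ENDPOINT + urllib.parse.quote(image_url, safe="")

if __name__ == "__main__":
    # Hypothetical image address, used only for illustration.
    suspect_image = "https://example.com/viral-photo.jpg"
    link = reverse_image_search_url(suspect_image)
    print(link)
    webbrowser.open(link)  # open the search in the default browser
```

Opening the resulting page shows where else the image has appeared online, which often reveals whether it predates the event it supposedly depicts.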
The Importance of Consensual Validation
Brittney Reed, an FHSU Communication Studies Instructor, introduced another crucial strategy: consensual validation. This involves checking multiple sources to confirm the same set of facts. She emphasized the danger of relying on a single piece of information.
"If you're just seeing it from one source, I will often tell my students to take that with a grain of salt," Reed said. "You want to find several sources that are saying the same thing, particularly when it's information that seems too crazy to be true. Be looking into those other avenues."
This method encourages a proactive approach to information consumption, empowering individuals to become their own fact-checkers. While an audience member questioned whether most people would invest this much time, journalist Lynn Ann Huntington responded,
"It depends on how badly they want to know the truth."
Background: Traditional vs. Online Media
In traditional media, editors act as 'gatekeepers,' filtering out weak or false stories before publication. Online platforms often lack this level of oversight, shifting the responsibility of verification to the individual user. This change requires a different set of skills for information assessment.
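With that responsibility shifted to the user, a toy sketch may help picture what consensual validation amounts to: counting how many independent reports roughly agree with a claim. This is only an illustration; the claim and headlines are hypothetical, the similarity threshold is arbitrary, and real verification still depends on human judgment about source quality.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough textual similarity between two statements (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def corroboration_count(claim: str, headlines: list[str],
                        threshold: float = 0.6) -> int:
    """Count how many headlines roughly agree with the claim.
    The 0.6 threshold is arbitrary and only for illustration."""
    return sum(similarity(claim, h) >= threshold for h in headlines)

if __name__ == "__main__":
    # Hypothetical claim and headlines, for illustration only.
    claim = "City council approves budget for new downtown library"
    headlines = [
        "City council approves new downtown library budget",
        "Council votes to fund new downtown library",
        "Aliens spotted hovering over city hall",
    ]
    n = corroboration_count(claim, headlines)
    print(f"{n} of {len(headlines)} sources report a matching claim")
```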
The Double-Edged Sword of AI in Research
The discussion also touched upon the risks associated with using artificial intelligence for academic or general research. Panelists warned that AI can generate highly convincing but entirely false citations and articles. This means users might need to conduct further independent research to verify any information provided by AI tools.
Several panelists shared their experiences with students using AI in the classroom. Robyn Hartman recounted instances where students requested help finding articles that, upon investigation, simply did not exist. She explained,
"I ask, 'Where did you get the title of this?' and they eventually confess that they used AI for sources they could use. It's close, maybe the author has written on that topic, but then AI says, 'Here's a book that they wrote,' and that author has not written that book."
This highlights a critical issue: AI models can 'hallucinate' or invent information, presenting it as factual. This can lead to serious academic integrity problems and the spread of fabricated content.
Statistic: AI Hallucinations
While exact figures vary, studies show that Large Language Models (LLMs) can 'hallucinate' or generate incorrect information between 15% and 30% of the time, depending on the task and model. This includes fabricating sources or facts.
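One concrete safeguard is to check an AI-supplied citation against a bibliographic database before trusting it. The sketch below queries the public Crossref REST API (api.crossref.org); the matching logic is deliberately crude, and the sample citation is included only for illustration.

```python
import json
import urllib.parse
import urllib.request

def citation_exists(title: str, author: str = "") -> bool:
    """Ask the public Crossref REST API whether any indexed work has a
    (case-insensitively) matching title. A crude check: a miss does not
    prove fabrication, and a hit does not prove the AI cited it correctly."""
    query = urllib.parse.urlencode({
        "query.bibliographic": f"{title} {author}".strip(),
        "rows": "5",
    })
    with urllib.request.urlopen(
        f"https://api.crossref.org/works?{query}", timeout=10
    ) as resp:
        items = json.load(resp)["message"]["items"]
    # Require an exact (case-insensitive) title match among the top results.
    return any(
        title.lower() == t.lower()
        for item in items
        for t in item.get("title", [])
    )

if __name__ == "__main__":
    # Sample citation chosen for illustration.
    print(citation_exists("A Mathematical Theory of Communication", "Shannon"))
```

Because Crossref does not index everything, a miss is a prompt to keep digging rather than a verdict; the point is simply that an AI-generated citation should never pass unchecked.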
Leveraging AI Responsibly and Promoting Accountability
Despite the warnings, the panelists also acknowledged potential benefits of AI. They noted that AI can be a valuable tool for brainstorming ideas or generating initial drafts, rather than serving as a definitive source of truth. Using AI as a starting point, followed by rigorous human verification, is a more responsible approach.
Brittney Reed emphasized the need for greater peer accountability in the digital age. She urged individuals to hold each other responsible for the information they share online. Reed observed that online platforms can sometimes lead to "online disinhibition," where people feel less accountable for their posts.
"We get this online disinhibition where we don't have to face the consequences of posting something that was incorrect," Reed stated. "That's a message we need to share with each other to try to shift the culture a little bit."
This cultural shift involves fostering an environment where users question dubious content and encourage others to verify before sharing. Promoting critical thinking and a shared sense of responsibility for the quality of online information is key to navigating the evolving digital landscape.