Detecting AI-Generated News: A Hybrid Classifier to Distinguish Real vs. LLM-Based Fabrication

Authors

  • Krunal Panchal, University of Massachusetts, Boston, USA

DOI:

https://doi.org/10.47941/ijce.3212

Keywords:

Fake News Detection, AI-Generated Content, Large Language Models, Entropy Analysis, Watermarking, Content Moderation

Abstract

Purpose: The widespread adoption of large language models (LLMs) has made it possible to generate convincing news articles at scale, posing significant risks to information credibility and audience trust. Beyond text, these AI-generated narratives are increasingly repurposed into short-form videos on platforms such as YouTube Shorts, Instagram Stories, and Snapchat. This study seeks to develop a reliable detection framework capable of identifying such fabricated content in both textual and video-transcribed forms.

Methodology: A hybrid classification approach was designed, combining three complementary strategies: (i) watermark signal detection to trace hidden statistical markers, (ii) token-level probability profiling to capture generation patterns, and (iii) entropy-based analysis to measure text variability. The evaluation was carried out on a purpose-built dataset consisting of authentic articles from established news outlets, synthetic outputs from models such as GPT-3.5, GPT-4, and Claude, and manually collected transcripts from video news segments.
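To make the fusion of these three signals concrete, the sketch below scores a text with an open proxy model (GPT-2 via the Hugging Face transformers library). The watermark test follows the green-list z-score idea of Kirchenbauer et al. [2] in simplified form; the function names, fusion weights, and squashing constants are illustrative assumptions rather than the paper's actual implementation.

import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Proxy scoring model; the paper does not specify which LM backs its
# detector, so GPT-2 is used here purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_logprobs_and_entropy(text):
    # Signals (ii) and (iii): per-token log-probabilities and the
    # entropy of the model's predictive distribution at each step.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)  # position t predicts t+1
    targets = ids[0, 1:]
    tok_lp = logprobs[torch.arange(targets.numel()), targets]
    entropy = -(logprobs.exp() * logprobs).sum(dim=-1)    # nats per position
    return tok_lp, entropy

def greenlist_zscore(text, gamma=0.25, seed=42):
    # Signal (i): simplified green-list watermark test in the spirit of
    # Kirchenbauer et al. [2]. The previous token seeds a pseudo-random
    # "green" subset covering a fraction gamma of the vocabulary; an
    # unusually high green-token rate yields a large z-score.
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    vocab = model.config.vocab_size
    hits = 0
    for prev, cur in zip(ids[:-1].tolist(), ids[1:].tolist()):
        g = torch.Generator().manual_seed(seed ^ prev)
        hits += int(torch.rand(vocab, generator=g)[cur] < gamma)
    n = ids.numel() - 1
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))

def hybrid_score(text, weights=(0.4, 0.3, 0.3)):
    # Fuse the three cues into one "likely AI-generated" score in [0, 1].
    # Weights and offsets are placeholders, not values tuned in the study.
    tok_lp, entropy = token_logprobs_and_entropy(text)
    z = greenlist_zscore(text)
    watermark_cue = 1.0 / (1.0 + math.exp(-z))
    prob_cue = torch.sigmoid(tok_lp.mean() + 4).item()           # high mean prob -> AI-like
    entropy_cue = 1.0 - torch.sigmoid(entropy.std() - 1).item()  # low variability -> AI-like
    w1, w2, w3 = weights
    return w1 * watermark_cue + w2 * prob_cue + w3 * entropy_cue

print(hybrid_score("Officials confirmed the policy will take effect next month."))

In a deployed detector each cue would be calibrated on held-out data; the fixed weights above only illustrate the fusion step.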

Findings: The hybrid model attained an overall accuracy of 89.3%, with precision, recall, and F1-scores consistently above 87%. Compared to baseline models using perplexity or probability alone, the proposed method demonstrated superior robustness. Moreover, the system correctly flagged 62% of synthetic video transcripts, showing its potential for multimodal applications.
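For context, the reported figures are standard binary-classification metrics. A minimal sketch of how they would be computed on a labeled test split is shown below, using placeholder labels and predictions rather than the study's evaluation data.

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical labels: 1 = AI-generated, 0 = authentic.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

acc = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary")
print(f"accuracy={acc:.3f}  precision={precision:.3f}  "
      f"recall={recall:.3f}  f1={f1:.3f}")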

Unique Contribution to Theory, Policy, and Practice: This work introduces a novel methodological integration that advances theoretical research in AI-content verification. It further informs emerging policy discussions, including compliance with the EU AI Act and platform-level content authenticity standards. From a practical perspective, the framework offers media companies and social platforms an operational tool for moderating AI-generated misinformation before it gains viral momentum.

Author Biography

Krunal Panchal, University of Massachusetts, Boston, USA

Research Scholar

References

[1] A. Vaswani et al., "Attention is All You Need," in NeurIPS, 2017.

[2] J. Kirchenbauer et al., "A Watermark for Large Language Models," arXiv:2301.10226, 2023.

[3] D. Ippolito et al., "Automatic Detection of Generated Text is Easiest when Humans are Fooled," arXiv:1911.00650, 2020.

[4] I. Solaiman et al., "Release Strategies and the Social Impacts of Language Models," arXiv:1908.09203, 2019.

[5] R. Zellers et al., "Defending Against Neural Fake News," in NeurIPS, 2019.

[6] N. Carlini et al., "Detecting AI-Generated Text via Watermarking," arXiv:2301.11093, 2023.

[7] T. Brown et al., "Language Models are Few-Shot Learners," in NeurIPS, 2020.

Published

2025-09-25

How to Cite

Panchal, K. (2025). Detecting AI-Generated News: A Hybrid Classifier to Distinguish Real vs. LLM-Based Fabrication. International Journal of Computing and Engineering, 7(21), 46–51. https://doi.org/10.47941/ijce.3212

Section

Articles