Abstract
This paper investigates how AI challenges traditional principles of verification, citation, and evaluation, thereby eroding truth. AI-generated content, lacking verifiable citations, rests on probabilistic models that present information as probable rather than factual. Widespread reliance on AI for information-seeking risks diminishing critical evaluation, blending fact with fiction, and compromising academic integrity. To explore these effects, a mixed-method study was conducted with 240 Chulalongkorn University students who used various online resources, such as Google, Wikipedia, or ChatGPT, to research Rudolf Carnap's views on verificationism. Their choices were analyzed to assess patterns of dependency on these platforms. In addition, a qualitative analysis compared AI-generated responses with verified sources to gauge their accuracy. This comprehensive approach yielded significant insights into students' preferences for information sources and the critical importance of validating AI outputs in academic settings, underscoring the nuanced impacts of digital tools on traditional knowledge standards.
Presenters
Pavel Slutskiy, Associate Professor, Communication Arts, Chulalongkorn University, Krung Thep Maha Nakhon [Bangkok], Thailand
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
Keywords
AI-generated content, Knowledge preservation, Truth in the digital age