Navigating knowledge in the age of generative AI
Minh-Hoang Nguyen
Phenikaa University
March 26, 2025
“The problem is humans’ victory is a bittersweet one. Because humans study AIs, AIs study humans’ natural stupidity.
And AIs surrender for the exact reason Tversky hinted: Humans’ natural stupidity is unlimited. In the same vein, AI’s intelligence is limited.”
—In “The Bittersweet Victory of Humans over AI”; Meandering Sobriety [1]
[SCICOMM]
The rise of generative AI technologies such as OpenAI’s ChatGPT and Google’s Gemini has profoundly transformed how we access and interact with knowledge. These tools synthesize vast datasets to deliver instant, personalized responses to user queries, often mimicking human-like communication [2]. Yet, this transformation invites critical reflection on the authorship, ownership, and reliability of AI-mediated knowledge.
Historically, knowledge dissemination has evolved through major technological shifts—from oral storytelling to the printing press and, more recently, to the digital revolution. Each transition democratized access to information. The printing press, for instance, enabled one voice to reach many, catalyzing cultural and scientific revolutions [2]. Generative AI, however, introduces a novel paradigm: reducing many voices to one synthesized narrative. While this can streamline information access, it also risks flattening diversity and embedding bias.
Understanding this shift requires grappling with concepts like informational entropy. Rooted in Shannon’s information theory [3], entropy denotes uncertainty or the amount of missing information in a system. Science combats entropy through peer review and selective rejection, filtering out less credible work to preserve the integrity of knowledge [4]. In contrast, generative AI draws from the open web—often without clear vetting—thereby increasing the risk of informational noise and misinformation.
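As a toy illustration of Shannon's measure (the function name and example probabilities here are mine, not drawn from the cited works): entropy is lowest when one answer clearly dominates, as after rigorous vetting, and highest when many conflicting sources are equally plausible, as on the unfiltered web.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A claim backed by peer-reviewed consensus: one answer dominates.
vetted = [0.9, 0.1]
# The same claim drawn from four equally plausible, conflicting web sources.
unvetted = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(vetted))    # ~0.47 bits of uncertainty
print(shannon_entropy(unvetted))  # 2.0 bits of uncertainty
```

In this hypothetical sketch, filtering (such as peer review and rejection) concentrates probability mass on credible answers and so lowers entropy; aggregating unvetted sources spreads the mass and raises it.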
The Granular Interaction Thinking Theory (GITT) offers a framework to examine how knowledge and value emerge from probabilistic information interactions. According to GITT, meaning and values arise not from isolated information units but from their interactions with contextual information and information within the mind [5]. By bypassing human cognitive filters, generative AI can disrupt this nuanced process. When users treat AI outputs as authoritative without questioning the underlying data or logic, they may forgo the critical engagement necessary for genuine understanding.
Compounding this challenge is the potential for AI to perpetuate bias and misinformation. Because these models are trained on pre-existing human data—riddled with socio-cultural distortions—they can unwittingly amplify stereotypes or inaccuracies [2]. Missteps in AI-generated content, such as historical misrepresentations or hallucinated facts, underscore the importance of intellectual humility—recognizing the limits of our knowledge and the systems we trust to deliver it.
Intellectual humility is critical for advancing human knowledge and innovation [4,6]. If AI is expected to advance understanding and innovation, can AI itself be imbued with a form of intellectual humility? Given that AI is built on human-generated data, do the boundaries of human ignorance constrain its ultimate potential? If an AI system were to reach a point where it could autonomously collect, verify, and question its own information, would it still require human input, or would it transcend us entirely?
©2025 MH Nguyen, using ChatGPT to draw a logo for the imaginary Bird Village of Wild Wise Weird [7]
These are fundamental questions worth posing as we navigate the expanding role of generative AI. Without grappling with these conceptual limits, further progress in both human and machine intelligence may become increasingly challenging. As Bentley [2] aptly observes, recognizing that we “know nothing” in the face of overwhelming information may be the most important knowledge of all. I quite agree with this viewpoint; as Albert Einstein reportedly said: “Two things are infinite: the universe and human stupidity, and I’m not sure about the universe.”
References
[1] Vuong QH. (2023). Meandering Sobriety. https://www.amazon.com/dp/B0C2RZDW85
[2] Bentley SV. (2025). Knowing you know nothing in the age of generative AI. Humanities and Social Sciences Communications, 12, 409. https://www.nature.com/articles/s41599-025-04731-0
[3] Shannon CE. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379-423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x
[4] Vuong QH, Nguyen MH. (2024). Exploring the role of rejection in scholarly knowledge production: Insights from granular interaction thinking and information theory. Learned Publishing, 37, e1636. https://doi.org/10.1002/leap.1636
[5] Vuong QH, Nguyen MH. (2024). Further on informational quanta, interactions, and entropy under the granular view of value formation. https://dx.doi.org/10.2139/ssrn.4922461
[6] Rovelli C. (2018). Reality is not what it seems: The journey to quantum gravity. Penguin.
[7] Vuong QH. (2025). Wild Wise Weird. https://www.amazon.com/dp/B0BG2NNHY6
tags:
Generative AI