Pseudo-Consciousness in Language Models: Epistemological, Technological, and Ethical Insights from Hermes 3.2 3B
DOI: https://doi.org/10.21814/h2d.6556

Keywords: Pseudo-Consciousness, Language Models, Artificial Introspection, Computational Identity, Narrative Coherence

Abstract
This article investigates the manifestation of Pseudo-Consciousness in large language models (LLMs) through an experiment with Hermes 3.2 3B. Pseudo-Consciousness is defined as the functional simulation of introspection, agency, and narrative coherence without genuine subjectivity (de Lima Prestes, 2025). Adopting a "Theory-Heavy Approach" (Butlin et al., 2023), the study analyzes how the model manifests five functional criteria (GII, RMC, CDTC, ISWS, and BCAD) derived from established theories of consciousness, such as GWT, HOT, and IIT. Open-ended interactions were conducted exploring self-perception, emotions, and memory. The results reveal consistent patterns of introspective discourse and self-reference that satisfy the criteria, but also significant pronominal inconsistencies and identity contradictions, which indicate the absence of an internal model of "self." The study frames Pseudo-Consciousness as an emergent algorithmic grammar, distinct from grounded cognition (Brachman & Levesque, 2022; Searle, 1980) and from AGI architectures (Goertzel et al., 2014; Yudkowsky, 2007). It discusses the ethical impact of this simulation, namely the risk of anthropomorphism, in light of SDGs 4, 9, and 16. It concludes that the LLM performs subjectivity but lacks interiority, demanding new critical tools for the Digital Humanities.
References
Baars, B. J. (1988). A cognitive theory of consciousness. Cambridge University Press.
Baars, B. J. (1997). In the Theater of Consciousness: The Workspace of the Mind. Oxford University Press.
Bengio, Y. (2019). The Consciousness Prior. arXiv preprint arXiv:1709.08568. https://doi.org/10.48550/arXiv.1709.08568
Brachman, R. J., & Levesque, H. J. (2022). Machines Like Us: Toward AI with Common Sense. MIT Press.
Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Deane, G., Birch, J., Constant, A., Ji, X., Fleming, S. M., Kanai, R., Lindsay, G., Peters, M. A. K., Michel, M., Schwitzgebel, E., VanRullen, R., Frith, C., Klein, C., Mudrik, L., & Simon, J. (2023). Consciousness in artificial intelligence: Insights from the science of consciousness. arXiv preprint arXiv:2308.08708v3. https://doi.org/10.48550/arXiv.2308.08708
Chalmers, D. J. (1995). Facing Up to the Problem of Consciousness. Journal of Consciousness Studies, 2(3), 200–219.
Coeckelbergh, M. (2020). AI Ethics. MIT Press.
de Lima Prestes, J. A. (2025). Pseudo-Consciousness in AI: Bridging the Gap Between Narrow AI and True AGI. Preprint. Zenodo. https://doi.org/10.5281/zenodo.16415120
Dennett, D. C. (1991). Consciousness Explained. Little, Brown and Company.
Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. Viking.
Fabris, A., Dadà, S., & Grande, E. (2024). Towards a Relational Ethics in AI: The Problem of Agency, the Search for Common Principles, the Pairing of Human and Artificial Agents. In: Fabris, A., & Belardinelli, S. (Eds.). Digital Environments and Human Relations: Ethical Perspectives on AI Issues. Springer, pp. 9–42.
Finn, C., Abbeel, P., & Levine, S. (2017). Model-agnostic meta-learning for fast adaptation of deep networks. Proceedings of the 34th International Conference on Machine Learning. https://doi.org/10.48550/arXiv.1703.03400
Floridi, L. (2023). The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford University Press.
Goertzel, B., & Pennachin, C. (2007). The Novamente Artificial Intelligence Engine. In: Goertzel, B., & Pennachin, C. (Eds.). Artificial General Intelligence. Springer. pp. 63–127.
Goertzel, B., Pennachin, C., & Geisweiller, N. (2014). Engineering General Intelligence, Part 1: A Path to Advanced AGI via Embodied Learning and Cognitive Synergy. Atlantis Press.
Graziano, M. S. A. (2013). Consciousness and the Social Brain. Oxford University Press.
Graziano, M. S. A. (2019). Rethinking Consciousness: A Scientific Theory of Subjective Experience. W.W. Norton & Company.
Harnad, S. (1990). The Symbol Grounding Problem. Physica D: Nonlinear Phenomena, 42(1–3), 335–346.
Hatter, D. J. (1976). Computers in fiction. The Computer Bulletin, 18(1), 28–29.
Hayles, N. K. (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. University of Chicago Press.
Hoyes, K. A. (2007). 3D Simulation: the Key to A.I. In: Goertzel, B., & Pennachin, C. (Eds.). Artificial General Intelligence. Springer, pp. 353–386.
Kind, A. (2013). The Case against Representationalism about Moods. In: Kriegel, U. (Ed.). Current controversies in philosophy of mind. Routledge, pp. 113-134.
Kirschenbaum, M. (2010). What is Digital Humanities and What’s it Doing in English Departments? ADE Bulletin, 150, 55–61. https://doi.org/10.1632/ade.150.55
Lee, G. (2013). Materialism and the Epistemic Significance of Consciousness. In: Kriegel, U. (Ed.). Current controversies in philosophy of mind. Routledge, pp. 222–245.
Li, M. (2025). Caging AI. Journal of Computer Science and Technology, 40(1), 1–5. https://doi.org/10.1007/s11390-024-5036-x
Mahadevan, S. (2025). Consciousness as a Functor. arXiv preprint arXiv:2508.17561. https://doi.org/10.48550/arXiv.2508.17561
Manovich, L. (2013). Software Takes Command. Bloomsbury Academic.
Marr, D. (1990). AI: a personal view. In: D. Partridge & Y. Wilks (Eds.), The Foundations of Artificial Intelligence: A Sourcebook (pp. 97–107). Cambridge University Press.
McPherson, T. (2012). Why Are the Digital Humanities So White? or Thinking the Histories of Race and Computation. In: Gold, M. (Ed.). Debates in the Digital Humanities. University of Minnesota Press, pp. 139–160. https://dhdebates.gc.cuny.edu/read/untitled-88c11800-9446-469b-a3be-3fdb36bfbd1e/section/20df8acd-9ab9-4f35-8a5d-e91aa5f4a0ea#ch09
Metzinger, T. (2010). The Ego Tunnel: The Science of the Mind and the Myth of the Self. Basic Books.
Moretti, F. (2007). Graphs, Maps, Trees: Abstract Models for a Literary History. Verso.
Morris, C. (1886). The Relations of Mind and Matter. The American Naturalist, 20(1), 10–17.
Nous Research. (2024). Hermes 3 Technical Report. https://nousresearch.com/wp-content/uploads/2024/08/Hermes-3-Technical-Report.pdf
United Nations. (2015). Transforming our world: The 2030 Agenda for Sustainable Development (Resolution A/RES/70/1). https://brasil.un.org/sites/default/files/2020-09/agenda2030-pt-br.pdf
Rosenthal, D. M. (2005). Consciousness and mind. Oxford University Press.
Rupert, R. D. (2013). The Sufficiency of Objective Representation. In: Kriegel, U. (Ed.). Current controversies in philosophy of mind. Routledge, pp. 180–195.
Searle, J. R. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3), 417–424. https://doi.org/10.1017/S0140525X00005756
Shanahan, M. (2023). Talking About Large Language Models. arXiv preprint arXiv:2212.03551. https://doi.org/10.48550/arXiv.2212.03551
Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5(1), 42. https://doi.org/10.1186/1471-2202-5-42
Tononi, G., Boly, M., Massimini, M., & Koch, C. (2016). Integrated information theory: from consciousness to its physical substrate. Nature Reviews Neuroscience, 17, 450–461.
Wang, P. (2007). The Logic of Intelligence. In: Goertzel, B., & Pennachin, C. (Eds.). Artificial General Intelligence. Springer, pp. 31–60.
Yudkowsky, E. (2007). Levels of Organization in General Intelligence. In: Goertzel, B., & Pennachin, C. (Eds.). Artificial General Intelligence. Springer, pp. 389–496.
Zhang, H., Yin, J., Wang, H., & Xiang, Z. (2024). ITCMA: A Generative Agent Based on a Computational Consciousness Structure. arXiv preprint arXiv:2403.20097. https://doi.org/10.48550/arXiv.2403.20097
License
Copyright (c) 2025 José Augusto de Lima Prestes

This work is licensed under a Creative Commons Attribution 4.0 International License.