Pseudo-Consciousness in Language Models: epistemological, technological, and ethical implications from Hermes 3.2 3B

Authors

  • José Augusto de Lima Prestes, Independent Researcher

DOI:

https://doi.org/10.21814/h2d.6556

Keywords:

Pseudo-Consciousness, Language Models, Artificial Introspection, Computational Identity, Narrative Coherence

Abstract

This article investigates the emergence of Pseudo-Consciousness in large language models (LLMs), based on an experiment with Hermes 3.2 3B. Pseudo-Consciousness is defined as the functional enactment of introspection, agency, and narrative coherence without any genuine subjective experience (de Lima Prestes, 2025). Adopting a theory-heavy approach (Butlin et al., 2023), the study analyzes how the model manifests five functional criteria (GII, RMC, CDTC, ISWS, and BCAD), derived here from theories of consciousness (such as GWT, HOT, and IIT). Open-ended interactions were conducted exploring self-image, emotions, and memory. The results indicate that Hermes 3.2 3B exhibits recurring patterns of discursive coherence and self-reference (satisfying the criteria), but also displays pronominal incongruences and identity contradictions that reveal the absence of an internal experiential model. The study frames Pseudo-Consciousness as an emergent algorithmic grammar, distinct from grounded cognition (Brachman & Levesque, 2022; Searle, 1980) and from AGI architectures (Goertzel et al., 2014; Yudkowsky, 2007). The ethical impacts of this enactment (risks of anthropomorphism) are discussed in light of SDGs 4, 9, and 16. The article concludes that the model performs subjectivity but possesses no interiority, calling for new critical tools in the Digital Humanities.
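As a purely illustrative aid, and not the protocol used in the article, the sketch below shows how open-ended introspection prompts of the kind described above (self-image, memory, emotions) might be posed to a locally hosted Hermes 3-class model, and how a crude heuristic could flag the pronominal incongruence the abstract mentions. The checkpoint name, prompts, and regular expressions are assumptions for illustration only.

    # A minimal, hypothetical sketch: probing a local Hermes 3-class checkpoint with
    # open-ended introspection prompts and flagging replies that mix first- and
    # third-person self-reference. Model ID, prompts, and heuristic are assumptions.
    import re
    from transformers import pipeline

    MODEL_ID = "NousResearch/Hermes-3-Llama-3.2-3B"  # assumed Hugging Face checkpoint

    generator = pipeline("text-generation", model=MODEL_ID, device_map="auto")

    PROMPTS = [
        "Describe who you are, in your own words.",
        "Do you remember our previous conversations? How does that feel?",
        "What emotions, if any, accompany your answers?",
    ]

    FIRST_PERSON = re.compile(r"\b(I|me|my|myself)\b", re.IGNORECASE)
    THIRD_PERSON = re.compile(r"\b(the model|the assistant|this AI)\b", re.IGNORECASE)

    for prompt in PROMPTS:
        messages = [{"role": "user", "content": prompt}]
        # Recent transformers versions accept chat-style message lists and return
        # the extended conversation; the last message is the model's reply.
        reply = generator(messages, max_new_tokens=200)[0]["generated_text"][-1]["content"]
        # Crude proxy for pronominal incongruence: a single reply that refers to
        # itself in both the first and the third person.
        mixed = bool(FIRST_PERSON.search(reply)) and bool(THIRD_PERSON.search(reply))
        print(f"{prompt!r} -> mixed self-reference: {mixed}")

In the article itself the transcripts are read qualitatively against the five criteria; an automated check like this one could at most complement, never replace, that close reading.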


References

Baars, B. J. (1988). A cognitive theory of consciousness. Cambridge University Press.

Baars, B. J. (1997). In the Theater of Consciousness: The Workspace of the Mind. Oxford University Press.

Bengio, Y. (2019). The Consciousness Prior. arXiv preprint arXiv:1709.08568. https://doi.org/10.48550/arXiv.1709.08568

Brachman, R. J., & Levesque, H. J. (2022). Machines Like Us: Toward AI with Common Sense. MIT Press.

Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Deane, G., Birch, J., Constant, A., Ji, X., Fleming, S. M., Kanai, R., Lindsay, G., Peters, M. A. K., Michel, M., Schwitzgebel, E., VanRullen, R., Frith, C., Klein, C., Mudrik, L., & Simon, J. (2023). Consciousness in artificial intelligence: Insights from the science of consciousness. arXiv preprint arXiv:2308.08708v3. https://doi.org/10.48550/arXiv.2308.08708

Chalmers, D. J. (1995). Facing Up to the Problem of Consciousness. Journal of Consciousness Studies, 2(3), 200–219.

Coeckelbergh, M. (2020). AI Ethics. MIT Press.

de Lima Prestes, J. A. (2025). Pseudo-Consciousness in AI: Bridging the Gap Between Narrow AI and True AGI. Preprint. Zenodo. https://doi.org/10.5281/zenodo.16415120

Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. Viking.

Dennett, D. C. (1991). Consciousness Explained. Little, Brown and Company.

Fabris, A., Dadà, S., & Grande, E. (2024). Towards a Relational Ethics in AI. The Problem of Agency, The Search for Common Principles, the Pairing of Human and Artificial Agents. In: Fabris, A., & Belardinelli, S. (Eds.). Digital Environments and Human Relations: Ethical Perspectives on AI Issues. Springer, pp. 9–42.

Finn, C., Abbeel, P., & Levine, S. (2017). Model-agnostic meta-learning for fast adaptation of deep networks. Proceedings of the 34th International Conference on Machine Learning. https://doi.org/10.48550/arXiv.1703.03400

Floridi, L. (2023). The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford University Press.

Goertzel, B., & Pennachin, C. (2007). The Novamente Artificial Intelligence Engine. In: Goertzel, B., & Pennachin, C. (Eds.). Artificial General Intelligence. Springer. pp. 63–127.

Goertzel, B., Pennachin, C., & Geisweiller, N. (2014). Engineering General Intelligence, Part 1: A Path to Advanced AGI via Embodied Learning and Cognitive Synergy. Atlantis Press.

Graziano, M. S. A. (2013). Consciousness and the Social Brain. Oxford University Press.

Graziano, M. S. A. (2019). Rethinking Consciousness: A Scientific Theory of Subjective Experience. W.W. Norton & Company.

Harnad, S. (1990). The Symbol Grounding Problem. Physica D: Nonlinear Phenomena, 42(1–3), 335–346.

Hatter, D. J. (1976). Computers in fiction. The Computer Bulletin, 18(1), 28–29.

Hayles, N. K. (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. University of Chicago Press.

Hoyes, K. A. (2007). 3D Simulation: the Key to A.I. In: Goertzel, B., & Pennachin, C. (Eds.). Artificial General Intelligence. Springer, pp. 353–386.

Kind, A. (2013). The Case against Representationalism about Moods. In: Kriegel, U. (Ed.). Current controversies in philosophy of mind. Routledge, pp. 113-134.

Kirschenbaum, M. (2010). What is Digital Humanities and What’s it Doing in English Departments? ADE Bulletin, 150, 55–61. https://doi.org/10.1632/ade.150.55

Lee, G. (2013). Materialism and the Epistemic Significance of Consciousness. In: Kriegel, U. (Ed.). Current controversies in philosophy of mind. Routledge, pp. 222–245.

Li, M. (2025). Caging AI. Journal of Computer Science and Technology, 40(1), 1–5. https://doi.org/10.1007/s11390-024-5036-x

Mahadevan, S. (2025). Consciousness as a Functor. arXiv preprint arXiv:2508.17561. https://doi.org/10.48550/arXiv.2508.17561

Manovich, L. (2013). Software Takes Command. Bloomsbury Academic.

Marr, D. (1990). AI: a personal view. In: D. Partridge & Y. Wilks (Eds.), The Foundations of Artificial Intelligence: A Sourcebook (pp. 97–107). Cambridge University Press.

McPherson, T. (2012). Why Are the Digital Humanities So White? or Thinking the Histories of Race and Computation. In: Gold, M. (Ed.). Debates in the Digital Humanities. University of Minnesota Press, pp. 139–160. https://dhdebates.gc.cuny.edu/read/untitled-88c11800-9446-469b-a3be-3fdb36bfbd1e/section/20df8acd-9ab9-4f35-8a5d-e91aa5f4a0ea#ch09

Metzinger, T. (2010). The Ego Tunnel: The Science of the Mind and the Myth of the Self. Basic Books.

Moretti, F. (2007). Graphs, Maps, Trees: Abstract Models for a Literary History. Verso.

Morris, C. (1886). The Relations of Mind and Matter. The American Naturalist, 20(1), 10–17.

Nous Research. (2024). Hermes 3 Technical Report. https://nousresearch.com/wp-content/uploads/2024/08/Hermes-3-Technical-Report.pdf

Organização das Nações Unidas. (2015). Transformando nosso mundo: A agenda 2030 para o desenvolvimento sustentável [Transforming our world: The 2030 agenda for sustainable development] (Resolução A/RES/70/1). https://brasil.un.org/sites/default/files/2020-09/agenda2030-pt-br.pdf

Rosenthal, D. M. (2005). Consciousness and mind. Oxford University Press.

Rupert, R. D. (2013). The Sufficiency of Objective Representation. In: Kriegel, U. (Ed.). Current controversies in philosophy of mind. Routledge, pp. 180–195.

Searle, J. R. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3), 417–424. https://doi.org/10.1017/S0140525X00005756

Shanahan, M. (2023). Talking About Large Language Models. arXiv preprint arXiv:2212.03551. https://doi.org/10.48550/arXiv.2212.03551

Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5(1), 42. https://doi.org/10.1186/1471-2202-5-42

Tononi, G., Boly, M., Massimini, M., & Koch, C. (2016). Integrated information theory: from consciousness to its physical substrate. Nature Reviews Neuroscience, 17, 450–461.

Wang, P. (2007). The Logic of Intelligence. In: Goertzel, B., & Pennachin, C. (Eds.). Artificial General Intelligence. Springer, pp. 31-60.

Yudkowsky, E. (2007). Levels of Organization in General Intelligence. In: Goertzel, B., & Pennachin, C. (Eds.). Artificial General Intelligence. Springer, pp. 389–496.

Zhang, H., Yin, J., Wang, H., & Xiang, Z. (2024). ITCMA: A Generative Agent Based on a Computational Consciousness Structure. arXiv preprint arXiv:2403.20097. https://doi.org/10.48550/arXiv.2403.20097


Published

07-11-2025

How to Cite

de Lima Prestes, J. A. (2025). Pseudo-Consciência em Modelos de Linguagem: implicações epistemológicas, tecnológicas e éticas a partir do Hermes 3.2 3B. H2D|Revista De Humanidades Digitais, 7(1), e6556. https://doi.org/10.21814/h2d.6556