British Sociological Association (BSA) Virtual Annual Conference 2024, ‘Crisis, Continuity and Change’
Date: 3-5 April 2024
Location: Online
Session: Science and Technology Studies
3 April 2024 Oral Paper Presentation
The crisis of human knowledge formation in AI society: algorithmic probability and human abductive hypotheses
Tomoko Tamari
Abstract
The emergence of ChatGPT, an artificial intelligence based on a large language model (LLM), has become a central topic for those concerned with the potential risks to human creativity and imagination. Comparing human language acquisition processes with algorithmic machine language systems, the paper analyses their differences and similarities to explore the potential risks of human-machine symbiotic knowledge formation. Whereas ChatGPT relies on an LLM whose algorithm has been trained on massive amounts of text-based data, human infants learn words one by one, expanding their language capacity through bodily and sensory experience. This process is vital for humans to link an object’s name with its meaning within the complex language system of the real world. In this process, abductive inferences, which generate and verify explanatory hypotheses, help to inductively generalize language concepts. Although an LLM’s algorithm can also be seen as employing inductive reasoning, it is based on a probabilistic statistical data model, which differs from abduction in human intelligence. Human abductive inferences are not based on mathematical ‘rational’ calculation; rather, they rely on flexible, inspirational, even irrational or novel conceptualization (and generalization) through embodied experience. This is a key process in the expansion of human language systems and knowledge formation. ChatGPT generates huge volumes of ‘text-based’ knowledge without involving the intrinsic traits of human language ontogenesis. Machine-generated knowledge is recursively integrated into the corpus and becomes part of the training data for LLMs. This can distort human abductive inference, conceptual creativity, and the mechanisms of knowledge formation.