Evolving Minds, Evolving Language: Metaphor as a Process of Conceptual Adaptation to Artificial Intelligence

Manuscript

Argues that cognitively loaded terms such as “understanding” and “reasoning,” when applied to large language models, are best understood neither literally nor as merely loose talk, but as metaphors that adapt our conceptual repertoire to new circumstances under the pressure of conceptual needs. Drawing on the Strawson-Kant tradition on imagination and concept-application, inferentialist semantics, Bermúdez’s theory of rational framing, and Gentner’s structure-mapping account of analogy, it develops a four-part framework: inferential rather than referential transfer; the normativity of aptness grounded in conceptual needs; the empirical evaluation of transferred inferences through mechanistic interpretability; and the career of AI metaphors from novelty toward possible conventionalization. The result is a shift away from all-or-nothing disputes about the literal applicability of cognitive terms to AI, toward graded and empirically tractable questions about which inferential transfers are apt, revealing why our cognitive vocabulary for AI is evaluatively and practically consequential.

philosophy of AI, concepts, evolution of language, inference, large language models, metaphor

PDF coming soon