Philosophy of AI

Evolving Minds, Evolving Language: Metaphor as a Process of Conceptual Adaptation to Artificial Intelligence

Manuscript (+Slideshow)

Argues that cognitively loaded terms such as “understanding” and “reasoning,” when applied to large language models, are best understood neither literally nor as merely loose talk, but as metaphors that adapt our conceptual repertoire to new circumstances under the pressure of conceptual needs. Drawing on the Strawson-Kant tradition on imagination and concept-application, inferentialist semantics, Bermúdez’s theory of rational framing, and Gentner’s structure-mapping account of analogy, it develops a four-part framework: inferential rather than referential transfer, the aptic normativity of conceptual needs, the empirical evaluation of transferred inferences through mechanistic interpretability, and the career of AI metaphors from novelty toward possible conventionalization. The result is a shift away from all-or-nothing disputes over the literal applicability of cognitive terms to AI and toward graded, empirically tractable questions about which inferential transfers are apt, revealing why our choice of cognitive vocabulary for AI is evaluatively and practically consequential.

philosophy of AI, concepts, evolution of language, inference, large language models, metaphor


Mechanistic Indicators of Understanding in Large Language Models

Philosophical Studies. 2026. With Pierre Beckmann. doi:10.1007/s11098-026-02513-1

Draws on detailed technical evidence from research on mechanistic interpretability (MI) to argue that while LLMs differ profoundly from human cognition, they do more than tally up word co-occurrences: they form internal structures that can fruitfully be compared to different forms of human understanding, such as conceptual, factual, and principled understanding. The paper synthesizes MI’s most relevant findings to date while embedding them in an integrative theoretical framework for thinking about understanding in LLMs. As the phenomenon of “parallel mechanisms” shows, however, the differences between LLMs and human cognition are as philosophically instructive to consider as the similarities.

explainable AI, LLM, mechanistic interpretability, philosophy of AI, understanding, conceptual change


Explainability through Systematicity: The Hard Systematicity Challenge for Artificial Intelligence

Minds and Machines 35 (35): 1–39. 2025. doi:10.1007/s11023-025-09738-9

Offers a framework for thinking about “the systematicity of thought” that distinguishes four senses of the phrase, defuses the alleged tension between systematicity and connectionism that Fodor and Pylyshyn influentially diagnosed, and identifies a “hard” form of the systematicity challenge that continues to defy connectionist models.

AI, explainable AI, philosophy of AI, rationality, systematicity, conceptual change
