Before the Systematicity Debate: Recovering the Rationales for Systematizing Thought

Over the course of the twentieth century, the notion of the systematicity of thought has acquired a much narrower meaning than it carried for much of its history. The so-called “systematicity debate” that has dominated the philosophy of language, cognitive science, and AI research over the last thirty years understands the systematicity of thought in terms of the compositionality of thought. But there is an older, broader, and more demanding notion of systematicity that is now increasingly relevant again. To recover this notion from under the shadow of the systematicity debate, I distinguish between (i) the systematicity of thinkable contents, (ii) the systematicity of thinking, and (iii) the ideal of systematic thought. I then deploy this distinction to critically evaluate Fodor’s systematicity-based argument for the language of thought hypothesis before recovering the notion of the systematicity of thought as a regulative ideal, which has historically shaped our understanding of what it means for thought to be rational, authoritative, and scientific. To assess how much systematicity we need from AI models, I argue that we must look to the rationales for systematizing thought. To this end, I recover five such rationales from the history of philosophy and identify five functions served by systematization. Finally, I show how these can be used to arrive at a dynamic understanding of the need to systematize thought that can tell us what kind of systematicity is called for and when.


Function-Based Conceptual Engineering and the Authority Problem

Mind 131 (524): 1247–1278. 2022.

In this paper, I identify a central problem for conceptual engineering: the problem of showing concept-users why they should recognise the authority of the concepts advocated by engineers. I argue that this authority problem cannot generally be solved by appealing to the increased precision, consistency, or other theoretical virtues of engineered concepts. Outside contexts in which we anyway already aim to realise theoretical virtues, solving the authority problem requires engineering to take a functional turn and attend to the functions of concepts. But this then presents us with the problem of how to specify a concept’s function. I argue that extant solutions to this function specification problem are unsatisfactory for engineering purposes, because the functions they identify fail to reliably bestow authority on concepts, and hence fail to solve the authority problem. What is required is an authoritative notion of conceptual function: an account of the functions of concepts which simultaneously shows why concepts fulfilling such functions should be recognised as having authority. I offer an account that meets this combination of demands by specifying the functions of concepts in terms of how they tie in with our present concerns.


Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains

A key assumption fuelling optimism about the progress of Large Language Models (LLMs) in modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but cohesive, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and cohesiveness promise to facilitate progress towards comprehensiveness in an LLM’s representation of the world. However, philosophers have identified reasons to doubt that the truth is systematic across all domains of thought, arguing that in normative domains, in particular, the truth is not necessarily systematic. I argue that insofar as the truth in normative domains is asystematic, this renders it correspondingly harder for LLMs to make progress, because they cannot rely on the consistency and cohesiveness of the truth to work towards comprehensiveness. And the less LLMs can rely on the systematicity of truth, the less we can rely on them to do our practical deliberation for us, as there is correspondingly more of a role for human agency in navigating asystematic normative domains.


Can Word Models be World Models? Language as a Window onto the Conditional Structure of the World

LLMs are, in the first instance, models of the statistical distribution of tokens in the vast linguistic corpus they have been trained on. But their often surprising emergent capabilities raise the question of how much understanding of the extralinguistic world LLMs can glean from this statistical distribution of words alone. Here, I explore and evaluate the idea that the probability distribution of words in the public corpus offers a window onto the conditional structure of the world. To become a good next-token predictor, an LLM has to become a good pattern completer, and the patterns laid down in language mirror the patterns embodied in the world to a considerable extent. When this is the case—and this is no empty condition—it allows LLMs to compress into their weights two distinct, but complementary forms of understanding of the world. The first form of understanding stored is an understanding of which properties tend to cluster together; I spell out this idea by drawing on Millikan’s account of a multidimensional clumpy world. The second is an understanding of the inferential structure that propositions about these property clusters tend to be enmeshed in; I spell out this idea by drawing on Brandom’s account of the isomorphism between deontic normative conceptual relations of incompatibility and consequence among commitments and alethic modal relations of incompatibility and consequence among states of affairs. On the resulting picture, word models can be world models insofar as linguistic patterns track real patterns between facts.


From Paradigm-Based Explanation to Pragmatic Genealogy

Mind 129 (515): 683–714. 2020.

Why would philosophers interested in the points or functions of our conceptual practices bother with genealogical explanations if they can focus directly on paradigmatic examples of the practices we now have? To answer this question, I compare the method of pragmatic genealogy advocated by Edward Craig, Bernard Williams, and Miranda Fricker—a method whose singular combination of fictionalising and historicising has met with suspicion—with the simpler method of paradigm-based explanation. Fricker herself has recently moved towards paradigm-based explanation, arguing that it is a more perspicuous way of reaping the same explanatory pay-off as pragmatic genealogy while dispensing with its fictionalising and historicising. My aim is to determine when and why the reverse movement from paradigm-based explanation to pragmatic genealogy remains warranted. I argue that the fictionalising and historicising of pragmatic genealogy is well-motivated, and I outline three ways in which the method earns its keep: by successfully handling historically inflected practices which paradigm-based explanation cannot handle; by revealing and arguing for connections to generic needs we might otherwise miss; and by providing comprehensive views of practices that place and relate the respects in which they serve both generic and local needs.


How Genealogies Can Affect the Space of Reasons

Synthese 197 (5): 2005–2027. 2020.

Can genealogical explanations affect the space of reasons? Those who think so commonly face two objections. The first objection maintains that attempts to derive reasons from claims about the genesis of something commit the genetic fallacy—they conflate genesis and justification. One way for genealogies to side-step this objection is to focus on the functional origins of practices—to show that, given certain facts about us and our environment, certain conceptual practices are rational because they are apt responses to those facts. But this invites a second objection, which maintains that attempts to derive current from original function suffer from continuity failure—the conditions in response to which something originated no longer obtain. This paper shows how normatively ambitious genealogies can steer clear of both problems. It first maps out various ways in which genealogies can involve non-fallacious genetic arguments before arguing that some genealogies do not invite the charge of the genetic fallacy if they are interpreted as revealing the original functions of conceptual practices. However, they then incur the burden of showing that the conditions relative to which practices function continuously obtain. Taking its cue from the genealogies of E. J. Craig, Bernard Williams, and Miranda Fricker, the paper shows how model-based genealogies can avoid continuity failures by identifying bases of continuity in the demands we face.


Genealogy and Knowledge-First Epistemology: A Mismatch?

The Philosophical Quarterly 69 (274): 100–120. 2019.

This paper examines three reasons to think that Craig’s genealogy of the concept of knowledge is incompatible with knowledge-first epistemology and finds that far from being incompatible with it, the genealogy lends succour to it. This reconciliation turns on two ideas. First, the genealogy is not history, but a dynamic model of needs. Secondly, by recognizing the continuity of Craig’s genealogy with Williams’s genealogy of truthfulness, we can see that while both genealogies start out from specific needs explaining what drives the development of certain concepts rather than others, they then factor in less specific needs which in reality do not come later at all, and which have also left their mark on these concepts. These genealogies thereby reveal widespread functional dynamics driving what I call the de-instrumentalization of concepts, the recognition of which adds to the plausibility of such instrumentalist approaches to concepts.


The Points of Concepts: Their Types, Tensions, and Connections

Canadian Journal of Philosophy 49 (8): 1122–1145. 2019.

In the literature seeking to explain concepts in terms of their point, talk of ‘the point’ of concepts remains under-theorised. I propose a typology of points which distinguishes practical, evaluative, animating, and inferential points. This allows us to resolve tensions such as that between the ambition of explanations in terms of the points of concepts to be informative and the claim that mastering concepts requires grasping their point; and it allows us to exploit connections between types of points to understand why they come apart, and whether they do so for problematic ideological reasons or for benignly functional reasons.


Williams’s Pragmatic Genealogy and Self-Effacing Functionality

Philosophers’ Imprint 18 (17): 1–20. 2018.

In Truth and Truthfulness, Bernard Williams sought to defend the value of truth by giving a vindicatory genealogy revealing its instrumental value. But what separates Williams’s instrumental vindication from the indirect utilitarianism of which he was a critic? And how can genealogy vindicate anything, let alone something which, as Williams says of the concept of truth, does not have a history? In this paper, I propose to resolve these puzzles by reading Williams as a type of pragmatist and his genealogy as a pragmatic genealogy. On this basis, I show just in what sense Williams’s genealogy can by itself yield reasons to cultivate a sense of the value of truth. Using various criticisms of Williams’s genealogical method as a foil, I then develop an understanding of pragmatic genealogy which reveals it to be uniquely suited to dealing with practices exhibiting what I call self-effacing functionality—practices that are functional only insofar as and because we do not engage in them for their functionality. I conclude with an assessment of the wider significance of Williams’s genealogy for his own oeuvre and for further genealogical inquiry.


Davidsonian Causalism and Wittgensteinian Anti-Causalism: A Rapprochement

Ergo: An Open Access Journal of Philosophy 5 (6): 153–72. 2018.

A longstanding debate in the philosophy of action opposes causalists to anti-causalists. Causalists claim the authority of Davidson, who offered powerful arguments to the effect that intentional explanations must be causal explanations. Anti-causalists claim the authority of Wittgenstein, who offered equally powerful arguments to the effect that reasons cannot be causes. My aim in this paper is to achieve a rapprochement between Davidsonian causalists and Wittgensteinian anti-causalists by showing how both sides can agree that reasons are not causes, but that intentional explanations are causal explanations. To this end, I first defuse Davidson’s Challenge, an argument purporting to show that intentional explanations are best made sense of as being explanatory because reasons are causes. I argue that Wittgenstein furnishes anti-causalists with the means to resist this conclusion. I then argue that this leaves the Master Argument for the claim that intentional explanations are causal explanations, but that by distinguishing between a narrow and a wide conception of causal explanation, we can resolve the stalemate between Wittgensteinian anti-causalists impressed by the thought that reasons cannot be causes and Davidsonian causalists impressed by the thought that intentional explanations must be causal explanations.


Debunking Concepts

Midwest Studies in Philosophy 47 (1): 195–225. 2023.

Genealogies of belief have dominated recent philosophical discussions of genealogical debunking at the expense of genealogies of concepts, which has in turn focused attention on genealogical debunking in an epistemological key. As I argue in this paper, however, this double focus encourages an overly narrow understanding of genealogical debunking. First, not all genealogical debunking can be reduced to the debunking of beliefs—concepts can be debunked without debunking any particular belief, just as beliefs can be debunked without debunking the concepts in terms of which they are articulated. Second, not all genealogical debunking is epistemological debunking. Focusing on concepts rather than beliefs brings distinct forms of genealogical debunking to the fore that cannot be comprehensively captured in terms of epistemological debunking. We thus need a broader understanding of genealogical debunking, which encompasses not just epistemological debunking, but also what I shall refer to as metaphysical debunking and ethical debunking.


Two Orders of Things: Wittgenstein on Reasons and Causes

Philosophy 92 (3): 369–97. 2017.

This paper situates Wittgenstein in what is known as the causalism/anti-causalism debate in the philosophy of mind and action and reconstructs his arguments to the effect that reasons are not a species of causes. On the one hand, the paper aims to reinvigorate the question of what these arguments are by offering a historical sketch of the debate showing that Wittgenstein’s arguments were overshadowed by those of the people he influenced, and that he came to be seen as an anti-causalist for reasons that are in large part extraneous to his thought. On the other hand, the paper aims to recover the arguments scattered in Wittgenstein’s own writings by detailing and defending three lines of argument distinguishing reasons from causes. The paper concludes that Wittgenstein’s arguments differ from those of his immediate successors; that he anticipates current anti-psychologistic trends; and that he is perhaps closer to Davidson than historical dialectics suggest.


Defending Genealogy as Conceptual Reverse-Engineering

Analysis 84 (2): 385–400. 2024.

In this paper, I respond to three critical notices of The Practical Origins of Ideas: Genealogy as Conceptual Reverse-Engineering, written by Cheryl Misak, Alexander Prescott-Couch, and Paul Roth, respectively. After contrasting genealogical conceptual reverse-engineering with conceptual reverse-engineering, I discuss pragmatic genealogy’s relation to history. I argue that it would be a mistake to understand pragmatic genealogy as a fiction (or a model, or an idealization) as opposed to a form of historical explanation. That would be to rely on precisely the stark dichotomy between idealization and history that I propose to call into question. Just as some historical explanations begin with a functional hypothesis arrived at through idealization as abstraction, some pragmatic genealogies embody an abstract form of historiography, stringing together, in a way that is loosely indexed to certain times and places, the most salient needs responsible for giving a concept the contours it now has. I then describe the naturalistic stance that I find expressed in the pragmatic genealogies I consider in the book before examining the evaluative standard at work in those genealogies, defusing the charge that they involve a commitment to a ‘stingy axiology’.  


Précis of The Practical Origins of Ideas

Analysis 84 (2): 341–344. 2024.

In this précis of The Practical Origins of Ideas: Genealogy as Conceptual Reverse-Engineering, I summarize the key claims of the book for a symposium in Analysis. The book describes, develops, and defends an underappreciated methodological tradition: the tradition of pragmatic genealogy, which aims to identify what our loftiest and most inscrutable conceptual practices do for us by telling strongly idealized, but still historically informed stories about what might have driven people to adopt and elaborate them as they did. What marks out this methodological tradition, I argue, is that it synthesizes two genres of philosophical genealogy that are standardly set against each other: state-of-nature fictions on the one hand and patiently documentary historiography on the other. These two genres of genealogy are usually taken to be mutually exclusive and to answer to radically different philosophical interests and temperaments. But I offer a systematic account of a tradition that combines both genres into a single genealogical method, augmenting genealogy’s power and range by harnessing the strengths and possibilities of both genres.


The Essential Superficiality of the Voluntary and the Moralization of Psychology

Philosophical Studies 179 (5): 1591–1620. 2022.

Is the idea of the voluntary important? Those who think so tend to regard it as an idea that can be metaphysically deepened through a theory about voluntary action, while those who think it a superficial idea that cannot coherently be deepened tend to neglect it as unimportant. Parting company with both camps, I argue that the idea of the voluntary is at once important and superficial—it is an essentially superficial notion that performs important functions, but can only perform them if we refrain from deepening it. After elaborating the contrast between superficial and deepened ideas of the voluntary, I identify the important functions that the superficial idea performs in relation to demands for fairness and freedom. I then suggest that theories trying to deepen the idea exemplify a problematic moralization of psychology—they warp psychological ideas to ensure that moral demands can be met. I offer a three-tier model of the problematic dynamics this creates, and show why the pressure to deepen the idea should be resisted. On this basis, I take stock of what an idea of the voluntary worth having should look like, and what residual tensions with moral ideas this leaves us with.


Genealogy, Evaluation, and Engineering

The Monist 105 (4): 435–51. 2022.

Against those who identify genealogy with reductive genealogical debunking or deny it any evaluative and action-guiding significance, I argue for the following three claims: that although genealogies, true to their Enlightenment origins, tend to trace the higher to the lower, they need not reduce the higher to the lower, but can elucidate the relation between them and put us in a position to think more realistically about both relata; that if we think of genealogy’s normative significance in terms of a triadic model that includes the genealogy’s addressee, we can see that in tracing the higher to the lower, a genealogy can facilitate an evaluation of the higher element, and where the lower element is some important practical need rather than some sinister motive, the genealogy can even be vindicatory; and finally, that vindicatory genealogies can offer positive guidance on how to engineer better concepts.


Left Wittgensteinianism

European Journal of Philosophy 29 (4): 758–77. 2021. With Damian Cueni.

Social and political concepts are indispensable yet historically and culturally variable in a way that poses a challenge: how can we reconcile confident commitment to them with awareness of their contingency? In this article, we argue that available responses to this problem—Foundationalism, Ironism, and Right Wittgensteinianism—are unsatisfactory. Instead, we draw on the work of Bernard Williams to tease out and develop a Left Wittgensteinian response. In present-day pluralistic and historically self-conscious societies, mere confidence in our concepts is not enough. For modern individuals who are ineluctably aware of conceptual change, engaged concept-use requires reasonable confidence, and in the absence of rational foundations, the possibility of reasonable confidence is tied to the possibility of critically discriminating between conceptual practices worth endorsing and those worth rejecting. We show that Left Wittgensteinianism offers such a basis for critical discrimination through point-based explanations of conceptual practices which relate them to the needs of concept-users. We end by considering how Left Wittgensteinianism guides our understanding of how conceptual practices can be revised in the face of new needs.


Revealing Social Functions through Pragmatic Genealogies

In: Social Functions in Philosophy: Metaphysical, Normative, and Methodological Perspectives. Edited by Rebekka Hufendiek, Daniel James and Raphael van Riel, 200–218. London: Routledge, 2020.

There is an under-appreciated tradition of genealogical explanation that is centrally concerned with social functions. I shall refer to it as the tradition of pragmatic genealogy. It runs from David Hume and the early Friedrich Nietzsche through E. J. Craig to Bernard Williams and Miranda Fricker. These pragmatic genealogists start out with a description of an avowedly fictional “state of nature” and end up ascribing social functions to particular building blocks of our practices – such as the fact that we use a certain concept, or live by a certain virtue – which we did not necessarily expect to have such a function at all. That the seemingly archaic device of a fictional state-of-nature story should be a helpful way to get at the functions of our actual practices must seem a mystifying proposal, however; I shall therefore endeavor to demystify it in what follows. My aim in this chapter is twofold. First, by delineating the framework of pragmatic genealogy and contrasting it with superficially similar methods, I argue that pragmatic genealogies are best interpreted as dynamic models whose point is to reveal the function – and non-coincidentally often the social function – of certain practices. Second, by buttressing this framework with something it notably lacks, namely an account of the type of functionality it operates with, I argue that both the type of functional commitment and the depth of factual obligation incurred by a pragmatic genealogy depend on what we use the method for: the dynamic models of pragmatic genealogy can be used merely as heuristic devices helping us spot functional patterns, or more ambitiously as arguments grounding our ascriptions of functionality to actual practices, or even more ambitiously as bases for functional explanations of the resilience or the persistence of practices. By bringing these distinctions into view, we gain the ability to distinguish strengths and weaknesses of the method’s application from strengths and weaknesses of the method itself.


On Ordered Pluralism

Australasian Philosophical Review 3 (3): 305–11. 2019.

This paper examines Miranda Fricker’s method of paradigm-based explanation and in particular its promise of yielding an ordered pluralism. Fricker’s starting point is a schism between two conceptions of forgiveness, Moral Justice Forgiveness and Gifted Forgiveness. In the light of a hypothesis about the basic point of forgiveness, she reveals the unity underlying the initially baffling plurality and brings order into it, presenting a paradigmatic form of forgiveness as explanatorily basic and other forms as derivative. The resulting picture, she claims, is an ‘explanatorily satisfying ordered pluralism.’ But what is this ordered pluralism and how does Fricker’s method deliver it? And to what extent can this strategy be generalised to other conceptual practices? By making explicit and critically examining the conception of ordered pluralism implicit in Fricker’s procedure, I assess the promise that her approach holds as a way of resolving stand-offs between warring conceptions of ideas or practices more widely. I argue that it holds great promise in this respect, but that if we are to avoid reproducing, at the level of what is to be regarded as a paradigm case, just the schismatic debates that the pluralism of paradigm-based explanation is supposed to overcome, we need to take seriously the thought that what counts as a paradigm is partly determined by our purposes in giving a paradigm-based explanation.


Wittgenstein on the Chain of Reasons

Wittgenstein-Studien 7 (1): 105–130. 2016.

In this paper, I examine Wittgenstein’s conception of reason and rationality through the lens of his conception of reasons. Central in this context, I argue, is the image of the chain, which informs not only his methodology in the form of the chain-method, but also his conception of reasons as linking up immediately, like the links of a chain. I first provide a general sketch of what reasons are on Wittgenstein’s view, arguing that giving reasons consists in making thought and action intelligible by delineating reasoning routes; that something is a reason not in virtue of some intrinsic property, but in virtue of its role; and that citing something as a reason characterises it in terms of the rational relations it stands in according to context-dependent norms. I then argue that on Wittgenstein’s view, we misconceive chains of reasons if we think of them on the model of chains of causes. Chains of reasons are necessarily finite, because they are anchored in and held in place by our reason-giving practices, and it is in virtue of their finitude that chains of reasons can guide, justify and explain. I argue that this liberates us from the expectation that one should be able to give reasons for everything, but that it limits the reach of reasons by tying them to particular reasoning-practices that they cannot themselves justify. I end by comparing and reconciling Wittgenstein’s dichotomy between chains of reasons and chains of causes with seemingly competing construals of the dichotomy, and I clarify its relation to the dichotomy between explanation and justification.