# On the Fundamental Limitations of AI Moral Advisors

Author: Matthieu Queloz
Published in: Philosophy & Technology 38 (71): 1–4. 2025. Invited commentary.
DOI: [10.1007/s13347-025-00896-3](https://doi.org/10.1007/s13347-025-00896-3)
Canonical entry: https://www.matthieuqueloz.com/entries/on-the-fundamental-limitations-of-ai-moral-advisors/
Published PDF: https://philpapers.org/archive/QUEOTF.pdf

Machine-readable text companion generated from the PDF. Page markers follow the printed pagination.

[p. 1]

## Abstract

In “Against Personalized AI Moral Advisors” (Philosophy & Technology, 38(2): 45, 2025), Muriel Leuenberger has argued that the personal nature of practical deliberation, which I stressed in my “Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains” (Philosophy & Technology, 38(34): 1–27, 2025a), counterintuitively militates against the development of personalized AI moral advisors and in favour of generalist AI moral advisors. Here, I take up and develop this line of thought, drawing out how the asystematicity of normative domains reveals the fundamental limitations of both personalized and generalist AI moral advisors.

Keywords: Artificial intelligence · AI ethics · Personalization · Respectful disagreement · Moralism

I am grateful to Muriel Leuenberger for her insightful commentary, “Against Personalized AI Moral Advisors” (2025), which engages with my “Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains” (2025a). Leuenberger correctly identifies not only the central thrust of my argument—that the asystematicity of normative domains, stemming from the plurality, incompatibility, and incommensurability of values, poses a challenge to AI’s ability to comprehensively model these domains—but also its crucial corollary: that the asystematicity of normative truth underscores the indispensable role of human agency in practical deliberation. Where values conflict, we are required to make judgements of importance, deciding which considerations matter most to us in a particular situation. These judgements are irreducibly first-personal (“What should I do?”) and often deeply

[p. 2]

personal, touching upon questions of identity and authenticity (“What kind of person am I, or do I choose to become, by prioritizing this value here?”).

Building upon this shared foundation, Leuenberger brings out a surprising implication of this argument for the design of AI moral advisors. One might expect the first-personal and personal nature of practical reasoning to count in favour of personalization. But she argues that the very features of practical deliberation highlighted by asystematicity—its first-personal and personal nature, its susceptibility to genuine conflict, and its role in self-constitution—militate against the development of personalized AI moral advisors and in favour of generalist ones.

I find this extension of the argument compelling. Indeed, I would like to push this line of thought even further, highlighting how the asystematicity of normative domains reveals the fundamental limitations of both personalized and generalist AI moral advisors.

Leuenberger rightly stresses the dangers of conservatism and the inhibition of self-development inherent in personalized models. These are not merely unfortunate side-effects; they represent a fundamental mismatch between the AI’s assumed architecture of value and the reality of our normative lives. The very ambition to extrapolate what users should do from the patterns observable in their histories relies on the implicit assumption that users’ value-profiles form stable systems. Already in treating value-profiles as predictable, therefore, AI models impute a degree of consistency and coherence to them that the asystematicity of normative truths calls into question.

Asystematicity means that normative truths often force us into situations where considerations pull in incompatible directions. Navigating these conflicts of values is a more dynamic and potentially transformative process than the ideal of a personalized AI moral advisor allows. That process sometimes necessitates going beyond simply registering previously revealed values. It requires judgements of importance that actively shape who we become. As Nietzsche might have put it, “becoming who you are” is a genuinely creative task, and not just a matter of optimally filling in the rectangle defined by past patterns.

This is where the fundamental limitation of personalized AI moral advisors becomes evident. A personalized AI, attempting to predict the “correct” decision based on past behaviour, fundamentally misunderstands the nature of the task. It tries to provide a read-out of an identity that, in these crucial moments of conflict, is precisely what is being forged or discovered through the agent’s eventual judgement of importance. The AI seeks to predict what the agent must decide. What Leuenberger’s worry about inhibiting self-development fundamentally registers is the spurious systematicity and predictability being imposed onto a domain where true agency involves navigating – and being shaped by – inherent fragmentation and the need for self-creation.

This vindicates Leuenberger’s preference for generalist AI advisors. However, recognizing the asystematicity of normative truths also calls for a crucial qualification to what such a generalist advisor can realistically achieve. Leuenberger suggests, rightly, that it could map the relevant values, highlight conflicts, and perhaps facilitate Socratic reflection.

But while a generalist AI might indefatigably lay out competing considerations, it remains inherently incapable of resolving the incommensurability that often

[p. 3]

characterizes these conflicts. It can reflect the fragmented nature of normative domains, as Sorensen et al. (2024) have shown, but it cannot perform the necessary judgement of importance on the agent’s behalf.

This resolution of conflict between incommensurable goods is the locus of the first-personal and often deeply personal dimension of practical reason that asystematicity necessitates. It is an act grounded not in inferential coherence within a system, but in commitment, self-understanding, and an acceptance of the inevitability of loss and regret. A generalist AI can help illuminate the structure of a value conflict, but it cannot provide the substance of the resolution, which must come from the agent.

Therefore, while endorsing Leuenberger’s shift towards generalist models, we must be clear about their function. They are tools for enhancing the agent’s capacity to grapple with a fragmented and often recalcitrant normative reality. Their role is to broaden awareness of the landscape’s features, fault lines, and tensions, thereby providing richer input for what irreducibly remains the agent’s own task: to make commitments and take responsibility for the path chosen.

Leuenberger also hints at another role for AI moral advisors: to foster empathy and understanding for people who choose a different path by drawing attention to countervailing considerations. I agree. As I have argued elsewhere (Queloz, 2024, p. 454; 2025b, pp. 280, 345–60), the ethical and political value of recognizing the asystematicity of normative truths is that it enables respectful disagreement: it opens up conceptual room for the thought that those who choose a different path are not necessarily confused or irrational. They might reasonably be choosing to give more weight to genuinely countervailing considerations. By promoting this sympathetic understanding of the other side in a value conflict, AI moral advisors can offer an antidote to officious moralism.

In showing how the very nature of practical deliberation renders personalized AI advisors problematic, Leuenberger thus sharpens our understanding of the appropriate role for AI in our lives. Her arguments reinforce my contention that the less systematicity normative truths exhibit, the greater the burden—and indeed, the prerogative—of human agency. AI can potentially assist us in bearing that burden, but it cannot, and should not, relieve us of it.

[p. 4]

## References

Leuenberger, M. (2025). Against personalized AI moral advisors: Commentary on ‘Can AI rely on the systematicity of truth?’ by Matthieu Queloz. Philosophy & Technology, 38(2), 45.

Queloz, M. (2024). Moralism as a dualism in ethics and politics. Political Philosophy, 1(2), 433–462. https://doi.org/10.16995/pp.17532

Queloz, M. (2025a). Can AI rely on the systematicity of truth? The challenge of modelling normative domains. Philosophy & Technology, 38(34), 1–27. https://doi.org/10.1007/s13347-025-00864-x

Queloz, M. (2025b). The ethics of conceptualization: Tailoring thought and language to need. Oxford University Press.

Sorensen, T., Jiang, L., Hwang, J. D., Levine, S., Pyatkin, V., West, P., & Choi, Y. (2024). Value kaleidoscope: Engaging AI with pluralistic human values, rights, and duties. Proceedings of the AAAI Conference on Artificial Intelligence, 38(18), 19937–19947. https://doi.org/10.1609/aaai.v38i18.29970
