Naeem (2023). Is a subpersonal virtue epistemology possible? Philosophical Explorations.

Virtue reliabilists argue that an agent can only gain knowledge if she responsibly employs a reliable belief-forming process. This in turn demands that she is either aware that her process is reliable or is sensitive to her process’s reliability in some other way. According to a recent argument in the philosophy of mind, sometimes a cognitive mechanism (namely, precision estimation) can ensure that a belief-forming process is only employed when it is reliable. If this is correct, epistemic responsibility can sometimes be explained entirely on the subpersonal level. In this paper, I argue that the mechanism of precision estimation — the alleged new variety of epistemic responsibility — is a more ubiquitous phenomenon than epistemic responsibility. I show that precision estimation operates at levels that are not always concerned with the epistemic domain. Lastly, I broaden this argument to explain how all subpersonal epistemologies are likely to fall prey to the problem of demarcating cognitive agency and the problem of attributing beliefs.


LLMs, Beliefs, and Virtues

In this paper, I argue that there are three important ways a cognitive agent can responsibly employ a Large Language Model (LLM) in an educational setting. The first is a non-reflective route to what virtue epistemologists call cognitive integration. For this route, the agent ought to employ the LLM frequently over a period of time, so that her cognitive character becomes familiar with the kinds of beliefs it forms. In this way, the agent can responsibly form beliefs with an LLM that can be properly attributed to her and may even be knowledge. The second is a reflective route to cognitive integration, which requires that the agent understand that her process is reliable, what makes it so, and what could cause it to fail. For this route, the agent ought to go through a thorough training process to understand how LLMs work and how best to employ them. This route also allows the agent to form beliefs that can be attributed to her and may be knowledge. The third way to responsibly employ an LLM is to cultivate and manifest intellectual virtues (such as open-mindedness and curiosity) with it.

Degrees of epistemic responsibility

This paper investigates how we responsibly form beliefs with technologies that we employ seamlessly. While the literature focuses on the virtue epistemology concept of cognitive integration and its snug fit with the extended cognition framework to make sense of our seamless employment of technologies, I make room for cases that may not qualify as (or may not be best described as) cases of cognitive extension.

Title redacted for journal review

with Julian Hauser

Michael Wheeler advises developing AI systems so that they are not transparently employed and therefore do not extend our cognition. He is concerned that extending our cognition with such AI would incorporate strange, erroneous processes into it. We develop the following arguments against Wheeler’s claim. First, most unreliable processes will be filtered out automatically, since an agent is unlikely to repeatedly employ a process that is not reliable. Second, transparency and opacity are not mutually exclusive: if we employ a process transparently, we can also reflect on it opaquely, whether at the same time or later. Third, AI-based processes can integrate into our cognitive processes in a way that allows us to monitor their reliability. Indeed, to encourage such integration, it would even make sense to design AI systems so that they are transparently employed. However, we ultimately concur with Wheeler’s general worry about AI extension, for the following reasons: not all AI systems will integrate, or integrate well; these systems can err in strange ways that we may not be able to properly monitor; and some AI systems can give an illusion of integration when they are not properly integrated.

Title redacted for journal review

This paper investigates how we ought to attribute beliefs in the kinds of human-AI interactions that give rise to extended beliefs. Compared to classical cases of extension, AI-extended agents are only minimally involved in forming extended beliefs, and this paper explores how we might nonetheless be able to ascribe beliefs to them. Toward this goal, I seek a suitable account of belief attribution in the extended mind literature. I examine dynamical systems theory (DST), an account based on Markov blankets, and the concept of cognitive integration as presented by virtue reliabilists. I find all of these accounts wanting. While the cognitive integration account is best suited to explain how we attribute beliefs in non-AI extension cases, it still ultimately fails in cases of AI extension. I argue that we ascribe beliefs to agents in order to monitor the reliability of their extended, integrated processes. However, AI systems can learn, adapt, and alter their algorithms to monitor their own reliability, and thereby manage their own integration. This is a difficulty because these autonomous learning systems frequently manage their reliability by altering their algorithms. Because the AI system’s general reliability is mostly unaffected, agents may not be able to recognise these changes. As a result, agents can continue to employ AI systems that are no longer integrated, and so the beliefs formed cannot be ascribed to the agents. This study, therefore, identifies a gap in the literature, one that — if filled — can help us employ AI systems responsibly and trustingly.

Title redacted for journal review

with Julian Hauser

We argue that phenomenal transparency is necessary for cognitive extension. While a popular claim among early supporters of the extended mind hypothesis, it has recently come under attack. A new consensus seems to be emerging, with various authors arguing that transparency characterises neither internal cognitive processes nor those that extend into the environment. We take this criticism as an opportunity to flesh out the concept of transparency as it is relevant for cognitive extension. In particular, we highlight that transparency, as relevant for cognitive extension, is a property of an agent’s employment of a resource – and not merely the lack of conscious apprehension of (or attentional focus on) an object. Understanding this crucial point then allows us to explain how, in certain cases, an object may at the same time be transparent and opaque for the agent. Once we understand transparency in this nuanced way, the various counterarguments lose their bite, and it becomes clear why transparency is, after all, necessary for cognitive extension.

PhD Thesis

Is a subpersonal epistemology possible? Re-evaluating cognitive integration for extended cognition

I have a PhD in philosophy from the University of Edinburgh. My thesis contributes to the debate on extended cognition and extended epistemology. I argue against subpersonal virtue epistemology and instead motivate ‘cognitive integration’ to make sense of the epistemology of extended cognition. Specifically, I demonstrate how Andy Clark’s subpersonal virtue epistemology falls short in explaining extended knowledge. My research also provides general reasons to steer away from subpersonal epistemologies. You may download my PhD thesis here.

My PhD advisors were Duncan Pritchard, Orestis Palermos, and Mog Stapleton, and my examiners were Richard Menary and Dave Ward.

During my PhD, I also helped my department with the annual Edinburgh Graduate Epistemology Conference, and organised events for the Women in Philosophy Group.


Artificial Intelligence and Big Data Ethics in Military and Humanitarian Healthcare | forthcoming Jun 2024
Title: Epistemic responsibility in seamless reliance on technology

Digital Words Workshop (Center for Collaboration and Ethics at the University of Texas Rio Grande Valley) | forthcoming Apr 2024
Title: Epistemic responsibility in seamless reliance on technology

AI in Education: Ethical and Epistemic Perspectives (Eindhoven University of Technology) | Mar 2024
Title: Plagiarism or extended belief

Frontiers of AI: Philosophical explorations (Polish Academy of Arts and Sciences - online) | Dec 2023
Title: Transparency and AI extension – with Julian Hauser

1st Annual Web Conference of the International Society for the Philosophy of the Sciences of the Mind ISPSM2023 (online) | Nov 2023
Title: Belief attribution and AI extension

Machine Discovery and Creation Workshop (Leibniz University Hannover - online) | Aug 2023
Title: Belief attribution in generative AI extension

International Association of Computing and Philosophy IACAP 2023 (Prague) | Jul 2023
Title: Belief attribution in human-AI interaction (declined; could not obtain a visa in time)

The 97th Joint Session of the Aristotelian Society and the Mind Association conference (Birkbeck University, London) | Jul 2023
Title: Belief attribution in AI extension

Invited talk (Boğaziçi University) | May 2023
Title: My beliefs or Alexa’s? Belief attribution in human-AI interaction

Artificial intelligence and simulation of behaviour AISB (Swansea University - online) | Apr 2023
Title: Belief attribution in human-AI interaction

Invited talk on Feminist Philosophy (Hong Kong Baptist University - online) | Jan 2023
Title: Oppression and epistemic injustice

Responsible Beliefs (The University of Helsinki) | Jun 2022
Title: Epistemic responsibility in subpersonal virtue epistemology

28th Conference of the European Society for Philosophy and Psychology (online) | Sep 2021
Title: Is a subpersonal epistemology possible?

Reliabilist Rationale with Sandy Goldberg (The University of Edinburgh) | Oct 2019
Title: Integration and the reliabilist rationale

Demarcation of Epistemic & Extended Agency (Vrije Universiteit Brussels - online) | Aug 2019
Title: Defeaters and extended cognition

Social Dimensions of Cognition (The University of Edinburgh) | Oct 2017
Title: Integration in social and extended cognition

36th Annual Pakistan Philosophical Congress (University of the Punjab) | May 2014
Title: On basic beliefs