Publication

Naeem, Hadeel (2023). Is a subpersonal virtue epistemology possible? Philosophical Explorations. https://doi.org/10.1080/13869795.2023.2183240

Virtue reliabilists argue that an agent can only gain knowledge if she responsibly employs a reliable belief-forming process. This in turn demands that she is either aware that her process is reliable or is sensitive to her process’s reliability in some other way. According to a recent argument in the philosophy of mind, sometimes a cognitive mechanism (i.e. precision estimation) can ensure that a belief-forming process is only employed when it’s reliable. If this is correct, epistemic responsibility can sometimes be explained entirely on the subpersonal level. In this paper, I argue that the mechanism of precision estimation — the alleged new variety of epistemic responsibility — is a more ubiquitous phenomenon than epistemic responsibility. I show that precision estimation operates at levels that are not always concerned with the epistemic domain. Lastly, I broaden this argument to explain how all subpersonal epistemologies are likely to fall prey to the problem of demarcating cognitive agency and the problem of attributing beliefs.

Naeem, Hadeel and Julian Hauser (forthcoming). Philosophy and Technology. Download penultimate version here.

We might worry that our seamless reliance on AI systems makes us prone to adopting the strange errors that these systems commit. One proposed solution is to design AI systems so that they are not phenomenally transparent to their users. This stops cognitive extension and the automatic uptake of errors. Although we acknowledge that some aspects of AI extension are concerning, we can address these concerns without discouraging transparent employment altogether. First, we believe that the potential danger should be put into perspective – many unreliable technologies are unlikely to be used transparently precisely because they are unreliable. Second, even an agent who transparently employs a resource may reflect on its reliability. Finally, agents can rely on a process transparently and be yanked out of their transparent use when it turns problematic. When an agent is responsive to the reliability of their process in this way, they have epistemically integrated it, and the beliefs they form with it are formed responsibly. This prevents the agent from automatically incorporating problematic beliefs. Responsible (and transparent) use of AI resources – and consequently responsible AI extension – is hence possible. We end the paper with several design and policy recommendations that encourage epistemic integration of AI-involving belief-forming processes.

Naeem, Hadeel (forthcoming). “Epistemic responsibility and using AI systems seamlessly.” In the edited volume Artificial Intelligence and Big Data Ethics in Military and Humanitarian Healthcare.

Work-in-progress

A paper on seamless use of technology

This paper examines how we responsibly form beliefs with technologies that we use seamlessly. The existing literature suggests that an agent ought to integrate a new belief-forming process into her cognitive system, that is, develop sensitivity to its reliability. Once the process is integrated, she can responsibly generate knowledge. I argue that this way of understanding epistemic responsibility is inadequate. It doesn’t appreciate that we may want to evaluate epistemic pursuits other than knowledge, such as whether beliefs are creditable to us, whether an inquiry is conducted well, or whether understanding is achieved. Different pursuits may require different degrees of integration: while knowledge may demand a higher degree of integration, a lesser degree may suffice for beliefs to be creditable to us. Cognitive integration also fails to accommodate other forms of responsibility, such as when we pursue an epistemic goal open-mindedly, curiously, and creatively (while employing technology seamlessly). This paper therefore argues for a holistic account of epistemic responsibility in seamless cases: responsibility may come in degrees, be of various kinds, and be relevant to different epistemic goals.

A paper on phenomenal transparency

with Julian Hauser

We argue that phenomenal transparency is necessary for cognitive extension. Although this claim was popular among early supporters of the extended mind hypothesis, it has recently come under attack. A new consensus seems to be emerging, with various authors arguing that transparency characterises neither internal cognitive processes nor those that extend into the environment. We take this criticism as an opportunity to flesh out the concept of transparency as it is relevant for cognitive extension. In particular, we highlight that transparency, as relevant for cognitive extension, is a property of an agent’s employment of a resource – and not merely the lack of conscious apprehension of (or attentional focus on) an object. Understanding this crucial point allows us to explain how, in certain cases, an object may be transparent and opaque for the agent at the same time. Once we understand transparency in this nuanced way, the various counterarguments lose their bite, and it becomes clear why transparency is, after all, necessary for cognitive extension.

A paper on learning with AI systems

There is a tension between educational goals and the overwhelming use of Large Language Models (LLMs) in educational settings. On a widely held view, the primary aim of education is not just to impart factual knowledge but also to develop students’ cognitive characters. In line with this perspective, I recommend that educational AI systems be designed to encourage students to think independently by asking questions and presenting analogies. LLM chatbots that prompt students to derive answers on their own through questions and analogies can help foster intellectual virtues such as open-mindedness, creativity, and curiosity.

A paper on AI extension and responsibility gaps

In some cases, when we interact with AI systems, our cognitive processes extend into these systems, outwith our skin and skull boundaries. This is called AI extension. In these situations, we can often address responsibility gaps by determining how to attribute beliefs to the human agent. According to Breyer and Greco (2008) and Roberts (2012), belief ownership can be explained by the cognitive integration account. I expand on this analysis by arguing that cognitive integration can help us understand how to attribute beliefs to agents in cases of AI extension. By doing so, we can assign responsibility to people for forming beliefs with AI systems, at least in certain cases. In this way, we can close some belief-attribution gaps and thereby resolve some responsibility gaps.

A paper on epistemic hygiene and cognitive extension

According to Clark’s (2015) extended knowledge dilemma, there is a conflict between our epistemic hygiene practices and cognitive extension: when our cognitive processes are extended, it’s difficult to also have knowledge. Epistemic hygiene requires ample agential involvement, and such involvement sits uneasily with cognitive extension. Extending a cognitive process requires seamlessly employing an artefact, and it’s not clear how we can responsibly generate knowledge while employing artefacts seamlessly. Philosophers have proposed various models of cognitive integration, and at least one account of subpersonal epistemic responsibility, to address this problem. In this paper, I reject the subpersonal epistemic responsibility account and instead analyze different cognitive integration accounts to motivate extended knowledge.

PhD Thesis

Is a subpersonal epistemology possible? Re-evaluating cognitive integration for extended cognition

I have a PhD in philosophy from the University of Edinburgh. My thesis contributes to the debate on extended cognition and extended epistemology. I argue against subpersonal virtue epistemology and instead motivate ‘cognitive integration’ to make sense of the epistemology of extended cognition. Specifically, I demonstrate how Andy Clark’s subpersonal virtue epistemology falls short of explaining extended knowledge. My research also provides general reasons to steer away from subpersonal epistemologies. You may download my PhD thesis here.

My PhD advisors were Duncan Pritchard, Orestis Palermos, and Mog Stapleton, and my examiners were Richard Menary and Dave Ward.

During my PhD, I also helped my department with the annual Edinburgh Graduate Epistemology Conference, and organised events for the Women in Philosophy Group.

Talks

Virtue Ethics and Technology Conference (KU Leuven) | Sep 2024 (forthcoming)
Title: Learning with conversational AI systems

Artificial Intelligence and Big Data Ethics in Military and Humanitarian Healthcare | Jun 2024
Title: Epistemic responsibility in seamless reliance on technology

Digital Words Workshop (Center for Collaboration and Ethics at the University of Texas Rio Grande Valley) | Apr 2024
Title: Epistemic responsibility in seamless reliance on technology

AI in Education: Ethical and Epistemic Perspectives (Eindhoven University of Technology) | Mar 2024
Title: Plagiarism or extended belief

Frontiers of AI: Philosophical explorations (Polish Academy of Arts and Sciences - online) | Dec 2023
Title: Transparency and AI extension – with Julian Hauser

1st Annual Web Conference of the International Society for the Philosophy of the Sciences of the Mind, ISPSM 2023 (online) | Nov 2023
Title: Belief attribution and AI extension

Machine Discovery and Creation Workshop (Leibniz University Hannover - online) | Aug 2023
Title: Belief attribution in generative AI extension

International Association for Computing and Philosophy, IACAP 2023 (Prague) | Jul 2023
Title: Belief attribution in human-AI interaction (declined because I could not obtain a visa in time)

The 97th Joint Session of the Aristotelian Society and the Mind Association conference (Birkbeck, University of London) | Jul 2023
Title: Belief attribution in AI extension

Invited talk (Boğaziçi University) | May 2023
Title: My beliefs or Alexa’s? Belief attribution in human-AI interaction

Artificial Intelligence and Simulation of Behaviour, AISB (Swansea University - online) | Apr 2023
Title: Belief attribution in human-AI interaction

Invited talk on Feminist Philosophy (Hong Kong Baptist University - online) | Jan 2023
Title: Oppression and epistemic injustice

Responsible Beliefs (The University of Helsinki) | Jun 2022
Title: Epistemic responsibility in subpersonal virtue epistemology

28th Conference of the European Society for Philosophy and Psychology (online) | Sep 2021
Title: Is a subpersonal epistemology possible?

Demarcation of Epistemic & Extended Agency (Vrije Universiteit Brussels - online) | Aug 2019
Title: Defeaters and extended cognition

Reliabilist Rationale with Sandy Goldberg (The University of Edinburgh) | Oct 2019
Title: Integration and the reliabilist rationale

Social Dimensions of Cognition (The University of Edinburgh) | Oct 2017
Title: Integration in social and extended cognition

36th Annual Pakistan Philosophical Congress (University of the Punjab) | May 2014
Title: On basic beliefs