Research

Work-in-progress

Immediately integrating technologies
Hadeel Naeem  

Can a technology upload a cognitive ability into our cognitive system? At one point in the first Matrix film, Neo says, “I know kung fu”, but does Neo know kung fu if his knowledge-how has been uploaded by a machine plugged into his brain? This paper asks a similar question about knowledge-that cognitive abilities: can they be immediately uploaded into our cognitive systems?

redacted for journal review
Hadeel Naeem and Julian Hauser 

Michael Wheeler advises developing AI systems such that they are not transparently employed and therefore do not extend our cognition. His worry is that extending our cognition with such AI would incorporate strange, erroneous processes into our cognitive systems. We develop three arguments against Wheeler’s claim. First, most unreliable processes will be filtered out automatically, since an agent is unlikely to repeatedly employ a process that is not reliable. Second, transparency and opacity are not mutually exclusive: if we employ a process transparently, we can still reflect on it opaquely, whether at the same time or later. Third, AI-based processes can integrate into our cognitive systems in a way that allows us to monitor their reliability; in fact, to encourage such integration it would even make sense to design AI systems so that they are transparently employed. However, we ultimately share Wheeler’s general worry about AI extension, for the following reasons: not all AI systems will integrate, or integrate well; these systems can be wrong in strange ways that we may be unable to properly monitor; and some AI systems can give an illusion of integration when they are not properly integrated.

redacted for journal review
Hadeel Naeem

This paper investigates how we ought to attribute beliefs in the kinds of human-AI interactions that give rise to extended beliefs. Compared to classical cases of extension, AI-extended agents are only minimally involved in forming extended beliefs, and this paper explores how we might nonetheless be able to ascribe beliefs to them. Toward this goal, I seek a suitable account of belief attribution in the extended mind literature. I examine dynamical systems theory (DST), an account based on Markov blankets, and the concept of cognitive integration as presented by virtue reliabilists, and I find all of these accounts wanting. While the cognitive integration account is best suited to explain how we attribute beliefs in non-AI extension cases, it ultimately fails in cases of AI extension. I argue that we ascribe beliefs to agents in order to monitor the reliability of their extended, integrated processes. However, AI systems can learn, adapt, and alter their algorithms to monitor their own reliability and thereby manage their own integration. This is a difficulty: these autonomous learning systems frequently maintain their reliability by altering their algorithms, and because their general reliability is mostly unaffected, agents may not be able to recognise these changes. As a result, agents can continue to employ AI systems that are no longer integrated, and so the beliefs formed cannot be ascribed to the agents. This study therefore identifies a gap in the literature, one that, if filled, can help us employ AI systems responsibly and with trust.

If you’d like to read a draft of this paper, please write to me at hadeel@hadeelnaeem.com

redacted for journal review 
Julian Hauser and Hadeel Naeem 

We argue that phenomenal transparency is necessary for cognitive extension. Once a popular claim among early supporters of the extended mind hypothesis, it has recently come under attack. A new consensus seems to be emerging, with various authors arguing that transparency characterises neither internal cognitive processes nor those that extend into the environment. We take this criticism as an opportunity to flesh out the concept of transparency as it is relevant for cognitive extension. In particular, we highlight that transparency, as relevant for cognitive extension, is a property of an agent’s employment of a resource – and not merely the lack of conscious apprehension of (or attentional focus on) an object. Understanding this crucial point then allows us to explain how, in certain cases, an object may be transparent and opaque for the agent at the same time. Once we understand transparency in this nuanced way, the various counterarguments lose their bite, and it becomes clear why transparency is, after all, necessary for cognitive extension.

If you’d like to read a draft, please write to me at hadeel@hadeelnaeem.com





PhD Thesis

Is a subpersonal epistemology possible? Re-evaluating cognitive integration for extended cognition

I completed my PhD at the University of Edinburgh in 2021. My thesis was on the extended cognition and extended knowledge debate. I was advised by Duncan Pritchard, Orestis Palermos, and Mog Stapleton, and my examiners were Richard Menary and Dave Ward.

My thesis argues against a subpersonal virtue epistemology. Andy Clark claims that a subpersonal mechanism of the predictive mind, called precision estimation, can sufficiently explain how an epistemic agent responsibly employs a reliable belief-forming process (and thereby generates knowledge). I show that precision estimation cannot be sufficient for epistemic responsibility and, more generally, advise against subpersonal epistemologies.

You can download my PhD thesis here.